What contemporary software innovations, patterns, methodologies or practices are eligible to be considered ahead of their time?
Source to Source program transformations.
These have been known technically since the early 70s (starting with the Irvine Program Transformation catalog) and simply have not made it into wide use, despite the extremely useful ability to carry out automated changes to code.
So, contemporary since the 70s, and ahead of their time ever since.
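A tiny sketch of what a source-to-source transformation looks like, using Python's standard `ast` module (the specific rewrite rule here is just an illustration, not from any catalog): a pass that rewrites `x ** 2` into `x * x`.

```python
import ast

class SquareToMul(ast.NodeTransformer):
    """Rewrite `expr ** 2` into `expr * expr` -- a tiny source-to-source pass."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.BinOp(left=node.left, op=ast.Mult(), right=node.left)
        return node

def transform(source: str) -> str:
    tree = ast.parse(source)
    tree = ast.fix_missing_locations(SquareToMul().visit(tree))
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

print(transform("y = (a + b) ** 2"))  # y = (a + b) * (a + b)
```

The same shape (parse to a tree, rewrite the tree, print it back out) underlies the heavyweight transformation systems; the hard parts in practice are preserving comments, formatting and semantics.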
Something related to SixthSense:
http://www.youtube.com/watch?v=mzKmGTVmqJs
Smalltalk. Both Java and C# are gradually moving towards becoming a second-rate Smalltalk, 30 years later. Ruby is a slow Smalltalk with an ugly syntax :)
By far, the greatest innovation now is the database. Clipper, which was one of the first object-oriented programming languages that dealt specifically with databases, was way ahead of its time. Back then, no one saw the powerful potential of a database. Now, almost every program out there uses some sort of database, especially since memory has become cheaper and drive storage has become almost infinite compared to the '80s, when Clipper came out. Even file systems are being created as databases.
Visual programming (programming without code)
For some reason, most programmers seem violently opposed to this. I don't know why; but I have no doubt visual programming is the way of the future.
Of course, it relies entirely on the tools. We are a long way from the ideal tools, but there are currently a few good strides:
- SQL Server Integration Services is a completely visual designer/code-generator which can accomplish some pretty complex tasks without a line of manual code entry. I've been using it a while now; many of the things I used to spend an hour writing a quick hacked-together script for I now do in a few minutes using SSIS. Making a script multi-threaded is literally as easy as making it procedural.
- Multimedia Fusion 2 is a great completely-visual programming tool, but they are targeting the wrong audience: game designers rather than serious business applications. If Visual Studio had half the visual features MF2 has, the amount of code we'd have to write could be cut to 10% of what it is now.
- Code Bubbles is a visual IDE. There's a big difference between "organizing code visually" and "organizing program flow visually," but it's an important first step.
- Microsoft Robotics Visual Programming Language does a decent job of visualizing programming, but is currently way too niche to be useful to general software developers, and feels more like displaying code with pictures rather than a whole new paradigm of coding (like SSIS and MF2).
There is a more complete list here.
Continuing along the 70's theme ;-), I vote for concurrency. A lot of concurrency concepts came from research in the 70's.
This might seem weird, but I'm going to say Threads/Multithreading.
Let me explain.
Threads have been in operating systems and system libraries for years, but before the relatively recent advent of multi-core computers for the masses, actual concurrently executing threads were a thing of kernel developers, driver writers, academia or supercomputers. Therefore, the ability to create multiple threads of execution was ahead of the time of actually having concurrent threads of execution.
In my opinion, a lot of existing multi-threaded code wasn't necessarily written/shipped in a thread-safe manner because no two threads could actually ever be executing simultaneously. The kernel scheduler in multi-processor-aware OSes would often choose to keep all threads of a process on one particular processor too, so multi-threading issues whereby data you read a moment ago has changed underneath you may be only just showing up. Rules about 'only one UI thread' enforced by e.g. .NET Forms may go some way to limit any damage concurrently-executing background threads may do, but I wonder if there are some multi-core bombs yet to go off.
Many of us have seen an amount of 'thread abuse' where developers fire off a thread for each discrete task that needs performing, even if the majority of the time that task is 'wait for X', or where an asynchronous/deferred procedure call is actually achieved with a low-priority thread.
Thread Pools were created to help with this, but these were opt-in; the .NET Framework set a good example by pooling all threads exposed via threading APIs; Mac OS X Snow Leopard was clearly advertised as having vastly reduced numbers of threads compared to the previous version and thus increased performance due to better use of system resources.
People are now getting excited about libraries and frameworks that leverage multi-core CPUs to do some tasks faster by executing them concurrently on multiple cores, e.g. Parallel LINQ. Of course, there's some syntax sugar to make it all much easier, but multithreading was there years ago.
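The thread-pool idea described above can be sketched in a few lines of Python with the standard `concurrent.futures` module (the task here is a trivial stand-in for a real I/O-bound job):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_length(word: str) -> int:
    # Stand-in for an I/O-bound task ('wait for X', a network call, etc.).
    return len(word)

words = ["thread", "pool", "example"]

# The pool owns a small, fixed set of threads and reuses them,
# instead of the 'thread abuse' pattern of one new thread per task.
with ThreadPoolExecutor(max_workers=4) as pool:
    lengths = list(pool.map(fetch_length, words))

print(lengths)  # [6, 4, 7]
```

The point is the shape, not the library: tasks are submitted to a shared, bounded pool rather than each spawning its own thread.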
Integrated Development Environment
Today IDEs like Eclipse and Visual C++ provide terrific functionality like refactoring, code completion, automatic syntax correction, incremental compilation, code swapping in debug mode, and more.
10 years ago these IDEs were almost unusable, because they were too slow for a decent user experience. We had to wait for Moore's law to apply, and once computers got powerful enough, the IDE became valuable and wiped out the plain text editor. Who is using Emacs to develop Java applications today?
I think the IDE concept is not done yet. There is plenty of room for improvement, for example in modeling and automatic generation of code (or unit tests). I'm excited to see what an IDE will be able to do 10 years from now...
Functional Programming. The concepts may have been around for a while, but they are ahead of their time even today. We would have a hard time keeping up with Moore's Law if we didn't have ways of taking advantage of multi-core CPUs.
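A minimal illustration of why the functional style matters for multi-core (sequential here, but the property is what counts):

```python
from functools import reduce

# Pure functions: output depends only on input and no shared state is
# mutated, so the per-element work could be farmed out to separate
# cores without locks and the result would be identical.
def square(x):
    return x * x

nums = range(1, 5)
total = reduce(lambda a, b: a + b, map(square, nums))
print(total)  # 1 + 4 + 9 + 16 = 30
```

Because `square` has no side effects, a runtime is free to evaluate the `map` in any order, or in parallel; that freedom is exactly what imperative, shared-state code gives up.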
I would say that parallel computing is contemporary as well as ahead of its time. Parallel (multi-processor) programming is due to have its day as multi-processor machines become more and more common. These concepts existed in the past as well (in supercomputers) and are now starting to reach desktops. Microsoft has introduced Parallel Extensions in .NET.
Maglev - imagine your Ruby application running across multiple machines in a cluster, where you don't have to write SQL - you save, load and query transactionally just with your objects. (Repackaged Smalltalk technology known for at least the last decade as "GemStone"...)
Smalltalk - Commercially released in 1984 (I think). Even today, still ahead of Java and C#, although it took a few decades to become fast. It had the first refactoring engine, IDE and multiplatform portability (like Java). See http://stackoverflow.com/questions/1821266/what-is-so-special-about-smalltalk/1821686#1821686 for further details. People just assume it's a language when in fact it says a lot more about design, UI environments and operating systems than just being a language.
Open Croquet http://www.opencroquet.org - A Squeak Smalltalk-based 3D environment which lets multiple users interact and program the environment from inside itself. It has its own object replication protocol for sharing environments efficiently and scalably over the internet. It's difficult to describe because there just isn't anything else remotely like it... which strongly suggests it's way ahead of its time. I did try to describe it though: http://itwales.com/999105.htm
Here's the talk Alan Kay gave on Croquet: http://www.lisarein.com/alankay/tour.html
Erlang - language designed to encourage as much concurrency as possible through messaging. Takes the traditional view of a single, consistent state of an application domain moving forward one step at a time and smashes it up completely...
Erlang takes the view that concurrency today is hard because we have the wrong constructs. Having the right constructs in your language and runtime should make it much easier to scale systems with concurrency.
This kind of technology is going to be increasingly needed as we move to a world of tens or hundreds of cores per server and enterprise Java proves too difficult to write at scale with threads...
Here's a video demoing Erlang: http://www.youtube.com/watch?v=uKfKtXYLG78
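A rough sketch of Erlang's model in Python, for readers who haven't seen it: an "actor" is a process that owns its state and communicates only through messages. Here a thread plus a mailbox queue stands in for an Erlang process (which is vastly lighter-weight); only the shape of the model is being illustrated.

```python
import threading, queue

def counter_actor(mailbox: queue.Queue, replies: queue.Queue):
    """An 'actor': owns its state, reachable only via messages."""
    count = 0
    while True:
        msg = mailbox.get()
        if msg == "incr":
            count += 1        # state is private; no locks needed
        elif msg == "get":
            replies.put(count)
        elif msg == "stop":
            break

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_actor, args=(mailbox, replies))
t.start()

for _ in range(3):
    mailbox.put("incr")
mailbox.put("get")
result = replies.get()
print(result)  # 3

mailbox.put("stop")
t.join()
```

Because no other code can touch `count` directly, there is nothing to lock; concurrency problems become message-ordering problems, which is the construct Erlang bets on.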
Historical Debugging
I would say historical debuggers are ahead of their time right now. Visual Studio 2010 Ultimate has one, and there are a couple open source ones which I have read about. I think 10 years from now, back-stepping in a debugger will be common practice.
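To make the idea concrete, here is a toy "recording" debugger in Python built on the standard `sys.settrace` hook: it snapshots the local variables at every executed line of one function, so past states can be inspected after the fact. Real historical debuggers are far more sophisticated; this only shows the principle.

```python
import sys

history = []  # (line number, snapshot of locals) -- the "recording"

def recorder(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "buggy":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return recorder

def buggy():
    total = 0
    for i in range(3):
        total += i * i
    return total

sys.settrace(recorder)
buggy()
sys.settrace(None)

# "Step backwards": inspect state as it was just before the return.
print("locals just before the return:", history[-1][1])
```

Instead of re-running the program to reach an earlier point, you scroll back through `history` -- which is essentially what back-stepping debuggers automate, with clever tricks to keep the recording affordable.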
Lisp
The Lisp programming language was originally created in 1958. The original design featured a powerful, yet simple syntax based on the S-expression which is a notation for an operation on a list, such as (op A B C ...). From this simple syntax, many powerful features were able to be added to the language.
One important aspect of Lisp's notation is that Lisp source code is structured as a list. This, effectively, allows Lisp programs to easily process Lisp code. Functions that accept Lisp code as input and return code as output are known as macros. Today, this capability is often used as a way to develop domain specific languages.
Beyond this one capability, one can typically find that "advanced" features in modern programming languages often have pre-existing equivalents in Lisp and/or the popular Lisp interpreters and compilers.
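The code-is-data idea can be sketched even outside Lisp. Below, S-expressions are represented as nested Python lists, with a small evaluator and a "macro" that is just an ordinary function from code to code (the rewrite rule is made up for illustration):

```python
import operator

# Binary operators only, for brevity.
OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def evaluate(expr):
    if not isinstance(expr, list):
        return expr  # an atom evaluates to itself
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# (+ 1 (* 2 3))  ==>  7
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))  # 7

# A "macro": code in, code out -- rewrite (* x 2) into (+ x x).
def double_to_add(expr):
    if isinstance(expr, list):
        expr = [double_to_add(a) for a in expr]
        if len(expr) == 3 and expr[0] == "*" and expr[2] == 2:
            return ["+", expr[1], expr[1]]
    return expr

print(double_to_add(["*", 5, 2]))  # ['+', 5, 5]
```

In Lisp this is seamless because source code already *is* the list structure; here we had to build the lists by hand, which is exactly the gap macros close.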
Continuous Integration.
Some of us have been cajoling others into using it for years (decades?), and yet I'm still introducing people to the concept today.
Thin clients. In the 1990s the network just wasn't fast enough. Now, with fast internet access, cloud computing and portable hardware, they have made an impressive comeback.
The Semantic Web. It's been almost a decade now, and yet little has been done to actually embrace that technology. Only now are social networks starting to use it for discovery protocols, but that's still a very small subset of the Semantic Web.
I'd definitely vote for TeX, and even more for METAFONT. The "hackish" nature of the tools, their mathematical sophistication and, in METAFONT's case, the inability to produce output that can be consumed directly by contemporary programs hampered their adoption.
I would suggest that Larry Page's PageRank algorithm (and pretty much everything in its implementation) would be up there. Now it seems logical and intuitive, but way back in 1996/98 it would have been very insightful and miles ahead of what everyone else was thinking about.
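For readers who haven't seen it, the core of PageRank fits in a few lines: each page repeatedly shares its rank among the pages it links to, damped by a factor (0.85 is the commonly cited value). Below is a minimal power-iteration sketch over a made-up three-page web; it ignores dangling pages and other real-world details.

```python
# links[page] = pages it links to (hypothetical three-page web)
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)   # split rank among outlinks
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # c -- it collects links from both a and b
```

The insight that made it work at web scale was treating a link as a vote whose weight depends on the voter's own rank -- a recursive definition the iteration above converges to.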
I would say "Software Transactional Memory" or even better "Hardware Transactional Memory"
We currently say that disk is the new tape, and in the same vein you can now say that RAM is the new disk.
It can be hundreds of times slower to get data from RAM than from a register or the core caches, so caching is needed to keep the CPUs clocking.
Transaction-based memory operations could help CPUs and compilers optimize pipelining/multithreading to keep data consistent, so CPUs aren't left waiting for data from RAM or the L1 cache, etc...
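A toy sketch of the STM idea in Python, to show the programming model: reads are optimistic, and a commit succeeds only if nothing was written in between, otherwise the transaction retries. (Real STMs validate whole read/write sets across many variables; this tracks a single variable with a version counter.)

```python
import threading

class TVar:
    """A transactional variable: optimistic reads, validated commits."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

def atomically(tvar, update):
    """Run `update` on a snapshot; commit only if nobody wrote meanwhile."""
    while True:
        with tvar._lock:
            snapshot, seen = tvar.value, tvar.version
        new_value = update(snapshot)       # compute outside the lock
        with tvar._lock:
            if tvar.version == seen:       # validate: no concurrent writer
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # conflict detected: retry with a fresh snapshot

counter = TVar(0)
threads = [threading.Thread(
               target=lambda: [atomically(counter, lambda v: v + 1)
                               for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 4000 -- no lost updates despite concurrent commits
```

The appeal is composability: the caller writes plain `v + 1` logic with no lock ordering to reason about, and the runtime resolves conflicts by retrying -- hardware transactional memory moves that validation into the cache-coherence machinery.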
Declarative UIs
Maybe it's just my bias as a tester, but I like declarative UIs. Everybody knows HTML+CSS, but I like seeing WPF and XUL too. User interfaces defined in a declarative language instead of procedurally generated are just nicer to work with when testing. I look forward to seeing more desktop UIs written in a declarative language.
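The testing advantage is easy to show: when the UI is data, a test can inspect it without driving real widgets. A minimal sketch (nested tuples as the spec, rendered here to an HTML string; the `login_form` example is made up):

```python
# A UI described as data: (tag, attributes, children) -- the spec itself
# can be walked, diffed and asserted on, unlike a sequence of widget calls.
def render(node):
    if isinstance(node, str):
        return node
    tag, attrs, children = node
    attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return f"<{tag}{attr_str}>{inner}</{tag}>"

login_form = ("form", {"id": "login"}, [
    ("input", {"name": "user"}, []),
    ("button", {}, ["Sign in"]),
])

print(render(login_form))
```

A test can assert "there is a button labelled Sign in" against the data structure directly, which is exactly what declarative UIs like XUL and WPF's XAML make possible for desktop applications.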