views: 970
answers: 21

What contemporary software innovations, patterns, methodologies or practices are eligible to be considered ahead of their time?

+6  A: 

Source-to-source program transformations.

These have been technically known since the early '70s (starting with the Irvine Program Transformation catalog) and simply have not made it into wide use, in spite of the extremely useful ability to carry out automated changes to code.

So, contemporary since the 70s, and ahead of their time ever since.
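
A minimal sketch of the idea in Java, assuming a toy hand-rolled expression AST rather than any real transformation system: a rewrite rule such as "e + 0 -> e" is applied mechanically over the syntax tree, which is the same shape of automated change these tools make at full language scale.

    // Toy source-to-source transformation: apply the rewrite rule "e + 0 -> e"
    // over a tiny hand-rolled expression AST. Real transformation systems work
    // on complete language grammars, but the shape of the idea is the same.
    public class RewriteDemo {
        interface Expr {}
        record Num(int value) implements Expr {}
        record Var(String name) implements Expr {}
        record Add(Expr left, Expr right) implements Expr {}

        // Recursively rewrite the tree, simplifying "e + 0" and "0 + e" to "e".
        static Expr simplify(Expr e) {
            if (e instanceof Add a) {
                Expr left = simplify(a.left());
                Expr right = simplify(a.right());
                if (right instanceof Num rn && rn.value() == 0) return left;
                if (left instanceof Num ln && ln.value() == 0) return right;
                return new Add(left, right);
            }
            return e;
        }

        static String show(Expr e) {
            if (e instanceof Num n) return Integer.toString(n.value());
            if (e instanceof Var v) return v.name();
            Add a = (Add) e;
            return "(" + show(a.left()) + " + " + show(a.right()) + ")";
        }

        public static void main(String[] args) {
            Expr program = new Add(new Add(new Var("x"), new Num(0)), new Num(0));
            System.out.println(show(program) + "  =>  " + show(simplify(program)));
            // prints: ((x + 0) + 0)  =>  x
        }
    }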

Ira Baxter
BTW, how many of us would consider technology known in the '70s "contemporary"? (FTM, how many here can even _remember_ the '70s?)
sbi
Understood. Transformations do sound almost too good to be true - I'll dig a little deeper. I'm going to delete my comments, since they're just noise. I hope you get the answer you're looking for.
mdma
Some of us were building software in the 70s and still remember it.
Ira Baxter
@sbi How many of us were born?
PeteT
+1  A: 

Something related to sixth sense
http://www.youtube.com/watch?v=mzKmGTVmqJs

zapping
+2  A: 

Smalltalk. Both Java and C# are gradually moving towards becoming a second-rate Smalltalk, 30 years later. Ruby is a slow Smalltalk with an ugly syntax :)

Stephan Eggermont
Smalltalk came out in 1971. I'm not arguing that it wasn't ahead of its time, but I don't think it counts as contemporary anymore. Also, if you think Ruby has ugly syntax, you should check out Groovy sometime.
CaptainAwesomePants
Well, the standard is from '80, and the '71 version was a lot different. And having the best web framework (Seaside) on the market, it is very much a contemporary language.
Stephan Eggermont
... an ugly Smalltalk, without the self-bootstrapping and with quite a few limitations the original didn't have... ;-)
cartoonfox
I don't think Java and C# are moving towards Smalltalk. I'd say they're diverging from it. They're more interested in having super-elaborate type systems than behaving like a dynamic system that's not fragile enough to need to worry about that kind of "consequence management".
cartoonfox
A: 

By far, the greatest innovation now is the database. Clipper, which was one of the first object-oriented programming languages that dealt specifically with databases, was way ahead of its time. Back then, no one saw the powerful potential of a database. Now, almost every program out there uses some sort of database, especially since memory has become cheaper and drive storage has become almost infinite compared to the '80s, when Clipper came out. Even file systems are being created as databases.

icemanind
+2  A: 

Visual programming (programming without code)

For some reason, most programmers seem violently opposed to this. I don't know why, but I have no doubt visual programming is the way of the future.

Of course, it relies entirely on the tools. We are a long way from the ideal tools, but there are currently a few good strides:

  • SQL Server Integration Services is a completely visual designer/code-generator which can accomplish some pretty complex tasks without a line of manual code entry. I've been using it a while now; many of the things I used to spend an hour writing a quick hacked-together script for I now do in a few minutes using SSIS.
    Making a script multi-threaded is literally as easy as making it procedural.
  • Multimedia Fusion 2 is a great completely-visual programming tool, but they are targeting the wrong audience - game designers rather than serious business applications. If Visual Studio had half the visual features MF2 has, the amount of code we'd have to write could be cut down to 10% of what it is now.
  • Code Bubbles is a visual IDE. There's a big difference between "organizing code visually" and "organizing program flow visually," but it's an important first step.
  • Microsoft Robotics Visual Programming Language does a decent job of visualizing programming, but is currently way too niche to be useful to general software developers, and feels more like displaying code with pictures rather than a whole new paradigm of coding (like SSIS and MF2).

There is a more complete list here.

BlueRaja - Danny Pflughoeft
How 'bout Ladder Logic or Lego Mindstorms while we're discussing controls-related apps? Both are way more popular than Microsoft Robotics.
Chris O
@Chris: Because Ladder Logic is not contemporary, and not particularly good at representing complex logic statements. It is more useful for allowing electrical engineers (with no programming experience) to program for systems which have simple relationships between inputs and outputs. I have no experience with the Lego Mindstorms software.
BlueRaja - Danny Pflughoeft
@BlueRaja: I don't know what you mean by "contemporary". Ladder logic is in heavy use in factories all over the world. It is quite good at representing complex logic, by representing complex boolean equations as data flow. And extremely complex controllers are built using it, all the time. *You* might not be used to it.
Ira Baxter
Ladder Logic isn't programming without code. And in fact, most "visual languages" are just code with the dataflow made explicit. "Programming" without code means (to me) I write down an abstract goal, and the computer figures out the code. I don't know of any visual languages that do that.
Ira Baxter
@Ira: I meant that it isn't a contemporary innovation, which is what this question is asking for.
BlueRaja - Danny Pflughoeft
@BlueRaja: OK, agreed, ladder logic isn't a contemporary *innovation*.
Ira Baxter
@Ira Baxter: What you are looking for is a declarative programming system or an automated planning system. Both have been around for a while; they need not be visual, but they often are in things like logistics and mission planning domains.
Ukko
+1 for mentioning Code Bubbles. A completely unique approach to writing code -- I'd really like to try it out.
Pandincus
@Ukko: Much better example than the ones cited in this answer.
Ira Baxter
Hasn't programming without code been idealized since the '60s?
Paul Nathan
@Paul: Yes, but it's only now beginning to be realized (through contemporary software innovations) :)
BlueRaja - Danny Pflughoeft
@IraBaxter What about executable UML? One of professors at my university made a compiler, but I don't know how mature it is.
AndrejaKo
@AndrejaKo: Have you *looked* at executable UML? It's just a dataflow version of Java. You are still coding procedurally.
Ira Baxter
+4  A: 

Continuing along the 70's theme ;-), I vote for concurrency. A lot of concurrency concepts came from research in the 70's.

Chris O
+9  A: 

This might seem weird, but I'm going to say Threads/Multithreading.

Let me explain.

Threads have been in operating systems and system libraries for years, but before the relatively recent advent of multi-core computers for the masses, actual concurrently executing threads were a thing of kernel developers, driver writers, academia or supercomputers. Therefore, the ability to create multiple threads of execution was ahead of the time of actually having concurrent threads of execution.

In my opinion, a lot of existing multi-threaded code wasn't necessarily written/shipped in a thread-safe manner, because no two threads could actually ever be executing simultaneously. The kernel scheduler for multi-processor-aware OSes would often choose to keep all threads of a process on one particular processor too, so multi-threading issues whereby data you read a moment ago has changed underneath you may only just be showing up. Rules about 'only one UI thread', enforced by e.g. .NET Forms, may go some way to limit any damage concurrently-executing background threads may do, but I wonder if there are some multi-core bombs yet to go off.
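
A minimal Java sketch of the kind of latent bug in question: two threads incrementing a shared counter with no synchronization, which may well have appeared to work when the threads were time-sliced on one core, but reliably loses updates once they really run in parallel. (The fix, an AtomicInteger or a synchronized block, is easy once the race is actually visible.)

    // Two threads hammer an unsynchronized counter. On a multi-core machine the
    // final count is almost always less than 2,000,000, because "count++" is a
    // read-modify-write that can interleave between concurrently running cores.
    public class LostUpdateDemo {
        static int count = 0;   // shared, unsynchronized state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    count++;    // not atomic: load, add, store
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("expected 2000000, got " + count);
        }
    }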

Many of us have seen a fair amount of 'thread abuse', where developers fire off a thread for each discrete task that needs performing, even if the majority of the time that task is 'wait for X', or where an asynchronous/deferred procedure call is actually achieved with a low-priority thread.

Thread Pools were created to help with this, but they were opt-in. The .NET Framework set a good example by pooling the threads exposed via its threading APIs, and Mac OS X Snow Leopard was advertised as having a vastly reduced number of threads compared to the previous version, and thus increased performance due to better use of system resources.
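
A small Java sketch of the pooled alternative to thread-per-task: a fixed-size ExecutorService, sized roughly to the core count, whose handful of worker threads are reused across many short tasks.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Instead of spawning one thread per task, submit tasks to a small fixed
    // pool; the pool queues the work and reuses its worker threads.
    public class PoolDemo {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            for (int i = 0; i < 100; i++) {
                final int task = i;
                pool.submit(() -> System.out.println(
                        "task " + task + " ran on " + Thread.currentThread().getName()));
            }
            pool.shutdown();                            // stop accepting new work
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued tasks
        }
    }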

People are now getting excited about libraries and frameworks that leverage multi-core CPUs to do some tasks faster by executing them concurrently on multiple cores, e.g. Parallel LINQ. Of course, there's some syntax sugar to make it all much easier, but multithreading was there years ago.
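
Parallel LINQ is a .NET feature; purely as an illustration of the same syntax-sugar point, the nearest everyday Java equivalent is a parallel stream, where one extra call fans the work out across cores.

    import java.util.stream.LongStream;

    // The same "one small change turns a sequential pipeline into a multi-core
    // one" idea that PLINQ offers in .NET: .parallel() splits the range across
    // worker threads from the common fork/join pool.
    public class ParallelSumDemo {
        public static void main(String[] args) {
            long sumOfSquares = LongStream.rangeClosed(1, 1_000_000)
                                          .parallel()          // fan out across cores
                                          .map(n -> n * n)
                                          .sum();
            System.out.println(sumOfSquares);
        }
    }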

JBRWilkinson
Threads aren't contemporary, and they certainly aren't ahead of their time. All of the concurrency problems we deal with today came about in the '70s. In point of fact, I'd say that given that we still struggle with the issues brought up more than 30 years ago, threads are **behind the times**. And I say this as someone who spends a good deal of time doing concurrency.
Jason
+7  A: 

Integrated Development Environment

Today, IDEs like Eclipse and Visual C++ provide terrific functionality like refactoring, code completion, automatic syntax correction, incremental compilation, code swapping in debug mode...

10 years ago these IDEs were almost unusable, because they were too slow for a decent user experience. We had to wait for Moore's law to apply, and once computers got powerful enough, the IDE became valuable and wiped out the usual text editor. Who is using Emacs to develop Java applications today?

I think the IDE concept is not done yet. There is plenty of room for improvement, for example in modelling and automatic generation of code (or unit tests). I'm excited to see what an IDE will be able to do 10 years from now...

kabado
Auto code generation is almost always a precursor to a full-fledged language feature (or a failed project, or both). I suspect the bigger deal in the near term will be integrating the product lifecycle into the IDE - directly tracking how code changes relate to features and bugs, build statuses, code reviews, etc.
Jason
I've used all of the major IDEs. I have yet to find one that does something that Emacs (which is now 30+ years old) can't do, except perhaps visual GUI design. OTOH, they all lack the ability to do many things that Emacs can do.
Paul Legato
What @Paul Legato said, but also for vim. ;-) /me writes non-trivial Java and C# code in vim.
Stobor
IDE vs. Emacs: maybe it's not clear in my answer, but it's rather a question of usability than capabilities! I haven't launched Emacs in a while, but I doubt there is the equivalent of Eclipse's "perspectives", dockable windows, Mylyn, specialized views, form editors, wizards, etc.
kabado
@kabado - you'd be surprised. Emacs is actually a generic Lisp runtime that happens to come with some default code for editing text files, and there are a _lot_ of addon packages for it. Most/all of what you mentioned already exists for Emacs, plus a whole lot more. Check out http://www.emacswiki.org/ for starters. Besides what you mentioned, Emacs also has 50 million other programming usability/assistance modes, a web browser, Tetris, a text adventure game, psychoanalyst, newsgroup reader, email client, shell, day planner/calendar, calculators, IRC client, chess, etc. etc....
Paul Legato
+5  A: 

Functional Programming. The concepts may have been around for a while, but they are ahead of their time even today. We would have a hard time keeping up with Moore's Law if we didn't have ways of taking advantage of multi-core CPUs.
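
As a concrete illustration (a Java sketch rather than a functional language, but the property that matters, the absence of side effects, carries over): a pure function applied over a range can be evaluated in parallel without any locking, because no call touches shared mutable state.

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    // A pure function's result depends only on its argument and it mutates
    // nothing, so any number of calls may safely run at the same time.
    public class PureParallelDemo {
        static long square(long n) {
            return n * n;   // stand-in for an expensive, side-effect-free computation
        }

        public static void main(String[] args) {
            List<Long> results = IntStream.rangeClosed(1, 1_000)
                    .parallel()                      // safe: square() shares nothing
                    .mapToObj(n -> square(n))
                    .collect(Collectors.toList());
            System.out.println(results.size() + " results, last = "
                    + results.get(results.size() - 1));
        }
    }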

Kelly French
Could you elaborate on the relationship between functional programming and multi-core?
kabado
@kabado: Since functional programming disallows all side effects of functions, the only interaction of a function with its environment is the variables passed in and the result passed out. You can call as many functions in parallel as you like and can be sure none will trample over the others' data.
sbi
@sbi: OK. So you don't mean LISP then :-). I'm a great fan of ML and I enjoy its static binding. Although functional programming syntax (more than other major languages like Java) suggests one flow of control, I agree that function call parameters can be evaluated in parallel when no side effect is expected.
kabado
+1  A: 

I would say that parallel computing is contemporary as well as ahead of its time. Parallel programming (multi-processor based) is due to see its day, with multi-processor machines becoming more and more common. These concepts existed in the past as well (in supercomputers) and are now starting to appear in desktops. Microsoft has now introduced Parallel Extensions in .NET.

Sudesh Sawant
+8  A: 

Maglev - imagine your Ruby application running across multiple machines in a cluster, where you don't have to write SQL - you save, load and query transactionally, just with your objects. (Repackaged Smalltalk technology known for at least the last decade as "GemStone"...)

Smalltalk - Commercially released in 1984 (I think). Even today, still ahead of Java and C#, although it took a few decades to become fast. It had the first refactoring engine, IDE and multiplatform portability (like Java). See http://stackoverflow.com/questions/1821266/what-is-so-special-about-smalltalk/1821686#1821686 for further details. People just assume it's a language when in fact it says a lot more about design, UI environments and operating systems than just being a language.

Open Croquet http://www.opencroquet.org - A Squeak Smalltalk-based 3D environment which lets multiple users interact with and program the environment from inside itself. It has its own object replication protocol for sharing environments efficiently and scalably over the internet. It's difficult to describe because there just isn't anything else remotely like it... which strongly suggests it's way ahead of its time. I did try to describe it though: http://itwales.com/999105.htm

Open Croquet - editing Smalltalk code using the "Alice" 3D avatar

Here's the talk Alan Kay gave on Croquet: http://www.lisarein.com/alankay/tour.html

cartoonfox
+6  A: 

Erlang - a language designed to encourage as much concurrency as possible through messaging. It takes the traditional view of a single, consistent state of an application domain moving forward one step at a time and smashes it up completely...

Erlang takes the view that concurrency today is hard because we have the wrong constructs. Having the right constructs in your language and runtime should make it much easier to scale systems with concurrency.
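
Very roughly, the construct is lightweight processes that share nothing and communicate only by leaving messages in each other's mailboxes. A loose Java analogy only (a thread and a blocking queue standing in for a process and its mailbox; real Erlang processes are far cheaper and add supervision, distribution and so on):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Rough analogy to Erlang's model: two "processes" (threads) with no shared
    // mutable state, communicating only through a mailbox (a blocking queue).
    public class MailboxDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        String msg = mailbox.take();        // receive
                        if (msg.equals("stop")) return;
                        System.out.println("got: " + msg);
                    }
                } catch (InterruptedException ignored) { }
            });
            consumer.start();

            mailbox.put("hello");                           // send
            mailbox.put("world");
            mailbox.put("stop");
            consumer.join();
        }
    }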

This kind of technology is going to be increasingly needed when we move to a world of tens or hundreds of cores per server and enterprise Java just proves too difficult to write at scale with threads...

Here's a video demoing Erlang: http://www.youtube.com/watch?v=uKfKtXYLG78

cartoonfox
In order to manage shared state, you have to have a concrete model of how your data interacts. My impression has been that Erlang's main benefit is that it has a forcing function: it only shares what you *make it* share. At the end of the day it's the exact same sentiment (if not the exact implementation details) of microkernels, and the Law of Demeter. I'm waiting for someone to invent the Middle Way between shared everything and shared nothing.
Jason
This might be of some interest :P : http://www.youtube.com/watch?v=uKfKtXYLG78
Akanksh
You obviously need lots and lots of phones to demo Erlang(!)
cartoonfox
+2  A: 

Historical Debugging
I would say historical debuggers are ahead of their time right now. Visual Studio 2010 Ultimate has one, and there are a couple of open-source ones which I have read about. I think 10 years from now, back-stepping in a debugger will be common practice.

tster
Those have been around since the late 70s. Debugging in general is about 25-35 years behind the academic research - only the most recent generations of debuggers are incorporating the concepts which flowered in the 70s and 80s. Most of what is available in VS 2003 seems to be circa the late 50s. :-)
Paul Nathan
+8  A: 

Lisp

The Lisp programming language was originally created in 1958. The original design featured a powerful yet simple syntax based on the S-expression, which is a notation for an operation on a list, such as (op A B C ...). From this simple syntax, many powerful features could be added to the language.

One important aspect of Lisp's notation is that Lisp source code is structured as a list. This, effectively, allows Lisp programs to easily process Lisp code. Functions that accept Lisp code as input and return code as output are known as macros. Today, this capability is often used as a way to develop domain specific languages.

Beyond this one capability, one can typically find that "advanced" features in modern programming languages often have pre-existing equivalents in Lisp and/or the popular Lisp interpreters and compilers.

Doug
Macros and the (lack of) special syntax make Lisp a logical superset of other languages. They allow the language to be extended in very fundamental ways without rewriting the language itself. For example, when object orientation became popular, it was implemented as just another addon library for Lisp, built on top of existing implementations, whereas adding object orientation to C required a fundamental redesign of the language with new compilers or preprocessors to support special new syntactic forms. See also http://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule
Paul Legato
Lisp is a brilliant innovation, but is it really **contemporary**? It's one of the oldest programming languages around today.
cartoonfox
@cartoonfox The question seems like one that is best answered through hindsight. It's hard to say what new/recent technologies are ahead of their time until we see how the future plays out. That said, I might argue that Lisp is contemporary because it is still actively used and expanded.
Doug
+6  A: 

Continuous Integration.

Some of us have been cajoling others into using it for years (decades?), and yet I'm still introducing people to the concept today.

Jason
I have been refusing to work on projects without CI for about 8 years now (often that has meant I had to set it up). It shocks me a little each time I have to give the sales pitch, or re-explain it to someone. Early detection of small issues is the best prevention against big issues.
Jason
+5  A: 

Thin clients. In the 1990s the network just wasn't fast enough. Now, with fast internet access, cloud computing and portable hardware, they have made an impressive comeback.

vartec
+2  A: 

The Semantic Web. It's almost a decade old now, and yet little has been done to actually embrace the technology. Only now are social networks starting to use it for discovery protocols, and that's still a very small subset of the Semantic Web.

vartec
+2  A: 

I'd definitely vote for TeX and even more for METAFONT. The "hackish" nature of the tools, their mathematical sophistication and, in METAFONT's case, the inability to produce output that can be consumed directly by contemporary programs hampered their adoption.

Noufal Ibrahim
+1  A: 

I would suggest that Larry Page's PageRank algorithm (and pretty much everything in its implementation) would be up there. Now it seems logical and intuitive, but way back in 1996/98 it would have been very insightful and miles ahead of what everyone else was thinking about.
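
A minimal power-iteration sketch of the idea, using a made-up four-page graph and the usual damping factor of 0.85 (nothing like the production implementation, of course): each page repeatedly redistributes its rank along its outgoing links until the ranks settle.

    // Tiny PageRank by power iteration: each page's rank is redistributed along
    // its outgoing links, plus a damping term, until the ranks stop changing.
    public class PageRankDemo {
        public static void main(String[] args) {
            int[][] links = { {1, 2}, {2}, {0}, {0, 2} };   // links[i] = pages that page i links to
            int n = links.length;
            double d = 0.85;                                // damping factor
            double[] rank = new double[n];
            java.util.Arrays.fill(rank, 1.0 / n);           // start uniform

            for (int iter = 0; iter < 50; iter++) {
                double[] next = new double[n];
                java.util.Arrays.fill(next, (1 - d) / n);
                for (int i = 0; i < n; i++) {
                    for (int target : links[i]) {
                        next[target] += d * rank[i] / links[i].length;
                    }
                }
                rank = next;
            }
            for (int i = 0; i < n; i++) {
                System.out.printf("page %d: %.3f%n", i, rank[i]);
            }
        }
    }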

Adrian Regan
A: 

I would say "Software Transactional Memory" or even better "Hardware Transactional Memory"

We currently say that disk is the new tape, and in the same vein you can now say that RAM is the new disk.

It can be hundreds of times slower to get data from RAM than from a register or the core caches. Caching is needed to keep the CPUs clocking.

Transaction-based memory operations could help CPUs and compilers optimize pipelines/multithreading to keep data consistent, with CPUs not waiting for data from RAM or the L1 cache, etc.
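
Java has no built-in STM, but the optimistic idea underneath it (read a snapshot, compute a new value, commit only if nothing changed in the meantime, otherwise retry) can be sketched with a compare-and-set loop on an atomic reference:

    import java.util.concurrent.atomic.AtomicReference;

    // Not real transactional memory, just the optimistic retry idea behind it:
    // commit the new state only if no other thread committed first.
    public class OptimisticUpdateDemo {
        record Account(long balance) {}

        static final AtomicReference<Account> account =
                new AtomicReference<>(new Account(100));

        static void deposit(long amount) {
            while (true) {
                Account snapshot = account.get();                 // read
                Account updated = new Account(snapshot.balance() + amount);
                if (account.compareAndSet(snapshot, updated)) {   // commit
                    return;
                }
                // conflict: another thread got there first, retry with fresh state
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) deposit(1); });
            Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) deposit(1); });
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(account.get().balance());          // prints 2100
        }
    }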

Peter Tillemans
A: 

Declarative UIs

Maybe it's just my bias as a tester, but I like declarative UIs. Everybody knows HTML+CSS, but I like seeing WPF and XUL too. User interfaces defined in a declarative language instead of procedurally generated are just nicer to work with when testing. I look forward to seeing more desktop UIs written in a declarative language.

JamesH
