Continuing along current trends, we can expect our algorithms to run many times faster on desktop computers in 10 years. Let's pick a number: 1024x faster.

Do you know of any algorithms that are within a factor of 1024 of running on an average desktop system, and that would dramatically change the kinds of software we can write there?

I have my thoughts, but I'm more interested in the community's ideas.

+17  A: 

I don't know for sure, but I am pretty sure salesmen wouldn't have trouble traveling anymore :)

Not if there were 1024 as many cities to reach... =(
Zach Scrivena
You mean not if there were ~10 more cities (math?)
@1alstew1: Ah yes... that would be correct for exponential complexity. I was invoking my poetic license =P
Zach Scrivena
But.. (as xkcd has taught us) since eBay, the traveling salesman is O(1)
Jens Roland
It's not really about how many more cities. We worked on minimum-path problems that had about 500,000 nodes (one node for each parcel of land), and that was just one part of the program.
It's meant as a joke, don't overthink it. :)
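For anyone curious why the joke works: brute-force TSP with a fixed start city enumerates up to (n-1)! tours, so a 1024x faster machine buys only an extra city or two at that scale. A minimal sketch (the city coordinates are made up for illustration):

```python
from itertools import permutations
from math import dist

def tsp_brute_force(cities):
    """Try every tour starting and ending at cities[0]; O((n-1)!) tours."""
    start, rest = cities[0], cities[1:]
    best = None
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

# Five cities: a unit square plus one outlier.
cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]
print(tsp_brute_force(cities))
# Since n! grows by a factor of n per city, 1024x more speed only
# pushes the feasible city count up by one or two around n = 1024.
```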
+39  A: 

Raytracing for games is certainly within three orders of magnitude of feasibility.

How is this beneficial? I'm asking because I don't know.
Jay Bazuzi
@Jay: much more realistic looking environments. Water and glass, as well as mirroring objects like metal will look much more realistic, especially being rendered in real-time.
To add to what dreamlax said, you would also have better lighting/shading in environments and faster particle systems to model smoke, fire, and explosions.
@Jay -
Raytracing per se is overrated. It does nice reflections and hard shadows. Everything else is still hard.
@MadKeithV, the original answer would have been stronger if it said something more like "raytracing and optics modeling." You can get very impressive images with a combination of multisampled raytracing + radiosity.
Bob Cross
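To make the comment thread concrete, here is a deliberately tiny ray tracer - one sphere, one directional light, ASCII output - showing why the technique is so compute-hungry: every pixel costs at least one ray-object intersection test, and realistic images need many rays and bounces per pixel. All scene values below are made up for illustration:

```python
import math

WIDTH = HEIGHT = 32
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (-0.577, 0.577, -0.577)  # unit vector from surface toward the light

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def hit_sphere(direction):
    """Nearest positive ray-sphere intersection; camera sits at the origin."""
    oc = tuple(-c for c in SPHERE_CENTER)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def render():
    shades = " .:-=+*#%@"
    rows = []
    for j in range(HEIGHT):
        row = ""
        for i in range(WIDTH):
            # Map the pixel onto a virtual screen at z = 1 and shoot a ray through it.
            x = (i + 0.5) / WIDTH * 2.0 - 1.0
            y = 1.0 - (j + 0.5) / HEIGHT * 2.0
            d = normalize((x, y, 1.0))
            t = hit_sphere(d)
            if t is None:
                row += " "  # ray missed: background
            else:
                p = tuple(t * di for di in d)
                n = normalize(tuple(pi - ci for pi, ci in zip(p, SPHERE_CENTER)))
                lambert = max(0.0, sum(ni * li for ni, li in zip(n, LIGHT_DIR)))
                row += shades[min(9, int(lambert * 10.0))]
        rows.append(row)
    return rows

for line in render():
    print(line)
```

Scaling this to reflections, refraction, and soft shadows multiplies the per-pixel ray count many times over, which is where the three orders of magnitude would go.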
+6  A: 

First, one quibble: 1024 orders of magnitude would be 10^1024x faster, not 1024x.

Clearly certain kinds of graphics processing (games!) would benefit. Video and audio processing software would also do well.

Factoring - and consequently cracking public/private keys - would be quite a bit faster, and typical key sizes would move up commensurately.

A lot of applications/systems won't be bound by CPU (and aren't now) - disk and network I/O will continue to be the throttle.

A factor of 1024 = 2^10, so public/private keys would get... ten bits longer.
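The arithmetic behind that comment, with one caveat: it only holds for attacks whose cost doubles per key bit (exhaustive search over symmetric keys). RSA-style keys are attacked with sub-exponential factoring algorithms, so they would need to grow by far more than 10 bits:

```python
import math

def extra_bits_to_cancel(speedup):
    """Each additional key bit doubles an exhaustive search,
    so a hardware speedup of S is cancelled by log2(S) extra bits."""
    return math.log2(speedup)

print(extra_bits_to_cancel(1024))  # → 10.0
```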
+51  A: 

We'll just eat up those advances with more layers of indirection, as we've done in the past.

Isn't the literal sense of what you're saying: "The speed of the computer doesn't affect the software it runs"? That just seems wrong to me. Maybe true, but still wrong.
Allain Lalonde
Those abstractions have value, right? So, what is that value?
Jay Bazuzi
It won't take as long to write the software.
too much php
Yesterday we had C's printf, which abstracted away hundreds of lines of assembly; today we have templates that abstract hundreds if not thousands of lines of C; tomorrow... write(book.[scifi, murder, mystery]).style(authors.gibson)?
Each layer attempts to hide the complexity beneath it. They rarely do it completely (you still need to know about pointers, for example), but the point is that no matter what, coding is complex; it's just a matter of how much of it the programmer has to handle.
Allain Lalonde
But my layer is so much better than the last one!!!
If you really feel this way, why don't you use an Apple ][e with the software of its era?
Jay Bazuzi
@Jay - It's hard to pinpoint the value of an abstraction ;)
+6  A: 

Software that was a whole lot less efficient, yet still faster.

(Or is that just me?)

So, we just wouldn't quibble over efficiency? Would that mean faster development?
Allain Lalonde
That's pretty much the direction that advances have gone in the past few years. Of course, compiler optimization will also get better, so perhaps the tipping point will finally be reached where it's easier AND more efficient.
forget your silly algorithms, brute force ftw!
+40  A: 

Visual Studio C++ Intellisense will finally work.

Or maybe no one will care... who will be writing C++ in 10 years?
Jay Bazuzi
You're thinking of hard drive speed, not processing power :)
Jay Bazuzi, who will be writing C++ in 10 years? Are you kidding?
Jay Bazuzi
Why are they putting so much effort into developing C++0x?
Jay: Probably John Carmack will be, and anyone else working in a similar area where performance is important and their competitors can get a significant advantage by writing lower-level code. Intellisense still won't work, though. Each release uses more CPU, but it doesn't actually get fixed.
haha, you nearly had me there. No, it will still be too slow.
Lasse V. Karlsen
If computers were 1024 times faster, we would lose the need to write in C++ "for extra performance."
There are applications that can use any amount of processing power. They will continue to be written in C++ or Quorf or whatever we're using by then.
David Thornley
+1 for Quorf. I anxiously await the day I can call myself a Quorf programmer.
What is Quorf? I tried to find it on Google. Is this actually a language or just some in-joke?
@Moose: Start learning it now. In 5 years, you'll find a job listing that requires "10+ years of Quorf experience".
Joey Adams
@Justice: But we wouldn't lose the need to maintain large amounts of code written in C++ :(
+2  A: 

Something that would use a BOINC-like project even more effectively - e.g., to help fight AIDS and cancer, or to solve other scientific problems.

I work on a scientific project that requires lots of computing power. It will certainly help. But breakthroughs happen because of thinking power, not computing power.
Jon Ericson
+17  A: 

Going by past experience: 1024x slower and larger versions of what we do now.

Ten years ago, my desktop machine did almost exactly what my current machine does. The big difference is that it now connects to way more computers (network applications) and more smaller devices (mostly iPod and cameras). Also, games have more polygons. I don't see why that will change anytime soon.

Jon Ericson
It is also prettier now. Will we ever reach a cap on the number of cycles spent on prettiness?
Hmm... if I go back another 10 years and compare the AmigaBASIC environment on my Amiga to something like Eclipse today, I would disagree with this statement...
+1: that's what I would have written.
Kaz Dragon
+1  A: 

We'll be solving NP-complete problems in a lifetime.

Hao Wooi Lim
NP-Complete is a bad word in these circles :) +1
Allain Lalonde
We can already solve them where N is small in reasonable amounts of time. This will only bump that N up a bit.
I don't think 1024x improvement is enough to do that....
Sam Schutte
If you add one city to a TSP with 1023 cities, it'll take roughly 1024 times longer.
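A quick illustration of why the commenters are skeptical: brute-force search for an NP-complete problem like subset sum inspects up to 2^n subsets, and since 1024 = 2^10, a 1024x faster machine only pushes the feasible n up by about 10. A sketch (the sample numbers are arbitrary):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: try all 2**n subsets, smallest first."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → (4, 5)
# 1024 * 2**n == 2**(n + 10): the speedup buys roughly ten more items.
```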

Entropy (the second law of thermodynamics) will take over. In simple terms, it states that an ordered state will go to a disordered state. It is a one-way street! The most cruel law there is.

+7  A: 

This guy has something to say about that in his book

Freely available here along with pretty much everything else the dude's written.

+6  A: 

Software will grow to consume its newfound resources. I, on the other hand, will still be trying to resist feature creep and write the smallest, lightest application I can.

I'm pretty sure we'll see a lot more poorly written software by amateurs with the latest point-and-click RAD tool, simply because computers will be so fast that the difference between efficient, well-written software and inefficient software will be undetectable. At the moment the difference is a matter of a second or two in response time for most things.

Adam Hawes
+23  A: 

The notion of Garbage Collection will be extended to non-memory resources, such as open files and database connections.

As programmers we'll use software that will analyze our code for bugs in ways we only do as humans today. Think what lint / compiler warnings / PreFAST do, but 1024x as much analysis.

As you type your code, you will get immediate feedback about not just compile errors, but also test results: you'll know immediately which unit tests you've broken.

Jay Bazuzi
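A hedged sketch of what the first point is gesturing at, using today's Python: `weakref.finalize` attaches a cleanup action that runs when an object becomes unreachable. The `TrackedFile` class is hypothetical, purely to illustrate the idea of "garbage collection for non-memory resources":

```python
import weakref

class TrackedFile:
    """Hypothetical wrapper: the runtime closes the underlying file
    when this object is garbage collected, even if the caller
    forgets to call close()."""

    def __init__(self, path, mode="r"):
        self._f = open(path, mode)
        # The callback references the file object, not self,
        # so it cannot keep the wrapper alive.
        self._finalizer = weakref.finalize(self, self._f.close)

    def write(self, data):
        return self._f.write(data)

    def close(self):
        self._finalizer()  # runs the cleanup at most once
```

The catch, which the comments hint at, is that GC timing is nondeterministic: a file handle or database connection may stay open far longer than intended, which is why deterministic scoping (`with` in Python, RAII in C++) remains the safer default today.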
I want this. I want this so badly.
Allain Lalonde
Why can't the Garbage Collection point be done right now? Doesn't seem like that'd be terribly intensive.
Allain Lalonde
So you're saying that the stupid paperclip is going to get a hell of a lot smarter? ..."It looks like you're writing a ransom note... need some help? You should curse more." -- Demetri Martin
Code analysis tools won't solve the halting problem no matter how many x1024s you apply to the computer.
I do NOT want real-time bug warnings as I type my code. When I'm ready to compile and see what compile errors I have, I'll compile and see what errors I have.
Brian Postow
@Brian: Apparently you are not using IntelliJ IDEA or a similar IDE, which shows the errors and warnings in an unintrusive way while editing.
Esko Luontola
No, Neo. I'm trying to tell you that when you're ready, you won't have to type.
+6  A: 

We'll be able to get that much closer to modeling every neuron in an entire human brain. See you soon, SkyNet!

My answer was going to be "still no closer to simulating the human brain". I'll look forward to seeing who is right!
Craig McQueen
+5  A: 

In writing real-time video automation software for the broadcast industry, the speed of the platform is a huge consideration. In fact we deliberately develop on under-spec'd machines so that our software will over-perform when deployed to end-consumers.

There is only so much that you can reasonably do in 1/25th of a second in the PAL world (or 1/30th for NTSC, or 1/50th for HD, etc). So this imposes a considerable limitation on implementing our creative ideas and moving the industry ahead.

The move from hardware- to software-based codecs has been the most significant advance for us in recent years, and I can see this as a major factor as platforms get faster. Real-time rendering offers almost limitless possibilities for broadcasters, especially in the HD domain. But it is quite a way off being a reality.

The question is directed at 'desktop computers', and you may think that I am referring to server technology, but it is much the same when it comes to development.

To answer the question: mashing large amounts of metadata with media streams in real time. That's the holy grail for us, and 1024x would certainly help. Just can't wait 10 years for it.

Read "The Singularity is Near" by Ray Kurzweil if you want a really in-depth answer to the question.

+1  A: 

Better OCR would mean that office software could become gigantic self-organizing searchable indexes of scanned documents. Imagine scanning something that looked like a spreadsheet and then having it behave like one.

UML diagrams that get generated by examining a digital photograph of a whiteboard.

How 'bout ... the fabled "WinFS" that does away with a hierarchy of directories in favor of a self-organizing cloud of all of your documents/entities.

Allain Lalonde
Toshiba has software that you can buy that can enable an MFP to scan a document and convert it back to the original format, spreadsheet, HTML page, word document, colour or black and white. It's called Re-Rite. It is incredibly accurate and makes so few mistakes that it is well worth the price.
What you're talking about has little to do with speed of CPUs. A lot more interesting stuff than this is already out there.
I believe that additional computational power would allow this idea to be made better and better.
I also think that these things could be done with current processing power. Which raises another interesting question. If processing power hit a brick wall today, how much better could the software get on current hardware? My guess: Much, much, much better.
+10  A: 

Actually, I was asked a few years back to help [large semiconductor company] answer this exact question - what are the ('consumer') uses for what they termed 'terabit/terahertz computing'. Sadly, I don't have my notes with me right now, but if I recall correctly, some of the conclusions were:

  • Visual pattern recognition
  • Real-time raytracing
  • Data mining
  • Practical real-time video encoding/decoding
  • Lightning-fast 3D Solitaire (for Windows 360UltimateAwesomenessSponsoredByTonyHawk)
Jens Roland

"Hello World" in 3D!

+9  A: 

There are some hard tasks in the area of machine learning and artificial intelligence that will benefit from faster computers, for example:

  • machine translation
  • speech recognition
  • image processing, object recognition

To illustrate: speech recognizers have a language model built in. For every word the acoustic model hypothesizes, the language model decides how likely it would actually be, given a number of previous words that have already been hypothesized. This search space over previous words becomes very big, so usually the language model will only look at the hypotheses for the previous two words - that is, it deals with trigrams - because otherwise the search would take too long. But if speed (and hopefully memory) won't be an issue, it can take much more information into account (use 6-grams, for example, or models that are better than simple Markov chains altogether) and make more accurate predictions about what was spoken into the microphone.

You could take longer word chains into account, although not that much longer (because of combinatorics), but it might not help. Already a large number of trigrams are not in a large database, and extending it to 6-grams would decrease the likelihood of a match.
David Thornley
You would in that case use deleted interpolation or something similar to smooth and combine n-grams of different lengths. Using 5-gram language models is actually becoming the standard in machine translation etc.
Google released counts for a 5-gram model, counted over the whole web:
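The interpolation idea mentioned in the comments can be sketched in a few lines. This is a toy model: the lambda weights are made up, and real systems tune them (e.g. by deleted interpolation) and add proper smoothing for unseen n-grams:

```python
from collections import Counter

class InterpolatedTrigramLM:
    """Toy trigram language model with linear interpolation."""

    def __init__(self, lambdas=(0.6, 0.3, 0.1)):
        self.l3, self.l2, self.l1 = lambdas  # trigram, bigram, unigram weights
        self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
        self.bi_ctx, self.tri_ctx = Counter(), Counter()
        self.total = 0

    def train(self, sentences):
        for words in sentences:
            toks = ["<s>", "<s>"] + words + ["</s>"]
            for i in range(2, len(toks)):
                self.uni[toks[i]] += 1
                self.bi[(toks[i - 1], toks[i])] += 1
                self.tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
                self.bi_ctx[toks[i - 1]] += 1
                self.tri_ctx[(toks[i - 2], toks[i - 1])] += 1
                self.total += 1

    def prob(self, word, prev2, prev1):
        p1 = self.uni[word] / self.total if self.total else 0.0
        p2 = self.bi[(prev1, word)] / self.bi_ctx[prev1] if self.bi_ctx[prev1] else 0.0
        ctx = self.tri_ctx[(prev2, prev1)]
        p3 = self.tri[(prev2, prev1, word)] / ctx if ctx else 0.0
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1

lm = InterpolatedTrigramLM()
lm.train([["the", "cat", "sat"], ["the", "cat", "ran"]])
print(lm.prob("cat", "<s>", "the"))  # ≈ 0.925
```

Moving from trigrams to 6-grams multiplies both the counts to store and the hypotheses to score, which is exactly where a 1024x budget would go.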
+4  A: 

The C++ compiler would compile my whole project in a split second!

No, the STL and your project will both be 1024 times larger, and you'll be back where you started.
Joey Adams

Encryption algorithms! Computers 1024 times faster would ease brute forcing and other exploitation methods for sure. Security specialists will surely be under more pressure than specialists in other software areas.

No, not really. 15 years ago we had much slower computers, and we were just as secure then as we are today. Better algorithms/faster cracking doesn't mean a thing when you can always beat up a guy with a $5 wrench to give you his password (
+31  A: 

Maybe the past can help tell us what the future will bring...

I did some back of the envelope math with numbers I found on the net (not sure how reliable the sources are, but I think it will support my argument)....

My first PC was a 486SX/33. On Wikipedia I found that a 486SX/33 performed 27 MIPS.

My current PC is an i7 940. The best source I could find for an i7 was for the 965: 76,383 MIPS.

76,383 / 27 ≈ 2,829, so well over 2x the figure stated in the question (1024).

On my 486 I: Used Windows, played games, tinkered with development (I sucked back then, but I was young), and used AOL (this was the early 90s).

On my i7 I: Use Windows, play games, tinker with development (I suck a little less now), and browse the web.

I predict that in 10 years I will: Use Windows, play games, tinker with development (but be pretty good this time), and browse the web (from my flying car, which will be invented by 2015 according to Back to the Future II)

Giovanni Galbo
The flying car has been invented; in fact, there are at least a dozen different models to choose from at the moment. They are, however, far from mass-produced, and even further from street legal.
Jens Roland
+1  A: 

That depends a lot on your definition of "faster".

We've reached somewhat of a limit on CPU clock speed, due to things such as the speed of light.

We will start to see more many-core processors executing many independent pipelines simultaneously. Only programs designed to benefit from this new way of thinking about architecture will be able to fully exploit the "faster" machines of the future.

Just my two cents..

Chris Ballance
Uh, "faster" in the sense that it takes less time for algorithms to complete. How else could you interpret this? Parallel or not? Not trying to be sarcastic, I just don't understand.
Allain Lalonde
What I mean is that clock speeds have maxed out for now. A sequential process can only run on a single core at a fixed rate. To compensate for this, we will have to design algorithms with that in mind. Functional programming languages will gain popularity for their native ability to harness this.
Chris Ballance
@Allain Lalonde, you should pick up one of Patterson's books on computer architecture. Anyway, to be honest, this question is not as interesting as the question about the possibilities of practical quantum computing.
sorry to almost copy you
+6  A: 

The SETI project will eventually find an extra-terrestrial intelligence.

Voted down? Wouldn't faster computers help the SETI project?
They have to be transmitting, we have to understand it, and it has to show up under Fourier spectrographic analysis. If these conditions aren't met, the number of packets we process is immaterial.
Maybe the SETI project will finally find intelligence on earth? ;-)
Maybe it'll make intelligence on Earth.
David Thornley
+2  A: 

I'm sorry to disagree, but this is infeasible any time soon.

However, we will probably have 1024x more cores, and we should really think of ways to write good threaded programs that don't context-switch too much and/or fight over data in memory.

The question doesn't actually say anything about how the speed increases will occur - a faster CPU, more parallelization, or some completely new approach - so I'm not quite sure what you're disagreeing with.
Allain Lalonde
This will probably happen in the next 20 years. That seemed like a long time to me in 1989, but here we are.
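A minimal example of the program structure this answer calls for: split CPU-bound work into independent chunks with no shared state, so throughput scales with core count. The chunk sizes and the prime-counting task are just for illustration:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound work on one independent chunk: primes in [lo, hi)."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        f = 2
        while f * f <= n:
            if n % f == 0:
                return False
            f += 1
        return True

    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Each chunk shares nothing with the others, so with 1024 cores
    # you would simply cut 1024 chunks instead of 4.
    chunks = [(i, i + 2500) for i in range(0, 10000, 2500)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 1229 primes below 10,000
```

Because the chunks never touch shared memory, there is nothing to context-switch over or fight about, which is exactly the property the answer says future programs will need.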
+8  A: 

Wirth's law:

Software is getting slower more rapidly than hardware becomes faster.

Probably we can also throw in some Murphy's laws... and the future does not look bright anymore ;)

It will be much easier to write poor software with acceptable performance, but writing something that can really exploit the speed increase, multiple cores, and other not-yet-known technologies will remain rocket science.

I believe there is a Moore's law about that as well (only less well known).
+8  A: 

More layers of DRM

Funny that no one thought of this :)
Allain Lalonde
+2  A: 

With current trends in programming, "optimize costs versus code quantity cost", we would have 1024 times slower code but 1024 times more functionality!

Daniel T. Magnusson
+1  A: 

Much slower, much higher level languages. Development would be very fast and the results would be very CPU intensive.

Look at the original Doom source code, designed to run fast on 386 DOS machines with 4 MB of RAM. If we still wrote software like they did back then, the computer "experience" might well be 1024 times faster, but instead we've traded raw speed for development speed.

Good deal? Maybe.

Um, yes, but look at the Doom graphics quality and update rate. They were amazing for the time, but to say that we haven't benefited from faster computers in this particular arena is a bit odd.
Dmitri Nesteruk

Wouldn't make any difference: my computer is not the bottleneck. It's waiting on I/O. Until/unless systems programmers start doing everything asynchronously, more speed isn't going to help me.
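Amdahl's law quantifies this answer: if only a fraction of wall-clock time is CPU-bound, the rest caps the benefit no matter how large the speedup. A quick sketch (the 10% CPU-bound figure is an assumption for illustration):

```python
def amdahl_speedup(cpu_speedup, cpu_fraction):
    """Overall speedup when only cpu_fraction of runtime gets faster."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

# A program that spends 90% of its time waiting on disk and network:
print(round(amdahl_speedup(1024, 0.10), 2))  # → 1.11
# Even an infinitely fast CPU tops out at 1 / 0.9 ≈ 1.11x overall.
```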

+1  A: 

Real-time video effects (this assumes that video resolution will not be 1024x larger, since we won't need that kind of resolution for everyday viewing)

Real-time ray tracing. I don't think we can do real-time global illumination even at 1024x, though...

Simpler designs: we could remove a lot of old optimizations as certain technologies get faster (hint hint, SSDs).

More parallelism, since we are reaching physical limitations on single-core processors.

+1 for simpler designs: there's an amazing amount of time investment in performance-tuning (EG, Tracker -

Vastly better compilers.

I've been trying out Code Contracts for .NET, and the main problem is that it takes forever to compile! Compile times went from seconds to minutes (on my old laptop). If computers were 1024 times faster, that issue would become irrelevant.

Also, instead of merely proving a method correct, a compiler could take a well-specified contract and generate a function to meet it. That would be an expensive operation by today's standards, but extremely useful. Sort of a compile-time Prolog.

+2  A: 

Java will finally be fast

Oh, hold on a little. Let's not get ahead of ourselves :-)
omg.. we are in 2009, not 1996 :))))
+3  A: 

Virtualization will become absolutely commonplace for various purposes:

  • To emulate old hardware/OS environments
  • To reduce deployment problems: apps could be delivered as VM instances rather than installation packages
  • For security purposes: have each browser tab/window run in its own VM
Michael Borgwardt
+5  A: 

Weather forecasting will include trillions more data points, but it will still rain on what was supposed to be a sunny day.

+4  A: 

Microsoft SharePoint will become usable soon after that time.

+2  A: 

Faster processing power is just that.

Give us more processing power and we'll write slower software :) An example: when I'm programming in Visual Studio and I type a dot ('.'), sometimes it takes up to 2 seconds for my computer to respond, because it's looking for an auto-complete item in a big map. 2 seconds on a current PC would mean 5 months on a 1978 computer. In the 2 seconds used for autocompletion, someone 40 years ago calculated an entire trajectory for an Apollo mission! :-D

Our bottleneck (taking a dual- or quad-core as a 'normal' desktop by now) is most of the time somewhere else. Our processing power gives us the opportunity to calculate lots of stuff, but most of the time we need a lot of input/output data - so we're limited by RAM and storage.

If I look at my (pretty average) desktop right now, for example, it's a Q6600 at 4x 2.4 GHz. Pretty damn fast for someone who started on a ZX Spectrum :) But the best way to improve my system's speed right now is/was getting faster I/O. I could buy a new CPU that would give me a 4 (cores) x 800 MHz speed bump, but I probably would hardly notice it with current software.

Installing a striped RAID HD combo, on the other hand, improved the (perceived) speed of my machine a lot. If I were downloading at 8 MB/sec, my machine would normally feel sluggish, since it would spend a lot of time on I/O to that single HD. Now the load is split between two drives, which improves responsiveness a lot.

To cut a long story short, processing power is just one aspect - I'd rather see fast SSDs introduced for the masses in the next year :)

Can't help but wonder if software systems would need to be torn down to be rebuilt in this way. And if we'll only see it used in specialized fields until a breakthrough OS gets developed.
Allain Lalonde
+1  A: 

Oh and :

"Are you Sarah Connor ?"

You made my day :D

Fast image indexing and searching; structure from motion and from static images in a fraction of the time it takes now; automatic essay-writing software that would gather and combine information, inferring conclusions; quantum computer simulators with a small number of qubits (for illustrative purposes).


Evaluating the possibility of building a space elevator to link up space and the Earth. Here is my new idea.

To build the space elevator, a backbone is needed. I would like to suggest hooking up the Earth and the Moon with a ~384,400 km carbon nanotube cable.

With the supercomputer you mentioned, I would like to invite scientists, physicists and mathematicians to simulate this crazy idea, to find a way to make it work.

I think this is a key step for humanity, and it is the very first step toward connecting us to space.

For more details, see

P.S. I think 1024 times faster is still nothing. The human brain is still too slow to use computers. Our computers are 99% idle, waiting for our commands.

Shivan Raptor
Great idea - and the carbon for the cable could be extracted from the atmosphere, so we would go dangerously low on atmospheric CO2, global cooling would ensue, and all dolphins and whales* would beach themselves to escape the freezing oceans. And we would need a 10^1024-times-more-powerful supercomputer to solve _that_ problem. (* = not that I don't like whale sashimi, but I prefer it to be hunted the traditional way; it is fresher that way... whale sashimi from a beached whale sounds less appetizing)
KristoferA -
Can the carbon in carbon dioxide be used as a raw material for carbon nanotubes? Actually, I know very little about the production of this tough material.
Shivan Raptor
I was just joking... :) I have no idea how carbon nanotubes are synthesized...
KristoferA -