The PC I use today is a million times faster than the one I started programming on in college. And yet it seems I am always waiting for something to get done...

Back then it would take 30 seconds to 5 minutes to assemble a program and be ready to run it. Today, on a machine a million times faster, I am still waiting 30 seconds to 5 minutes to build a program and run it...

Why is this? And why is it always this barely tolerable amount of time? Will the programs we create in the future require a year or more's worth of current PC cycles to compile?

Stuff I muse on... while waiting...

+14  A: 

They already are... I've had simple timing loops overflow their counters, more than once, across many generations of computers.

And they never will be, because a fast computer is always an opportunity to add things you didn't think were feasible previously.
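The overflow this answer describes can be sketched with a delay-loop calibration counter, the classic victim of faster hardware. All the constants below are illustrative assumptions (a 16-bit counter, roughly 10 CPU cycles per loop iteration, a 55 ms timer tick), not measurements of any particular runtime:

```python
# Sketch: a delay-loop calibration counter overflowing on faster CPUs.
# Assumed constants: 16-bit counter, ~10 cycles/iteration, 55 ms tick.

COUNTER_BITS = 16
COUNTER_MAX = 2**COUNTER_BITS - 1            # 65535

def iterations_per_tick(cpu_hz, cycles_per_iter=10, tick_s=0.055):
    """Delay-loop iterations that fit in one timer tick on a given CPU."""
    return int(cpu_hz * tick_s / cycles_per_iter)

slow = iterations_per_tick(4_770_000)        # a 4.77 MHz 8088-era machine
fast = iterations_per_tick(2_000_000_000)    # a 2 GHz machine

print(slow, slow <= COUNTER_MAX)             # fits in the 16-bit counter
print(fast, fast <= COUNTER_MAX)             # overflows it by a wide margin
print(fast & COUNTER_MAX)                    # the garbage value actually stored
```

Each generation the counter holds, until one day it doesn't; the fix is a wider counter, and a few hardware generations later the cycle repeats.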

+19  A: 

The faster computers get, the more users want. What they were told couldn't be done in a reasonable amount of time now can be.

John Boker
Also known as latent demand.
+5  A: 

I can highly recommend a book called Designing and Engineering Time that discusses how as engineers we can build software that better meets the user's needs in terms of time.

Specifically to your question, I would say that while computers have gotten faster, so have our expectations of what they will perform. Twenty years ago, compiling a 1000-file project would take a few hours, and that was the way things were, so we tolerated it. Now we're used to a 1000-file project compiling in, say, 20 seconds, so if it takes longer, it feels excruciatingly slow.

+2  A: 

Brute force is always a valid option. So until computing is either infinitely fast, or expressible along some axis that amounts to the same thing, computers can always be faster.

Although you will probably hit other practical limits well before then, like the meta-problem of communication.

+9  A: 

Back then, computers didn't have GUIs or the capabilities they have now. It's a well spoken of fact that as computers have gotten faster, the programming profession has gotten lazier.

Now we have high-level languages that abstract at the cost of performance; the immediate gain is faster, cheaper development time, relying on more memory and CPU processing power to carry us along.

"Its a well spoken of fact"...I don't think that's the phrase. I also don't agree with your statement. Abstractions allow us to create incredible capable systems that simply are not possible at the lower levels.
Michael Haren
@Michael I'm not arguing with you about the benefits of abstraction, BUT I will argue that programmers of 20 years ago spent a lot more time optimizing their code... nowadays there is either no point in optimizing, due to the level of abstraction, or worse, fewer people really know how to optimize.
I agree with you, but I don't think we're lazier as a result; it's just that we are less constrained by hardware, so we can dedicate more time to business logic, usability, and workflow, and less to technical challenges.
lubos hasko
No, I'm definitely lazier. I almost didn't type this comment because it was going to take too much effort.
+5  A: 

In some cases it's just bloat, but in a lot of cases, we're simply doing more with computers because they're faster. Have more storage? Increase the video resolution. Have more processing power? Start running bioinformatics algorithms on the entire human genome instead of smaller genomes or small pieces of the human genome. Have more memory? Stuff your program with features like unlimited undo. Have more screen resolution? Pack your IDE with all kinds of side menus and stuff.

+1  A: 

Computers can only get so fast; the speed of light is the limit. After that, the only way to get more speed is to make them larger. According to Raymond Kurzweil:

The physical bottom limit to how small computer transistors can be shrunk is reached. From this moment onwards, computers can only be made more powerful if they are made larger in size.

David Basarab
@Longhorn213: Which is why we're now seeing more CPUs cores being put on each die and not seeing clock speeds increase much anymore.
This comment might be misleading if you skim it; we're nowhere close to the "speed-of-light" limit now in any sense (although other factors are impeding further miniaturization and transistor density). Kurzweil's comment refers to a hypothetical point in time 40 years from now.
+1  A: 

As computers get faster, software gets better (i.e. advanced hi-res graphics and sound) and offers richer functionality, with demands for higher connectivity. Therefore there is a proportional relationship between the demands of software and the speed of computers. For example, the standard PC today is considered extremely quick if you compare it to the ones used for Windows 98 or Windows 2000. But back then, those operating systems didn't require as much processing speed or memory. If you look at Vista, which can take more than 1 GB of RAM just to load, then you can see that as processing speed increases, so do the demands of software.

But of course, computers are generally quicker these days. I used to have to wait up to 2 minutes for my computer to load; now, with Windows XP, my computer boots in under 20 seconds.

Sometimes you can go too far with the bloat, which is what Microsoft has finally realized with regard to Windows. The hardware requirements for Windows 7 are actually significantly less than those of Vista.
That's a step in the right direction.
+1  A: 

Besides all the factors already pointed out, parsing is one of the slowest operations you can program. It just doesn't branch-predict well, and today's highly pipelined processors don't like that very much.

Edit: Since the OP was talking about assembling/compiling, thought I'd point this out.

I beg to differ. Parsing doesn't need to go any more than 2 orders of magnitude slower than just scanning the characters. Nearly all the performance problems I see are due to massively over-complicated data structures, put in because they would be "more efficient".
Mike Dunlavey
By "all the factors already pointed out," I didn't mean to brush them off. A lot of it, as you pointed out, is bloat. Since the OP was talking about assembling/compiling, I figured I'd point out the interplay between parsing and modern processors.
I didn't mean to be a crank, really. It's just that these issues are hot buttons for me. I'm glad that "bloat" is coming to be recognized as a problem, but that doesn't seem to deter it. And while performance seems a simple subject, I'm amazed how it is misunderstood.
Mike Dunlavey
+13  A: 

Software expands to fill the available space (and time).

This sounds like a foolish answer - foolish but absolutely true.
Mike Dunlavey
+1 You said it brother!
Fake Code Monkey Rashid
+16  A: 

The fact of the matter is: computers have not gotten a million times faster.

Just because your CPU can execute a million times as many instructions per second as the CPUs of yesteryear doesn't mean that your computer's overall performance has increased by the same factor.

Memory speeds have not increased as much as CPU speeds. And hard drive speeds have not increased as much as memory speeds.

So anytime your computer needs data from RAM or the hard drive (especially when virtual memory gets paged to disk) you're wasting millions of cycles, while the CPU just sits there and waits.

Average hard drive seek time is around 8 milliseconds. During that interval, a 2.0 ghz CPU could have executed about 16 million instructions if the data had been available.
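That figure checks out with simple back-of-the-envelope arithmetic, under the simplifying assumption of one instruction per cycle:

```python
# Instructions a CPU could have executed during one average disk seek,
# assuming one instruction retired per clock cycle.

cpu_hz = 2_000_000_000             # 2.0 GHz
seek_s = 0.008                     # 8 ms average seek time

wasted_instructions = int(cpu_hz * seek_s)
print(f"{wasted_instructions:,}")  # 16,000,000
```

Real CPUs of that era could retire more than one instruction per cycle, so 16 million is, if anything, an underestimate of the lost work.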

Of course, the CPU can switch threads and work on something else while it waits to receive data for your task. But then, what if the new thread also needs data from RAM or disk?

Also: the more different threads simultaneously executing, the less likely your data is to actually be in the CPU cache, causing retrieval from RAM or (even worse) paged in from the hard drive.

Memory bandwidth on the front-side-bus has increased considerably over the past few decades. But just like RAM and disk speed increases, the FSB memory bandwidth increases have been modest.

So, yeah sure, the CPU itself is rip-roaring fast. But if it can't find any data to work on, and if the pipes all get clogged during data transfer, the computer's apparent speed degrades to a tiny fraction of the CPU's true capability.

The speed increase is still pretty good on things besides the CPU. I've gone from 110 bps and 300 bps dial-up and paper tape to however fast current storage and comm channels are. FWIW!
Mark Harrison
Happily, in two or three years once SSD tech finishes maturing, we can probably cross "hard drive speeds" off that "big bottleneck" list.
@mquander: Maybe in a few years we'll get MRAM in commercial quantities, and it will replace DRAM, SRAM, and Flash altogether. :) @benjismith: Things are not as bad as you describe, because of locality and caching in the memory hierarchy.
+1  A: 

Raw speed and amount of storage has increased by a huge amount. Access to that storage has increased at a much slower rate. Operations doing a lot of random disk I/O will not reflect the massive gains in CPU speed.

+2  A: 

There will come a time when computers will no longer be measured by their speed or storage capacity. There will simply be "computers".

It would be an awkward question to ask how fast a machine was or how much it could hold, because those kinds of constraints wouldn't be thought about any longer. Even a child's computer would be fast enough to do nearly anything humans could throw at it, and it would have enough storage capacity to hold all of human knowledge with room to spare. The limitations of software will be almost entirely those of our imagination and intelligence, not hardware.

+1  A: 

I've always wondered this myself. For the computers-have-gotten-faster-but-software-has-become-more-demanding theory to be true, there is a way to prove it: if a modern computer can load, say, a DOS program or perform a very simple task in no time at all, then the theory holds. Although things may not be that simple. You see, as we went from 8-bit to 16-bit and 32-bit, some kind of overhead seems to have been introduced. Yes, it allows the computer to address more memory, but it also somewhat limits how fast a "simple task" can be. There's also the fact that the bottleneck has now shifted to the hard disk and memory bandwidth, rather than the CPU. But even if hard disk and memory bandwidth were not the bottleneck, would your old program really run twice as fast on a 64-bit 2 GHz CPU compared to an 8-bit 1 GHz CPU? I seriously doubt it. There are far too many factors.

Hao Wooi Lim
+2  A: 

I'd like to think there isn't a "too fast".

But I do remember once, upgrading my computer (many moons ago), running a favourite application. I clicked on a list and it scrolled too fast to use. The basic problem was that the list's behaviour had in the past relied on a very slow processor; now the GUI designer needed to program for a fast one... Thank goodness we are well past that sort of thing today.

My more profound experience, though, was seeing my grandfather's Mac Plus. I had had it at university about 15 years prior, and thought for a laugh I would start it up. As I sat down I thought, OMG, this is going to take ages to boot.

I turned on the power and "bing" I'm loaded at the home window ...

WOW. Most computers today take a minute or so to boot.

Next I double-clicked on the "Word" icon, same thought: OMG, this will take an age...

"Bing" there I was ready to write.

So my lesson was: faster CPUs are meaningless if we bloat our software with unwanted noise and program lazily because we allow speed to become a secondary concern.

Stephen Baugh
+1  A: 

For many users, computers are already "fast enough".

Search for tales of folks using old computers as servers, for web browsing, word processing, etc. MS Office, for example, hit feature saturation for the vast majority of users a LONG time ago, so "old" software on "old" computers is more than adequate to fulfill the bulk of most normal folks' tasks.

The two current "big" CPU sinks (for most) are basically the web browser and Flash, or video games.

Before, web browsers' primary task was formatting and layout. Now they're more and more becoming JavaScript runtime engines that happen to render HTML, which can be really CPU-intensive.

However, most 2-3 year old machines, especially with modern software, run even heavy JS browser apps quite well.

Notice that we have not seen a large bump in actual CPU clock speeds recently. Rather, the focus now is on cores, power management, and heat. This is another sign that the computer is "fast enough".

Now, certainly there will always be tasks that the computer isn't fast enough for. 3D rendering, movie rendering, vast scientific models, etc.

Plus, our ever-larger appetite for creating and collecting more and more data has some impact on performance. But even that is increasingly specialized to business and industry rather than the consumer level. And at the consumer level it's more a storage issue than a processing issue (save for the actual movement of data, say for copying).

A DVD ripped to a hard drive is large, but, today, not particularly taxing for modern systems.

Today's primary complaints about speed are mostly about I/O bandwidth, and recently there was a blurb that, of the folks in the US who don't have high-speed broadband connections, a third don't even want them. They don't see the value proposition.

So, for some cases, the answer is "no", computers will never be too fast. But for many, even today, computers are "fast enough".

Will Hartung
Re: broadband Internet: As the Internet becomes more and more saturated with the traffic that most broadband users demand, it's probably just as well that not everyone wants broadband. Otherwise, the 'Net would have been completely saturated 5 years ago instead of starting to max out next year.
You read right, next year. I've heard that traffic is so high that the infrastructure is no longer able to keep ahead of it, and that starting next year, you're going to start seeing brown-outs (sluggish behaviour) in entire regions (e.g., the North American eastern seaboard) during peak times.

We're still waiting.

We are far, far from any level of computational power that could be regarded as "too fast", whatever that means. We are still in the first phase: computers are clearly useful but have much potential that is not yet realized. As you noted, even though computing power has grown exponentially, we are still waiting for them all too often. Any time there's a visible delay, you want performance to improve, and we have visible delays all the time!

We want computers to do more.

There are also lots of things we want computers to do that we don't ask them to do because they're not powerful enough yet. How fast do computers have to get before garbage collection is ubiquitous, when no one would say "I want to manage my own memory"? How much faster do they have to get before we ask a garbage collector to manage non-memory resources, such as file handles and database connections?

Both of these things have to happen before I would say computers are "fast enough".
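As a small illustration of why non-memory resources still need explicit management today (Python is used here only as an example; the principle is language-independent):

```python
# File handles: explicit release vs. leaving it to the collector.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Explicit management: the handle is released deterministically
# the moment the block exits.
with open(path, "w") as f:
    f.write("hello")
assert f.closed

# GC-managed: the handle closes "eventually", whenever the object dies.
# CPython's reference counting happens to close it promptly on `del`,
# but other runtimes give no such promptness guarantee.
g = open(path)
data = g.read()
del g

print(data)
```

Until a collector can promise promptness for scarce resources like handles and connections, we keep writing the explicit version.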

Wikipedia has an interesting article on the "Technological Singularity", which it defines as "a theoretical future point of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence." We'd have to get to that point to be able to consider computers "too fast", I think.

Jay Bazuzi
+2  A: 

I think computers are already 'too fast'. Most users have far more computing power on their desk than they will ever reasonably need. I mean, seriously, how much power can it take to write a letter or read your email?

Faster speeds are useful for complex simulations and areas like machine vision, but I don't think home users (at least) need all the power afforded to them.

The huge availability of CPU cycles, RAM, and disk storage has produced a massive surge in the complexity and bloat of software, which consumes as much of those resources as possible. The "don't worry, RAM will be cheaper and CPUs faster in 3 months" argument reigns supreme in a lot of application development work.

What would be more sensible is to create slower, more efficient computers and less bloated software that only performs the 10% subset of tasks that 90% of users actually bother to use. There is actually a push toward that with the very power-efficient "netbooks" running cut-down Linux operating systems on "slow" processors.

Adam Hawes
Playing any new AAA game on the PC requires upgrading to an even more absurdly powerful machine, so we definitely aren't too fast yet.
Robert Gould
And then the user is burning a thousand watts in some cases just to play a game... seems wasteful to me!
Adam Hawes
@Adam: you've made the point very well. I think nothing makes software fast like developing for a slow machine.
Mike Dunlavey
@Mike: Fast software isn't necessarily hard to write. Simply choosing the right data structures for your algorithms gets you most of the way. Fast computers have masked the inefficiency of linear searches (something beginners do a lot) and wasteful use of memory (copying instead of referencing data).
Adam Hawes
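The point about data structures can be made concrete by counting comparisons for the same membership test done two ways (a minimal sketch; the size is arbitrary):

```python
# Membership test: linear scan of a list vs. a hash-set lookup.
# The comparison counts, not wall-clock times, are the point.

N = 100_000
data_list = list(range(N))
data_set = set(data_list)

target = N - 1                    # worst case for the linear scan

comparisons = 0                   # O(n): one comparison per element visited
for x in data_list:
    comparisons += 1
    if x == target:
        break

print(comparisons)                # 100000 comparisons for the list
print(target in data_set)         # True, via one effectively O(1) lookup
```

On a fast machine both finish imperceptibly quickly at this size, which is exactly how the linear version survives code review until the dataset grows.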
+2  A: 

The day computers win the war against humanity and trap us in a matrix, computers will have been too fast.

Robert Gould
+1  A: 

Yes, computers are way more powerful than they were 20 years ago. But the problems we are tackling with them have grown enormously too.

The problem is that as datasets grow, the difference between a good and a bad algorithm becomes ever more marked.

The raw datasets we deal with have arguably kept up with Moore's Law, and on top of that there's all the extra overhead we have today from higher-level languages, bloated windowing systems, and so on.

Let's say that two decades ago, an O(n^2) algorithm ran 10 times slower than an O(n) algorithm. If the dataset has grown 100-fold since then (a conservative increase!), the bad algorithm will now be 1000 times slower.
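The arithmetic in that example can be checked directly; the constants below are chosen only to reproduce the stated 10x starting ratio:

```python
# An O(n^2) algorithm that starts out 10x slower than an O(n) one
# falls 1000x behind after the dataset grows 100-fold.

def cost_linear(n):
    return n                      # unit cost per element

def cost_quadratic(n):
    return n * n

n_then = 10                       # chosen so the initial ratio is 10x
ratio_then = cost_quadratic(n_then) / cost_linear(n_then)

n_now = n_then * 100              # dataset grows 100-fold
ratio_now = cost_quadratic(n_now) / cost_linear(n_now)

print(ratio_then)                 # 10.0
print(ratio_now)                  # 1000.0
```

The gap between the two algorithms is simply n itself, so every factor of k in data size multiplies the slowdown by the same factor of k.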

It's one of my pet peeves when developers invoke Moore's Law as some sort of hubristic get-out-of-jail-free card, even with Moore's Law under serious threat.

A bad algorithm on a small dataset is an inconvenience. A bad algorithm on a large dataset is a disaster.

Alabaster Codify
+7  A: 

Henry Petroski is responsible for one of my favourite quotes:

The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.

As an example of this, see this amusing comparison between an '86 Mac and a modern machine.

I don't see this changing any time soon.

That's an impressive comparison, I must say.
Arve Systad
He is exactly right, and I think it's high time we stop being "amused and amazed" by this and do something about it.
Mike Dunlavey
I'm going to take this opportunity to editorialize and state that I believe the person who wrote that article is an idiot.
Greg D
+1  A: 

Wirth's law states that software is getting slower faster than hardware is getting faster. In fairness, we can get a lot done more quickly if we worry less about optimization. I suppose a cost/benefit analysis is needed, like with most other things in programming.

Jason Baker
+4  A: 

Will the programs we create in the future require a year or more's worth of current PC cycles to compile?


When I first started programming on a 4 MHz 8088, the very idea of doing real-time speech recognition on a home computer was quite laughable.

Now, it only gets a chuckle.

In the future, programming a computer will consist of the following: you raise an eyebrow. The computer, in response, will realize that the budget numbers for personal-jetpack fuel algae look a little high, and it will begin searching for explanations in the global market and for new suppliers, all while muttering apologies for not having its act together. But since the computer interface is recreating a perfect Andromeda Ascendant avatar, waiting doesn't upset you.

Jeffrey L Whitledge
that gave me a chuckle
+1  A: 

Computers can't be too fast for users, but for development they are already way too fast.

The best way to make software fast is to develop and test it on slow computers. That way, if something takes too long, you feel it, and you do something about it.

My boss keeps trying to get me to accept a faster development machine, and I keep resisting, for that exact reason.

Mike Dunlavey
I like your attitude. But couldn't you have 2 machines? Build on the fast one, test on the slow one.
You're right in principle, but I have enough trouble managing (and traveling with) one machine, and my favorite method of finding performance problems is to just halt the program in the IDE while it's being slow.
Mike Dunlavey
... and who wouldn't want their compiles to be faster? But the way I code, it hasn't been a problem. I honestly don't know how to crank out KLOCs like some folks do, but I don't seem to have trouble getting my job done.
Mike Dunlavey
... it's a generation gap, I suppose.
Mike Dunlavey
+2  A: 

Computers will never be too fast, as Microsoft will think up ways to slow them down... like implementing a 3D interface to switch between applications, so you pay $200 for this Alt-Tab replacement that constantly nags you, and they call it Vista.


Have you tried playing "Bouncing Babies" on a PC made in the last 10 years? You just can't win!


Will computers become too fast?

No. Either we'll stop developing them or, more likely, we'll make them do more and more. And more.

There is too much of an industry in place to allow such a scenario to happen, so we'll just find new things to use them for.


Our ability to get used to a fast computer always outpaces the industry's ability to improve them.


IBM gives some very interesting facts about their Supercomputer (currently the world's fastest). http://www-03.ibm.com/press/us/en/pressrelease/24405.wss

One of them states: "A complex physics calculation that will take Roadrunner one week to complete would have taken the 1998 machine 20 years to finish."

Pretty amazing stuff. By many estimates, including Ray Kurzweil's, this is already approaching the power of the human brain. Once we are able to develop intelligent computers that can optimize themselves, we can only guess what kinds of advances in computing we will see. Looking forward to it...


I use Linux, so my computer has been too fast by orders of magnitude for some time. Even if I opened all the programs I could run simultaneously, I doubt it would slow things down; there's still RAM left over when I have 40 applications open at once.

So yeah, computers are already about two times too fast. Some of them are even too fast for Windows, which is saying something (quad cores with 8 GB of RAM will never be put to good use; no super-realistic game needs anywhere near that much).

So yeah, we've already gone overboard, but the good news is that, if the outrageous stuff is pretty cheap these days, the reasonable stuff is dirt cheap.