views: 875
answers: 17

This was a question in one of my CS textbooks. I am at a loss. I don't see why it necessarily would lead to parallel computing. Anyone wanna point me in the right direction?

A: 

Because orthogonal computing has failed. We should go quantum.

Jacek Ławrynowicz
Well, failed is a strong word :-)
RussellH
yeah, i know. it's not constructive :)
Jacek Ławrynowicz
+2  A: 

Increasing the speed of processors would make the operating temperature so high it would burn a hole in your desk. The makers of the chips are running up against certain limitations they can't get around... like the speed of light, for instance. Parallel computing will allow them to speed up computers without starting a fire.
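To put rough numbers on the speed-of-light point, here is a quick back-of-the-envelope sketch (the frequencies, including the hypothetical 100 GHz chip, are purely illustrative):

```python
# How far does light travel in one clock cycle at various frequencies?
C = 299_792_458  # speed of light in a vacuum, m/s

for freq_ghz in (1, 3, 10, 100):
    cycle_seconds = 1.0 / (freq_ghz * 1e9)   # duration of one clock tick
    distance_cm = C * cycle_seconds * 100.0  # distance light covers per tick
    print(f"{freq_ghz:>4} GHz -> {distance_cm:6.2f} cm per cycle")
```

At 3 GHz a signal can cover at most about 10 cm per cycle, and signals in real wires are slower still, so keeping an entire chip in sync gets hard very quickly.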

David Morton
How is the speed of light involved?
Mo Flanagan
You have to distribute the clock signal around the chip and keep it synchronized. It takes finite time to get from one side of the chip to the other. As clocks get faster, it makes a difference.
RussellH
In all honesty, I'm not positive. I'm not a physicist, but Stephen Hawking is. http://blog.wired.com/business/2007/09/idf-gordon-mo-1.html
David Morton
@Mo At high clock speeds, the speed of light becomes a limiting factor. Roughly speaking, at the speed of light 1 nanosecond is a foot (30cm).
Bevan
A: 

I honestly don't really know, but my guess would be that transistors will at some point get no smaller, requiring processing power to be spread out across parallel units.

Carter
Transistors are getting smaller at the same rate.
Mo Flanagan
Yeah... I guess I'm not sure what you are getting at though.
Carter
+2  A: 

Transistors and CPUs and whatnot are getting smaller and smaller and faster and faster. Alas, the heat and power costs of computing are going up, and those heat and power issues are as much of a concern as the physical size minimums. A 100 GHz chip would draw too much power and get too hot, but 100 1 GHz chips would have much less of an issue with this.
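A crude sketch of why, assuming the common rule of thumb that supply voltage has to scale roughly with frequency, so that dynamic power (which goes as V^2 * f) grows roughly with the cube of frequency; all numbers are made-up relative units, not real chip data:

```python
# Crude dynamic-power model: P ~ V^2 * f, with the rough assumption
# that voltage must scale in proportion to frequency (so P ~ f^3).
# All values are illustrative relative units, not real chip data.
def relative_power(freq_ghz: float) -> float:
    voltage = freq_ghz           # assumption: V grows in step with f
    return voltage**2 * freq_ghz

one_fast  = relative_power(100.0)       # one hypothetical 100 GHz chip
many_slow = 100 * relative_power(1.0)   # a hundred 1 GHz chips
print(f"one 100 GHz chip : {one_fast:>12,.0f}")   # 1,000,000
print(f"100 x 1 GHz chips: {many_slow:>12,.0f}")  # 100
```

Under that (very rough) model, the single fast chip burns ten thousand times the power of the hundred slow chips for the same total clock throughput.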

Brian
+4  A: 

That is an odd question. Moore's law doesn't necessitate anything; it is just an observation of the progression of computing power. It doesn't dictate that computing power must increase at a certain rate.

JohnFx
I see your point, but I think it can be safely assumed that the question really is: "If the trend that Moore's Law describes is to continue for a long time, if not indefinitely, why is parallel processing necessary?"
Carter
Under that interpretation, I agree with Andrew's answer.
JohnFx
+6  A: 

Moore's law describes the trend that the performance of chips effectively doubles due to the addition of more transistors to an integrated circuit.

Since devices are not increasing in size (if anything, the reverse is true), the space for these additional transistors only becomes available because transistors keep shrinking and manufacturing keeps improving.

At some point, however, you reach the stage where transistors cannot be miniaturized any further. It also becomes impractical to increase the size of chips beyond a certain point, due to the amount of heat generated and the manufacturing costs involved.

These limits necessitate a means of increasing performance beyond simply producing more complex chips.

One such method is to employ cheaper and less complex chips in parallel architectures; another is to move away from the traditional integrated circuit to something like quantum computing, which by its very definition is parallel processing.

It's worth noting that the title of this question relates more to the observed results of the law (performance increase) rather than the actual law itself which was largely an observation about transistor count.

Andrew Grant
The question was "why it would lead to parallel computing", and you do not answer that. I'm also astonished that parallel processing should work without an increase in size - so the size of a single transistor can't be the reason why parallel processing is necessary.
Leonidas
The original version of Moore's law is not about the speed of circuits, just their transistor count. But, much like hard-disk density, speed is quite often labeled with the same name. All of these exponential growth phenomena have their own cycle, though.
Henk Holterman
+11  A: 
Mo Flanagan
+5  A: 

I think it is a reference to the article "The Free Lunch Is Over".

Basically, the original version of Moore's law, about transistor density, still holds. But one important derived law, about processing speed doubling every xx months, has hit a wall.

So we are facing a future where processor speeds will go up only slightly, but we will have more cores and cache to play with.

Henk Holterman
A: 

Moore's law necessitates parallel computing because Moore's law is on the verge of dying, if not already dead. Taking that into consideration: if it is becoming harder and harder to cram transistors onto an IC (for some of the reasons noted elsewhere), then the remaining options are to add more processors, i.e. parallel processing, or to go quantum.

Gavin Miller
+1  A: 

The real answer is completely non-technical (not that the hardware explanations aren't fantastic). It's that Moore's Law has become less and less of an observation, and more of an expectation. This expectation of computing power growing exponentially has become the driving force of the industry, and that is what necessitates all the parallelism.

TheMissingLINQ
+1  A: 

Moore's law says that the number of transistors in an IC relative to cost increases exponentially year on year.

Historically, this was partly due to a decrease in transistor size, and smaller transistors also switched faster. Because you got faster transistors in step with Moore's law, clock speed increased too. Hence the common confusion that Moore's law means faster processors, rather than just wider ones.

Heat dissipation caused the speed increase to top out at around 3 GHz for economically produced silicon.

So if you want more cheap computation, it's easier to add more, slower circuits. That is why the current state-of-the-art commodity processors are multi-core: they are getting wider, but no faster.

Graphene film transistors require less power, and are performing at around 30 GHz, with theoretical limits at around 0.6 THz.

When graphene technology matures to commodity level in a few years, expect another sea change: no-one will care about using parallel cores for performance, and we'll go back to narrow, fast cores. On the other hand, concurrent computing will still matter for the problems it is a natural fit for, so you'll still have to know how to handle more than one execution unit.

Pete Kirkham
Good explanation of speed vs. density. Do you think graphene film transistors are suitable for mass production?
RussellH
+14  A: 

Moore's law just says that the number of transistors on a reasonably priced integrated circuit tends to double every 2 years.
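As a toy illustration of that doubling, here is a sketch projecting transistor counts forward (the 1971 Intel 4004 baseline of roughly 2,300 transistors is used purely as an example):

```python
# Toy Moore's-law projection: transistor count doubles every 2 years.
BASE_YEAR, BASE_COUNT = 1971, 2_300   # Intel 4004, an illustrative baseline

def projected_transistors(year: int) -> int:
    """Projected count assuming a clean doubling every two years."""
    return int(BASE_COUNT * 2 ** ((year - BASE_YEAR) / 2))

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):>13,}")
```

Forty years of doubling takes you from a few thousand transistors to a few billion, and all of those transistors have to be spent on something.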

Observations about speed or transistor density or die size are all somewhat orthogonal to the original observation.

Here's why I think Moore's law leads inevitably to parallel computing:

If you keep doubling the number of transistors, what are you going to do with them all?

  • More instructions!
  • Wider data types!
  • Floating Point Math!
  • More caches (L1, L2, L3)!
  • Micro Ops!
  • More pipeline stages!
  • Branch prediction!
  • Speculative execution!
  • Data Pre-Fetch!
  • Single Instruction Multiple Data!

Eventually, when you've implemented all the tricks you can think of to use all those extra transistors, you think to yourself: why don't we just do all those cool tricks TWICE on the same chip?

Bada bing. Bada boom. Multicore is inevitable.


Incidentally, I think the current trend of CPUs with multiple identical CPU cores will eventually subside as well, and the real processors of the future will have a single master core, a collection of general purpose cores, and a collection of special purpose coprocessors (like a graphics card, but on-die with the CPU and caches).

The IBM Cell processor (in the PS3) is already somewhat like this. It has one master core and seven "synergistic processing units".

benjismith
A: 

It's because we're all addicted to increasing speed in our processors. Years of conditioning have led us to expect more processing power, year after year. But the physical constraints caused by densely packed transistors have finally put a limit on clock speeds, so increases have to come from a different perspective.

It doesn't have to be this way. The success of the Intel Atom processor shows that processors could just get smaller and cheaper instead. The processor companies will try to keep us on the "bigger, faster" treadmill though, to keep their profits up. And we'll be willing participants, because we'll always find a way to use more power.

Mark Ransom
A: 

Moore's law still holds. Transistor counts are still increasing. The problem is figuring out something useful to do with all those transistors. We can't just keep increasing instruction-level parallelism by making pipelines deeper and wider, because the circuitry necessary to prove independence between instructions scales terribly with the number of instructions whose independence must be proven. We can't just keep cranking up clock speeds because of heat. We could keep increasing cache size, but we've hit a point of diminishing returns there. The only use left for the transistors seems to be putting more cores on a chip, which means that the engineer's job of figuring out what to do with the transistors is just pushed up the abstraction ladder, and now programmers have to figure out what to do with all those cores.

dsimcha
+2  A: 

Interestingly, the idea proposed in the question that parallel computing is "necessitated" is thrown into question by Amdahl's Law, which basically says that having parallel processors will only get you so far unless 100% of your program is parallelizable (which is never the case in the real world).

For example, if you have a program which takes 20 minutes on one processor and is 50% parallelizable, and you buy a large number of processors to speed things up, your minimum time to run would still be over 10 minutes. This is ignoring the cost and other issues involved.
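A small sketch of that arithmetic, using the numbers from the example above:

```python
# Amdahl's law: with fraction p of the work parallelizable across n
# processors, run time = serial_time * ((1 - p) + p / n).
def run_time(serial_minutes: float, p: float, n: int) -> float:
    return serial_minutes * ((1 - p) + p / n)

# A 20-minute job that is 50% parallelizable, as in the example above.
for n in (1, 2, 4, 16, 1024):
    print(f"{n:>5} processors -> {run_time(20.0, 0.5, n):7.3f} minutes")
```

Even with 1024 processors, the run time only approaches, and never reaches, the 10-minute floor set by the serial half of the program.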

mandaleeka
A: 

I don't think Moore's law necessitates parallel computing, but it does necessitate an eventual shift away from pure miniaturization. Multiple solutions exist. One of them is parallel computing; another is co-processing (which is related to, but not the same thing as, parallel computing: co-processing is when you offload work to a special-purpose processor such as a GPU, DSP, etc.).

Mystere Man
A: 

A very good link explaining parallel computing and its use is here.

HotTester
