views:

2713

answers:

3

Could anyone recommend some documents that illustrate their differences, please? I am always confused about the differences between multi-CPU, multi-core and hyper-threading, and the pros/cons of each architecture in different scenarios.

EDIT: here is my current understanding after learning online and from others' comments. Could anyone review and comment, please?

  1. I think hyper-threading is the weakest technology among them, but it is cheap. Its main idea is to duplicate registers to save context-switch time;
  2. Multi-processor is better than hyper-threading, but since the different CPUs are on different chips, communication between CPUs has longer latency than in multi-core; and since it uses multiple chips, it is more expensive and consumes more power than multi-core;
  3. Multi-core integrates all CPUs on a single chip, so the latency of communication between CPUs is greatly reduced compared with multi-processor. Since a single chip contains all the CPUs, it consumes less power and costs less than a multi-processor system.
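To check points like these on a real machine, here is a minimal sketch (Python, standard library only; note that `sched_getaffinity` is Linux-only, so it is guarded, and the numbers printed depend entirely on your hardware):

```python
import os

# os.cpu_count() reports *logical* CPUs: on a hyper-threaded machine this
# counts each hardware thread, so it can be twice the number of physical cores.
logical = os.cpu_count()
print(f"logical CPUs visible to the OS: {logical}")

# On Linux, the set of CPUs this process is allowed to run on can be smaller
# (e.g. inside a container or under taskset), so it is reported separately.
if hasattr(os, "sched_getaffinity"):
    print(f"CPUs usable by this process: {len(os.sched_getaffinity(0))}")
```

There is no portable standard-library way to distinguish physical cores from hyper-threads; third-party tools report that separately.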

thanks in advance, George

+13  A: 

Multi-CPU was the first version: You'd have one or more mainboards with one or more CPU chips on them. The main problem here was that the CPUs would have to expose some of their internal data to the other CPUs so they wouldn't get in each other's way.

The next step was hyper-threading. One chip on the mainboard, but it had some parts duplicated internally so it could execute two instructions at the same time.

The current development is multi-core. It's basically the original idea (several complete CPUs) but in a single chip. The advantage: Chip designers can easily put the additional wires for the sync signals into the chip (instead of having to route them out on a pin, then over the crowded mainboard and up into a second chip).

Super computers today are multi-CPU, multi-core: They have lots of mainboards with usually 2-4 CPUs on them; each CPU is multi-core and each has its own RAM.

[EDIT] You got that pretty much right. Just a few minor points:

  • Hyper-threading duplicates internal resources to reduce context-switch time. Resources can be: registers, arithmetic units (so you can do several integer or even floating-point calculations simultaneously; or you can do an add and a multiply at the same time, but not an add and a subtract), cache.

  • The main problem with multi-CPU is that code running on them will eventually access the RAM. There are N CPUs but only one bus to access the RAM. So you must have some hardware which makes sure that a) each CPU gets a fair amount of RAM access, b) accesses to the same part of the RAM don't cause problems and c) most importantly, that CPU 2 will be notified when CPU 1 writes to some memory address which CPU 2 has in its internal cache. If that doesn't happen, CPU 2 will happily use the cached value, oblivious to the fact that it is outdated.

    Just imagine you have tasks in a list and you want to spread them across all available CPUs. So CPU 1 will fetch the first element from the list and update the pointers. CPU 2 will do the same. For efficiency reasons, both CPUs will copy not only the few bytes they need but a whole "cache line" (typically 64 bytes) into their caches. The assumption is that, when you read byte X, you'll soon read X+1, too.

    Now both CPUs have a copy of the memory in their cache. CPU 1 will then fetch the next item from the list. Without cache sync, it won't have noticed that CPU 2 has changed the list, too, and it will start to work on the same item as CPU 2.

    This is what effectively makes multi-CPU so complicated. Side effects of this can lead to performance which is worse than what you'd get if the whole code ran on only a single CPU. The solution was multi-core: You can easily add as many wires as you need to synchronize the caches; you could even copy data from one cache to another (updating parts of a cache line without having to flush and reload it), etc. Or the cache logic could make sure that all CPUs get the same cache line when they access the same part of real RAM, simply blocking CPU 2 for a few nanoseconds until CPU 1 has made its changes.
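The shared-task-list scenario above can be sketched in software. In the sketch below (Python threads; the task list and worker count are illustrative), the lock plays the role that the cache-sync hardware plays in the answer: it guarantees each worker sees the other's update to the list before fetching, so no task is processed twice.

```python
import threading

tasks = list(range(100))   # the shared task list from the scenario above
lock = threading.Lock()
done = []

def worker():
    while True:
        with lock:          # without this, both workers could read the same
            if not tasks:   # "next item" from a stale view of the list
                return
            item = tasks.pop()
            done.append(item)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(done))   # every one of the 100 tasks handled exactly once
```

The analogy is loose: in hardware the coherence protocol does this invisibly per cache line, while here the programmer must synchronize explicitly; but the failure mode without it (two workers grabbing the same item) is exactly the one described above.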

[EDIT2] The main reason why multi-core is simpler than multi-CPU is that on a mainboard, you simply can't route all the wires between the two chips which you'd need to make sync effective. Plus, a signal only travels 30cm/ns tops (speed of light; in a wire, you usually have much less). And don't forget that, on a multi-layer mainboard, signals start to influence each other (crosstalk). We like to think that 0 is 0V and 1 is 5V, but in reality, "0" is anything between -0.5V (overdrive when dropping a line from 1->0) and 0.5V, and "1" is anything above 0.8V.

If you have everything inside of a single chip, signals run much faster and you can have as many as you like (well, almost :). Also, signal crosstalk is much easier to control.
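The 30cm/ns figure can be turned into a quick back-of-the-envelope calculation. In the sketch below, the 20cm trace length and 3GHz clock are illustrative assumptions, not figures from the answer:

```python
# How many clock cycles does one chip-to-chip signal trip cost?
speed_cm_per_ns = 30.0   # upper bound from the answer (speed of light);
                         # real board traces are noticeably slower
trace_cm = 20.0          # assumed chip-to-chip wire length on a mainboard
clock_ghz = 3.0          # assumed CPU clock

one_way_ns = trace_cm / speed_cm_per_ns   # ~0.67 ns per one-way trip
cycle_ns = 1.0 / clock_ghz                # ~0.33 ns per clock cycle

print(one_way_ns / cycle_ns)   # cycles lost per one-way hop, best case
```

Even at the speed of light, a single hop to the other chip costs a couple of cycles; on-die wires are orders of magnitude shorter, which is the point of the paragraph above.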

Aaron Digulla
Your notion of hyper-threading can be a bit misleading, as hyperthreading "just" simulates parallel execution of multiple threads - but mainly tries to improve multi-threaded performance by means of built-in CPU logic.
jcinacio
@jcinacio, does hyper-threading improve multi-process performance? Why?
George2
@Aaron, 1. I have edited my current points in my original post after learning from you. Could you help to review and comment, please? 2. What does "expose some of their internal data to the other CPU so they wouldn't get in their way" mean in your post?
George2
@Aaron, your reply is excellent. My last question: why do you say multi-core CPUs solve the issue of CPU status synchronization/waiting for RAM? I think if the code logic is the same, the synchronization and wait-for-RAM issues still exist. Any comments?
George2
@Aaron, great reply! I want to confirm with you that a multi-core system still needs to handle issues like synchronization of caches in different CPUs and waiting for RAM -- the same issues as a multi-processor system; multi-core just handles them with better performance. Correct?
George2
@George2: Exactly.
Aaron Digulla
Thanks Aaron, question answered!
George2
+2  A: 

In a nutshell: a multi-CPU or multi-processor system has several processors. A multi-core system is a multi-processor system with several processors on the same die. In hyperthreading, multiple threads can run on the same processor (that is, the context-switch time between these threads is very small).

Multi-processors have been around for 30 years now, but mostly in labs. Multi-core is the new popular form of multi-processor. Server processors nowadays implement hyperthreading along with multiple processors.
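From a programmer's point of view, "several processors" most often shows up as spreading work across separate OS processes. A minimal sketch (Python's `multiprocessing`; the worker count and toy workload are illustrative):

```python
from multiprocessing import Pool

def square(n):
    # A stand-in for real CPU-bound work.
    return n * n

if __name__ == "__main__":
    # Each pool worker is a separate OS process, so the scheduler can place
    # them on different cores -- or on different CPUs in a multi-processor box.
    with Pool(processes=2) as pool:
        print(pool.map(square, range(5)))   # [0, 1, 4, 9, 16]
```

Whether those two workers land on two cores of one die, two hyper-threads of one core, or two separate CPU packages is up to the OS; the code is the same in all three architectures discussed here.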

The wikipedia articles on these topics are quite illustrative.

Amit Kumar
Amit, 1. I have edited my current points in my original post after learning from you. Could you help to review and comment, please? 2. What do "die" and "tear" mean in your post?
George2
tear->year (sorry), die: http://en.wikipedia.org/wiki/Die_(integrated_circuit)
Amit Kumar
Good to learn from you. Amit!
George2
+2  A: 

You can find some interesting articles about dual CPU, multi-core and hyper-threading on Intel's website or in a short article from Yale University.

I hope you find here all the information you need.

Bogdan Constantinescu
Bogdan, I have edited my current points in my original post. Could you help to review and comment, please? I learned them after reading your recommended links.
George2
@George2 - Your edit is very true. That is the whole idea. :) The best thing you can get on a server is probably a multi-core multi-CPU
Bogdan Constantinescu