I don’t want to make this subjective...

If I/O and other input/output-related bottlenecks are not a concern, do we need to write multithreaded code at all? Theoretically, the single-threaded code will fare better since it will get all the CPU cycles. Right?

Would JavaScript or ActionScript have fared any better, had they been multithreaded?

I am just trying to understand the real need for multithreading.

+11  A: 

I don't know if you have paid any attention to trends in hardware lately (the last 5 years), but we are heading into a multicore world.

A general wake-up call was this "The free lunch is over" article.

On a dual-core PC, a single-threaded app will only get half the CPU cycles. And CPUs are not getting faster anymore; that part of Moore's law has died.

Henk Holterman
That is the only reason I can think of for why one should code multithreaded. Am I correct in my assumption?
Quintin Par
Not really; on a single core you use threading to make I/O parallel. On multicore you also want calculations in parallel.
Henk Holterman
Moore's Law only concerns the density of transistors on a chip. It seemed to apply to speed because transistors got faster as they got smaller, up until the interconnect delays became dominant. Multiple cores are an increase in the number of transistors, so Moore's Law is still in force.
Mike DeSimone
Aw Henk! Now it looks like I just copied your response (regarding the "free lunch" stuff)... I'll be stuck with low reputation, I guess :)
S.C. Madsen
@Henk: Good point. People forget that I/O devices, which could even be considered their own "cores" (esp. with GPUs these days), can block a process just as well as another process could.
Mike DeSimone
@Mike you are very much correct, formally. But Moore's law is frequently confused with the increase in speed.
Henk Holterman
@Mike I may be wrong, but isn't your first comment contradictory? There's a pretty big difference between number and density.
Wilduck
"On a dual core PC, a single-threaded app will only get half the CPU cycles." - not if you're running multiple CPU-bound apps. The other half of the CPU will be used quite nicely. Not every app has to be multithreaded for a good user experience in their environment if they're actually running more than one app at a time.
Jesse C. Slicer
@Wilduck: when you keep the area constant, there is a direct correlation between density and number.
Henk Holterman
@Jesse: The other core(s) don't have to be idle but a single-threaded app definitely only gets 1/N of the cycles (at best).
Henk Holterman
@Jesse: That doesn't help when you want to run one particular process as fast as possible. Single-threading in a multicore environment can work great on a server, but not necessarily on a desktop.
David Thornley
@Henk Duh. Thanks for that.
Wilduck
Henk, how many real-world applications can really be harmed by getting only 50% of the CPU cycles? (Or 1% of the CPU cycles?) The answer: very few. Most applications on my PC are UI- or IO-bound, or worse, their performance is often limited by misused locks or the developer's lack of async IO skills, which makes them not twice, but more like a thousand times less responsive. Of all the consumer apps I can think of, only WinRAR and the Word spellchecker could inherently benefit from extra CPU.
Pavel Radzivilovsky
@Pavel: In my applications, 90% or more is also I/O- or user-bound. But there are reporting and calculation parts that are CPU-bound and that used to profit from faster hardware. Now you'll have to use parallelism.
Henk Holterman
+6  A: 

In the words of Herb Sutter, "The free lunch is over": the future performance path for computing will be more cores, not higher clock speeds. The thing is that adding more cores typically does not scale the performance of software that is not multithreaded, and even then it depends entirely on the correct use of multithreaded programming techniques; hence multithreading is a big deal.

Another obvious reason is maintaining a responsive GUI, when e.g. a click of a button initiates substantial computations, or I/O operations that may take a while, as you point out yourself.

S.C. Madsen
In the past, I was a bit more curious about hardware than I am today. I remember the race between AMD and Intel to the first 1 GHz (x86-compatible) chip. That was 2000, and 10 years later it seems we're at 2 or 3 GHz.
Aaron McDaid
The GHz wars pretty much ended when Intel found (on the Pentium 4) that pushing to 4 GHz and beyond just wasn't worth it. (The only people who could were using liquid cooling.) AMD could get comparable performance at half the speed and less power by simply doing more per clock cycle. It amuses me that, today, the tables seem to have turned, with Intel being more power efficient and AMD's chips running at 125 W...
Mike DeSimone
On PCs, I also guess that the rise of GPUs did its part in obscuring the average consumer's focus on GHz.
S.C. Madsen
I have a hunch that as the CPU cores continue to increase, GPUs will become a thing of the past... after all, they're basically just tons of extra cores with their own memory.
rmeador
@rmeador: I disagree; graphics is massively parallel, and hence the inherently parallel GPUs will continue to have the advantage, I think
S.C. Madsen
@rmeador: Nah, GPUs will still exist although they may get absorbed into the CPU core itself and become a sort of floating point vector coprocessor.
Zan Lynx
+3  A: 

Much of the multithreading out there is done just to make the programming model easier when doing blocking operations while maintaining concurrency in the program - sometimes languages/libraries/APIs give you little other choice, or the alternatives make the programming model too hard and error-prone.

Other than that, the main benefit of multithreading is to take advantage of multiple CPUs/cores - one thread can only run on one processor/core at a time.
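
To illustrate the first point, here is a rough Java sketch (the URL is just a placeholder): the blocking download lives on its own thread, so the rest of the program can be written as if the wait weren't there.

    import java.io.InputStream;
    import java.net.URI;

    public class BlockingFetch {
        public static void main(String[] args) throws InterruptedException {
            // The blocking download runs on its own thread, so the main
            // thread is free to do other work while it waits.
            Thread fetcher = new Thread(() -> {
                try (InputStream in = URI.create("http://example.com/").toURL().openStream()) {
                    byte[] buf = new byte[8192];
                    long total = 0;
                    int n;
                    while ((n = in.read(buf)) != -1) {   // blocking read
                        total += n;
                    }
                    System.out.println("Downloaded " + total + " bytes");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            fetcher.start();

            System.out.println("Main thread keeps going while the download blocks...");
            fetcher.join();   // wait for the background work to finish
        }
    }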

nos
+2  A: 

No. You can't keep gaining the new CPU cycles, because they exist on a different core, and the core your single-threaded app runs on is not going to get any faster. A multi-threaded app, on the other hand, will benefit from the extra cores. Well-written parallel code can run close to twice as fast on a dual core - which is what practically every new CPU from the last five years has been - and close to four times as fast on a quad core. So while your single-threaded app isn't getting any more cycles than it did five years ago, my quad-threaded app has four times as many and is vastly outstripping yours in terms of response time and performance.
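
As a minimal Java sketch of what "well-written parallel code" means here (the workload is made up, and the actual speedup depends on the machine and the problem), a CPU-bound loop is split into one chunk per core:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            long n = 400_000_000L;
            long chunk = n / cores;
            List<Future<Long>> parts = new ArrayList<>();

            // Split the range [0, n) into one chunk per core.
            for (int i = 0; i < cores; i++) {
                long from = i * chunk;
                long to = (i == cores - 1) ? n : from + chunk;
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (long x = from; x < to; x++) sum += x;
                    return sum;
                }));
            }

            long total = 0;
            for (Future<Long> f : parts) total += f.get();  // combine the partial results
            pool.shutdown();

            System.out.println("cores=" + cores + " total=" + total);
        }
    }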

DeadMG
+1  A: 

Your question would be valid if we only had single cores. The thing is, though, that we mostly have multicore CPUs these days. If you have a quad core and write a single-threaded program, three of the cores will sit unused by your program.

So you will actually get at most 25% of the CPU cycles, not 100%. Since the trend today is to add more cores rather than more clock speed, threading will become more and more crucial for performance.

martiert
+7  A: 

The primary reason I use multithreading these days is to keep the UI responsive while the program does something time-consuming. Sure, it's not high-tech, but it keeps the users happy :-)
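
A minimal sketch of that pattern in Java/Swing (the busy loop just stands in for the time-consuming work; the approach, not the toolkit, is the point): the heavy part runs on a SwingWorker thread, and only the final UI update happens back on the event thread.

    import javax.swing.*;

    public class ResponsiveUi {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Worker demo");
                JButton button = new JButton("Start long job");
                JLabel status = new JLabel("Idle");

                button.addActionListener(e -> {
                    button.setEnabled(false);
                    status.setText("Working...");
                    // The heavy work runs off the Event Dispatch Thread,
                    // so the window keeps repainting and responding.
                    new SwingWorker<Long, Void>() {
                        @Override protected Long doInBackground() {
                            long sum = 0;                     // stand-in for real work
                            for (long i = 0; i < 2_000_000_000L; i++) sum += i;
                            return sum;
                        }
                        @Override protected void done() {     // runs back on the EDT
                            status.setText("Done");
                            button.setEnabled(true);
                        }
                    }.execute();
                });

                frame.add(button, java.awt.BorderLayout.NORTH);
                frame.add(status, java.awt.BorderLayout.SOUTH);
                frame.setSize(300, 120);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }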

Bob Moore
+1, Multi-threading is useful for the same reason as multi-tasking between processes. Sometimes it makes sense for operations to occur in parallel and users don't like to have their work interrupted if they can keep doing things while waiting for a time-consuming operation to complete. The fact that multi-core CPUs are becoming more common is an added side benefit, though this also adds new complications (as it allows race conditions in flawed code to pop up that were previously avoided on single-core PCs.)
Dan Bryant
+2  A: 

That's kind of like asking whether a screwdriver is necessary if I only need to drive this nail. Multithreading is another tool in your toolbox to be used in situations that can benefit from it. It isn't necessarily appropriate in every programming situation.

Larry
+3  A: 

Most CPUs these days are multi-core. Put simply, that means they have several processors on the same chip.

If you only have a single thread, you can only use one of the cores - the other cores will either idle or be used for other tasks that are running. If you have multiple threads, each can run on its own core. You can divide your problem into X parts, and, assuming each part can run independently, you can finish the calculations in close to 1/Xth of the time it would normally take.

By definition, the fastest algorithm running in parallel will spend at least as much CPU time as the fastest sequential algorithm - that is, parallelizing does not decrease the amount of work required - but the work is distributed across several independent units, which decreases the real (wall-clock) time spent solving the problem. That means the user doesn't have to wait as long for the answer and can move on more quickly.

10 years ago, when multi-core was unheard of, it was true: you'd gain nothing if we disregard I/O delays, because there was only one unit to do the execution. However, the race to increase clock speeds has stopped, and we're instead looking to multi-core to increase the amount of computing power available. With companies like Intel looking at 80-core CPUs, it becomes more and more important to look at parallelization to reduce the time spent solving a problem - if you only have a single thread, you can only use that one core, and the other 79 cores will be doing something else instead of helping you finish sooner.
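
As a rough Java illustration (arbitrary workload; the measured times will vary by machine), parallel streams do exactly this kind of splitting into X parts behind the scenes, using a pool sized to roughly one worker per core:

    import java.util.stream.LongStream;

    public class SplitAcrossCores {
        public static void main(String[] args) {
            // Same computation, sequential vs. parallel. The parallel version
            // divides the range into chunks that run on the common fork/join pool.
            long t0 = System.nanoTime();
            long seq = LongStream.rangeClosed(1, 500_000_000L).map(x -> x * x % 7).sum();
            long t1 = System.nanoTime();
            long par = LongStream.rangeClosed(1, 500_000_000L).parallel().map(x -> x * x % 7).sum();
            long t2 = System.nanoTime();

            System.out.printf("sequential: %d in %d ms%n", seq, (t1 - t0) / 1_000_000);
            System.out.printf("parallel:   %d in %d ms%n", par, (t2 - t1) / 1_000_000);
        }
    }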

Michael Madsen
+1  A: 

Here are some answers:

  • You write "If input/output related problems are not bottlenecks...". That's a big "if". Many programs do have issues like that, remembering that networking issues are included in "IO", and in those cases multithreading is clearly worthwhile. If you are writing one of those rare apps that does no IO and no communication, then multithreading might not be an issue.
  • "The single threaded code will get all the CPU cycles". Not necessarily. A multi-threaded code might well get more cycles than a single threaded app. These days an app is hardly ever the only app running on a system.
  • Multithreading allows you to take advantage of multicore systems, which are becoming almost universal these days.
  • Multithreading allows you to keep a GUI responsive while some action is taking place. Even if you don't want two user-initiated actions to be taking place simultaneously you might want the GUI to be able to repaint and respond to other events while a calculation is taking place.

So in short: yes, there are applications that don't need multithreading, but they are fairly rare and becoming rarer.

DJClayworth
+1  A: 

First, modern processors have multiple cores, so a single thread will never get all the CPU cycles. On a dual-core system, a single thread will utilize only half the CPU. On an 8-core CPU, it will use only 1/8th.

So from a plain performance point of view, you need multiple threads to utilize the CPU.

Beyond that, some tasks are also easier to express using multithreading.

Some tasks are conceptually independent, so it is more natural to code them as separate threads running in parallel than to write a single-threaded application which interleaves the two tasks and switches between them as necessary.

For example, you typically want the GUI of your application to stay responsive, even if pressing a button starts some CPU-heavy work process that might go for several minutes. In that time, you still want the GUI to work. The natural way to express this is to put the two tasks in separate threads.
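
A bare-bones Java sketch of that idea (the loop and the sleep are just stand-ins for real work): each task is written as straight-line code in its own thread, with no manual interleaving.

    public class IndependentTasks {
        public static void main(String[] args) throws InterruptedException {
            // Two conceptually independent jobs, each in its own thread.
            Thread crunch = new Thread(() -> {
                long sum = 0;
                for (long i = 0; i < 1_000_000_000L; i++) sum += i;   // CPU-heavy work
                System.out.println("crunch done: " + sum);
            });
            Thread heartbeat = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("still responsive...");
                        Thread.sleep(500);                            // e.g. keep a UI or log alive
                    }
                } catch (InterruptedException ignored) { }
            });

            crunch.start();
            heartbeat.start();
            crunch.join();
            heartbeat.join();
        }
    }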

jalf
A: 

Most of the answers here make the conclusion multicore => multithreading look inevitable. However, there is another way of utilizing multiple processors: multi-processing. On Linux in particular, where (AFAIK) threads are implemented as just processes, perhaps with some restrictions, and where processes are cheap compared to Windows, there are good reasons to avoid multithreading. So there are software architecture issues here that should not be neglected.

Of course, if the concurrent lines of execution (either threads or processes) need to operate on common data, threads have an advantage. But this is also the main source of headaches with threads. Can such a program be designed so that the pieces are as autonomous and independent as possible, allowing us to use processes? Again, a software architecture issue.

I'd speculate that multi-threading today is what memory management was in the days of C:

  • it's quite hard to do right, and quite easy to mess up.
  • thread-safety bugs, like memory leaks, are nasty and hard to find (see the sketch below).
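
For example, a small Java sketch of the second point (numbers are arbitrary): four threads bumping an unsynchronized counter usually lose updates, but not always, which is exactly what makes such bugs hard to reproduce and find.

    public class LostUpdates {
        static int counter = 0;   // shared, unsynchronized on purpose

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[4];
            for (int t = 0; t < threads.length; t++) {
                threads[t] = new Thread(() -> {
                    for (int i = 0; i < 100_000; i++) {
                        counter++;   // read-modify-write: not atomic, updates get lost
                    }
                });
                threads[t].start();
            }
            for (Thread t : threads) t.join();

            // Expected 400000, but the printed value varies from run to run.
            // Wrapping counter in an AtomicInteger (or synchronizing the
            // increment) makes it deterministic again.
            System.out.println("counter = " + counter);
        }
    }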

Finally, you may find this article interesting (follow the first link on the page). I admit I've only read the abstract, though.

davka