Threads make the design, implementation and debugging of a program significantly more difficult.

Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system.

I can understand threading something like an MPEG-2 decoder that's going to run on a multi-core CPU (which I've done), but what justifies the significant development cost of threading on a single-core system, or even on a multi-core system, when the task doesn't gain significant performance from a parallel implementation?

Or more succinctly, what kinds of non-performance related problems justify threading?

Edit

Well, I just ran across one instance that's not CPU-limited but where threads make a big difference:

TCP, HTTP and the Multi-Threading Sweet Spot

Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly fewer local CPU resources, but would be much more difficult to design and implement.
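
A rough sketch of what that looks like with POSIX threads and blocking sockets; the host, port and request here are just placeholders, and most error handling is omitted:

    #include <netdb.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define NUM_STREAMS 4   /* number of parallel TCP connections to the peer */

    static void *fetch(void *arg)
    {
        const char *host = arg;
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;

        if (getaddrinfo(host, "80", &hints, &res) != 0)
            return NULL;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
            const char *req = "GET / HTTP/1.0\r\n\r\n";  /* placeholder request */
            write(fd, req, strlen(req));

            char buf[4096];
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0)
                ;   /* blocking reads: this thread sleeps while waiting for data */
        }
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NUM_STREAMS];

        for (int i = 0; i < NUM_STREAMS; i++)
            pthread_create(&tid[i], NULL, fetch, (void *)"example.com");
        for (int i = 0; i < NUM_STREAMS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

Each thread just blocks on its own connection, which is exactly why the code stays simple compared with a non-blocking event loop.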

+5  A: 

Performing a CPU intensive task without blocking the user interface, for example.
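
For example, a bare-bones sketch with pthreads; a real GUI toolkit would post an event back to its main loop rather than poll a flag, and would use proper synchronization instead of a plain volatile int:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile int done = 0;   /* set by the worker when the computation finishes */

    static void *crunch(void *arg)
    {
        (void)arg;
        sleep(3);   /* stand-in for a CPU-intensive task (encoding, image processing, ...) */
        done = 1;
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, crunch, NULL);

        /* The "UI" loop keeps servicing the user while the worker runs. */
        while (!done) {
            printf("still responsive...\n");
            usleep(200000);
        }

        pthread_join(worker, NULL);
        printf("result ready\n");
        return 0;
    }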

Konamiman
OK, but what about the model where core functionality and the interface live in separate processes that communicate over some kind of IPC, sockets for instance? I know a lot of open source projects use this model. I've worked on projects where core functionality and the GUI were unnecessarily coupled when they really should have been separate processes. I would think that having CPU-intensive functionality in a separate process would make things like unit testing and QA much simpler. What are the arguments for keeping them in the same process as threads, as opposed to splitting them?
Robert S. Barnes
Hey, I just pointed out *one possible scenario for using threading*, which is what the question is about. I never claimed it was the only way to avoid blocking the UI.
Konamiman
A: 

Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single core system.

"Many people"... Who?

Also, in my experience many, many programs that should be multithreaded aren't (especially games; I have an i7, yet most games still use only one of my cores), so I'm not sure what you're talking about. Programs like calc.exe are definitely not multithreaded (or, if they are, one thread does 99% of the work).

Performing a CPU intensive task without blocking the user interface, for example.

Yes, this is true, but it's fairly easy to implement and it's not what the OP is referring to (since, in this case, one thread does almost all the work and you only need very few mutexes).

Andreas Bonini
+4  A: 

An interesting example is a webserver - you need to be able to handle multiple incoming connections that have nothing to do with each other.
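
For illustration, a stripped-down thread-per-connection accept loop with pthreads; error handling is omitted and the response is a placeholder:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *handle_client(void *arg)
    {
        int fd = (int)(long)arg;
        const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\n";  /* placeholder response */
        write(fd, reply, strlen(reply));
        close(fd);
        return NULL;
    }

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family      = AF_INET,
                                    .sin_port        = htons(8080),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };

        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 16);

        for (;;) {
            int client = accept(srv, NULL, NULL);
            pthread_t tid;
            /* One thread per connection; each blocks on its own socket. */
            pthread_create(&tid, NULL, handle_client, (void *)(long)client);
            pthread_detach(tid);
        }
    }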

Peter
That is one of the scenarios I thought about. But traditionally you just fork for that kind of problem. As I understand it, Apache 2 switched from forking to threading, but in practice how big is their ROI on this, given that processes and threads are almost identical in cost on Linux?
Robert S. Barnes
+2  A: 

what kinds of non-performance related problems justify threading?

Web applications are the classic example. Each user request is conceptually a new thread. Nothing to do with performance, it's just a natural fit for the design.

skaffman
Maybe, but the traditional solution to this problem is forking. At least on Linux, where processes and threads have almost identical cost, how much do you really gain from threading over forking?
Robert S. Barnes
Threading and forking are the same thing, conceptually. A thread is just a lightweight process, and a multithreaded webserver will spawn a thread rather than fork a process.
skaffman
A thread runs in the same memory space as its parent, whereas a forked process runs in its own memory space, and you have to use pipes or other methods of inter-process communication.
Chad Okere
If you're using cookies to map HTTP requests to stateful sessions, for example, a forked process would have to use IPC to get and update that state.
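To make that concrete, a rough sketch: with threads, every request handler can touch one in-process session table guarded by a mutex, whereas forked workers would need shared memory or some other IPC for the same state. The session structure and lookup here are hypothetical:

    #include <pthread.h>
    #include <string.h>

    #define MAX_SESSIONS 1024

    /* Hypothetical in-process session table, shared by every request thread. */
    struct session {
        char cookie[64];
        int  hits;
    };

    static struct session  sessions[MAX_SESSIONS];
    static int              nsessions;
    static pthread_mutex_t  sessions_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from any request-handling thread; the state is shared directly. */
    void touch_session(const char *cookie)
    {
        pthread_mutex_lock(&sessions_lock);
        for (int i = 0; i < nsessions; i++) {
            if (strcmp(sessions[i].cookie, cookie) == 0) {
                sessions[i].hits++;
                pthread_mutex_unlock(&sessions_lock);
                return;
            }
        }
        if (nsessions < MAX_SESSIONS) {
            strncpy(sessions[nsessions].cookie, cookie, sizeof sessions[0].cookie - 1);
            sessions[nsessions].hits = 1;
            nsessions++;
        }
        pthread_mutex_unlock(&sessions_lock);
    }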
Jason Orendorff
+4  A: 

Any application in which you may be waiting around for a resource (for example, blocking I/O from network sockets or disk devices) can benefit from threading.

In that case the thread blocking on the slow operation can be put to sleep while other threads continue to run (including, under some operating systems, the GUI thread which, if the OS can't contact it for a while, will offer the user the chance to kill the application, thinking it's deadlocked somehow).

So it's not just for multi-core machines at all.

paxdiablo
Maybe, but for I/O I can get asynchronous notification via signals and a signal handler, which in my mind is much simpler than dealing with threads. http://en.wikipedia.org/wiki/Asynchronous_I/O#Signals_.28interrupts.29
Robert S. Barnes
There are limited things you can do in signal handlers (I tend to do nothing more than set flags for the real code). Using threads has no such limitations.
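For comparison, the "just set a flag" pattern looks roughly like this; it's only a sketch, and a production version would have to close the race between checking the flag and calling pause(), e.g. with sigsuspend():

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Only async-signal-safe work is allowed in the handler, so it just sets a flag.
       Assumes a file descriptor has already been set up for SIGIO delivery
       (fcntl with F_SETOWN and O_ASYNC). */
    static volatile sig_atomic_t io_ready = 0;

    static void on_sigio(int sig)
    {
        (void)sig;
        io_ready = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigio;
        sigaction(SIGIO, &sa, NULL);

        for (;;) {
            if (io_ready) {
                io_ready = 0;
                /* Do the actual read() and processing here, outside the handler. */
                printf("I/O is ready\n");
            }
            pause();   /* sleep until the next signal arrives */
        }
    }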
paxdiablo
+1  A: 

Here are a couple of specific and simple scenarios where I have launched threads...

  1. A long-running report requested by the user. When the report is submitted, it is placed in a queue to be processed by a separate thread (see the sketch after this list). The user can then carry on in the application and check back later on the status of their report; they aren't stuck at a "Processing..." page or icon.

  2. A thread that iterates over cache storage, removing data that has expired or is no longer needed. Its job is independent of any processing for a specific user; it's part of the application's overall run-time maintenance.

  3. Although not strictly a threading scenario, logging on our web site is handed off to a parallel process, so the site's throughput isn't hindered by the time it takes to record log data.
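
A rough sketch of the first scenario with pthreads; the report id payload and the processing step are placeholders:

    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_SIZE 64

    /* Pending-report queue: filled by request handlers, drained by one
       background worker thread. Report ids stand in for the real payload. */
    static int             queue[QUEUE_SIZE];
    static int             head, tail, count;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    /* Called from the request thread: enqueue and return immediately. */
    void submit_report(int report_id)
    {
        pthread_mutex_lock(&lock);
        if (count < QUEUE_SIZE) {
            queue[tail] = report_id;
            tail = (tail + 1) % QUEUE_SIZE;
            count++;
            pthread_cond_signal(&not_empty);
        }
        pthread_mutex_unlock(&lock);
    }

    /* Runs for the life of the application, processing reports one by one. */
    void *report_worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            int id = queue[head];
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_mutex_unlock(&lock);

            printf("generating report %d...\n", id);   /* long-running work goes here */
        }
        return NULL;
    }

The worker would be started once at application startup with pthread_create, and the request handler just calls submit_report and returns to the user immediately.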

I agree that threading just for threading's sake isn't a good idea, and it can introduce problems in your application if it isn't done properly, but it is an extremely useful tool for solving some problems.

Joel Provost
+1  A: 

Whenever you need to call some external component (be it a database query, a third-party library, an operating system primitive, etc.) that only provides a synchronous/blocking interface, or whose asynchronous interface isn't worth the extra trouble and pain, and you also need some form of concurrency, e.g. serving multiple clients in a server or keeping the GUI responsive.

nos
+2  A: 

Blocking code is usually much simpler to write and easier to read (and therefore maintain) than non-blocking code. Yet using blocking code limits you to a single execution path and can lock up things like the user interface (as mentioned) and other I/O ports. Threading is an elegant solution in these cases.

Another case where multithreading is worth considering is when you have several near-synchronous I/O channels to manage: using multiple threads (and usually a local message queue) allows for much clearer code.

Stephane
+1  A: 

Well, how do you know whether your app is going to run on a multi-core system or not?

Beyond that, there are a lot of operations that take time but don't require the CPU, such as writing to a disk or networking. Who wants to push a button in a GUI and then sit there waiting for a network connection? Even on a single-core machine, having a separate I/O thread greatly improves the user experience. You always want at least a separate thread for the UI.

Chad Okere