I am working on a client-server application right now (just for learning purposes), and am trying to gather information to make a design decision regarding threads in this application.

Currently I have one thread in charge of all non-blocking I/O with the clients. When it receives any data, it sends it off to a worker thread that creates an "instruction set" out of those bytes and then acts on it accordingly. However, depending on the instruction set, it could act on any one of hundreds of objects (each object will cap out somewhere between 2 and 12 clients that can interact with it). I am trying to figure out whether I should handle all of the instruction sets on that same thread, blocking while I handle each set, or whether I should create a separate thread for each object and pass each received instruction set off to the given object's thread for handling.

My question boils down to this: at what point (if any) does having many inactive threads waiting for data slow down the system, compared to having one worker thread that handles all the data (and blocks while handling each instruction set)?

If I created a separate thread for each object, I am thinking it could increase concurrency: once the main worker thread creates an instruction set, it can just pass it off to be handled and immediately start working on the next instruction set.

However, I keep hearing that creating and managing threads has an underlying cost, because the OS has to manage them. So if I created a thread for an object that at most 2 clients can interact with, would the underlying cost of managing it negate the concurrency benefit, since only 2 clients could ever take advantage of that concurrency?

As always, any advice/articles are greatly appreciated :)

+3  A: 

I'd recommend following the example set by Java EE app servers.

Have a queue for incoming requests and a pool of handler threads. When a request comes in, put it on the queue; the controller then takes a handler thread from the pool, pulls the request off the queue, and gives it to that thread to process. When the thread finishes, it goes back into the pool.

If the number of requests is greater than the number of handler threads, the queue lets them accumulate and wait until a thread becomes available.

This design gives you two benefits:

  1. It lets you set the size of the handler thread pool to match your server's resources.
  2. It throttles incoming requests when they exceed the pool's capacity, so you neither block the I/O thread nor lose requests.

Concurrency is your friend here. It'll help keep your server scalable.
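
A minimal sketch of this design in Java, using the standard java.util.concurrent classes (the pool size, queue capacity, and the handleRequest method are illustrative placeholders, not part of any fixed API):

    import java.util.concurrent.*;

    public class RequestDispatcher {
        // Bounded queue: requests accumulate here while all handlers are busy.
        private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(100);

        // Fixed pool of handler threads, sized to match server resources.
        // CallerRunsPolicy throttles the producer when the queue fills up
        // instead of dropping requests.
        private final ExecutorService handlers = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, queue,
                new ThreadPoolExecutor.CallerRunsPolicy());

        public void dispatch(byte[] requestBytes) {
            // Hand the request to the pool; a free handler thread picks it
            // up, otherwise it waits in the queue until one is available.
            handlers.execute(() -> handleRequest(requestBytes));
        }

        private void handleRequest(byte[] requestBytes) {
            // Build the "instruction set" from the bytes and act on it here.
        }
    }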

duffymo
This is a much more elegant solution than what I was envisioning. Thanks :)
kyeana
A: 

If the threads are actually sleeping, the overhead should be nothing more than what it costs to start them in the first place. Sleeping threads have a fairly efficient way of staying asleep until they are needed: they wait for an interrupt. (Note that a common way to sleep is to wake on an interrupt from the clock, which is how you can specify how long a thread should sleep.) But if those threads don't take advantage of this hardware support, and instead wake up periodically on something like a timer rather than on an event specific to your program, the overhead can be enormous from all the context switches the processor is forced to make, most notably the cache being emptied on each switch.
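
To make that concrete: a per-object worker thread that blocks on a queue is parked by the OS until data arrives, rather than waking on a timer to poll. A rough Java sketch (the class and method names are illustrative):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ObjectWorker implements Runnable {
        private final BlockingQueue<byte[]> instructions = new LinkedBlockingQueue<>();

        public void enqueue(byte[] instructionSet) {
            instructions.offer(instructionSet);
        }

        @Override
        public void run() {
            try {
                while (true) {
                    // take() parks the thread until data arrives: no polling,
                    // no timer wake-ups, no CPU cost while idle.
                    byte[] set = instructions.take();
                    handle(set);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly on shutdown
            }
        }

        private void handle(byte[] set) {
            // act on the instruction set here
        }
    }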

Aviendha
While the CPU cost of a sleeping thread is nothing, the memory cost can be quite high. If you have a large number of sleeping threads holding on to large chunks of storage, the effect is pretty much the same as a memory leak.
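
A back-of-the-envelope illustration in Java (the 512 KB per-thread stack size is an assumption; the actual default varies by JVM and platform and can be tuned with -Xss):

    public class StackCost {
        public static void main(String[] args) {
            // Each Java thread reserves stack space up front, whether or
            // not it is asleep; 512 KB per thread is assumed here.
            long stackBytes = 512L * 1024;
            int idleThreads = 500; // e.g. one mostly-asleep thread per object
            System.out.println((stackBytes * idleThreads) / (1024 * 1024)
                    + " MB reserved just for stacks"); // prints "250 MB ..."
        }
    }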
James Anderson
A: 

Test it each way. You have to block sometimes; you can't just ram everything through unless you are sure of your computing power. The slower or less capable the machine, the more blocking will be necessary. Design your application to be adjustable to the situation. Better yet, have it keep track of what is going on and adjust itself.
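
One simple way to make the pool design above adjustable is to derive the pool size from the hardware instead of hard-coding it; a small sketch (the minimum of 2 is an assumed tuning choice):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AdaptivePool {
        public static void main(String[] args) {
            // Size the handler pool from the machine's core count so the
            // same build adapts to slower or more capable hardware.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService handlers = Executors.newFixedThreadPool(Math.max(2, cores));
            handlers.shutdown(); // placeholder: submit work before shutting down
        }
    }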

Dora