I am working on a client-server application right now (just for learning purposes), and I'm trying to gather information for a design decision regarding threads in this application.
Currently I have one thread in charge of all non-blocking IO with the clients. When it receives any data, it hands it off to a worker thread that builds an "instruction set" out of those bytes and then acts on it accordingly. However, depending on the instruction set, it could act on any of hundreds of objects (each object caps out somewhere between 2 and 12 clients that can interact with it). I'm trying to figure out whether I should handle all of the instruction sets on that same worker thread, blocking while I handle each set, or create a separate thread for each object and pass each received instruction set off to that object's thread for handling.
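To make the current single-worker hand-off concrete, here is a minimal sketch (in Java, which I'm assuming matches the app; `parse`, the `String` instruction set, and `handledCount` are placeholders, not the real types):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// The IO thread enqueues raw bytes; one worker thread drains the queue,
// parses each chunk into an "instruction set", and acts on it. While the
// worker is handling a set, everything else waits in the queue.
public class WorkerHandoff {
    private final BlockingQueue<byte[]> inbox = new LinkedBlockingQueue<>();
    public final AtomicInteger handledCount = new AtomicInteger(); // for observation only

    // Called by the IO thread whenever data arrives from a client.
    public void submit(byte[] data) {
        inbox.add(data);
    }

    // The worker thread's loop.
    public void runWorker() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] data = inbox.take();  // blocks while the queue is empty
            handle(parse(data));         // blocks until this set is fully handled
        }
    }

    private String parse(byte[] data) { return new String(data); } // placeholder decode
    private void handle(String instructionSet) { handledCount.incrementAndGet(); }
}
```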
My question boils down to: at what point (if any) does having many inactive threads sitting around waiting for data slow the system down, compared to having one worker thread that handles all the data (and blocks while it handles each instruction set)?
If I created a separate thread for each object, I think it could increase concurrency: once the main worker thread creates an instruction set, it can just pass it off to be handled and immediately start working on the next instruction set.
However, I keep hearing that creating and managing threads has an underlying cost, because the OS has to manage them. So if I created a thread for an object that at most 2 clients can interact with, would the overhead of managing that thread negate its concurrency benefit, since only 2 clients could ever make use of it?
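One middle ground between the two extremes I'm considering (purely as a sketch, assuming Java; `objectId` and the pool size of 4 are illustrative) would be to hash each object onto a small fixed pool of single-threaded executors: instructions for the same object always land on the same thread, so they stay ordered per object, but the thread count stays bounded no matter how many objects exist:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// "Striped" dispatch: N single-threaded executors instead of one thread per
// object. Each objectId maps deterministically to one stripe, so all of an
// object's instruction sets run in order on one thread, while different
// objects can still be handled concurrently on different stripes.
public class StripedDispatcher {
    private final ExecutorService[] stripes;

    public StripedDispatcher(int nThreads) {
        stripes = new ExecutorService[nThreads];
        for (int i = 0; i < nThreads; i++) {
            stripes[i] = Executors.newSingleThreadExecutor();
        }
    }

    // All work for a given objectId goes to the same stripe.
    public void dispatch(int objectId, Runnable instructionSet) {
        stripes[Math.floorMod(objectId, stripes.length)].execute(instructionSet);
    }

    public void shutdown() throws InterruptedException {
        for (ExecutorService s : stripes) {
            s.shutdown();
            s.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StripedDispatcher d = new StripedDispatcher(4);
        AtomicInteger handled = new AtomicInteger();
        for (int obj = 0; obj < 100; obj++) {
            d.dispatch(obj, handled::incrementAndGet);
        }
        d.shutdown();
        System.out.println("handled " + handled.get()); // prints "handled 100"
    }
}
```

That way the per-thread OS cost is paid once for a handful of threads rather than once per object.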
As always, any advice/articles are greatly appreciated :)