I've got a server that uses boost::asio which I wish to make multi-threaded.

The server can be broken down into several "areas": sockets start in a connect area, then once connected to a client are moved to an authentication area (i.e. login or register), before moving between various other parts of the server depending on what the client is doing.

I don't particularly want to just use a thread pool on a single io_service for all the sockets, since that would require a large number of locks, especially in areas with a lot of interaction with common resources. Instead, I want to give each server component (say, authentication) its own thread.

However, I'm not sure how to do this. I considered giving each component its own io_service, so it could use whatever threads it wanted, but sockets are tied to an io_service, and I'm not sure how to then move a client's socket from one component to another.

+2  A: 

First, I'd advocate considering the multi-process approach instead; it is a very straightforward architecture that is easy to reason about, easy to debug, and easy to scale.

A server design where you can scale horizontally - several instances of the server, where state within each does not need to be shared between servers - is more easily scalable. Any shared state can live in a common store such as a SQL database, Voldemort (persistent), Redis (sets and lists - very cool; I'm really excited about a persistent version), or memcached (unreliable).

You could, for example, have a single listener thread that balances between several server processes, using UNIX sendmsg() to transfer the descriptor. This architecture would be straightforward to migrate to multiple machines with hardware load balancers later.

The area idea in the post is intriguing. It could be that, rather than locking, you could do it all with message queues. The reasoning: disk IO - even with SSDs and such - and the network are the real bottlenecks, so it is not necessary to be as careful with CPU; the latency of messages passing between threads is not such a big deal, and depending on your operating system the threads (or processes) could be scheduled onto different cores in an SMP setup.

But ultimately, once you reach saturation, to scale up the area idea you need faster cores and not more of them. Here's an interesting monologue from one of our hosts about that.

Will
The thing is, unlike a conventional server (which I would write that way), this system revolves around a group of clients (maybe 50 max in extreme cases) all working on a single piece of data (the "area", so to speak). Hence I only want one thread per area, so that those 50 clients never need to take locks; their requests are instead handled one at a time by the single thread.
Fire Lancer
There is very little file IO; most of it will be writing, which is easy to make asynchronous using a message system and a separate "writer" thread.
Fire Lancer
I'm making a server right now, in fact, with a single-threaded async design. At times I need to do heavy lifting, and when I do, I fork/execv to do that and read the results back in through a pipe (which my main event loop handles just as though it were an external connection).
Will
+2  A: 

You can solve this with asio::io_service::strand. Create a thread pool for the io_service as usual. Once you've established a connection with a client, from there on wrap all async calls in an io_service::strand - one strand per client. This essentially guarantees that, from the client's point of view, everything is single-threaded.

caspin