It's not really a question, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server that should use as few threads as possible.

Currently it works this way: I have one listening thread and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the reading/writing/processing on the client sockets.

So my problem is building an efficient worker loop, and I've hit a problem I can't really solve yet. The worker code looks something like this (simplified just to show where the problem is):

List<Socket> readSockets = new List<Socket>();
List<Socket> writeSockets = new List<Socket>();
List<Socket> errorSockets = new List<Socket>();

while( true ){
    // the lists are refilled with the client sockets before each call,
    // since Select removes the sockets that are not ready
    Socket.Select( readSockets, writeSockets, errorSockets, 10 );

    foreach( Socket readSocket in readSockets ){
        // do reading here
    }

    foreach( Socket writeSocket in writeSockets ){
        // do writing here
    }

    // POINT2 and here's the problem I will describe below
}

It all works smoothly except for the 100% CPU utilization, because the while loop keeps cycling over and over again. If my clients do a send->receive->disconnect routine it's not that painful, but if I try to keep connections alive with send->receive->send->receive over and over, it really eats up all the CPU. So my first idea was to put a sleep there: I check whether all sockets have had their data sent, and then call Thread.Sleep at POINT2 for just 10ms. But that 10ms later produces a big delay whenever I want to receive the next command from a client socket. For example, without keep-alive, commands are executed within 10-15ms, and with keep-alive it becomes worse by at least 10ms :(
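Just to illustrate the variant I tried, the sleep at POINT2 looks roughly like this (allDataSent is only a placeholder for whatever check tells me that no socket has pending data):

    // POINT2: pause only when nothing is waiting to be sent,
    // which stops the busy loop but adds up to 10ms of latency
    if( allDataSent ){
        Thread.Sleep( 10 );
    }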

Maybe it's just poor architecture? What can be done so that my processor isn't at 100% utilization and my server still reacts to data appearing on a client socket as soon as possible? Maybe somebody can point me to a good example of a non-blocking server and the architecture it should follow?

+4  A: 

Take a look at the TcpListener class first. It has a BeginAccept method that will not block, and will call one of your functions when someone connects.
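For example, something along these lines (just a sketch, using BeginAcceptSocket since you work with Socket directly; the port number is arbitrary and error handling is omitted):

TcpListener listener;

void StartListening(){
    listener = new TcpListener( IPAddress.Any, 8080 );  // port is just an example
    listener.Start();
    listener.BeginAcceptSocket( OnAccept, null );
}

// runs on a thread pool thread whenever a client connects
void OnAccept( IAsyncResult ar ){
    Socket client = listener.EndAcceptSocket( ar );
    listener.BeginAcceptSocket( OnAccept, null );  // queue the next accept right away
    // hand `client` over to your reading/writing code here
}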

Also take a look at the Socket class and its Begin methods. These work the same way: one of your functions (a callback) is called whenever a certain event fires, and then you get to handle that event. All the Begin methods are asynchronous, so they will not block, and they shouldn't use 100% CPU either. Basically you want BeginReceive for reading and BeginSend for writing, I believe.
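A receive callback might look roughly like this (again just a sketch; the buffer here is a per-connection byte array you manage yourself, and error handling is omitted):

byte[] buffer = new byte[4096];  // one buffer per connection

// start an asynchronous read on a connected client socket
void StartReceiving( Socket client ){
    client.BeginReceive( buffer, 0, buffer.Length, SocketFlags.None, OnReceive, client );
}

void OnReceive( IAsyncResult ar ){
    Socket client = (Socket)ar.AsyncState;
    int bytesRead = client.EndReceive( ar );
    if( bytesRead == 0 ){
        client.Close();  // the remote side closed the connection
        return;
    }
    // process buffer[0..bytesRead) here, then queue the next read
    client.BeginReceive( buffer, 0, buffer.Length, SocketFlags.None, OnReceive, client );
}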

You can find more on Google by searching for these methods and async socket tutorials, for example how to implement a TCP client this way. It works basically the same way for your server.

This way you don't need any infinite looping, it's all event-driven.

IVlad
It would produce a ton of threads, wouldn't it? I'm using a loop so I can handle lots of sockets in one thread, reading/writing with a small buffer.
hoodoos
I suppose it would. Are you sure that's a problem? In any case, can you post the exact code that causes 100% CPU? One idea to avoid it is to have a special "keep-alive" message: the server sends "ping?" to the client **every x seconds** and expects a "pong!" before the next "ping?" is sent; if it doesn't come, assume a dropped connection. This way you don't need the sleep you're talking about, because the keep-alive is only done, say, every 60 seconds, not over and over. You could use an AutoResetEvent, for example, to signal when a keep-alive should be sent; it depends on the code, I think.
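A rough sketch of that idea, assuming you keep your own list of connections and have your own SendPing method (the 60-second interval is just an example):

class ClientConnection {
    public Socket Socket;
    public bool PongReceived;  // set by your receive code when "pong!" arrives
}

// System.Threading.Timer firing every 60 seconds on a thread pool thread
Timer keepAliveTimer = new Timer( _ => {
    foreach( ClientConnection c in clients ){   // `clients` is your own connection list
        if( !c.PongReceived ){
            c.Socket.Close();                   // no pong since the last ping: assume it dropped
        } else {
            c.PongReceived = false;
            SendPing( c.Socket );               // your own method that writes "ping?"
        }
    }
}, null, 60000, 60000 );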
IVlad
+1  A: 

Are you creating a peer-to-peer application or a client-server application? You also have to consider how much data you are putting through the sockets.

Asynchronous BeginSend and BeginReceive are the way to go; you will need to implement the callbacks, but it's fast once you get it right.

You probably don't want to set your Send and Receive timeouts too high either, but there should be a timeout, so that if nothing is received after a certain time, the call comes out of the block and you can handle it there.
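For example (a sketch; the values are arbitrary, and these properties only affect the blocking Send/Receive calls, not the Begin* ones):

// assumes `client` is a connected Socket and `buffer` a byte[] you own
client.ReceiveTimeout = 5000;  // milliseconds; 0 means wait forever
client.SendTimeout = 5000;

try {
    int bytesRead = client.Receive( buffer );  // blocking read, now bounded by the timeout
}
catch( SocketException ){
    // nothing was received in time: handle it here (retry, ping, or drop the connection)
}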

jwee
I'm creating a server application currently. I finally implemented an abstract worker that runs an abstract protocol chain with phases, and I actually managed to make it work nicely. Now I need to make it handle keep-alive connections without eating so much CPU; currently it handles about 250 concurrent connections. I think I will post the code on Google so you can check it out. Thanks. BeginReceive/BeginSend won't work in my case, since I don't want my application to be thread hungry: try handling 100 concurrent clients on a 2-core CPU and you'll see that it eats lots of CPU on context switching.
hoodoos