I have a server with several clients C1...Cn, each of which has an established TCP connection to the server. There are fewer than 10,000 clients.

The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.

The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.

I do not want to block threads from sending requests to different clients at the same time.

E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.

I was thinking of using a simple lock statement on the socket:

lock (c3Socket)
{
    // Send request to C3
    // Get response from C3
}

I am using asynchronous sockets, so I may have to use Monitor instead:

Monitor.Enter(c3Socket); // Before calling .BeginReceive()

And

Monitor.Exit(c3Socket); // In .EndReceive

I am worried about something going wrong and the monitor never being released, which would block all access to that client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets for which it cannot acquire the monitor.

Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
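If you do go synchronous, the whole exchange can be guarded by a single lock per client. A minimal sketch of that approach, with illustrative names (ClientConnection and SendRequest are not from the original post); note it locks a dedicated private object rather than the socket itself:

```csharp
using System;
using System.Net.Sockets;

// Hypothetical sketch of the synchronous approach: one private lock
// object per client serializes the request/response exchange for
// that client only.
public class ClientConnection
{
    private readonly object _gate = new object();
    private readonly Socket _socket;

    public ClientConnection(Socket socket)
    {
        _socket = socket;
    }

    public byte[] SendRequest(byte[] request)
    {
        // Only one thread at a time runs the exchange for this client;
        // threads talking to other clients each take a different gate,
        // so they are not blocked.
        lock (_gate)
        {
            _socket.Send(request);              // blocking send
            var buffer = new byte[4096];
            int read = _socket.Receive(buffer); // blocking receive
            var response = new byte[read];
            Array.Copy(buffer, response, read);
            return response;
        }
    }
}
```

Locking a private object instead of the socket avoids surprises if any other code ever happens to lock the same socket instance.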

Am I overlooking anything here? Any input appreciated.

+1  A: 

My answer here would be a state machine per socket. The states would be free and busy:

  • If the socket is free, the sender thread marks it busy, then sends to the client and waits for the response.
  • You might want to set up a timeout on that wait, just in case a client gets stuck somehow.
  • If the state is busy, the thread sleeps, waiting for a signal.
  • When that client-related timeout expires, close the socket; the client is dead.
  • When a response is successfully received/parsed, mark the socket free again and signal/wake up the waiting threads.
  • Only lock around socket state inquiry and manipulation, not the actual network IO. That means a lock per socket, plus some sort of wait primitive like a condition variable (sorry, I don't remember what's really available in .NET).
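A rough sketch of this free/busy state machine in C#, using Monitor.Wait/Pulse as the wait primitive (this is .NET's built-in condition-variable mechanism); the class and member names are illustrative, not from the answer:

```csharp
using System;
using System.Threading;

// Sketch of a per-socket free/busy state machine. The lock is held
// only while inspecting or changing the state, never across the
// network IO itself.
public class SocketState
{
    private readonly object _gate = new object();
    private bool _busy;

    // Blocks until the socket is free, then marks it busy. Returns
    // false if the socket did not become free within the timeout,
    // which the caller can treat as a stuck/dead client.
    public bool TryAcquire(TimeSpan timeout)
    {
        lock (_gate)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (_busy)
            {
                TimeSpan remaining = deadline - DateTime.UtcNow;
                if (remaining <= TimeSpan.Zero ||
                    !Monitor.Wait(_gate, remaining))
                {
                    return false; // timed out waiting for the socket
                }
            }
            _busy = true;
            return true;
        }
    }

    // Called once the response has been received/parsed (or the
    // socket closed): marks the socket free and wakes a waiter.
    public void Release()
    {
        lock (_gate)
        {
            _busy = false;
            Monitor.Pulse(_gate);
        }
    }
}
```

Because Release can be called from whichever thread the completion callback happens to run on, this avoids the thread-affinity problem of holding a Monitor across Begin/End calls.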

Hope this helps.

Nikolai N Fetissov
I am encapsulating the socket in an object already (which I've called SocketHolder), so that seems like an obvious spot to put the free/busy status. Thanks for the idea.
Lars A. Brekken
+1  A: 

You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know which thread your operations will run on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other operation may call Enter and succeed even though that client is "in use", just because it happened to be scheduled on the same thread that originally called Enter (Monitor locks are re-entrant per thread).

I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use or not. You would of course need locking to update this shared state.
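To make the thread-affinity problem concrete, here is a minimal, self-contained illustration (not from the original answer) of Monitor refusing to be exited from a thread other than the one that entered it:

```csharp
using System;
using System.Threading;

class MonitorAffinityDemo
{
    static void Main()
    {
        object gate = new object();
        Monitor.Enter(gate); // e.g. taken before BeginSend, on this thread

        var other = new Thread(() =>
        {
            try
            {
                // e.g. attempted inside the EndReceive callback, which
                // may run on a different (thread-pool) thread
                Monitor.Exit(gate);
            }
            catch (SynchronizationLockException)
            {
                Console.WriteLine("Exit from the wrong thread throws");
            }
        });
        other.Start();
        other.Join();

        Monitor.Exit(gate); // succeeds: same thread that called Enter
    }
}
```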

Steve Strong
So you're saying that the thread that EndReceive returns on can be different from the one that called BeginSend? I didn't even think of that, but that is a very good point.
Lars A. Brekken
It certainly can come back in on a different thread; your only option would be to somehow marshal the call back onto the thread that called BeginSend. Doing so would of course mean keeping the BeginSend thread hanging around, at which point you've kind of lost the point of going async :)
Steve Strong
Definitely seems like a good idea to store it along with the socket. Thanks!
Lars A. Brekken