So I've got a multithreaded server app that uses non-blocking sockets to listen for connections.

When a client connects, it immediately sends a request and awaits a response. The server creates a new thread to handle the new connection, ensures that finishConnect() is called, and registers the new channel with the selector.
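
Roughly, the per-connection setup looks like this (a minimal sketch, not my actual code; the class and method names are just illustrative):

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    class ConnectionRegistrar {
        // Finish any pending connect, then register the channel for read events.
        static SelectionKey registerForRead(Selector selector, SocketChannel channel)
                throws IOException {
            channel.configureBlocking(false); // register() requires non-blocking mode
            channel.finishConnect();          // returns true immediately if already connected
            SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
            // The interest set on the returned key does include OP_READ -
            // this is the check I mention further down.
            boolean hasRead = (key.interestOps() & SelectionKey.OP_READ) != 0;
            System.out.println("OP_READ interest registered: " + hasRead);
            return key;
        }
    }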

Now, this is the bit that gets me: the selector correctly flags the key as isReadable() when the client has sent something - most of the time.

On occasion, it simply refuses to indicate isReadable(), even though I'm absolutely sure the client has in fact sent something: the client's write method reports a positive number of bytes written, and the channel is flushed.
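
For reference, the client side does essentially this (a sketch; the host, port, and payload are placeholders, since the real values don't matter here):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    class ClientSketch {
        public static void main(String[] args) throws Exception {
            // open() with an address connects in blocking mode
            try (SocketChannel client =
                    SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
                ByteBuffer request = ByteBuffer.wrap("REQUEST\n".getBytes());
                while (request.hasRemaining()) {
                    int written = client.write(request); // reports bytes written
                    System.out.println("wrote " + written + " bytes");
                }
                // SocketChannel has no explicit flush; write() hands the bytes
                // to the OS, so "flushed" here means the buffer fully drained.
                ByteBuffer response = ByteBuffer.allocate(1024);
                client.read(response);                   // await the response
            }
        }
    }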

Seems to me like a race condition is occurring somewhere, but I can't for the life of me work out how, since the call to register() correctly returns the key's interestOps set for OP_READ events.

Note: adding a sleep between the initial connection and performing I/O over the socket mitigates the issue to a high degree, but it's a kludge (at best) and still not 100% effective; e.g. the problem seems to occur more frequently during periods of high load.
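
Concretely, the workaround just sits between the connect and the first write (sketch; the duration is arbitrary):

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    class SleepKludge {
        public static void main(String[] args) throws Exception {
            SocketChannel client =
                    SocketChannel.open(new InetSocketAddress("localhost", 9000));
            Thread.sleep(100); // kludge: give the server thread time to register
            // ... then write the request as in the sketch above ...
            client.close();
        }
    }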

Any thoughts? Return codes/status that I'm not checking?

Much appreciated.