+7  A: 

This is what's called the "Nagle delay". The algorithm waits in the TCP stack for more data to arrive before actually sending anything to the network, until some timeout expires. So you should either modify the Nagle timeout (http://fourier.su/index.php?topic=249.0) or disable the Nagle delay altogether (http://www.unixguide.net/network/socketfaq/2.16.shtml), so data will be sent per send call.
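Roughly, disabling it looks like this. A minimal sketch, assuming a POSIX environment and an already-created TCP socket descriptor `sock` (the name is just for illustration):

    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>
    #include <stdio.h>

    /* Disable Nagle's algorithm so each send() is pushed to the
     * network right away instead of being coalesced. */
    static int disable_nagle(int sock)
    {
        int flag = 1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                       &flag, sizeof(flag)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }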

alemjerus
I am always fascinated by the knowledge of people here. Thank you very much. I will try that in a while :)
Klaus
+1 for pointing to the concept involved.
N 1.1
See my answer above.
Klaus
+3  A: 

You can use the TCP_NODELAY socket option to force the data to be sent immediately.

Adil
+4  A: 

As others have already replied, the delays you see are due to TCP's built-in Nagle algorithm, which can be disabled by setting the TCP_NODELAY socket option.

I would also like to point out that your socket communication is very inefficient due to the constant connects and disconnects. Every time the client connects to the server, a three-way handshake takes place, and connection tear-down requires four packets to complete. Basically, you lose most of the benefits of TCP while incurring all of its drawbacks.

It would be much more efficient for each client to maintain a persistent connection to the server. select(2), or even better, epoll(7) on Linux or kqueue(2) on FreeBSD and Mac OS, are very convenient mechanisms for handling I/O on multiple sockets. A rough sketch of the select approach follows.
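This sketch assumes `listen_fd` is already a bound, listening TCP socket; epoll and kqueue follow the same pattern with different calls:

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Serve many persistent client connections with a single select() loop. */
    static void serve(int listen_fd)
    {
        fd_set master, readable;
        int maxfd = listen_fd;

        FD_ZERO(&master);
        FD_SET(listen_fd, &master);

        for (;;) {
            readable = master;
            if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
                break;

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;

                if (fd == listen_fd) {
                    /* New client: accept it and keep the connection open. */
                    int client = accept(listen_fd, NULL, NULL);
                    if (client >= 0) {
                        FD_SET(client, &master);
                        if (client > maxfd)
                            maxfd = client;
                    }
                } else {
                    /* Existing client sent data (or closed the connection). */
                    char buf[4096];
                    ssize_t n = recv(fd, buf, sizeof(buf), 0);
                    if (n <= 0) {          /* 0 = orderly close, <0 = error */
                        close(fd);
                        FD_CLR(fd, &master);
                    } else {
                        /* ... process the n bytes received here ... */
                    }
                }
            }
        }
    }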

Nikolai N Fetissov
Not just the handshake and teardowns, but every time you form a new connection you get to go through slow-start *again*.
ephemient
Yep, that too. Thanks.
Nikolai N Fetissov
See my answer above.
Klaus
@Klaus, you need to make those changes on both sides, not just the client. As it is, the server closes each connection after the first read. Also, TCP does not guarantee that a single write to the socket corresponds to a single read on the other side. Both are usually done in a loop, and your application protocol then needs to validate the bytes received.
Nikolai N Fetissov
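To illustrate what "done in a loop" means on the sending side, here is a sketch assuming a connected socket `sock`; a matching receive loop would keep calling recv() until the application protocol says the message is complete:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* send() may transmit fewer bytes than requested, so keep calling
     * it until the whole buffer has gone out (or an error occurs). */
    static ssize_t send_all(int sock, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t left = len;

        while (left > 0) {
            ssize_t n = send(sock, p, left, 0);
            if (n < 0)
                return -1;     /* caller checks errno */
            p    += n;
            left -= (size_t)n;
        }
        return (ssize_t)len;
    }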