views:

266

answers:

3

I have a client/server connection over a TCP socket, with the server writing to the client as fast as it can.

Looking over my network activity, the production client receives data at around 2.5 Mb/s.

A new lightweight client that I wrote just to read and benchmark the rate achieves about 5.0 Mb/s (which is probably around the maximum speed the server can transmit).

I was wondering what governs the rates here, since the client sends no data to the server to tell it about any rate limits.

+5  A: 

In TCP it is the client. If the server's TCP window is full, it has to wait until more ACKs arrive from the client. This is hidden from you inside the TCP stack, but because TCP guarantees delivery, the server cannot send data faster than the client is processing it.
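This is easy to see from the sending side. Below is a minimal Python sketch (loopback connection, deliberately small buffers; all names are illustrative) where the client never reads: once the client's receive window and the server's send buffer fill, further writes stop.

```python
import socket

# Server side: a listening socket on an ephemeral loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Client side: request a small receive buffer (before connect) so the
# advertised TCP window stays small and the effect shows up quickly.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
cli.connect(srv.getsockname())
conn, _ = srv.accept()
conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
conn.setblocking(False)  # a stalled write raises instead of hanging forever

sent = 0
chunk = b"x" * 4096
try:
    while True:
        sent += conn.send(chunk)  # the client never calls recv()
except BlockingIOError:
    pass  # window + send buffer are full: flow control has kicked in

print(f"send() stalled after {sent} bytes")  # finite, despite the infinite loop
conn.close(); cli.close(); srv.close()
```

With a blocking socket the same write would simply hang until the client reads; non-blocking mode is used here only to make the stall observable.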

Xeor
+2  A: 

TCP has flow control and it happens automatically. Read about it at http://en.wikipedia.org/wiki/Transmission%5FControl%5FProtocol#Flow%5Fcontrol

When the pipe fills up due to flow control, the server's socket write operations won't complete until the flow control is relieved.
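A hedged sketch of that behaviour, in Python (illustrative names, loopback connection, small buffers): a `sendall()` of more data than the pipe holds stalls, and only completes once the peer starts draining its receive buffer.

```python
import socket
import threading
import time

# Build a loopback TCP pair with deliberately small buffers.
lsn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsn.bind(("127.0.0.1", 0))
lsn.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
client.connect(lsn.getsockname())
server, _ = lsn.accept()
server.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)

stamps = {}

def send_side():
    server.sendall(b"x" * (1 << 20))  # 1 MiB: far more than the pipe holds
    stamps["send_done"] = time.monotonic()

t = threading.Thread(target=send_side)
t.start()
time.sleep(0.5)  # by now sendall() is stalled on a full pipe
stamps["read_start"] = time.monotonic()
total = 0
while total < (1 << 20):
    total += len(client.recv(65536))  # draining relieves flow control
t.join()
print(stamps["send_done"] > stamps["read_start"])  # True
```

The timestamps confirm the write could only finish after the reader started pulling data out: the sender's progress is clocked by the receiver.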

Remus Rusanu
A: 

The server is writing data at 5.0 Mb/s, but if your client is the bottleneck, the server has to wait until the data in its send buffer has been transmitted to the client, or until enough space is freed to accept more data.

Since your lightweight client was able to receive at 5.0 Mb/s, it's the post-receive operations in your production client that you should check. If you receive data and then process it before reading more, that processing is the likely bottleneck.

It is better to receive data asynchronously: as soon as one receive completes, ask the client socket to start receiving again, and process the received data on a separate thread-pool thread. That way your client is always ready to receive incoming data, and the server can send at full speed.
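One sketch of that pattern, in Python with plain threads and a queue (names and buffer sizes are illustrative, not from the question): one thread does nothing but `recv()` so the kernel buffer is always being drained, while processing happens on a worker thread fed through the queue.

```python
import queue
import socket
import threading

def reader(sock, work):
    # Drain the socket as fast as possible; never process inline.
    while True:
        data = sock.recv(65536)
        if not data:         # peer closed the connection
            work.put(None)   # sentinel: tell the worker to stop
            return
        work.put(data)       # hand off and immediately recv() again

def worker(work, results):
    while True:
        data = work.get()
        if data is None:
            return
        results.append(len(data))  # stand-in for real (slow) processing

# Demo over a local socket pair: the sender runs at full speed because
# the reader thread keeps the receive buffer empty.
a, b = socket.socketpair()
work, results = queue.Queue(), []
w = threading.Thread(target=worker, args=(work, results)); w.start()
r = threading.Thread(target=reader, args=(b, work)); r.start()
a.sendall(b"x" * 1_000_000)
a.close()
r.join(); w.join()
print(sum(results))  # 1000000
```

In a real client you would replace the `results.append` line with the actual per-message work, and typically use a thread pool (e.g. `concurrent.futures.ThreadPoolExecutor`) instead of a single worker.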

cornerback84