Underlying the TCP transport stack are a number of buffer limits, some documented by their authors. On Windows XP SP3 I think I've run into one of these, and I can't figure out why.
I have implemented a simple client to get data from a server (written by a colleague in Java). The protocol is to write the length of the data in four bytes (in network byte order), followed by the data itself. The server writes the data to the TCP stream in 1024-byte blocks. The client correctly receives the length of the data, allocates a buffer, and calls recv in a loop until it has all of the data:
unsigned int TCP_BlockSize = 4096;
unsigned int len;
int result;
...code to request len...
unsigned char *buf = new unsigned char[len];
if( len > TCP_BlockSize )
{
    Uint32 currentLen = 0;
    result = 0;
    while( currentLen < len && result >= 0 )
    {
        result = recv( sock, (char *)(buf + currentLen), TCP_BlockSize, 0 );
        if( result > 0 )
        {
            currentLen = currentLen + result;
        }
        else
        {
            break;  // 0 = connection closed, -1 = error
        }
    }
}
If I set TCP_BlockSize to 4095 or below, all is well and I can receive multi-megabyte transmissions. With 4096-byte receive blocks, the last request for the remaining data (where len - currentLen < TCP_BlockSize) always fails with a return value of -1 and errno == 0. I tried a few experiments, such as trimming the size of the data transmitted, and somewhere between 815,054 and 834,246 bytes everything goes boom for 4096-byte receive blocks.
One other detail: the server closes the socket after sending the last byte. Which raises the question: why wouldn't the remaining data be returned? It feels like a defect for recv to return -1 before the stream is empty and closed; a -1 received while data is still pending is ambiguous, whereas -1 only on an empty, closed stream would not be.
So how do I get the last of data?