Hi all; below is a code fragment I am having trouble with in socket programming. After the select call, if I do not put a Sleep on line 9, then on Windows XP the recv on line 11 returns only 1 byte (even though the server sends 4 bytes as an integer), and when I check xmlSize it is set to 0. Because iResult is 1, execution continues, and the second recv on line 15 is called with xmlSize=0; iResult then comes back 0, and because iResult=0 the connection is treated as closed.

On Windows 7 this did not happen: the program happily read 4 bytes and continued normal execution. On XP, however, I put in a Sleep (which I just made up) and it worked, but why??

What is the flaw in this code?

1   while(is_running())
2   {
3       FD_ZERO(&readfds);
4       FD_SET(server_socket, &readfds);
5       iResult = select(server_socket+1, &readfds, NULL, NULL, &tv);
6       if  (!(iResult != SOCKET_ERROR && FD_ISSET(server_socket, &readfds) )) {
7           continue;
8       }
9       Sleep(500); // This Sleep is not required on Windows 7, but is required on
10                  // XP but WHY?
11      iResult = recv(server_socket, (char *)&xmlSize, sizeof(xmlSize), 0);
12      xmlSize = htonl(xmlSize);
13      if ( iResult > 0 ){
13          printf("Bytes received: %d, XML Size:%d", iResult, xmlSize);
14          
15          iResult = recv(server_socket, xml, xmlSize, 0);
16          if ( iResult > 0 ){
17              xml[xmlSize] = '\0';
18              printf("Bytes received: %d", iResult);              
19              operation_connection(xml);
20          }
21          else if ( iResult == 0 ){
22              printf(LL_INTERR, CLOG("Connection closed"));
23              break;
24          }
25          else{
26              printf("recv failed with error: %d", WSAGetLastError());
27              break;
28          }
29      }
30      else if ( iResult == 0 ){
31          printf(LL_INTERR, CLOG("Connection closed"));   
32          break;
33      }
34      else{
35          printf("recv failed with error: %d", WSAGetLastError());
36          break;
37      }
38  }
+7  A: 

If this is a TCP socket, you shouldn't care. The socket delivers a stream; it doesn't correspond in any way to the size of the original write()s on the other end.

It could deliver a megabyte as one million 1-byte read()s, or as a single 1MB one, or any combination in between.

If you depend on the size of the delivered data "chunks" for a TCP connection, you're doing it wrong.
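
In particular, a recv() asking for 4 bytes may legitimately return with only 1 of them, which is exactly what you are seeing on XP; correct code keeps calling recv() until it has everything it asked for. A minimal sketch of such a helper, assuming the Winsock setup from the question (the name recv_all is made up for illustration):

    #include <winsock2.h>

    /* Keep calling recv() until exactly 'len' bytes have arrived.
       Returns 1 on success, 0 if the peer closed the connection,
       -1 on a socket error (check WSAGetLastError()). */
    static int recv_all(SOCKET s, char *buf, int len)
    {
        int total = 0;
        while (total < len) {
            int n = recv(s, buf + total, len - total, 0);
            if (n == 0)  return 0;    /* connection closed by peer */
            if (n < 0)   return -1;   /* socket error */
            total += n;               /* partial read: keep going */
        }
        return 1;
    }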

If you need some kind of message separator, then explicitly design one into your protocol, the way carriage return + line feed is used by e.g. HTTP. If your messages can themselves contain those particular bytes, so you can't use them to separate messages, there are two classic approaches (the first is sketched after this list):

  • Use some other byte sequence, perhaps ASCII 0x1E, the "record separator".
  • Escape the CR+LF when they're contained in the message, to make "plain" ones work as separators. This would be the better solution if your protocol "wants" to be text.
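
A minimal sketch of the first approach, framing on the 0x1E record separator (the function name and the byte-at-a-time loop are illustrative choices only):

    /* Accumulate bytes until the 0x1E record separator arrives, then
       hand back a NUL-terminated message.  Returns the message length,
       0 if the peer closed the connection, -1 on error or overflow. */
    static int recv_until_rs(SOCKET s, char *msg, int maxlen)
    {
        int used = 0;
        while (used < maxlen - 1) {
            char c;
            int n = recv(s, &c, 1, 0);
            if (n == 0)  return 0;     /* connection closed by peer */
            if (n < 0)   return -1;    /* socket error */
            if (c == 0x1E) {           /* separator: message is complete */
                msg[used] = '\0';
                return used;
            }
            msg[used++] = c;
        }
        return -1;                     /* message larger than the buffer */
    }

In real code you would recv() into a larger buffer and scan it rather than read one byte at a time, but the single-byte loop keeps the framing logic obvious.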

Another approach is to explicitly encode the length of each message in the stream itself, preferably as a prefix so you know how much data to expect.
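
Applied to the code in the question, that means reading the whole 4-byte size field before trusting it, and then reading exactly that many payload bytes. A sketch, reusing the hypothetical recv_all() from above, converting with ntohl() (the receive-side counterpart of htonl()), and assuming the asker's xml buffer, a made-up XML_BUF_SIZE constant for its capacity, and operation_connection():

    /* Receive one length-prefixed message and pass it on.
       Returns 1 on success, 0 if the peer closed, -1 on error. */
    static int recv_one_message(SOCKET s, char *xml)
    {
        unsigned long netSize;
        int xmlSize, rc;

        rc = recv_all(s, (char *)&netSize, sizeof(netSize));
        if (rc != 1) return rc;                 /* closed or error */

        xmlSize = (int)ntohl(netSize);          /* network -> host byte order */
        if (xmlSize <= 0 || xmlSize >= XML_BUF_SIZE)
            return -1;                          /* reject implausible sizes */

        rc = recv_all(s, xml, xmlSize);
        if (rc != 1) return rc;                 /* closed or error */

        xml[xmlSize] = '\0';
        operation_connection(xml);
        return 1;
    }

With framing like this, the Sleep(500) workaround should no longer be needed on either XP or Windows 7, because the code no longer assumes the 4-byte size arrives in a single recv().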

unwind
Yes, indeed I depend on chunks, but a separator like CR or LF is not an option for me because my message contains CR LF. So can you suggest another approach?
whoi
Using a header containing a magic number and a data size is a much-used technique when a separator can't be used. The header can contain other fields as well, like a message type, a message id, ... (a possible layout is sketched below).
stefaanv
@stefaanv: True, I edited. Thanks!
unwind
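
A small sketch of what such a header might look like on the wire (the field names, the magic value, and the packing pragma are illustrative assumptions; all fields would be sent in network byte order):

    #pragma pack(push, 1)
    typedef struct {
        unsigned long  magic;     /* fixed constant, e.g. 0x4D534731, used to validate/resync */
        unsigned short msg_type;  /* what kind of message follows */
        unsigned long  msg_id;    /* correlates requests and replies */
        unsigned long  data_size; /* number of payload bytes following the header */
    } msg_header;
    #pragma pack(pop)

The receiver reads sizeof(msg_header) bytes first (again with a recv_all-style loop), checks the magic, and then reads exactly data_size payload bytes.
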
+3  A: 

See this other SO question for an answer and code example:

Read from socket: Is it guaranteed to at least get x bytes?

Robert S. Barnes
A: 

You need to use message framing to delineate your messages.

Here is an example of a TCP client and server implemented using message framing. Although it is long-winded, you can get the gist of the concept, especially in the Server() method.

feroze