Hi,

I've got a general question about the recv function of Winsock. I'm writing a program with a client/server architecture where the client sends a camera image to the server and the server sends another image back to the client. The client sends several images until the program is closed, and the server responds to every received image with another image.

Client (send) ----> Server (receive) ----> Server (send) ----> Client (receive)
   ^________________________________LOOP________________________________|

My problem is the server's receive function. If I call recv several times with a 512-byte buffer, how do I know that I have received the whole image? The examples I found just wait until the connection is closed (recv returns 0), but I want to receive several images without closing the connection. And if I keep receiving until there are no bytes left in the buffer, recv will block the thread until new bytes arrive (which will never happen, because the server first wants to send its own image back to the client before receiving the next one).

So is there any way to tell the server that the whole image has been received, so that it can send its own image back to the client without waiting for more bytes or a closed connection?

I hope my question is clear

Thanks Ben

+3  A: 

Develop a protocol with a header that includes the size n of the data the receiver has to expect. The receiver reads the header plus the n bytes indicated by the header from the TCP stream; after that it can expect the next header. If it doesn't receive those n bytes, the transmission is incomplete.

In short, you could define a message in your protocol as follows (see the sketch below the list):

Message:

  • data length (32 bits, unsigned int)
  • data content
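
A minimal sketch of the sending side of such a framing, assuming Winsock and a std::vector<char> holding the image (sendAll and sendImage are illustrative names, not part of any existing API):

#include <winsock2.h>
#include <cstdint>
#include <vector>

// Keep calling send() until every byte has gone out (send may write less than asked).
bool sendAll(SOCKET s, const char* data, int len)
{
    while (len > 0) {
        int sent = send(s, data, len, 0);
        if (sent <= 0) return false;            // error or connection closed
        data += sent;
        len  -= sent;
    }
    return true;
}

// One message = 4-byte length prefix (network byte order) followed by the image bytes.
bool sendImage(SOCKET s, const std::vector<char>& image)
{
    uint32_t lenNet = htonl(static_cast<uint32_t>(image.size()));
    return sendAll(s, reinterpret_cast<const char*>(&lenNet), sizeof(lenNet))
        && sendAll(s, image.data(), static_cast<int>(image.size()));
}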
Robert
You could also use escape sequences - but using a header is going to be more efficient.
sje397
Does a simple struct containing the size and the buffer work for that problem, or do I need several send/recv calls to transfer one image?
ben
I'd even go with this: a header is easier to implement than iterating over the data content to process/strip escape sequences.
Robert
Make sure that any binary data going across the wire is in network byte order (look into htonl, ntohl, etc.). This is critical if your software will run on multiple platforms, but it's a good idea anyway even if it will only run on Windows.
Ferruccio
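A tiny sketch of the conversion Ferruccio mentions, for a 32-bit length field (the variable names are just for illustration):

uint32_t lenHost = imageSize;        // length in host byte order
uint32_t lenNet  = htonl(lenHost);   // convert before putting it on the wire
// ... on the receiving side ...
uint32_t lenBack = ntohl(lenNet);    // convert back to host byte order after recv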
@ben: take a look at the code example from stijn (below this answer) - it provides a very good basis for your further research.
Robert
@all: Thanks for your answers. They are very helpful and I'm looking forward to solving my problem now :)
ben
@Ferruccio: personally I'd define the protocol to be little-endian and let those insanely few non-LE machines do the conversion, rather than penalizing the majority.
snemarch
@snemarch: a very plausible reason why someone should use LE today when designing protocols.
Robert
@robert: indeed - you still might want to write your source code to *handle* BE systems, but imho it makes a lot of sense to keep the protocol LE. Google's protobuf uses LE... :)
snemarch
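A minimal sketch of the little-endian alternative snemarch describes: encoding the length byte by byte, so the result does not depend on the host's byte order (putLe32/getLe32 are illustrative names):

#include <cstdint>

// Write a 32-bit value as little-endian, independent of host byte order.
void putLe32(unsigned char* out, uint32_t v)
{
    out[0] = static_cast<unsigned char>(v);
    out[1] = static_cast<unsigned char>(v >> 8);
    out[2] = static_cast<unsigned char>(v >> 16);
    out[3] = static_cast<unsigned char>(v >> 24);
}

// Read it back the same way on any platform.
uint32_t getLe32(const unsigned char* in)
{
    return  static_cast<uint32_t>(in[0])
         | (static_cast<uint32_t>(in[1]) << 8)
         | (static_cast<uint32_t>(in[2]) << 16)
         | (static_cast<uint32_t>(in[3]) << 24);
}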
+1  A: 

wrap your image data in a packet consisting of

  • a fixed-size header telling how many bytes will follow
  • the actual image data

in your receive code you first read the header part, then read the appropriate amount of data.

example code with error checking omitted; also, you must always loop the actual recv call, since the data might arrive in pieces!

unsigned bytesExpected;   // length prefix read from the stream
Image imgData;            // application-defined image buffer
while( !LoopMustStop )
{
  // Read() must itself loop over recv() until exactly that many bytes have arrived
  Read( sizeof( unsigned ), bytesExpected );   // 1. fixed-size header
  Read( bytesExpected, imgData );              // 2. payload of exactly that size
  Process( bytesExpected, imgData );           // hand the complete image to the application
}
stijn
Thanks for the quick example, but you should also keep byte order and the header size in mind (`sizeof(unsigned)` can differ between compilers). Protocol design can take some time.
Robert
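
Putting the pieces together, here is a minimal sketch of the receive side described above: a fixed-width length prefix converted with ntohl, and a helper that loops over recv until the requested number of bytes has arrived. The connected socket s and the names recvAll/recvImage are assumptions for illustration, not code from the thread:

#include <winsock2.h>
#include <cstdint>
#include <vector>

// Keep calling recv() until exactly 'len' bytes have been read (or the peer closes).
bool recvAll(SOCKET s, char* buf, int len)
{
    while (len > 0) {
        int got = recv(s, buf, len, 0);
        if (got <= 0) return false;    // 0 = connection closed, <0 = error
        buf += got;
        len -= got;
    }
    return true;
}

// Receive one length-prefixed image: 4-byte length in network byte order, then the payload.
bool recvImage(SOCKET s, std::vector<char>& image)
{
    uint32_t lenNet;
    if (!recvAll(s, reinterpret_cast<char*>(&lenNet), sizeof(lenNet)))
        return false;
    uint32_t len = ntohl(lenNet);      // convert the length back to host byte order
    image.resize(len);
    return recvAll(s, image.data(), static_cast<int>(len));
}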