I am going to write a TCP server. The client sends me XML messages, and I am wondering whether the situation below can happen and how to avoid it:

1) the client sends <cmd ...></cmd>
2) the server is busy doing something
3) the client sends another <cmd ...></cmd>
4) the server does a recv() and puts the data into its buffer

Will the buffer be filled with <cmd ...></cmd><cmd ...></cmd>, or even worse <cmd ...></cmd><cmd ... if my buffer is not big enough?

What I want is for the TCP stack to divide the messages into the same pieces in which the client sent them.

Is it doable?

+4  A: 

This is impossible to guarantee at the TCP level, since it only knows about streams.

Depending on the XML parser you're using, you should be able to feed it the stream and have it tell you when it has a complete object, leaving the second <cmd ... in its buffer until that element is closed as well.
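
For example, with Python's incremental (pull) parser -- a rough sketch, assuming a Python server and that the commands can be treated as children of a synthetic wrapper root, since a bare sequence of <cmd> elements is not by itself one well-formed document:

    import xml.etree.ElementTree as ET

    parser = ET.XMLPullParser(events=("end",))
    parser.feed(b"<stream>")                 # synthetic wrapper root, never closed

    def on_data(chunk):
        """Feed one recv() chunk and return every <cmd> completed so far."""
        parser.feed(chunk)
        complete = []
        for _event, elem in parser.read_events():
            if elem.tag == "cmd":
                complete.append(elem)        # a fully parsed <cmd ...>...</cmd>
        return complete

Anything still unparsed (a half-received <cmd ...) simply stays inside the parser until later chunks complete it.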

Knio
What are you saying? Since each client has a separate socket, the scenario described cannot happen.
S.Lott
I may have misunderstood the question, but I assumed the scenario is a single client, asking whether one call to send() maps directly to one call to recv().
Knio
Yes, I think that's exactly what the OP was asking.
caf
A: 

You often write clients in the plural form: are there several clients connecting to your server? In this case, each client should be using its own TCP stream, and the issue you are describing should never occur.

If the various commands are sent from a single client, then you should write your client code so that it waits for the answer to a command before issuing the next one.
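
A rough sketch of that pattern in Python, assuming -- hypothetically -- that the server terminates each answer with a newline; adapt the reply handling to whatever your protocol actually defines:

    import socket

    def run_commands(host, port, commands):
        with socket.create_connection((host, port)) as sock:
            reply = sock.makefile("rb")
            for cmd in commands:
                sock.sendall(cmd)          # send one <cmd ...></cmd>
                answer = reply.readline()  # wait for the answer before the next send
                print(answer)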

Didier Trosset
That's rubbish ;) You just need to make sure that your server handles message framing correctly. TCP is a stream of bytes; each read can return anywhere between 1 byte and the total number of bytes pending, and all TCP code should handle this. If you have a message-based rather than stream-based protocol on top of TCP, then you need to implement some form of message framing so that you can split the incoming stream into messages that you understand. There's absolutely no need to restrict the protocol to a strict single message/response sequence. Even if you DO restrict your client you MUST deal with incomplete 'messages'.
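
A sketch of that framing in Python: accumulate whatever each recv() returns and slice complete messages out of the buffer. The delimiter used below (the literal closing tag </cmd>) is only a stand-in; a real protocol would use length prefixes or a proper incremental parser.

    buffer = b""

    def extract_messages(chunk):
        """Append one recv() chunk and return every complete message so far."""
        global buffer
        buffer += chunk
        messages = []
        while True:
            end = buffer.find(b"</cmd>")
            if end == -1:
                break                        # only a partial message remains
            end += len(b"</cmd>")
            messages.append(buffer[:end])    # one complete <cmd ...></cmd>
            buffer = buffer[end:]
        return messages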
Len Holgate
I admit the second paragraph is an oversimplification, but the first paragraph answered an important part of the OP's question.
Didier Trosset
"The second paragraph is an over simplification"? How so? It's a common protocol. That's the way -- for example -- HTTP works.
S.Lott
The simplification is in having the client wait for the answer to a request before issuing the next one. One can think of other protocols where a client issues request after request and gets the answers asynchronously. That would greatly reduce latency when the client needs to send lots of requests.
Didier Trosset
@Didier Trosset: Why make up a new protocol? The question seemed to be simple. Hence the second paragraph can't be an "oversimplification". It **was** simple. Why invent complexity? I thought your answer was quite good without all the waffling about "oversimplification".
S.Lott
+3  A: 

You need a higher-level protocol to delineate message boundaries as you desire, and there are plenty to choose from, including one that you invent yourself.
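
One concrete (invented) example is length-prefixed framing: every message is preceded by a 4-byte big-endian length, so the receiver always knows where one XML document ends and the next begins, no matter how recv() splits the bytes. A minimal Python sketch:

    import struct

    def send_msg(sock, payload):
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exactly(sock, n):
        """Loop until exactly n bytes arrive; a single recv() may return fewer."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf

    def recv_msg(sock):
        (length,) = struct.unpack("!I", recv_exactly(sock, 4))
        return recv_exactly(sock, length)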

msw
Actually, the problem is not a data serialization format problem, since it appears that the OP is already using something XML-like (XML is one of the proposed formats from that link); the problem is a data transport problem: how to know when a document is complete. HTTP is one possible solution (not included in your link; in fact, most formats in that link assume transport over HTTP, not raw TCP).
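
A rough sketch of that HTTP-style framing in Python (illustrative only; a real HTTP implementation handles many more cases): the headers name the body length, so the receiver reads header lines up to the blank line and then exactly Content-Length bytes of XML.

    def recv_http_framed(sock):
        reader = sock.makefile("rb")
        length = 0
        while True:
            line = reader.readline()
            if line in (b"", b"\r\n", b"\n"):          # blank line ends the headers
                break
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.strip())
        return reader.read(length)                     # the XML body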
slebetman