Is it ever possible for the C send function to return zero when using TCP sockets? The man page just says that it will return the number of bytes sent, but I am not sure if it will just return -1 when it can't send any data.

+1  A: 

Well, there is always the case where you passed in zero as the number of bytes to send... in that case, "returning the number of bytes sent" would indicate that it should return zero bytes.

Probably best to handle the returns-zero case properly anyway; it can't hurt, and it might help.
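A defensive send loop along these lines handles both the zero return and short writes. This is only a sketch; the helper name `send_all` and the retry policy are my own choices, not anything from the man page:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: keep calling send() until the whole buffer is out.
 * A return of 0 is treated like a short write -- not an error, just loop
 * and try again with the remaining bytes. */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0) {
            if (errno == EINTR)   /* interrupted before sending: retry */
                continue;
            return -1;            /* genuine error */
        }
        /* n == 0 falls through harmlessly: nothing was sent, loop again */
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```

Partial sends are also covered, since the loop advances by however many bytes actually went out.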

Jeremy Friesner
+2  A: 

The answer to this may well be implementation dependent and therefore vary based on the operating system.

One circumstance where 0 would be expected is when you request a transmission of 0 bytes.

torak
+2  A: 

Yes, it can indeed return zero. I've seen this in the situation of massive data transfers where the other end is not keeping up.

In that case, the remote TCP stack's buffers had filled up, the stack had notified the local end to hold off until some space was cleared out, and the local buffers had filled up as well.

At that point, it's not technically an error (hence no -1 returned) but no data could be accepted by the local stack.

This flow control is a basic feature of TCP. Receivers send back a window size with each acknowledgement indicating how much data they can accept. Once this hits zero, the sender no longer transmits until told that it's okay. Note that this is the TCP stack doing this; all the application sees (eventually) is a zero return code from the send.

Once the window size has been zero for a while (by using a persist timer on the sender), then an error may be generated.

So it returns zero. Granted, it's an edge case but you always code for edge cases.

paxdiablo
How should I handle it? Should my program keep trying to send, or should it fail?
Adrian
You should keep trying to send since it's not an error condition. If it doesn't recover, then you'll eventually get back an error. That's the point where you should indicate failure. One thing you _may_ want to consider is to introduce a delay following a zero return code before retrying. That would give more time for a temporary problem to right itself. There are a number of strategies you could follow for that.
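One way to sketch that retry-with-delay strategy in C (the helper name, the 100 ms pause, and the give-up threshold are all arbitrary illustrative choices, not anything prescribed by the API):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical policy: on a zero return, back off briefly before retrying,
 * giving a transient stall (such as a closed receive window) time to clear.
 * After too many consecutive zero returns, give up and report failure. */
int send_with_backoff(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    int zero_streak = 0;

    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0)
            return -1;                /* real error: fail immediately */
        if (n == 0) {
            if (++zero_streak > 50)   /* ~5 s with no progress: fail */
                return -1;
            usleep(100 * 1000);       /* 100 ms pause before retrying */
            continue;
        }
        zero_streak = 0;              /* made progress: reset the streak */
        p += n;
        left -= (size_t)n;
    }
    return 0;
}
```

The streak counter turns "it never recovered" into an explicit failure instead of an infinite loop.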
paxdiablo
wouldn't this only be in a non-blocking scenario?
jdizzle
_That_, I think, is implementation-dependent. I'm pretty certain Linux doesn't return until it's all sent (or an error is returned) but I know of at least one BSD where it may send less than what was requested. It's then up to the client code to resend the bit that didn't go out on the previous attempt.
paxdiablo
@paxdiablo: it is normal behavior when send() sends less than requested, e.g. when a portion was already sent and the syscall got interrupted by a signal. It can't return -1/EINTR since some of the input was already consumed. I guess a return of 0 is of a similar nature.
Dummy00001
@paxdiablo: I do not think your explanation is right. If the socket is in blocking mode, and the receiver reports its receive buffer is full, then send() simply blocks until it can send data again (or a fatal error/timeout occurs). If the socket is in non-blocking mode instead, then it returns immediately with -1 and an error code of EWOULDBLOCK. In my experience, a return value of 0 always means that either 0 bytes were passed to send(), or the other party has gracefully closed the socket (or at least called shutdown(0)).
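The non-blocking behavior described in this comment can be sketched as follows; `try_send` is a made-up wrapper name, and mapping would-block to a zero return is just one possible convention:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of the non-blocking case: when the send buffer is full, send()
 * returns -1 with errno set to EWOULDBLOCK (or EAGAIN) rather than 0.
 * This wrapper reports that as "0 bytes sent for now"; the caller should
 * wait for writability (e.g. with poll/select) and then retry. */
ssize_t try_send(int fd, const void *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
        return 0;   /* no buffer space right now; not a real error */
    return n;       /* bytes actually sent, or -1 on a genuine error */
}
```

In other words, under this reading a zero-ish result still surfaces, but via errno rather than the return value itself.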
Remy Lebeau - TeamB
@Remy, as I said, it depends on the implementation. AIX4, where I saw the behaviour, is one example. Online docs seem to indicate FreeBSD is the same, although the Linux docs state otherwise. Try it under FreeBSD with the receiving side not recv'ing the data and I suspect you'll eventually see a 0 return from a non-zero send. But regardless of that, since the docs state that it _can_ return zero, you should code for it, no matter what happens in reality. Behaviour may change in a future release or your code may be ported.
paxdiablo