Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end, or does this need to be implemented as part of the user-level protocol? I looked around the usual headers but couldn't find anything.

+4  A: 

For Unix-domain sockets you can use fflush() if you've wrapped the descriptor in a stdio stream, but I'm thinking you probably mean network sockets. There isn't really a concept of flushing those. The closest things are:

1) At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket.

2) On TCP sockets, disabling the Nagle algorithm with the TCP_NODELAY sockopt, which is generally a terrible idea that will not reliably do what you want, even if it seems to at first.

It's very likely that whatever issue is making you want a 'flush' is best handled at the user-protocol level.
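
For illustration, a minimal sketch of both options in C, assuming `sock` is an already-connected TCP socket descriptor (the helper name and error handling are just placeholders):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>

    /* Hypothetical helper: 'sock' is an already-connected TCP socket. */
    static void tune_and_finish(int sock)
    {
        /* Option 2: disable the Nagle algorithm so small writes are sent
           immediately rather than being coalesced. */
        int one = 1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            perror("setsockopt(TCP_NODELAY)");

        /* ... the application's writes happen here ... */

        /* Option 1: at the end of the session, close the write side; the peer
           sees end-of-file once everything already queued has been delivered. */
        if (shutdown(sock, SHUT_WR) < 0)
            perror("shutdown(SHUT_WR)");
    }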

chaos
I don't see any reason why disabling the Nagle algorithm "is generally a terrible idea". If you know what it does, there are many application protocol situations where disabling Nagle is exactly what you want to do. I suspect you haven't had a situation where you really needed to do that or you don't understand what it really does. In other words, this feature is there for a reason and it can also be disabled for a very good reason.
Tall Jeff
I tried using TCP_NODELAY but I got the following error: ‘TCP_NODELAY’ was not declared in this scope. I used: setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (char *)
sana
You'll need to `#include <linux/tcp.h>` or whatever include file provides `TCP_NODELAY` on your system (try `fgrep -r 'define TCP_NODELAY' /usr/include`).
chaos
A: 

I think it would be extremely difficult, if not impossible, to implement correctly. What would "flush" even mean in this context? Bytes transmitted to the network? Bytes acknowledged by the receiver's TCP stack? Bytes passed on to the receiver's user-mode app? Bytes completely processed by the user-mode app?

Looks like you need to do it at the app level...

Arkadiy
+3  A: 

There is no way that I am aware of in the standard TCP/IP socket interface to flush the data "all the way through to the remote end" and ensure it has actually been acknowledged.

Generally speaking, if your protocol needs "real-time" transfer of data, the best thing to do is to set the TCP_NODELAY option with setsockopt(). This disables the Nagle algorithm in the protocol stack, so that a write() or send() on the socket maps more directly onto sends out onto the network, instead of the stack holding off sends while waiting for more bytes to become available and using the TCP-level timers to decide when to transmit. NOTE: Turning off Nagle does not disable the TCP sliding window or anything like that, so it is always safe to do. But if you don't need the "real-time" properties, packet overhead can go up quite a bit.

Beyond that, if the normal TCP socket mechanisms don't fit your application, then generally you need to fall back to using UDP and building your own protocol features on the basic send/receive properties of UDP. This is very common when your protocol has special needs, but don't underestimate the complexity of doing this well and getting it all stable and functionally correct in all but relatively simple applications. As a starting point, a thorough study of TCP's design features will shed light on many of the issues that need to be considered.
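
As a rough illustration of the "basic send/receive properties" you start from with UDP, here is a minimal datagram sender; the peer address and port are made-up values, and every bit of reliability, ordering, and acknowledgement on top of this is left to your own protocol:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(9000);                      /* assumed port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);    /* assumed peer */

        /* One sendto() is one datagram on the wire: no retransmission,
           no ordering, no acknowledgement unless you build them yourself. */
        const char msg[] = "hello";
        if (sendto(fd, msg, sizeof(msg), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }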

Tall Jeff
A: 

TCP sends are asynchronous with respect to delivery: the fact that all the bytes have left Machine A tells you nothing about whether they have all been received at Machine B. The TCP/IP protocol stack knows, of course, but I don't know of any way to interrogate the stack to find out whether everything sent has been acknowledged.

By far the easiest way to handle the question is at the application level. Open a second TCP socket to act as a back channel and have the remote partner send you an acknowledgement that it has received the info you want. It costs you an extra connection and a round trip, but it is completely portable and will save you hours of programming time.
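
A minimal sketch of such an application-level acknowledgement in C, assuming the remote side replies with a single ACK byte once it has actually consumed the data (the helper name and the one-byte format are assumptions, not part of any standard):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Send 'len' bytes on the connected socket 'fd', then block until the
       peer replies with a one-byte application-level acknowledgement. */
    static int send_and_wait_ack(int fd, const char *buf, size_t len)
    {
        size_t off = 0;
        while (off < len) {                    /* push all the bytes out */
            ssize_t n = write(fd, buf + off, len - off);
            if (n < 0) { perror("write"); return -1; }
            off += (size_t)n;
        }

        char ack;
        if (read(fd, &ack, 1) != 1) {          /* wait for the peer's ACK */
            perror("read");
            return -1;
        }
        return 0;   /* the remote application has confirmed receipt */
    }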

Norman Ramsey
I don't think you need a second socket; it would seem reasonable that the far end could send something back on the same socket, no?
Tall Jeff
A: 

No. You do not have any way to reach into memory on another system via the TCP protocol. The system on the other end blocks on a socket until data has been delivered up the stack. In the meantime that system is more than likely doing more important things, like DNS lookups for the end user's BitTorrent downloads. Once the data is seen on the socket and the processor can be interrupted to service your program's request, it will receive the data. It doesn't matter whether you are using UDP or TCP: your program (even if it is only one thread of it) will block and wait until the data arrives.

So, without the rambling: a request to flush the socket via some TCP mechanism would effectively require "false interrupts" to be raised on the receiving system. One of the answers above mentions TCP_NODELAY, which only keeps your machine from delaying the send; it has nothing to do with the receiving end.

In other words, implement it in your application protocol, via timestamps or some other such mechanism.

Preston