Or do I have to implement it at the application level?
Why not just use a blocking socket?
This may be a bit dated, but here is some explanation of blocking/non-blocking sockets and overlapped I/O:
http://support.microsoft.com/kb/181611
By the way, it would help to know which language and OS you are using, so answers could show relevant code snippets.
The ACK for a packet happens at the transport layer, well below the application layer. You are not even guaranteed that your entire buffer will travel in a single packet on the network. What is it you are trying to do?
If you are talking about TCP, then no - no socket API I've seen allows you to do this.
You need to implement the ACK in your application protocol if you need to be sure that the other end has received (and possibly processed) your data.
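To make that concrete, here is a minimal sketch of the pattern in POSIX C. The function name send_with_ack and the single ACK byte are invented for this example; a real protocol would define its own framing and acknowledgement format. The sender writes its data and then blocks on recv() until the peer sends back an explicit acknowledgement:

    /* Sketch of an application-level acknowledgement over TCP (POSIX C).
     * send_with_ack() and APP_ACK are hypothetical names for this example. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>

    #define APP_ACK 0x06   /* arbitrary acknowledgement byte chosen for this sketch */

    /* Send a buffer, then block until the peer replies with one ACK byte.
     * Returns 0 on success, -1 on error or unexpected reply. */
    int send_with_ack(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {                       /* send() may write less than asked */
            ssize_t n = send(fd, p, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            p += n;
            len -= (size_t)n;
        }

        char reply;
        ssize_t n = recv(fd, &reply, 1, MSG_WAITALL);   /* wait for the peer's ACK */
        return (n == 1 && reply == APP_ACK) ? 0 : -1;
    }

The receiving side reads the message, does whatever processing it needs, and only then writes the ACK byte back, so a successful return really does mean "received and processed" rather than just "handed to the local TCP stack".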
If you use setsockopt() to lower SO_SNDBUF to a value only large enough to send one packet, then the next send() on that socket should block until the previous packet is acknowledged. However, according to tcp(7), the socket buffer size must be set prior to listen()/connect().
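A rough sketch of that approach, assuming a POSIX/Linux system (the function name and the buffer size are arbitrary). Note that the kernel may round the requested size up to a minimum, so this only approximates "one packet in flight" and is not a reliable acknowledgement mechanism:

    /* Sketch: shrink SO_SNDBUF before connect(), as described above. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    int connect_with_small_sndbuf(const char *ip, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        int sndbuf = 2048;   /* roughly one packet's worth; the kernel may adjust it */
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0) {
            close(fd);       /* must be set before connect(), per tcp(7) */
            return -1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }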
The whole point of using TCP is to hide individual ACKs from the application. If you need to detect every ACK, then implement your own protocol on top of UDP or raw IP; TCP is probably overkill. Or you can go up the stack and use a protocol like HTTP as a transport.
TCP will in general require you to synchronize the receiver and sender at the application level. Tweaking SO_SNDBUF or setting TCP_NODELAY alone is unlikely to solve the problem completely, because the amount of data that can be "in flight" before send() blocks is more or less the sum of:
- the data in the transmit side's send buffer, including small fragments being delayed by Nagle's algorithm,
- the data carried in unacknowledged in-flight packets, which varies with the congestion window (CWND) and receive window (RWND) sizes; the sender continuously tunes the congestion window to network conditions as TCP transitions between slow-start, congestion-avoidance, fast-recovery, and fast-retransmit modes, and
- the data in the receive side's receive buffer, which the receiver's TCP stack has already ACKed but the receiving application has not yet read.
To say it another way, after the receiver stops reading data from the socket, send() will only block once:
- the receiver's TCP receive buffer fills and its stack stops ACKing,
- the sender has transmitted unACKed data up to the congestion or receive window limit, and
- the sender's TCP send buffer fills, or the sending application requests a send buffer flush.
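If what you actually need is to know when everything already written has been acknowledged at the TCP level, one Linux-specific option (not mentioned above, and it still tells you nothing about whether the peer application has read the data) is to poll the SIOCOUTQ ioctl documented in tcp(7). The function below is only a sketch of that idea:

    /* Sketch, Linux-specific: watch the socket's send queue drain via SIOCOUTQ. */
    #include <sys/ioctl.h>
    #include <linux/sockios.h>   /* SIOCOUTQ */
    #include <unistd.h>

    /* Block (crudely, by polling) until all data written to fd has left the
     * send queue, or return -1 if the ioctl fails. */
    int wait_until_acked(int fd)
    {
        for (;;) {
            int pending = 0;
            if (ioctl(fd, SIOCOUTQ, &pending) < 0)
                return -1;
            if (pending == 0)        /* queue empty: sent-but-unACKed bytes are gone */
                return 0;
            usleep(10 * 1000);       /* 10 ms; a real program would use something smarter */
        }
    }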
The goal of the algorithms used in TCP is to create the effect of a flowing stream of bytes rather than a sequence of packets. In general it tries to hide, as much as possible, the fact that the transmission is quantized into packets at all, and most socket APIs reflect that. One reason for this is that sockets may not be implemented on top of TCP (or indeed even IP) at all: consider a Unix domain socket, which uses the same API.
Attempting to rely on TCP's underlying implementation details for application behavior is generally not advisable. Stick to synchronizing at the application layer.
If latency is a concern where you are doing this synchronization, you may also want to read about the interaction between Nagle's algorithm and delayed ACK, which can introduce unnecessary delays in certain circumstances.
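For reference, disabling Nagle's algorithm is a one-line setsockopt() call (sketch below, with an arbitrary helper name). This makes small writes go out immediately instead of waiting behind unACKed data, which avoids the Nagle/delayed-ACK interaction, but it gives you no visibility into ACKs themselves:

    /* Sketch: disable Nagle's algorithm on an existing TCP socket. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */

    int disable_nagle(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }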