views: 553
answers: 6
Or do I have to implement it at the application level?

A: 

Why not just use a blocking socket?

This may be a bit dated, but here is some explanation on blocking/non-blocking and overlapping IO.

http://support.microsoft.com/kb/181611

It would help if we knew which language and OS you were using, BTW, to better show code snippets.

James Black
It won't help. Blocking sockets block until you successfully pass your data to the OS network stack, not until you receive the other party's ack. I'm using C on Linux, BTW.
anon
But you can block until you receive the response, which could be the acknowledgment.
James Black
+2  A: 

The ack for the packet is at the transport layer (well below the application layer). You are not even guaranteed to have your entire buffer belong to its own packet on the network. What is it you are trying to do?

ezpz
+1  A: 

If you are talking about TCP, then no - no socket API I've seen allows you to do this.

You need to implement the ack in your application protocol if you need to be sure that the other end has received (and possibly processed) your data.

nos
+1  A: 

If you use setsockopt() to lower SO_SNDBUF to a value only large enough to send one packet, then the next send() on that socket should block until the previous packet is acknowledged. However, according to tcp(7), the socket buffer size must be set prior to listen()/connect().

mark4o
Of course, the trick is knowing the exact size of a single packet (a.k.a. the MTU). Either way, this method will probably result in a performance decrease, but that may not matter to the OP.
James
If the messages sent by the application are a small fixed size, e.g. 512 bytes, and only one is sent at a time, then there is no need to know the MTU.
mark4o
Until you end up in a congested situation and the OS figures "let's try sending smaller segments".
nos
+1  A: 

The whole point of using TCP is to hide the individual ACKs from applications. If you need to detect every ACK, then implement your own protocol on top of UDP or IP; TCP is probably overkill. Or you can go up the stack and use a protocol like HTTP as a transport.

mizubasho
A: 

TCP will in general require you to synchronize the receiver and sender at the application level. Combinations of SO_SNDBUF tweaking or TCP_NODELAY alone are not likely to solve the problem completely. This is because the amount of data that can be "in flight" before send() will block is more or less equal to the sum of:

  1. The data in the transmit side's send buffer, including small data fragments being delayed by Nagle's algorithm,
  2. The amount of data carried in unacknowledged in-flight packets, which varies with the congestion window (CWIN) and receive window (RWIN) sizes. The TCP sender continuously tunes the congestion window size to network conditions as TCP transitions between slow-start, congestion avoidance, fast-recovery, and fast-retransmit modes. And,
  3. Data in the receive side's receive buffer, for which the receiver's TCP stack will have already sent an ACK, but that the application has not yet seen.

To say it another way, after the receiver stops reading data from the socket, send() will only block when:

  1. The receiver's TCP receive buffer fills and TCP stops ACKing,
  2. The sender transmits unACKed data up to the congestion or receive window limit, and
  3. The sender's TCP send buffer fills or the sender application requests a send buffer flush.

The goal of the algorithms used in TCP is to create the effect of a flowing stream of bytes rather than a sequence of packets. In general it tries to hide as much as possible the fact that the transmission is quantized into packets at all, and most socket APIs reflect that. One reason for this is that sockets may not be implemented on top of TCP (or indeed even IP) at all: consider a Unix domain socket, which uses the same API.

Attempting to rely on TCP's underlying implementation details for application behavior is generally not advisable. Stick to synchronizing at the application layer.

If latency is a concern in the situation where you're doing the synchronization, you may also want to read about interactions between Nagle's algorithm and delayed ACK that can introduce unnecessary delays in certain circumstances.

edarc