Hello all.

I am implementing a UDP data transfer application, and I have several questions about UDP buffering.

I am using UdpClient to do the UDP send/receive, and my broadband bandwidth is 150 KB/s (bytes per second, not bits).

  1. I send a 500 B datagram out to 27 hosts (a rough sketch of steps 1-3 follows this list).

  2. Each of the 27 hosts sends back a 10 KB datagram if it receives mine.

  3. So I should receive 27 responses, right? However, I only get 8-12 on average.

  4. I then reduced the response size down to 500 B, and then I receive all 27.
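Roughly, steps 1-3 look like the sketch below. The host names, port, and timeout here are placeholders, not my real values:

    // Rough sketch of steps 1-3. Host names, port, and timeout are
    // placeholder values, not the real ones from my application.
    using System;
    using System.Net;
    using System.Net.Sockets;

    class ProbeSketch
    {
        static void Main()
        {
            var client = new UdpClient();
            client.Client.ReceiveTimeout = 3000;   // ms; placeholder timeout

            byte[] probe = new byte[500];          // step 1: the 500 B datagram
            string[] hosts = { "host1.example", "host2.example" }; // ... 27 in total
            foreach (string host in hosts)
                client.Send(probe, probe.Length, host, 9000); // placeholder port

            int received = 0;
            var remote = new IPEndPoint(IPAddress.Any, 0);
            try
            {
                // Step 3: collect the replies one datagram at a time.
                for (int i = 0; i < hosts.Length; i++)
                {
                    byte[] reply = client.Receive(ref remote);
                    Console.WriteLine("Reply of {0} B from {1}", reply.Length, remote);
                    received++;
                }
            }
            catch (SocketException)
            {
                // Timed out waiting for the missing responses.
            }
            Console.WriteLine("Received {0} of {1} replies", received, hosts.Length);
        }
    }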

My thought is that if all 27 hosts send back their 10 KB responses at almost the same time, the incoming burst is roughly 270 KB, which far exceeds my 150 KB/s incoming bandwidth, so loss happens. Am I right?

But even if the incoming traffic exceeds the bandwidth, isn't Windows supposed to queue the datagrams in a buffer until I call Receive?

I then suspect that maybe the ReceiveBufferSize of my UdpClient is too small? By default it is 8192 B, isn't it?
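In case the buffer is the culprit, it can be enlarged through the underlying socket before the burst arrives. Here is a minimal sketch, not my actual code: the local port (9000) and the requested size (512 KB) are made-up values, and since the OS may grant less than requested, the sketch reads the property back:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class BufferSketch
    {
        static void Main()
        {
            var client = new UdpClient(9000); // placeholder local port

            // Ask for room for all 27 replies at once:
            // 27 * 10 KB = 270 KB, so 512 KB leaves some headroom.
            client.Client.ReceiveBufferSize = 512 * 1024;

            // The OS may clamp the request, so read back the granted size.
            Console.WriteLine("Effective receive buffer: {0} B",
                              client.Client.ReceiveBufferSize);

            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] datagram = client.Receive(ref remote); // blocks for one datagram
            Console.WriteLine("Got {0} B from {1}", datagram.Length, remote);
        }
    }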

I don't know whether I am right on these points. Any help would be appreciated.

A: 

The UDP protocol does not guarantee delivery; you should switch to TCP if you need guaranteed packet delivery.
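For example, the receive side over TCP could look something like this sketch (the port and buffer size are placeholders, not values from your setup):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class TcpReceiveSketch
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000); // placeholder port
            listener.Start();

            // Each responding host connects and writes its reply; TCP
            // retransmits lost segments instead of silently dropping them.
            using (TcpClient peer = listener.AcceptTcpClient())
            using (NetworkStream stream = peer.GetStream())
            {
                byte[] buffer = new byte[10 * 1024];
                int total = 0, n;
                while (total < buffer.Length &&
                       (n = stream.Read(buffer, total, buffer.Length - total)) > 0)
                    total += n;
                Console.WriteLine("Received {0} B reliably", total);
            }
            listener.Stop();
        }
    }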

UDP is better suited to apps where losing a packet is better than waiting for a packet to find its way to you, e.g. streaming media or something similar.

See Wikipedia for more.

Nate Bross