We have a client/server communication system over UDP set up on Windows. The problem we are facing is that when the throughput grows, packets get dropped. We suspect this is due to the UDP receive buffer, which is continuously being polled, causing the buffer to block and drop any incoming packets. Is it possible that reading this buffer will cause incoming packets to be dropped? If so, what are the options to correct this? The system is written in C. Please let me know if this is too vague and I can try to provide more info. Thanks!

A: 

Not sure about this, but on Windows, it's not possible for polling the socket to cause a packet to drop. Windows collects the packets separately from your polling, so polling itself shouldn't cause any drops.

I am assuming you're using select() to poll the socket? As far as I know, that can't cause a drop.

Andrew Keith
Hmm... thanks for the response. I guess we need to research the cause of this further, then.
+2  A: 

Yes, the stack is allowed to drop packets -- silently, even -- when its buffers get too full. This is part of the nature of UDP, one of the bits of reliability you give up when you switch from TCP. You can either reinvent TCP -- poorly -- by adding retry logic, ACK packets, and such, or you can switch to something in-between like SCTP.

I believe there are ways to increase the stack's buffer sizes, but that largely misses the point: if you aren't reading fast enough to keep buffer space available, making the buffers larger won't change that; it only delays the point at which you run into the same situation. The proper solution is to make larger buffers within your own code, and move data from the stack's buffers into your program's buffer ASAP, where it can wait to be processed for arbitrarily long times.
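The application-side buffer suggested above can be sketched as a simple bounded ring of datagram slots; everything here (the `pkt_ring` name, the slot sizes) is made up for illustration, not from the asker's code:

```c
#include <string.h>

#define RING_SLOTS 1024          /* number of buffered datagrams       */
#define MAX_DGRAM  2048          /* largest UDP payload we expect      */

/* One slot per datagram: length + payload copied out of the stack's buffer. */
struct pkt_ring {
    unsigned char data[RING_SLOTS][MAX_DGRAM];
    int           len[RING_SLOTS];
    int           head, tail;    /* head = next write, tail = next read */
};

/* Returns 0 on success, -1 if the ring is full (caller decides: drop or grow). */
static int ring_push(struct pkt_ring *r, const void *buf, int n)
{
    int next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail || n > MAX_DGRAM)
        return -1;
    memcpy(r->data[r->head], buf, (size_t)n);
    r->len[r->head] = n;
    r->head = next;
    return 0;
}

/* Returns payload length, or -1 if the ring is empty. */
static int ring_pop(struct pkt_ring *r, void *out)
{
    if (r->tail == r->head)
        return -1;
    int n = r->len[r->tail];
    memcpy(out, r->data[r->tail], (size_t)n);
    r->tail = (r->tail + 1) % RING_SLOTS;
    return n;
}
```

The receive loop then does nothing except recvfrom() into a scratch buffer and ring_push(); the slow processing pops from the ring at its own pace.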

Warren Young
A: 

The packets could be lost due to an increase in unrelated network traffic anywhere along the route, or full receive buffers. To mitigate this, you could increase the receive buffer size in Winsock.

Essentially, UDP is an unreliable protocol in the sense that packet delivery is not guaranteed and no error is returned to the sender on delivery failure. If you are worried about packet loss, it would be best to add acknowledgement packets to your communication protocol, or to port it to a more reliable protocol like TCP. There really aren't any other truly reliable ways to prevent UDP packet loss.
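The acknowledgement idea usually means prepending a small reliability header to every datagram. A minimal sketch (the `rel_hdr` field names and layout are illustrative, not any standard):

```c
#include <stdint.h>

/* Illustrative reliability header prepended to every datagram. */
struct rel_hdr {
    uint32_t seq;   /* sender's sequence number for this datagram     */
    uint32_t ack;   /* highest in-order seq the sender has received   */
};

/* Serialize the header in big-endian order so both ends agree on layout. */
static void hdr_write(uint8_t out[8], const struct rel_hdr *h)
{
    for (int i = 0; i < 4; i++) {
        out[i]     = (uint8_t)(h->seq >> (24 - 8 * i));
        out[4 + i] = (uint8_t)(h->ack >> (24 - 8 * i));
    }
}

static void hdr_read(const uint8_t in[8], struct rel_hdr *h)
{
    h->seq = h->ack = 0;
    for (int i = 0; i < 4; i++) {
        h->seq = (h->seq << 8) | in[i];
        h->ack = (h->ack << 8) | in[4 + i];
    }
}
```

The sender keeps unacknowledged datagrams around and retransmits any whose seq is not covered by a returned ack within a timeout; that is the "reinvent TCP, poorly" trap mentioned in the earlier answer, so keep it simple or switch protocols.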

Kevin Wellwood
+1  A: 

Is it possible that reading this buffer will cause incoming packets to be dropped?

Packets can be dropped if they're arriving faster than you read them.

If so, what are the options to correct this?

One option is to change the network protocol: use TCP, or implement some acknowledgement + 'flow control' using UDP.

Otherwise you need to see why you're not reading fast/often enough.

If the CPU is 100% utilized, then you need to do less work per packet or get a faster CPU (or use multithreading and more CPUs if you aren't already).

If the CPU is not 100%, then perhaps what's happening is:

  • You read a packet
  • You do some work, which takes x msec of real-time, some of which is spent blocked on some other I/O (so the CPU isn't busy, but it's not being used to read another packet)
  • During those x msec, a flood of packets arrive and some are dropped

A cure for this would be to change the threading.

Another possibility is to do several simultaneous reads from the socket (each of your reads provides a buffer into which a UDP packet can be received).

Another possibility is to see whether there's a (O/S-specific) configuration option to increase the number of received UDP packets which the network stack is willing to buffer until you try to read them.

ChrisW
+1  A: 

The default socket buffer size in Windows sockets is 8k, or 8192 bytes. Use the setsockopt Windows function to increase the size of the buffer (refer to the SO_RCVBUF option).
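A sketch of that call (the `grow_rcvbuf` helper name is made up; the sketch uses the POSIX headers, but on Windows you'd include winsock2.h and call WSAStartup() first, and the setsockopt()/getsockopt() calls themselves are the same):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Ask the OS for a larger receive buffer on a UDP socket.
 * Returns the size actually granted, or -1 on error.  Reading the value
 * back matters: the OS may clamp the request (Linux also doubles it). */
static int grow_rcvbuf(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&bytes, sizeof bytes) != 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    int actual = 0;
    socklen_t len = sizeof actual;
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (char *)&actual, &len) != 0)
        return -1;
    printf("SO_RCVBUF requested %d, got %d\n", bytes, actual);
    return actual;
}
```

Call it right after creating the socket, before traffic starts arriving.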

But beyond that, increasing the size of your receive buffer will only delay the time until packets get dropped again if you are not reading the packets fast enough.

Typically, you want two threads for this kind of situation.

The first thread exists solely to service the socket. In other words, the thread's sole purpose is to read a packet from the socket, add it to some kind of properly-synchronized shared data structure, signal that a packet has been received, and then read the next packet.

The second thread exists to process the received packets. It sits idle until the first thread signals a packet has been received. It then pulls the packet from the properly-synchronized shared data structure and processes it. It then waits to be signaled again.
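The properly-synchronized shared structure can be sketched as a bounded queue guarded by a mutex plus a condition variable for the "packet received" signal. This uses POSIX threads for brevity; on Windows you'd use CreateThread and a CONDITION_VARIABLE/CRITICAL_SECTION pair, and the ints here stand in for real packet buffers:

```c
#include <pthread.h>

#define QCAP 256

/* Shared queue: mutex-guarded ring, condition variable signals "non-empty". */
static struct {
    int             buf[QCAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
} q = { .lock = PTHREAD_MUTEX_INITIALIZER,
        .nonempty = PTHREAD_COND_INITIALIZER };

static void q_push(int pkt)             /* called by the socket thread */
{
    pthread_mutex_lock(&q.lock);
    if (q.count < QCAP) {               /* queue full: drop, just as the stack would */
        q.buf[q.head] = pkt;
        q.head = (q.head + 1) % QCAP;
        q.count++;
        pthread_cond_signal(&q.nonempty);
    }
    pthread_mutex_unlock(&q.lock);
}

static int q_pop(void)                  /* called by the processing thread */
{
    pthread_mutex_lock(&q.lock);
    while (q.count == 0)                /* sit idle until signaled */
        pthread_cond_wait(&q.nonempty, &q.lock);
    int pkt = q.buf[q.tail];
    q.tail = (q.tail + 1) % QCAP;
    q.count--;
    pthread_mutex_unlock(&q.lock);
    return pkt;
}
```

The socket thread loops recvfrom() then q_push(); the processing thread loops q_pop() then does the real work, so a slow packet never blocks the next read.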

As a test, try short-circuiting the full processing of your packets and just write a message to the console (or a file) each time a packet has been received. If you can successfully do this without dropping packets, then breaking your functionality into a "receiving" thread and a "processing" thread will help.

Matt Davis
A: 

First step: increase the receive buffer size; Windows pretty much grants all reasonable size requests.

If that doesn't help, your consuming code probably has some fairly slow spots. I would use threading, e.g. with pthreads, and a producer/consumer pattern: put the incoming datagrams in a queue on one thread and consume them on another, so your receive calls don't block and the buffer doesn't fill up.

3rd step: modify your application-level protocol to allow for batched packets, and batch them at the sender to reduce the UDP header overhead of sending lots of small packets.
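Batching can be as simple as packing length-prefixed messages into one datagram; a minimal sketch, assuming a 2-byte big-endian length prefix per message (the format and the `batch_*` names are made up for illustration):

```c
#include <string.h>
#include <stdint.h>

#define BATCH_MAX 1400   /* stay under a typical Ethernet MTU to avoid IP fragmentation */

/* Append one message to a batch buffer as [2-byte length][payload].
 * Returns the new batch length, or -1 if the message doesn't fit;
 * on -1 the caller sends the current batch and starts a new one. */
static int batch_append(uint8_t *batch, int used, const void *msg, uint16_t len)
{
    if (used + 2 + len > BATCH_MAX)
        return -1;
    batch[used]     = (uint8_t)(len >> 8);
    batch[used + 1] = (uint8_t)(len & 0xff);
    memcpy(batch + used + 2, msg, len);
    return used + 2 + len;
}

/* Walk a received batch, invoking cb once per message. Returns the message count. */
static int batch_foreach(const uint8_t *batch, int total,
                         void (*cb)(const uint8_t *msg, uint16_t len))
{
    int off = 0, n = 0;
    while (off + 2 <= total) {
        uint16_t len = (uint16_t)((batch[off] << 8) | batch[off + 1]);
        if (off + 2 + len > total)
            break;               /* truncated tail: stop parsing */
        cb(batch + off + 2, len);
        off += 2 + len;
        n++;
    }
    return n;
}
```

One sendto() per full batch instead of one per message cuts the per-datagram UDP/IP header cost roughly in proportion to how many messages fit in a batch.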

4th step: check your network gear; switches etc. can give you detailed output about their traffic statistics, buffer overflows, and so on. If that is an issue, get faster switches or possibly swap out a faulty one.

... just FYI, I'm running UDP multicast traffic on our backend continuously at an average of ~30 Mbit/s with peaks at 70 Mbit/s, and my drop rate is near nil.

Tom Frey