I have to send a sequence of video frames over UDP, as fast and as close to real time as possible. I have the basics working, but I am running into all sorts of difficulties. Some of my goals:

  1. Data will normally be sent over dial-up (hence UDP instead of TCP), but the system also needs to support fast Ethernet.

  2. It's OK to occasionally drop frames (hence UDP instead of TCP).

  3. Need low latency. The frame the remote receives should be one that was recently sent (no more than a few frames waiting in buffers).

  4. I need to be able to detect the effective bandwidth so that I can compress the frames more or less to keep frame rate up.

I have managed to implement most of the pieces:

  1. I break up frame data into one or more datagrams of about 500 bytes and each has a sequence number and other info. The receiver reassembles the entire frame and detects if any datagrams are missing.

  2. If the receiver detects more than a certain percentage of dropped frames (e.g. 50% over the last 10 frames), I send a TCP message to the sender to slow down by 50%. The sender then slowly increases speed by 5% with each subsequent frame (see the sketch after this list).

  3. Using System.Net.Sockets.UdpClient to send and receive the data.

  4. I have a separate TCP channel used for control messages back to sender.
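
The slow-down / speed-up logic in item 2 boils down to something like the following (simplified sketch only; the class name, the rate limits, and the method names are made up for illustration, not my actual code):

    using System;

    // Crude rate control: halve the target rate when the receiver reports heavy
    // loss, then creep back up by 5% per frame.
    class RateController
    {
        const double MinRate = 1000;          // don't throttle below ~1 KB/s
        const double MaxRate = 1000000;       // rough cap for fast Ethernet
        double targetBytesPerSecond = 4000;   // start near dial-up speed

        public double TargetRate { get { return targetBytesPerSecond; } }

        // The TCP control channel reported that too many of the last 10 frames
        // were incomplete: cut the rate in half.
        public void OnHeavyLoss()
        {
            targetBytesPerSecond = Math.Max(MinRate, targetBytesPerSecond * 0.5);
        }

        // A frame went out with no loss report: increase by 5%.
        public void OnFrameSent()
        {
            targetBytesPerSecond = Math.Min(MaxRate, targetBytesPerSecond * 1.05);
        }
    }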

My main difficulty right now is detecting the effective bandwidth and dealing with latency, especially over dial-up (max ~4,000 bytes/sec). For example, if I try to send 100,000 bytes/second using UdpClient.Send(), they ALL seem to arrive (no dropped datagrams), but with a large latency by the time the last datagram arrives. I think UdpClient.Send() is blocking until the send buffer has room, which messes up my current algorithm.
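
For reference, the send side is essentially the following (heavily simplified; the 8-byte header layout and all the names here are illustrative rather than my exact code):

    using System;
    using System.Net.Sockets;

    class FrameSender
    {
        const int PayloadSize = 500;   // ~500 bytes of frame data per datagram
        readonly UdpClient udp;
        ushort sequence;               // running datagram sequence number

        public FrameSender(string host, int port)
        {
            udp = new UdpClient();
            udp.Connect(host, port);
        }

        // Splits one compressed frame into datagrams and sends them back to back.
        // Over dial-up this is exactly where things go wrong: once the OS/modem
        // buffer fills, Send() blocks and the queued datagrams arrive late
        // instead of being dropped.
        public void SendFrame(byte[] frame, ushort frameId)
        {
            int count = (frame.Length + PayloadSize - 1) / PayloadSize;
            for (int i = 0; i < count; i++)
            {
                int offset = i * PayloadSize;
                int len = Math.Min(PayloadSize, frame.Length - offset);

                // 8-byte header: frame id, datagram index, datagram count, sequence.
                byte[] packet = new byte[8 + len];
                BitConverter.GetBytes(frameId).CopyTo(packet, 0);
                BitConverter.GetBytes((ushort)i).CopyTo(packet, 2);
                BitConverter.GetBytes((ushort)count).CopyTo(packet, 4);
                BitConverter.GetBytes(sequence++).CopyTo(packet, 6);
                Buffer.BlockCopy(frame, offset, packet, 8, len);

                udp.Send(packet, packet.Length);   // some form of pacing needs to go here
            }
        }
    }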

Can anybody point me to any sources of information for how to:

  1. Detect actual bandwidth over UDP.

  2. Find a better algorithm for dynamically adjusting bandwidth to suit the available pipe.

  3. Send data smoothly at the desired bandwidth.

  4. Detect latency and keep it to a minimum.

I have been spinning my wheels for the last week, and every time I solve one problem it seems another rears its head.

+2  A: 

You can also add a timestamp to every packet. Then you can detect if the delay increases; in that case, you send back a message to reduce the bandwidth.

When the connection is created, you can measure the latency with just a few packets. This value should not change much while running.
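
For example, on the receiver something like this works without synchronized clocks, because only the growth of the delay matters (sketch only; the names and the 200 ms threshold are just examples):

    using System;

    // Watch how much later packets arrive than their sender timestamps imply.
    // The absolute offset between the two clocks is meaningless, but if the
    // offset keeps growing, data is queueing up somewhere in the path.
    class DelayMonitor
    {
        double baseline = double.MaxValue;       // smallest (arrival - sendStamp) seen
        const double SlowDownThresholdMs = 200;  // example threshold

        // sendStampMs is the millisecond timestamp written into the packet header
        // by the sender. Returns true when the sender should be asked to slow down.
        public bool PacketArrived(double sendStampMs)
        {
            double offset = Environment.TickCount - sendStampMs;
            if (offset < baseline) baseline = offset;
            return (offset - baseline) > SlowDownThresholdMs;
        }
    }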

Horcrux7
I am sure that detecting the latency will require some kind of datagram timing. For example, I probably need to attach a timestamp, send it to the receiver, and have the receiver send it back (perhaps using the TCP control channel). Then I find the original send time in some kind of history queue, and the difference divided by 2 is the approximate latency. I can't find any published algorithms for this that I could implement efficiently.
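Roughly what I have in mind (sketch only; the names are illustrative):

    using System;
    using System.Collections.Generic;

    // Sender side: remember when each probe went out; when the receiver echoes
    // the sequence number back (e.g. over the TCP control channel), half the
    // round trip approximates the one-way latency.
    class LatencyEstimator
    {
        readonly Dictionary<ushort, int> sentAt = new Dictionary<ushort, int>();

        public void Probed(ushort seq)
        {
            sentAt[seq] = Environment.TickCount;
        }

        // Called when the echo for 'seq' comes back; returns estimated ms, or -1.
        public int EchoReceived(ushort seq)
        {
            int sent;
            if (!sentAt.TryGetValue(seq, out sent)) return -1;
            sentAt.Remove(seq);
            return (Environment.TickCount - sent) / 2;
        }
    }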
Dan C
You don't even need to know the absolute latency. Since it's bandwidth that you're trying to solve for, you can just look at the inter-frame timing - if the frames were sent 40ms apart, but you're seeing them come in 90ms apart, then the sender needs to slow down.
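In code, that check is just something like this (sketch; names are illustrative):

    using System;

    // Receiver side: compare the gap between consecutive frames as stamped by
    // the sender with the gap actually observed on arrival. Consistently larger
    // arrival gaps mean the link cannot keep up with the send rate.
    class SpacingMonitor
    {
        double lastSendStamp = -1, lastArrival = -1;

        // Returns how many extra milliseconds of spacing the network added.
        public double FrameArrived(double sendStampMs)
        {
            double nowMs = Environment.TickCount;
            double extra = 0;
            if (lastSendStamp >= 0)
                extra = (nowMs - lastArrival) - (sendStampMs - lastSendStamp);
            lastSendStamp = sendStampMs;
            lastArrival = nowMs;
            return extra;
        }
    }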
caf
The problem with this is that, over time, small delays can add up to a large lag between when a frame is sent and when it is received. I ended up sending an ACK for each complete frame received. At the sender I keep track of the last 10 frames sent, when they were sent, and when they were acknowledged. If the average time is too great, then I know to slow down.
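Simplified, the tracking looks something like this (the window size and the threshold are just examples):

    using System;
    using System.Collections.Generic;

    // Sender side: remember when each frame was sent; when the receiver ACKs a
    // complete frame over the TCP control channel, record the round trip and
    // slow down if the average over the last 10 frames gets too large.
    class AckTracker
    {
        readonly Dictionary<ushort, int> pending = new Dictionary<ushort, int>();
        readonly Queue<int> roundTrips = new Queue<int>();
        const int Window = 10;
        const int SlowDownThresholdMs = 500;

        public void FrameSent(ushort frameId)
        {
            pending[frameId] = Environment.TickCount;
        }

        // Returns true if the sender should reduce its rate.
        public bool FrameAcked(ushort frameId)
        {
            int sent;
            if (!pending.TryGetValue(frameId, out sent)) return false;
            pending.Remove(frameId);

            roundTrips.Enqueue(Environment.TickCount - sent);
            if (roundTrips.Count > Window) roundTrips.Dequeue();

            int sum = 0;
            foreach (int rt in roundTrips) sum += rt;
            return (sum / roundTrips.Count) > SlowDownThresholdMs;
        }
    }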
Dan C
+1 since it was part of what I ended up doing.
Dan C