I'm writing a client-server app using BSD sockets. It needs to run in the background, continuously transferring data, but cannot hog the bandwidth of the network interface from normal use. Depending on the speed of the interface, I need to throttle this connection to a certain max transfer rate.

What is the best way to achieve this, programmatically?

A: 

If you want to limit the data rate to, say, 100 kB/s, you can do something like this:

while (data left to send) {
    send 100 kB of data
    wait 1 second
}

This keeps the average rate from exceeding what you intend, though the data goes out in one burst per second.
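
In C with BSD sockets, that loop might look roughly like the sketch below. This is a minimal sketch, not the answer's code: the chunk size and the send_throttled name are illustrative, and error handling is reduced to returning -1.

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK (100 * 1024)   /* 100 kB per 1-second interval */

/* Send len bytes from buf over the connected socket fd,
 * pausing 1 second after each chunk. */
ssize_t send_throttled(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t n = len - off < CHUNK ? len - off : CHUNK;
        ssize_t sent = send(fd, buf + off, n, 0);
        if (sent < 0)
            return -1;            /* caller inspects errno */
        off += (size_t)sent;      /* send() may write less than n */
        sleep(1);
    }
    return (ssize_t)off;
}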

Greg Hewgill
+8  A: 

The problem with sleeping a constant 1 second after each transfer is that it gives you choppy network performance.

Let BandwidthMaxThreshold be the desired bandwidth threshold.

Let TransferRate be the current transfer rate of the connection.

Then...

If you detect your TransferRate > BandwidthMaxThreshold, then set SleepTime = 1 + SleepTime * 1.02 (increase the sleep time by 2%).

Before or after each network operation, do a Sleep(SleepTime).

If you detect that your TransferRate is a lot lower than BandwidthMaxThreshold, you can decrease SleepTime. Alternatively, you could simply decay SleepTime over time; eventually it will reach 0 again.

Instead of a fixed 2% increase, you could also scale the increase linearly with the difference TransferRate - BandwidthMaxThreshold.

This solution is good because it introduces no sleeps at all when the user's transfer rate is already below the threshold.
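
A minimal sketch of this adaptive scheme in C, assuming you measure the transfer rate yourself; the function name, the millisecond units, and the 80% cutoff for "a lot lower" are illustrative assumptions, not part of the answer:

#include <unistd.h>

static unsigned long sleep_ms = 0;    /* SleepTime, in milliseconds */

/* Call before or after each network operation. */
void throttle(double transfer_rate, double bandwidth_max_threshold)
{
    if (transfer_rate > bandwidth_max_threshold) {
        /* grow by 2%, plus 1 so it can climb away from zero */
        sleep_ms = 1 + (unsigned long)(sleep_ms * 1.02);
    } else if (transfer_rate < 0.8 * bandwidth_max_threshold && sleep_ms > 0) {
        sleep_ms--;                   /* decay when well under the limit */
    }
    if (sleep_ms > 0)
        usleep(sleep_ms * 1000);      /* usleep() takes microseconds */
}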

Brian R. Bondy
@Brian: Why is that '1 +' necessary? Won't SleepTime = 1.02 * SleepTime by itself increase the value by 2%?
sundar
I just added it so that if your SleepTime gets to 0 it will be able to grow again. Also so that it will always grow by at least 1 when it has to grow.
Brian R. Bondy
+4  A: 

I've had good luck with trickle. It's cool because it can throttle arbitrary user-space applications without modification. It works by preloading its own send/recv wrapper functions which do the bandwidth calculation for you.

The biggest drawback I found is that it's hard to coordinate multiple applications that need to share finite bandwidth. The "trickled" daemon helps, but I found it complicated.
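
For what it's worth, a typical standalone invocation looks something like this (limits in KB/s; check the trickle man page for the exact options on your system, and the rsync command here is just an example):

trickle -s -u 50 -d 200 rsync -a src/ host:dst/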

Chris Dolan
+3  A: 

The best way would be to use a token bucket.

Transmit only when you have enough tokens to fill a packet (1460 bytes would be a good amount); if you are on the receive side, read from the socket only when you have enough tokens. A bit of simple math will tell you how long you have to wait before you have enough tokens, so you can sleep for that amount of time (be careful to calculate how many tokens you actually gained from how long you actually slept, since most operating systems can sleep your process for longer than you asked).

To control the size of the bursts, limit the maximum number of tokens you can accumulate; one second's worth of tokens is a good cap.
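
Here is a minimal token-bucket sketch in C, with tokens measured in bytes; the struct and function names are illustrative, not a known library API:

#include <time.h>
#include <unistd.h>

struct bucket {
    double tokens;        /* currently available tokens, in bytes */
    double rate;          /* refill rate, in bytes per second */
    double burst;         /* cap, e.g. one second's worth: rate * 1.0 */
    struct timespec last; /* time of last refill */
};

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

/* Block until at least `need` tokens are available, then spend them.
 * The clock is re-read after sleeping, so oversleeping simply earns
 * extra tokens instead of skewing the rate. */
void bucket_take(struct bucket *b, double need)
{
    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        b->tokens += b->rate * elapsed(b->last, now);
        if (b->tokens > b->burst)
            b->tokens = b->burst;     /* limit the burst size */
        b->last = now;
        if (b->tokens >= need) {
            b->tokens -= need;
            return;
        }
        /* sleep roughly long enough to earn the missing tokens */
        useconds_t us = (useconds_t)((need - b->tokens) / b->rate * 1e6);
        usleep(us > 0 ? us : 1000);
    }
}

Initialize last with clock_gettime() and tokens to 0 (or to burst, if an initial burst is acceptable), then call bucket_take(&b, 1460) before each 1460-byte send().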

CesarB