I'm writing an application and I'm able to set its throughput (the number of bits per second it sends over the wire) to whatever rate I wish. However, I would like to set it as high as possible, as long as other traffic on the network is not heavily impacted.

The problem is, I don't have a good metric to measure that impact. I thought of the following, but none of them is really "complete":

  1. Increase in average delay time for a packet
  2. Increase in packet loss
  3. Increase in jitter
  4. Increase in the average time it takes for TCP transactions to complete (downloading files over HTTP)

Is there any standard metric? Do you have any other ideas on how to measure an application's impact on the network?

By the way, I have complete control over the network, and can take whatever measurements I want in order to compute that metric.
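For reference, here is a rough sketch (Python; the reflector address is a placeholder) of how I imagine sampling metrics 1-3: send timestamped UDP probes to an echo reflector I run on the network, and compute average delay, jitter, and loss from the replies.

    #!/usr/bin/env python3
    # Rough sketch: sample per-packet delay, jitter, and loss (metrics 1-3)
    # by sending timestamped UDP probes to an echo reflector on the network.
    # REFLECTOR is a placeholder for a host I control that echoes UDP payloads.
    import socket
    import struct
    import time

    REFLECTOR = ("192.0.2.10", 9000)   # placeholder echo reflector
    PROBES = 100
    TIMEOUT = 1.0                      # seconds before a probe counts as lost

    def sample():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT)
        rtts, lost = [], 0
        for seq in range(PROBES):
            sock.sendto(struct.pack("!Id", seq, time.monotonic()), REFLECTOR)
            try:
                data, _ = sock.recvfrom(64)
                _, sent = struct.unpack("!Id", data[:12])
                rtts.append(time.monotonic() - sent)
            except socket.timeout:
                lost += 1
            time.sleep(0.05)           # 20 probes per second
        avg = sum(rtts) / len(rtts) if rtts else float("nan")
        # Jitter here = mean absolute difference between consecutive delays.
        jitter = (sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)
                  if len(rtts) > 1 else 0.0)
        return avg, jitter, lost / PROBES

    if __name__ == "__main__":
        delay, jitter, loss = sample()
        print(f"delay {delay*1000:.1f} ms, jitter {jitter*1000:.1f} ms, loss {loss:.1%}")

Metric 4 would presumably need a similar loop timing HTTP downloads against a test server.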

Thanks,

Rouli

A: 

Traffic Engineering is a pretty complex field. Quality of Service is probably a good starting point for this problem.

Hank Gay
A: 

This is one of those questions that might be hard to answer programmatically. In apps I've seen that allow this sort of throttling, it's always been a configuration option. It's generally just too hard to know about your user's network; any assumptions you make will probably be wrong.
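A minimal sketch of that approach (the file name, section, and option names are made up): read the cap from a config file and enforce it with a simple token bucket, so the user decides what is safe on their own network.

    # Sketch of a user-configurable send-rate cap (names are illustrative):
    # read a bits-per-second limit from an INI file and enforce it with a token bucket.
    import configparser
    import time

    class TokenBucket:
        def __init__(self, rate_bps):
            self.rate = rate_bps / 8.0          # bytes per second
            self.capacity = self.rate           # allow roughly one second of burst
            self.tokens = self.capacity
            self.last = time.monotonic()

        def wait_for(self, nbytes):
            """Block until nbytes may be sent without exceeding the configured rate."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    config = configparser.ConfigParser()
    config.read("app.conf")                     # hypothetical config file
    limit = config.getint("network", "max_rate_bps", fallback=1_000_000)
    bucket = TokenBucket(limit)
    # Before each send: bucket.wait_for(len(payload))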

mercan01
A: 

Different networks behave in different ways as you exceed their bandwidth. Most of them exhibit a succession of badness along these lines:

  1. Jitter will begin to shoot through the roof as some packets have to be queued or retransmitted (e.g., collisions on half-duplex Ethernet or wireless). Average latency will go up slightly.
  2. As oversaturation continues (or at higher oversaturation levels), average latency will go through the roof, as pretty much all packets are being queued or retransmitted. This effect may be limited if queue sizes are small.
  3. Packet loss will increase as queues overflow. The higher you drive the bandwidth, the more packets will be lost. Depending on hardware, jitter and latency may or may not go back down.

If some form of QoS is in use, different packet streams may see these effects independently. For example, you might be pushing 3x the bandwidth over your app's connection and still see relatively little change in ping time. So you must measure with your application's own packets.

(1) and (2) may not occur on a given network. (3) will always occur, no matter what. All three can, unfortunately, also occur even when you're nowhere near the bandwidth limit.
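If you want to automate the "as high as possible without hurting others" decision, one rough sketch (the thresholds and the set_rate/measure_path helpers below are made-up placeholders, not a standard algorithm) is to ramp the send rate in steps and back off at the first sign of any of the three stages above, measured with your own packets:

    # Sketch of a ramp-and-back-off calibration loop. It assumes measure_path()
    # returns (avg_delay_s, jitter_s, loss_fraction) measured with the
    # application's own packets, and set_rate() sets the send rate in bits/s.
    def calibrate(set_rate, measure_path, start_bps=100_000, step=1.5,
                  max_bps=50_000_000):
        set_rate(0)
        base_delay, base_jitter, base_loss = measure_path()   # idle baseline

        rate, best = start_bps, 0
        while rate <= max_bps:
            set_rate(rate)
            delay, jitter, loss = measure_path()
            # Back off at the first sign of the "succession of badness":
            # jitter rising sharply (1), delay rising sharply (2), added loss (3).
            if (jitter > 2.0 * base_jitter + 0.002 or
                    delay > 1.5 * base_delay + 0.005 or
                    loss > base_loss + 0.005):
                break
            best = rate
            rate = int(rate * step)

        set_rate(best)
        return best

The multipliers and absolute floors are arbitrary knobs; the point is only that the stopping condition comes from measurements of your own traffic, not from a fixed rate.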

derobert