I am trying to limit my application's send rate to 900kbps, but the problem is that the protocol I use is message oriented and the messages have very different sizes. I can have messages from 40 bytes all the way up to 125000 bytes, and all messages are sent as atomic units.

I tried implementing a token bucket, but if I set a small bucket size the big packets never get sent, and a larger bucket results in a large burst with no rate limiting at all.

This is my small implementation in C:

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

typedef struct token_buffer {
  size_t capacity;
  size_t tokens;
  double rate;        // tokens (bytes) per millisecond
  uint64_t timestamp; // last refill time, in milliseconds
} token_buffer;


static uint64_t time_now()
{
  struct timeval ts;
  gettimeofday(&ts, NULL);
  // Cast before multiplying so the seconds value cannot overflow a 32-bit type.
  return (uint64_t)ts.tv_sec * 1000 + ts.tv_usec / 1000;
}

static int token_buffer_init(token_buffer *tbf, size_t max_burst, double rate)
{
  tbf->capacity  = max_burst;
  tbf->tokens    = max_burst;
  tbf->rate      = rate;        // bytes per millisecond
  tbf->timestamp = time_now();
  return 0;
}

static int token_buffer_consume(token_buffer *tbf, size_t bytes)
{
  // Refill the bucket based on the time elapsed since the last call.
  uint64_t now = time_now();
  size_t delta = (size_t)(tbf->rate * (now - tbf->timestamp));
  tbf->tokens = (tbf->capacity < tbf->tokens + delta) ? tbf->capacity
                                                      : tbf->tokens + delta;
  tbf->timestamp = now;

  fprintf(stdout, "TOKENS %zu  bytes: %zu\n", tbf->tokens, bytes);

  if (bytes <= tbf->tokens) {
    tbf->tokens -= bytes;
  } else {
    return -1;
  }

  return 0;
}

Then somewhere in main():

while(1) {
  len = read_msg(&msg, file);

  // Busy-loop until we have enough tokens.
  // If len is larger than the bucket capacity the loop never ends.
  // If the capacity is too large then no rate limiting occurs.
  while (token_buffer_consume(&tbf, len) != 0) {}

  send_to_net(&msg, len);
}
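For reference, since time_now() is in milliseconds, the rate parameter is in bytes per millisecond, so 900kbps translates to roughly 112.5 bytes/ms (assuming 1 kbit = 1000 bits; the max_burst value below is only a placeholder, not a recommendation):

// 900 kbit/s = 112,500 bytes/s = 112.5 bytes per millisecond.
token_buffer tbf;
token_buffer_init(&tbf, 10000 /* max_burst, placeholder */, 112.5);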
+2  A: 

You are limiting your maximum message size to max_burst (which gets assigned to tbf->capacity at initialization): since tbf->tokens never grows beyond that value, bigger messages will never get sent because of this check:

if (bytes <= tbf->tokens) {
  tbf->tokens -= bytes;
} else {
  return -1;
}

So the code indeed sets a hard limit on the burst at max_burst; if you want to keep that burst size, you will have to fragment your messages into chunks that fit into the bucket, as sketched below.
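A rough sketch of such fragmentation, reusing token_buffer_consume() from the question. It assumes a send_to_net() that takes a plain pointer/length pair, that the receiver can reassemble the fragments, and it uses usleep() from <unistd.h> with an arbitrary 1 ms interval to avoid busy-waiting:

// Hypothetical helper: send one large message as bucket-sized chunks.
static void send_fragmented(token_buffer *tbf, const char *msg, size_t len)
{
  size_t offset = 0;

  while (offset < len) {
    size_t chunk = len - offset;
    if (chunk > tbf->capacity)
      chunk = tbf->capacity;          // never ask for more than the bucket holds

    // Wait until enough tokens have accumulated for this chunk.
    while (token_buffer_consume(tbf, chunk) != 0)
      usleep(1000);                   // 1 ms matches the refill granularity

    send_to_net(msg + offset, chunk);
    offset += chunk;
  }
}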

Assuming this is the only place in the code where you can insert the limiter, you might get a better result if you replace the above piece with:

if(tbf->tokens > 0) {
  tbf->tokens -= bytes;
} else {
  return -1;
}
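Note that for this to work, tbf->tokens (and capacity) need to become a signed type, since with size_t the subtraction would wrap around to a huge value instead of going negative. A minimal sketch of consume() under that assumption (using int64_t for the token count, a choice made here for illustration):

// Relaxed check: the bucket is allowed to go negative ("token debt")
// after a large message, and further sends are blocked until the debt
// has been repaid by the refill.
static int token_buffer_consume(token_buffer *tbf, size_t bytes)
{
  uint64_t now = time_now();
  int64_t delta = (int64_t)(tbf->rate * (now - tbf->timestamp));
  tbf->tokens = (tbf->tokens + delta > tbf->capacity) ? tbf->capacity
                                                      : tbf->tokens + delta;
  tbf->timestamp = now;

  if (tbf->tokens > 0) {
    tbf->tokens -= (int64_t)bytes;  // may drive the bucket below zero
    return 0;
  }
  return -1;                        // stay blocked until the debt is repaid
}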

The semantics will be slightly different, but on average over a long period of time it should get you approximately the rate you are looking for. Of course, if you send 125K in one message over a 1gbps link, one can hardly talk about a 900kbps rate: it will be a full 1gbps burst of packets, and they will need to be queued somewhere if there are lower-speed links along the path, so be prepared to lose some of the packets in that case.

But depending on your application and the transport protocol you are using (TCP/UDP/SCTP/...?), you might want to move the shaping code further down the stack, because packets on the wire are typically at most 1500 bytes anyway (including the various network/transport protocol headers).

One thing that might be interesting for testing is http://www.linuxfoundation.org/en/Net:Netem, if your objective is to cope with smaller-capacity links. Or grab a couple of older routers with 1mbps serial ports connected back to back.

Andrew Y