I have a server application Foo that listens on a specific port and a client application Bar that connects to Foo (both are .NET apps).

Everything works fine. So far, so good.

But what happens to Bar when the connection slows down, or when it takes a long time for Foo to respond? I have to test it.

My question is, how can I simulate such a slowdown?

Generally that's not a big problem (there are some free tools out there), but Foo and Bar are both running on production machines (yes, they are developed on production machines; I know that's very bad, but believe me, it's not my decision). So I can't just use a tool that limits the whole bandwidth of the network adapters.

Is there a tool out there that can limit the bandwidth of, or add delay to, the connection on a specific port? And is it possible to achieve this in .NET/C#, so that I can write proper unit/integration tests?

A: 

You could potentially put Sleep() calls into your code to slow it down, though I don't think that would be a good idea.

To be honest, it really depends on your network setup; my suggestion would be to do some QoS-based traffic shaping, or something similar.

Mez
+1  A: 

This question is based on a pre-existing assumption - that typical usage of your application will be over a slow link. Is this a valid assumption?

Maybe you should ask the following questions:

  1. Is this a TCP connection, intended to run over an unusually slow medium, such as dialup?

  2. Can you quantify the minimum acceptable throughput in order for the application to be a success?

  3. Is this connection of the highly-interactive variety (in which case latency becomes an issue, not just bandwidth)?

Yes, I'm questioning the assumption that's implicit in your question.

Assuming you've answered the above questions, are satisfied with the metrics and success criteria for your application, and still think you need some kind of stress test to prove things out, there are a couple of ways to go.

  1. Simulate a "slow connection" using a tool. I know that the Linux traffic control stuff is pretty advanced and can simulate just about anything (see the LARTC HOWTO); if you really want to get flexible, set up a Linux virtual machine as a router and point your PC's default route at it (a tc/netem sketch follows this list). There are probably a myriad of less capable tools for Windows that can do similar things.

  2. Write a custom proxy application that accepts a TCP connection and does a "pass-through", with custom Thread.Sleep calls according to some profile that you choose (sketched after this list). That would do a reasonable job of simulating a flaky TCP connection, but it is somewhat unscientific (the TCP back-off algorithms are a little hairy and difficult to simulate accurately).
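
As a rough illustration of option 1, this is what the shaping might look like with tc/netem on the Linux router. This is a sketch only; the eth0 interface name and the specific numbers are placeholders:

    # add 200ms latency (with 50ms jitter) and 1% packet loss to eth0
    tc qdisc add dev eth0 root handle 1: netem delay 200ms 50ms loss 1%
    # chain a token-bucket filter behind it to cap bandwidth at 256kbit/s
    tc qdisc add dev eth0 parent 1: handle 10: tbf rate 256kbit buffer 1600 limit 3000
    # remove the shaping when the test is done
    tc qdisc del dev eth0 root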
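
And as a rough illustration of option 2, here's a minimal sketch of such a pass-through proxy in C#. The port numbers and the flat 500 ms delay are made up for the example; a real test harness would drive the delay from whatever profile you choose:

    // Minimal throttling TCP proxy (illustrative): Bar connects to this
    // proxy instead of directly to Foo, and every forwarded chunk is delayed.
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class ThrottlingProxy
    {
        static void Main()
        {
            int listenPort = 9000;           // Bar connects here (assumed)
            string targetHost = "localhost"; // where Foo really listens (assumed)
            int targetPort = 8000;
            int delayMs = 500;               // artificial per-chunk latency

            var listener = new TcpListener(IPAddress.Any, listenPort);
            listener.Start();
            Console.WriteLine("Proxy listening on port {0}", listenPort);

            while (true)
            {
                TcpClient client = listener.AcceptTcpClient();
                TcpClient server = new TcpClient(targetHost, targetPort);

                // Pump bytes in both directions, sleeping before each chunk.
                new Thread(() => Pump(client, server, delayMs)).Start();
                new Thread(() => Pump(server, client, delayMs)).Start();
            }
        }

        static void Pump(TcpClient from, TcpClient to, int delayMs)
        {
            var buffer = new byte[4096];
            try
            {
                NetworkStream src = from.GetStream();
                NetworkStream dst = to.GetStream();
                int read;
                while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                {
                    Thread.Sleep(delayMs); // simulate the slow link
                    dst.Write(buffer, 0, read);
                }
            }
            catch (Exception) { /* connection dropped; fall through */ }
            finally
            {
                from.Close();
                to.Close();
            }
        }
    }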

Eric Smith
+1  A: 

I've used http://netlimiter.com/ for testing this kind of stuff. Some tools even let you simulate packet loss.

AaronLS
+1  A: 

In my job I sometimes have to test transfer code over slow/unreliable links. The best free way I've found to do this is the dummynet module in FreeBSD. I set up a few VMs, with a FreeBSD box between them acting as a transparent bridge, and use dummynet to munge the traffic going across the bridge to simulate whatever latency and packet loss I want.
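
If it helps, a dummynet pipe is configured through ipfw. A minimal sketch, with the numbers and the em0 interface name as placeholders:

    # model a 128kbit/s link with 200ms delay and 1% packet loss
    ipfw pipe 1 config bw 128Kbit/s delay 200ms plr 0.01
    # push all IP traffic crossing em0 through that pipe
    ipfw add 100 pipe 1 ip from any to any via em0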

I did a write-up about it on my blog a while back, titled Simulating Slow WAN Links with Dummynet and VMWare ESX. It should also be doable with VMWare Workstation or another virtualization product as long as you can control how the network interfaces operate.

anelson