I have two programs written in C++ that use Winsock. They both accept TCP connections, and one sends data while the other receives it. They are compiled in Visual Studio 2008. I also have a program written in C# that connects to both C++ programs and forwards the packets it receives from one to the other. In the process it counts and displays the number of packets forwarded, along with the elapsed time from the first packet to the most recent one.

The C++ program that sends packets simply loops 1000 times, sending the exact same data each time. When I run all three apps on my development machine (using loopback or an actual IP address), the packets get through the entire system in around 2 seconds. When I run all three on any other PC in our lab, it always takes between 15 and 16 seconds. The PCs have different processors and amounts of memory, but all of them run Windows XP Professional. My development PC actually has an older AMD Athlon and half as much memory as one of the machines that takes longer to perform this task. I have watched the CPU usage graph in Task Manager on my machine and one other, and neither of them uses a significant amount of the processor (never more than 10%) while these programs run.
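For reference, the sender's loop is essentially shaped like the following sketch (hypothetical - the socket setup and buffer contents are mine, not the actual code):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Hypothetical shape of the sender's main loop: push the same
    // payload through an already-connected TCP socket 1000 times.
    void SendLoop(SOCKET s, const char* buf, int len)
    {
        for (int i = 0; i < 1000; ++i)
        {
            send(s, buf, len, 0);  // identical data every iteration
        }
    }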

Does anyone have any ideas? The only thing I can think of is to install Visual Studio on a target machine to see if it has something to do with that.

Problem Solved ====================================================

I first installed Visual Studio to see if that had any effect, and it didn't. Then I tested the programs on my new development PC, and they ran just as fast as on my old one. Running the programs on a Vista laptop yielded 15-second times again.

I printed timestamps on either side of certain instructions in the server program to see which was taking the longest, and I found that the delay was being caused by a Sleep() call of 1 millisecond. Apparently on my old and new systems the Sleep(1) was effectively being ignored: I would see anywhere from 10 to more than 20 packets sent within the same millisecond, with an occasional break in execution of around 15 or 16 milliseconds, which led to the total time of around 2 seconds for 1000 packets. On the systems that took around 15 seconds to run through 1000 packets, there was a 15 or 16 millisecond gap between sending each packet. (This matches the default Windows timer resolution of about 15.6 ms: Sleep(1) rounds up to a full timer tick unless something on the machine has raised the resolution.)
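To see the tick quantization directly, here is a small sketch (my illustration, not the original server code) that times each Sleep(1) with QueryPerformanceCounter; at the default timer resolution each call comes back after roughly 15-16 ms:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        for (int i = 0; i < 10; ++i)
        {
            QueryPerformanceCounter(&t0);
            Sleep(1);                      // requests 1 ms...
            QueryPerformanceCounter(&t1);

            // ...but actually waits until the next scheduler tick,
            // ~15.6 ms at the default timer resolution.
            printf("Sleep(1) took %.2f ms\n",
                   (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);
        }
        return 0;
    }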

I commented out the Sleep() call and now the packets get sent immediately. Thanks for the help.
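For anyone who needs to keep the throttling rather than remove it, raising the system timer resolution with the multimedia timer API makes Sleep(1) behave as expected. A minimal sketch (assuming the 1 ms delay is still wanted):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    int main()
    {
        // Raise the system timer resolution to 1 ms for this process.
        timeBeginPeriod(1);

        // With the finer tick, Sleep(1) waits roughly 1-2 ms instead
        // of rounding up to the default 15-16 ms quantum.
        for (int i = 0; i < 1000; ++i)
        {
            Sleep(1);
            // ... send one packet here ...
        }

        // Restore the default resolution when done; the setting is
        // system-wide while held, so it should always be paired.
        timeEndPeriod(1);
        return 0;
    }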

+2  A: 

You should profile your application in the good 2-second case and in the 15-second lab case and see where they differ. The difference could be due to any number of problems (disk, antivirus, network) - without any data backing it up we'd just be shooting in the dark.

If you don't have access to a profiler, you can add timing instrumentation to various phases of your program to see which phase is taking longer.
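For example, a minimal scope-timer sketch (the names are mine, not from the question) that brackets a phase of the program and prints its elapsed time:

    #include <windows.h>
    #include <stdio.h>

    // Minimal scope timer: prints the elapsed milliseconds for the
    // block it lives in when it goes out of scope.
    struct ScopeTimer
    {
        const char* label;
        LARGE_INTEGER freq, start;

        ScopeTimer(const char* l) : label(l)
        {
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&start);
        }

        ~ScopeTimer()
        {
            LARGE_INTEGER end;
            QueryPerformanceCounter(&end);
            printf("%s: %.3f ms\n", label,
                   (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart);
        }
    };

    // Usage: { ScopeTimer t("send phase"); /* phase under test */ }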

Michael
A: 

You could try checking the Winsock performance-tuning registry settings - it may be that the installation of some game or utility has tweaked those on your PC.
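As a code-level counterpart to the registry check, you can also rule out Nagle coalescing on the sockets themselves; a minimal sketch (the helper name is mine):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Disable Nagle's algorithm on a connected socket so small sends
    // go out immediately instead of being coalesced.
    int DisableNagle(SOCKET s)
    {
        BOOL noDelay = TRUE;
        return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                          (const char*)&noDelay, sizeof(noDelay));
    }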

soru