views: 589
answers: 4

HELP PLEASE! I have an application that needs as close to real-time processing as possible, and I keep running into this unusual delay issue with both TCP and UDP. The delay occurs like clockwork and is always the same length of time (mostly 15 to 16 ms). It occurs when transmitting to any machine (even locally) and on any network (we have two).

A quick rundown of the problem:

I am always using Winsock in C++, compiled in VS 2008 Pro, but I have written several programs to send and receive in various ways using both TCP and UDP. I always use an intermediate program (running locally or remotely) written in various languages (MATLAB, C#, C++) to forward the information from one program to the other. Both Winsock programs run on the same machine, so they display timestamps for Tx and Rx from the same clock.

I keep seeing a pattern emerge where a burst of packets gets transmitted and then there is a delay of around 15 to 16 milliseconds before the next burst, despite no delay being programmed in. Sometimes it is 15 to 16 ms between each packet instead of between bursts of packets. Other times (rarely) I see a delay of a different length, such as ~47 ms. I always seem to receive the packets back within a millisecond of them being transmitted, though, with the same pattern of delay exhibited between the transmitted bursts.

I have a suspicion that Winsock or the NIC is buffering packets before each transmit, but I haven't found any proof. I have a Gigabit connection to one network that gets various levels of traffic, but I also experience the same thing when running the intermediate program on a cluster that has a private network with no traffic (from users, at least) and a 2 Gigabit connection. I even experience this delay when running the intermediate program locally with the sending and receiving programs.
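The send side is roughly like this (a simplified sketch, not my exact code; the timestamp source, address, and port here are just placeholders):

    #include <winsock2.h>
    #include <cstdio>
    #include <cstring>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in dest;
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);                    // placeholder port
        dest.sin_addr.s_addr = inet_addr("127.0.0.1");  // intermediate program

        char payload[64] = "test";
        for (int i = 0; i < 1000; ++i)
        {
            DWORD txTime = GetTickCount();              // millisecond timestamp
            sendto(s, payload, sizeof(payload), 0,
                   (sockaddr*)&dest, sizeof(dest));
            printf("packet %d Tx at %lu ms\n", i, txTime);
        }

        closesocket(s);
        WSACleanup();
        return 0;
    }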

+2  A: 

There is always buffering involved, and it varies between hardware, drivers, the OS, etc. The packet schedulers also play a big role.

If you want "hard real-time" guarantees, you probably should stay away from Windows...
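For example, on the TCP side, one piece of send-side buffering you can at least rule out is Nagle's algorithm (sketch; 's' is assumed to be an already-created TCP socket):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    // Disable Nagle's algorithm so small TCP sends are not coalesced
    // into larger segments before transmission.
    void disableNagle(SOCKET s)   // 's' is an already-connected TCP socket
    {
        BOOL noDelay = TRUE;
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   (const char*)&noDelay, sizeof(noDelay));
    }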

jldupont
Bear in mind that, for most purposes, a 15 ms delay will go unnoticed. I have a 40 ms ping to my ISP and don't notice problems even when that doubles. An operating system designed for desktop use, like Windows or Mac OS X, doesn't have to worry about delays that small. I strongly agree with jldupont: if a 15 ms delay is a problem, you really need to find an OS designed for hard real-time use, since soft real time isn't cutting it for you. ("Soft real time" = everything runs fast enough; "hard real time" = the system guarantees time slices.)
David Thornley
A: 

What you're probably seeing is a scheduler delay: your application is waiting for other processes to finish their timeslice and give up the CPU. Standard timeslices on multiprocessor Windows are from 15 ms to 180 ms.

You could try raising the priority of your application/thread.
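For example (sketch; HIGH_PRIORITY_CLASS is usually a safer choice than going all the way to realtime):

    #include <windows.h>

    // Raise the process and the calling thread above normal priority so they
    // are scheduled ahead of ordinary desktop processes. Going all the way to
    // REALTIME_PRIORITY_CLASS can starve the rest of the system.
    void raisePriority()
    {
        SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    }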

caf
+3  A: 

I figured out the problem this morning while rewriting the server in Java. The resolution of my Windows system clock is between 15 and 16 milliseconds. That means that every packet showing the same millisecond as its transmit time was actually sent at a different millisecond within a 15 to 16 millisecond interval, but my timestamps only increment every 15 to 16 milliseconds, so they appear the same.
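You can see the resolution directly by watching when the tick count actually changes (quick sketch, using GetTickCount as a stand-in for whatever millisecond clock the timestamps come from):

    #include <windows.h>
    #include <cstdio>

    // Print how far the millisecond tick count jumps each time it changes;
    // on a default Windows setup the steps are roughly 15-16 ms, not 1 ms.
    int main()
    {
        DWORD last = GetTickCount();
        for (int changes = 0; changes < 10; )
        {
            DWORD now = GetTickCount();
            if (now != last)
            {
                printf("tick advanced by %lu ms\n", now - last);
                last = now;
                ++changes;
            }
        }
        return 0;
    }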

I came here to answer my question and saw the response about raising the priority of my program. So I started all three programs, went into Task Manager, raised all three to "real time" priority (which no other process was at), and ran them. I got the same 15 to 16 millisecond intervals.

Thanks for the responses though.

Logikal
You could use QueryPerformanceCounter (Stopwatch in .NET) to get much better accuracy; it's implemented by the hardware instead of the Windows timer.
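For example (sketch):

    #include <windows.h>

    // High-resolution timestamp in microseconds using the performance counter,
    // which ticks far finer than the ~15.6 ms system timer.
    long long microsecondsNow()
    {
        LARGE_INTEGER freq, count;
        QueryPerformanceFrequency(&freq);   // counts per second
        QueryPerformanceCounter(&count);    // current count
        // Split the conversion to avoid overflowing 64 bits on long uptimes.
        return (count.QuadPart / freq.QuadPart) * 1000000LL
             + (count.QuadPart % freq.QuadPart) * 1000000LL / freq.QuadPart;
    }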
A: 

Hi Logikal, it seems that I have exactly the same problem you describe. Have you been successful in solving this issue? How? Thanks, MKZ

MKZ