When measuring network latency (time ack received - time msg sent) in any protocol over TCP, what timer would you recommend using, and why? What resolution does it have? What are its other advantages/disadvantages?

Optional: how does it work?

Optional: what timer would you NOT use and why?

I'm looking mostly for Windows / C++ solutions, but if you'd like to comment on other systems, feel free to do so.

(Currently we use GetTickCount(), but it's not a very accurate timer.)

+2  A: 

You mentioned that you use GetTickCount(), so I'm going to recommend that you take a look at QueryPerformanceCounter().
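
A minimal sketch of how that could be applied to the latency measurement in the question (the send/wait step is a placeholder, not part of this answer):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, sent, acked;
    QueryPerformanceFrequency(&freq);  // counts per second, fixed at boot

    QueryPerformanceCounter(&sent);
    // ... send the message and block until the ack arrives ...
    QueryPerformanceCounter(&acked);

    double latencyMs =
        (acked.QuadPart - sent.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << "round-trip latency: " << latencyMs << " ms\n";
}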

John Dibling
+6  A: 

This is a copy of my answer from: C++ Timer function to provide time in nano seconds

For Linux (and BSD) you want to use clock_gettime().

#include <time.h> // clock_gettime() is declared here, not in <sys/time.h>

int main()
{
   timespec ts;
   // CLOCK_MONOTONIC works on Linux and FreeBSD and is the better choice
   // for interval measurement, since it is not affected by clock adjustments.
   clock_gettime(CLOCK_MONOTONIC, &ts);
   // clock_gettime(CLOCK_REALTIME, &ts); // wall-clock time instead
   // On older glibc, link with -lrt.
}
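
To turn that into the latency number the question asks for, take one reading when the message is sent and another when the ack arrives, then subtract. A sketch (the send/wait step is a placeholder):

#include <time.h>
#include <cstdio>

int main()
{
   timespec sent, acked;
   clock_gettime(CLOCK_MONOTONIC, &sent);
   // ... send the message and block until the ack arrives ...
   clock_gettime(CLOCK_MONOTONIC, &acked);

   long long ns = (acked.tv_sec - sent.tv_sec) * 1000000000LL
                + (acked.tv_nsec - sent.tv_nsec);
   printf("round-trip latency: %lld ns\n", ns);
}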

For Windows you want to use QueryPerformanceCounter(). And here is more on QPC.

Apparently there is a known issue with QPC on some chipsets, so you may want to make sure you do not have those chipsets. Additionally, some dual-core AMDs may also cause a problem. See the second post by sebbbi, where he states:

QueryPerformanceCounter() and QueryPerformanceFrequency() offer a bit better resolution, but have different issues. For example in Windows XP, all AMD Athlon X2 dual core CPUs return the PC of either of the cores "randomly" (the PC sometimes jumps a bit backwards), unless you specially install AMD dual core driver package to fix the issue. We haven't noticed any other dual+ core CPUs having similar issues (p4 dual, p4 ht, core2 dual, core2 quad, phenom quad).
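
The usual workaround (my sketch, not something from the quoted post) is to pin the thread to a single core with SetThreadAffinityMask() while reading the counter, so that both readings of a measurement come from the same core:

#include <windows.h>

// Read QPC from a fixed core so the value cannot jump between cores.
LONGLONG ReadCounterPinned()
{
    DWORD_PTR oldMask = SetThreadAffinityMask(GetCurrentThread(), 1); // core 0
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    SetThreadAffinityMask(GetCurrentThread(), oldMask); // restore the old mask
    return now.QuadPart;
}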

grieve