The idea is that an existing project uses timeGetTime() (for Windows targets) quite frequently.

milliseconds = timeGetTime();

Now, this could be replaced with

double tmp = (double)lpPerformanceCount.QuadPart / lpFrequency.QuadPart;
milliseconds = rint(tmp * 1000);

with lpPerformanceCount and lpFrequency being the LARGE_INTEGER values filled in by one call each to QueryPerformanceCounter() and QueryPerformanceFrequency().
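
For what it's worth, here is a minimal sketch of what that replacement might look like when spelled out in full. The helper name and the cached-frequency detail are illustrative additions, not part of the existing project:

#include <windows.h>
#include <math.h>

/* Hypothetical helper: returns a millisecond count like timeGetTime(),
   but derived from the high-resolution performance counter. */
static DWORD QpcMilliseconds(void)
{
    static LARGE_INTEGER frequency = { 0 };
    LARGE_INTEGER counter;

    /* The counter frequency is fixed at boot, so query it only once. */
    if (frequency.QuadPart == 0)
        QueryPerformanceFrequency(&frequency);

    QueryPerformanceCounter(&counter);

    double tmp = (double)counter.QuadPart / (double)frequency.QuadPart;
    return (DWORD)rint(tmp * 1000.0);
}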

I know Windows' internals are kind of voodoo, but can someone decipher which of the two is more accurate and/or which has more overhead?

I suspect the accuracy might be the same, but that QueryPerformanceCounter has less overhead. I have no hard data to back that up, though.

Of course I wouldn't be surprised if the opposite is true.

If the overhead is tiny either way, I would be more interested in whether there is any difference in accuracy.

+2  A: 

The accuracy of timeGetTime() is variable, based on the last used timeBeginPeriod. It will never be better than one millisecond. QueryPerformanceCounter is variable too, depending on hardware support. It will never be worse than about a microsecond.
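
To make the timeBeginPeriod dependence concrete, this is roughly how timeGetTime() gets pinned to its best (1 ms) resolution; the function and the Sleep() call are only an illustrative sketch:

#include <windows.h>
#include <mmsystem.h>   /* timeGetTime, timeBeginPeriod; link with winmm.lib */

/* Illustrative only: time a piece of work at 1 ms resolution. */
DWORD TimeSomethingMs(void)
{
    DWORD elapsed = 0;

    /* Ask for 1 ms timer resolution while we measure. */
    if (timeBeginPeriod(1) == TIMERR_NOERROR)
    {
        DWORD start = timeGetTime();
        Sleep(5);                          /* stand-in for the work being timed */
        elapsed = timeGetTime() - start;   /* now good to roughly 1 ms */

        timeEndPeriod(1);                  /* always pair with timeBeginPeriod */
    }
    return elapsed;
}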

Neither of them has notable overhead; QPC is probably a bit heavier. Whether that's significant to you is quite unclear from your question. I doubt it, but measure. With QPC.

Hans Passant
If microseconds are converted to milliseconds, would it be more accurate?
Lela Dax
Well, that's a very deep question. I'll take the high road on that one: yes. There is no way that timing code execution down to the *microsecond* level on common operating systems will ever give you an accurate value. The last 4 digits are just noise, changing constantly when you repeat the timing test over and over again. So, yes, just throwing away the noise digits gives you a more stable number.
Hans Passant
Continued: more stable. But not more accurate. The relative error is about the same, a wee bit more for timing values in milliseconds. Very wee.
Hans Passant
+1  A: 

Be careful: QueryPerformanceCounter may be processor dependent. If your thread grabs the perf counter on one CPU, and ends up on another CPU before it grabs again, the results may not be reliable. See the MSDN entry.
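
If that case does matter, the usual workaround is to pin the measuring thread to one CPU with SetThreadAffinityMask just around the counter reads. A rough sketch (names simplified, error handling omitted, and not taken from the MSDN entry itself):

#include <windows.h>

/* Measure an interval with QPC while keeping the thread on one CPU,
   so both readings come from the same processor's counter. */
double MeasureSecondsOnOneCpu(void)
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    /* Restrict the thread to CPU 0; keep the old mask so it can be restored. */
    DWORD_PTR oldMask = SetThreadAffinityMask(GetCurrentThread(), 1);

    QueryPerformanceCounter(&start);
    Sleep(10);                             /* stand-in for the work being timed */
    QueryPerformanceCounter(&stop);

    SetThreadAffinityMask(GetCurrentThread(), oldMask);

    return (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}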

Michael Kohne
That doesn't appear to have a clean solution, since forcing a thread onto one CPU is, in common cases, not good at all for performance.
Lela Dax
@jalf: the MSDN entry you linked to says that this is only an issue with a buggy HAL or BIOS. Funnily enough, "it works, unless there's a bug" is true for timeGetTime as well. And for every other piece of software ever written.
jalf
@jalf: Buggy BIOSes are, unfortunately, rather common.
caf
@Lela: The problem is that the performance counters are something that low-end vendors will NOT necessarily test, and which don't show up much in production software. Therefore, bugs in them DO NOT GET FIXED. Take your chances as you will, but I avoid the perf counters on multi-core or multi-CPU systems except for debugging (and then I'm careful).
Michael Kohne
+1  A: 

Accuracy is better on QPC. timeGetTime is accurate within the 1-10ms range (and its resolution is no finer than 1ms), whereas QPC can give you accuracy in the microsecond range.

The overhead varies. QPC uses the best hardware timer available. That may be some lightweight one built into the CPU, or it may have to go out to the motherboard which adds significant latency. And it might be made more expensive by having to go through a driver correcting for the timer hardware being buggy.

But neither is prohibitively expensive. If you're not going to call the timer millions of times per second, the overhead is insignificant for both.
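
If you do want hard numbers for a particular machine, a rough benchmark along these lines (an illustrative sketch, not part of the answer above) will show the per-call cost of each API:

#include <windows.h>
#include <mmsystem.h>   /* timeGetTime; link with winmm.lib */
#include <stdio.h>

/* Rough per-call cost of each timer API, measured with QPC itself. */
int main(void)
{
    const int N = 1000000;
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    for (int i = 0; i < N; ++i) {
        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
    }
    QueryPerformanceCounter(&t1);
    printf("QPC:         %.1f ns/call\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1e9 / freq.QuadPart / N);

    QueryPerformanceCounter(&t0);
    for (int i = 0; i < N; ++i) {
        volatile DWORD ms = timeGetTime();
        (void)ms;
    }
    QueryPerformanceCounter(&t1);
    printf("timeGetTime: %.1f ns/call\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1e9 / freq.QuadPart / N);

    return 0;
}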

jalf
But would it be more accurate even if it's converted to milliseconds?
Lela Dax
Maybe. Because then at least it'd give you the time to the nearest ms, which timeGetTime might not be able to do on all systems. But if you don't need the accuracy, and you don't need the resolution, and you're not calling it often enough for the performance to be critical, **why are you wasting your time worrying about which timer to use**? Every timer provided by the OS is good enough then, and you could have saved yourself several hours by *just picking a timer*.
jalf