So I find myself porting a game that was originally written for the Win32 API to Linux (well, porting the OS X port of the Win32 port to Linux), and have implemented QueryPerformanceCounter by returning the uSeconds elapsed since process start-up:

#include <sys/time.h>   /* gettimeofday */

/* startTimeVal is a global struct timeval captured once at start-up,
   e.g. via gettimeofday(&startTimeVal, NULL) during initialization. */
BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    struct timeval currentTimeVal;

    gettimeofday(&currentTimeVal, NULL);
    performanceCount->QuadPart  = (currentTimeVal.tv_sec - startTimeVal.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);   /* seconds -> microseconds */
    performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);

    return true;
}

This, coupled with QueryPerformanceFrequency() reporting a constant 1000000 as the frequency, works well on my machine, giving me a 64-bit variable that contains uSeconds since the program's start-up. So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to anything other than Linux, however.
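
For reference, the matching QueryPerformanceFrequency() is just the constant (a sketch using the same Win32-style BOOL/LARGE_INTEGER typedefs as above):

BOOL QueryPerformanceFrequency(LARGE_INTEGER* frequency)
{
    frequency->QuadPart = 1000 * 1000;   /* the counter ticks in microseconds */
    return true;
}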

A: 

From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc.

CodingWithoutComments
+1  A: 

Hmm. This is a quote from the Open Group:

The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.

So it says microseconds explicitly, but says the resolution of the system clock is unspecified. I suppose resolution in this context means the smallest amount by which it will ever be incremented? Does anyone know of a more reliable way?

Bernard
+18  A: 

Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 µs. It can, consequently, jump forward and backward in time, depending on the processes running on your system. This effectively makes the answer to your question no.

You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues due to things like multi-core systems and external clock changes.

Also, look into the clock_getres() function.
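
A minimal interval measurement with it might look like this (a sketch; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

/* Returns the elapsed time between two timespecs, in nanoseconds. */
static long long elapsed_ns(struct timespec start, struct timespec end)
{
    return (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
         + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("elapsed: %lld ns\n", elapsed_ns(start, end));
    return 0;
}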

lbrandy
clock_gettime is present only on newer Linux; other systems have only gettimeofday().
vitaly.v.ch
@vitaly.v.ch it's POSIX, so it's not Linux-only. And 'newest'? Even 'Enterprise' distros like Red Hat Enterprise Linux are based on 2.6.18, which has clock_gettime, so no, not very new (the man page date in RHEL is 2004-March-12, so it's been around for a while). Unless you're talking about really freaking old kernels, what do you mean?
Spudd86
clock_gettime was included in POSIX in 2001. As far as I know, clock_gettime() is currently implemented in Linux 2.6 and QNX, but Linux 2.4 is still used in many production systems.
vitaly.v.ch
+2  A: 

The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate.

I obtained this answer from High Resolution Time Measurement and Timers, Part I
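
If you want to see what your particular system actually delivers, a quick (unscientific) probe is to spin on gettimeofday() and record the smallest nonzero step it ever takes:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval prev, cur;
    long min_step = -1;
    int i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < 1000000; i++) {
        long step;
        gettimeofday(&cur, NULL);
        step = (cur.tv_sec - prev.tv_sec) * 1000000L
             + (cur.tv_usec - prev.tv_usec);
        if (step > 0 && (min_step < 0 || step < min_step))
            min_step = step;
        prev = cur;
    }
    printf("smallest observed gettimeofday() step: %ld us\n", min_step);
    return 0;
}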

CodingWithoutComments
+16  A: 

High Resolution, Low Overhead Timing for Intel Processors

If you're on Intel hardware, here's how to read the CPU's time-stamp counter (TSC). It will tell you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement.

Note that this is a count of CPU cycles. On Linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds (converting to a double is handy here); a conversion sketch follows the code below.

When I run this on my box, I get

11867927879484732
11867927879692217
it took this long to call printf: 207485

Here's the Intel developer's guide that gives tons of detail.

#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void) {
    uint32_t lo, hi;
    __asm__ __volatile__ (
      "xorl %%eax, %%eax\n"
      "cpuid\n"                /* serializes: waits for earlier instructions to retire */
      "rdtsc\n"                /* reads the TSC into edx:eax */
      : "=a" (lo), "=d" (hi)
      :
      : "%ebx", "%ecx");       /* cpuid also clobbers ebx and ecx */
    return (uint64_t)hi << 32 | lo;
}

int main(void)
{
    uint64_t x, y;

    x = rdtsc();
    printf("%llu\n", (unsigned long long)x);
    y = rdtsc();
    printf("%llu\n", (unsigned long long)y);
    printf("it took this long to call printf: %llu\n",
           (unsigned long long)(y - x));
    return 0;
}
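
To convert cycle deltas to seconds, one rough approach is to read the nominal clock rate from /proc/cpuinfo and divide (a sketch; it assumes a constant-rate TSC and takes the first "cpu MHz" line at face value):

#include <stdio.h>

/* Returns the first "cpu MHz" value from /proc/cpuinfo, or 0.0 on failure. */
double cpu_mhz(void)
{
    FILE* f = fopen("/proc/cpuinfo", "r");
    char line[256];
    double mhz = 0.0;

    if (!f)
        return 0.0;
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "cpu MHz : %lf", &mhz) == 1)
            break;
    fclose(f);
    return mhz;
}

/* usage: double seconds = (double)(y - x) / (cpu_mhz() * 1e6); */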
Mark Harrison
Note that the TSC might not always be synchronized between cores, might stop or change its frequency when the processor enters lower power modes (and you have no way of knowing it did so), and in general is not always reliable. The kernel is able to detect when it is reliable, detect other alternatives like HPET and ACPI PM timer, and automatically select the best one. It's a good idea to always use the kernel for timing unless you are really sure the TSC is stable and monotonic.
CesarB
The TSC on Core and above Intel platforms is synchronized across multiple CPUs *and* increments at a constant frequency independent of power management states. See Intel Software Developer’s Manual, Vol. 3 Section 18.10. However the rate at which the counter increments is *not* the same as the CPU's frequency. The TSC increments at “the maximum resolved frequency of the platform, which is equal to the product of scalable bus frequency and maximum resolved bus ratio” Intel Software Developer’s Manual, Vol. 3 Section 18.18.5. You get those values from the CPU's model-specific registers (MSRs).
sstock
You can obtain the scalable bus frequency and maximum resolved bus ratio by querying the CPU’s model-specific registers (MSRs) as follows: Scalable bus frequency == MSR_FSB_FREQ[2:0] id 0xCD, Maximum resolved bus ratio == MSR_PLATFORM_ID[12:8] id 0x17. Consult Intel SDM Vol.3 Appendix B.1 to interpret the register values. You can use the msr-tools on Linux to query the registers. http://www.kernel.org/pub/linux/utils/cpu/msr-tools/
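
For what it's worth, reading those MSRs programmatically on Linux goes through the msr driver, which exposes each register at its address as an offset into /dev/cpu/N/msr (a sketch; it assumes the msr module is loaded and you are running as root):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Reads the 64-bit MSR at address `reg` on CPU 0 via the Linux msr driver. */
int read_msr(uint32_t reg, uint64_t* value)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = pread(fd, value, sizeof *value, reg);
    close(fd);
    return n == (ssize_t)sizeof *value ? 0 : -1;
}

int main(void)
{
    uint64_t fsb, plat;

    if (read_msr(0xCD, &fsb) == 0 && read_msr(0x17, &plat) == 0)
        printf("MSR_FSB_FREQ[2:0] = %llu, MSR_PLATFORM_ID[12:8] = %llu\n",
               (unsigned long long)(fsb & 0x7),
               (unsigned long long)((plat >> 8) & 0x1F));
    return 0;
}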
sstock
+1  A: 

@Mark Harrison:
I have to admit, most of your example went straight over my head. It does compile, and seems to work, though. Is this safe for SMP systems or SpeedStep?

EDIT:

From Wikipedia:

The RDTSC instruction has, until recently, been an excellent high-resolution, low-overhead way of getting CPU timing information. With the advent of multi-core/hyperthreaded CPUs, systems with multiple CPUs, and "hibernating" operating systems, RDTSC often no longer provides reliable results.

I would prefer reliable over super-high resolution.

Bernard
+3  A: 

So it says microseconds explicitly, but says the resolution of the system clock is unspecified. I suppose resolution in this context means the smallest amount by which it will ever be incremented?

The data structure is defined as having microseconds as a unit of measurement, but that doesn't mean that the clock or operating system is actually capable of measuring that finely.

Like other people have suggested, gettimeofday() is bad because setting the time can cause clock skew and throw off your calculation. clock_gettime(CLOCK_MONOTONIC) is what you want, and clock_getres() will tell you the precision of your clock.
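
Checking the precision is a one-liner (a sketch; on Linux with a high-resolution timer source this typically reports 1 ns):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}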

Joe Shaw
So what happens in your code when gettimeofday() jumps forward or backward with daylight savings?
mpez0
clock_gettime is present only on newest Linux. other system have only gettimeofday()
vitaly.v.ch
@mpez0 it doesn't
Spudd86
+7  A: 

@Bernard:

That's a good question... I think the code's ok. From a practical standpoint, we use it in my company every day, and we run on a pretty wide array of boxes, everything from 2-8 cores. Of course, YMMV, etc, but it seems to be a reliable and low-overhead (because it doesn't make a context switch into system-space) method of timing.

Generally how it works is:

  • declare the block of code to be assembler (and volatile, so the optimizer will leave it alone).
  • execute the CPUID instruction. In addition to getting some CPU information (which we don't do anything with) it synchronizes the CPU's execution buffer so that the timings aren't affected by out-of-order execution.
  • execute the rdtsc (read time-stamp counter) instruction. This fetches the number of machine cycles executed since the processor was reset. This is a 64-bit value, so with current CPU speeds it will wrap around every 194 years or so. Interestingly, the original Pentium reference notes that it wraps around every 5800 years or so.
  • the last couple of lines store the values from the registers into the variables hi and lo, and put that into the 64-bit return value.

Specific notes:

  • out-of-order execution can cause incorrect results, so we execute the "cpuid" instruction, which, in addition to giving you some information about the CPU, also synchronizes any out-of-order instruction execution.

  • Most OSes synchronize the counters on the CPUs when they start, so the answer is good to within a couple of nanoseconds.

  • The hibernating comment is probably true, but in practice you probably don't care about timings across hibernation boundaries.

  • regarding SpeedStep: newer Intel CPUs compensate for the speed changes and return an adjusted count. I did a quick scan over some of the boxes on our network and found only one box that didn't have it: a Pentium 3 running some old database server. (These are Linux boxes, so I checked with: grep constant_tsc /proc/cpuinfo)

  • I'm not sure about the AMD CPUs, we're primarily an Intel shop, although I know some of our low-level systems gurus did an AMD evaluation.

Hope this satisfies your curiosity; it's an interesting and (IMHO) under-studied area of programming. You know when Jeff and Joel were talking about whether or not a programmer should know C? I was shouting at them, "hey, forget that high-level C stuff... assembler is what you should learn if you want to know what the computer is doing!"

Mark Harrison
... The kernel people have been trying to get people to stop using rdtsc for a while... and generally avoid using it in the kernel because it's just that unreliable.
Spudd86
+6  A: 

Wine actually uses gettimeofday() to implement QueryPerformanceCounter(), and it is known to make many Windows games work on Linux and Mac.

The implementation starts at http://source.winehq.org/source/dlls/kernel32/cpu.c#L312

and leads to http://source.winehq.org/source/dlls/ntdll/time.c#L448

Vincent Robert
Thanks, this is very useful.
ttvd
+1  A: 

@Vincent:
I actually went source diving and saw that. All I've managed to do is confuse myself. :p

Well, at the moment I implemented it with clock_gettime(CLOCK_MONOTONIC) and left it at that.

Bernard
A: 

Reading the TSC via RDTSC is not reliable in SMP systems, since each CPU maintains its own counter and the counters are not guaranteed to be synchronized with respect to one another.

I might suggest trying clock_gettime(CLOCK_REALTIME). The POSIX manual indicates that this should be implemented on all compliant systems. It can provide a nanosecond count, but you will probably want to check clock_getres(CLOCK_REALTIME) on your system to see what the actual resolution is.

Doug
+4  A: 

You may be interested in the Linux FAQ for clock_gettime(CLOCK_REALTIME)

David Schlosnagle