views: 219

answers: 2

I am looking for ways to perform micro-benchmarks on multi-core processors.

Context:

At about the same time that desktop processors introduced out-of-order execution, which made performance hard to predict, they also, perhaps not coincidentally, introduced special instructions for very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings far more precise than a system call could ever provide, and allowed programmers to micro-benchmark their hearts out, for better or for worse.
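
For concreteness, this is the kind of timing loop rdtsc enables. A minimal sketch using the __rdtsc intrinsic from GCC/Clang's x86intrin.h; do_work is a placeholder for whatever is being measured:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>  /* __rdtsc on GCC/Clang; use <intrin.h> on MSVC */

/* Placeholder for the code under test. */
static void do_work(void) {
    volatile int x = 0;
    for (int i = 0; i < 1000; i++) x += i;
}

int main(void) {
    uint64_t start = __rdtsc();  /* read the time stamp counter */
    do_work();
    uint64_t end = __rdtsc();
    printf("elapsed: %llu cycles\n", (unsigned long long)(end - start));
    return 0;
}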

On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

Question:

Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer.

Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?

More context:

Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but

  1. I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now;

  2. I need figures to provide in a publication whose deadline I have no control over.

+2  A: 

On OSX (ARM, Intel and PowerPC), you want to use mach_absolute_time( ):

#include <mach/mach_time.h>
#include <stdint.h>    

// Utility function for converting mach time units to nanoseconds.
double machTimeUnitsToNanoseconds(uint64_t mtu) {
    static double nanosecondsPerMachTimeUnit = 0.0;
    if (0.0 == nanosecondsPerMachTimeUnit) {
        mach_timebase_info_data_t info;
        if (mach_timebase_info(&info)) {
            // Handle an error gracefully here, whatever that means to you.
            // If you do get an error, something is seriously wrong, so
            // I generally just report it and exit( ).
        }
        // numer/denom is the number of nanoseconds per mach time unit.
        nanosecondsPerMachTimeUnit = (double)info.numer / info.denom;
    }
    return mtu * nanosecondsPerMachTimeUnit;
}

// In your code:
uint64_t startTime = mach_absolute_time( );
// Stuff that you want to time.
uint64_t endTime = mach_absolute_time( );
double elapsedNanoseconds = machTimeUnitsToNanoseconds(endTime - startTime);

Note that there's no need to limit yourself to one core for this. The OS handles the fix-up required behind the scenes for mach_absolute_time( ) to give meaningful results in a multi-core (and multi-socket) environment.

Stephen Canon
Thanks, I should be able to work it out from http://developer.apple.com/mac/library/qa/qa2004/qa1398.html , although I am very disappointed at the result of `man mach_absolute_time`.
Pascal Cuoq
@Pascal: That would be a good bug to report. I posted some sample code that avoids the pointer casting in that note.
Stephen Canon
+1  A: 

The cores return correctly synced values for rdtsc. If you have a multi-socket machine, you have to pin the process to one socket. This is not the problem.
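
(For illustration, and not part of the original answer: on Linux, pinning can be done with sched_setaffinity. This minimal sketch pins the calling process to CPU 0; a mask listing every CPU of a single socket would restrict it to that socket instead.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  /* allow CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling process */
        perror("sched_setaffinity");
        return 1;
    }
    /* rdtsc-based benchmark goes here */
    return 0;
}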

The main problem is that the scheduler makes the data unreliable. There is a performance counters API in Linux kernels 2.6.31 and later (perf_event_open) but I haven't looked at it. Windows, from Vista on, does a great job here: use QueryThreadCycleTime and QueryProcessCycleTime.
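
(A minimal sketch of the Windows approach, assuming Vista or later; do_work is a placeholder for the code under test and error handling is reduced to a single check:)

#include <windows.h>
#include <stdio.h>

/* Placeholder for the code under test. */
static void do_work(void) {
    volatile int x = 0;
    for (int i = 0; i < 1000; i++) x += i;
}

int main(void) {
    ULONG64 start = 0, end = 0;
    HANDLE thread = GetCurrentThread();

    if (!QueryThreadCycleTime(thread, &start)) {
        fprintf(stderr, "QueryThreadCycleTime failed: %lu\n", GetLastError());
        return 1;
    }
    do_work();
    QueryThreadCycleTime(thread, &end);

    /* Cycles charged to this thread only, regardless of what else was scheduled. */
    printf("thread cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}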

I'm not sure about OS X, but AFAIK mach_absolute_time does not compensate for time during which the thread is not scheduled.

Lothar