What are the common algorithms being used to measure the processor frequency?


I'm not sure why you need assembly for this. If you're on a machine that has the /proc filesystem, then running:

> cat /proc/cpuinfo

might give you what you need.

+1  A: 

That was the intention of things like BogoMIPS, but CPUs are a lot more complicated nowadays. Superscalar CPUs can issue multiple instructions per clock, making any measurement based on counting clock cycles to execute a block of instructions highly inaccurate.

CPU frequencies also vary with offered load and temperature. The fact that the CPU is currently running at 800 MHz does not mean it will always run at 800 MHz; it may throttle up or down as needed.

If you really need to know the clock frequency, it should be passed in as a parameter. An EEPROM on the board would supply the base frequency, and if the clock can vary you'd need to be able to read the CPU's power-state registers (or make an OS call) to find out the frequency at that instant.

With all that said, there may be other ways to accomplish what you're trying to do. For example if you want to make high-precision measurements of how long a particular codepath takes, the CPU likely has performance counters running at a fixed frequency which are a better measure of wall-clock time than reading a tick count register.

+5  A: 

Intel CPUs after Core Duo support two Model-Specific registers called IA32_MPERF and IA32_APERF.
MPERF counts at the maximum frequency the CPU supports, while APERF counts at the actual current frequency.

The actual frequency is given by:

freq = max_frequency * APERF / MPERF

You can read them with this flow:

; read MPERF (MSR 0xE7) into edx:eax
mov ecx, 0xe7
rdmsr
mov mperf_var_lo, eax
mov mperf_var_hi, edx

; read APERF (MSR 0xE8) into edx:eax
mov ecx, 0xe8
rdmsr
mov aperf_var_lo, eax
mov aperf_var_hi, edx

but note that rdmsr is a privileged instruction and can run only in ring 0.

I don't know whether the OS provides an interface to read these; their main use is power management, so it may not expose one.

Nathan Fellman
cool! How did you make the equation so nice?
Nathan Fellman
I used this site to convert LaTeX code to GIF: LaTeX syntax for mathematics is described here:
Bastien Léonard
Oops, the first site should be
Bastien Léonard
+4  A: 

I'm gonna date myself with various details in this answer, but what the heck...

I had to tackle this problem years ago on Windows-based PCs, so I was dealing with Intel x86 series processors like 486, Pentium and so on. The standard algorithm in that situation was to do a long series of DIVide instructions, because those are typically the most CPU-bound single instructions in the Intel set. So memory prefetch and other architectural issues do not materially affect the instruction execution time -- the prefetch queue is always full and the instruction itself does not touch any other memory.

You would time it using the highest resolution clock you could get access to in the environment you are running in. (In my case I was running near boot time on a PC compatible, so I was directly programming the timer chips on the motherboard. Not recommended in a real OS, usually there's some appropriate API to call these days).

The main problem you have to deal with is different CPU types. At that time there was Intel, AMD and some smaller vendors like Cyrix making x86 processors. Each model had its own performance characteristics vis-a-vis that DIV instruction. My assembly timing function would just return a number of clock cycles taken by a certain fixed number of DIV instructions done in a tight loop.

So what I did was to gather some timings (raw return values from that function) from actual PCs running each processor model I wanted to time, and record those in a spreadsheet against the known processor speed and processor type. I actually had a command-line tool that was just a thin shell around my timing function, and I would take a disk into computer stores and get the timings off of display models! (I worked for a very small company at the time).

Using those raw timings, I could plot a theoretical graph of what timings I should get for any known speed of that particular CPU.

Here was the trick: I always hated when you would run a utility and it would announce that your CPU was 99.8 MHz or whatever. Clearly it was 100 MHz and there was just a small round-off error in the measurement. In my spreadsheet I recorded the actual speeds that were sold by each processor vendor. Then I would use the plot of actual timings to estimate projected timings for any known speed. But I would build a table of points along the line where the timings should round to the next speed.

In other words, if 100 ticks to do all that repeated dividing meant 500 MHz, and 200 ticks meant 250 MHz, then I would build a table that said that anything below 150 was 500 MHz, and anything above that was 250 MHz. (Assuming those were the only two speeds available from that chip vendor.) It was nice because even if some odd piece of software on the PC was throwing off my timings, the end result would often still be dead on.

Of course now, in these days of overclocking, dynamic clock speeds for power management, and other such trickery, such a scheme would be much less practical. At the very least you'd need to do something to make sure the CPU was in its highest dynamically chosen speed first before running your timing function.

OK, I'll go back to shooing kids off my lawn now.

Tim Farley
I'd say that a speed of 99.8 could be accurate; they were *aiming* for 100 MHz but missed. You don't expect them to cull system chips that are a bit off timewise like you would wristwatches.
Arthur Kalliokoski

Thanks. The answers helped me understand some of the methods for performing the measurement.


A quick Google search on AMD and Intel shows that CPUID should give you access to the CPU's max frequency.

I think it will only identify the processor model.
Bastien Léonard

"lmbench" provides a cpu frequency algorithm portable for different architecture.

It runs several different loops; the processor's clock speed is the greatest common divisor of the execution frequencies of those loops.

This method should always work when we can get loops whose cycle counts are relatively prime.

+2  A: 

One way on x86 Intel CPUs since the Pentium would be to use two samplings of the RDTSC instruction separated by a delay of known wall time, e.g.:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

uint64_t rdtsc(void) {
    uint64_t result;
    __asm__ __volatile__ ("rdtsc" : "=A" (result));
    return result;
}

int main(void) {
    uint64_t ts0, ts1;

    ts0 = rdtsc();
    sleep(1);               /* delay of known wall time */
    ts1 = rdtsc();

    printf("clock frequency = %llu\n", (unsigned long long)(ts1 - ts0));
    return 0;
}
(on 32-bit platforms with GCC; the "=A" constraint reads the edx:eax pair)

RDTSC is available in ring 3 as long as the TSD (time-stamp disable) flag in CR4 is clear, which is the common case but not guaranteed. One shortcoming of this method is that it is vulnerable to frequency-scaling changes affecting the result if they happen inside the delay. To mitigate that you could execute code that keeps the CPU busy and constantly polls the system time to see whether your delay period has expired, keeping the CPU in the highest frequency state available.