I am trying to find a reference for approximately how many CPU cycles various operations require.

I don't need exact numbers (as this is going to vary between CPUs) but I'd like something relatively credible that gives ballpark figures that I could cite in discussion with friends.

As an example, we all know that floating-point division takes more CPU cycles than, say, a bit shift.

I'd guess that division is around 100 cycles whereas a shift is 1, but I'm looking for something to cite to back that up.

Can anyone recommend such a resource?

A: 

This is going to be hardware-dependent. The best thing to do is to run some benchmarks on the particular hardware you want to test.

A benchmark would go roughly like this:

  • Run a primitive operation a million times (say, adding two integers)
  • Record the time it took to run (say, in seconds)
  • Multiply by the number of cycles your machine executes per second (its clock frequency) - this will give you the total number of cycles spent.
  • Divide 1,000,000 (the number of operations) by the total cycles from the previous step - this will give you the number of operations per cycle. Keep in mind that with pipelining and superscalar execution, this can come out either below or above 1.
danben
+1  A: 

For x86 processors, see the Intel® 64 and IA-32 Architectures Optimization Reference Manual (probably Appendix C, "Instruction Latency and Throughput"), available here: http://www.intel.com/products/processor/manuals/

However, it's not at all easy to figure out how many cycles an instruction takes to execute on a modern x86 processor, as it depends heavily on e.g. whether the data is in cache, whether the access is aligned, whether branch prediction fails, whether there's a stall in the instruction pipeline, and quite a few other things.

nos