I'm currently profiling an implementation of binary search. Using some special instructions to measure this, I noticed that the code has about a 20% branch misprediction rate. I'm curious if there is any way to check how many cycles I'm potentially losing due to this. It's a MIPS-based architecture.

A: 

Look at the specs for your CPU for that info, and if that fails, run it a billion times and time it externally to your program (stopwatch or something). Then run it without the misses and compare.
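
A minimal sketch of that kind of external timing comparison, in C; everything here (the find() helper, the array size, the lookup count, and the key patterns) is invented for illustration. The first run hammers the same key so the compare branches become well predicted; the second uses pseudo-random keys so they mispredict often; the gap in wall time is roughly what the mispredictions cost.

    #include <stdio.h>
    #include <time.h>

    /* Plain branchy binary search over a sorted int array. */
    static int find(const int *a, int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] < key)            /* the partly predictable branch */
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid - 1;
            else
                return mid;
        }
        return -1;
    }

    int main(void)
    {
        /* Small enough to stay cache-resident, so the two runs differ
           mainly in branch prediction rather than cache misses. */
        enum { N = 4096, LOOKUPS = 10 * 1000 * 1000 };
        int a[N];
        for (int i = 0; i < N; i++)
            a[i] = i;

        volatile int sink = 0;   /* keep the compiler from removing the work */

        /* Run 1: same key every time -> branches are well predicted. */
        clock_t t0 = clock();
        for (int i = 0; i < LOOKUPS; i++)
            sink += find(a, N, N / 3);
        clock_t t1 = clock();

        /* Run 2: pseudo-random keys -> close to worst-case misprediction. */
        unsigned seed = 12345u;
        clock_t t2 = clock();
        for (int i = 0; i < LOOKUPS; i++) {
            seed = seed * 1103515245u + 12345u;
            sink += find(a, N, (int)(seed >> 16) % N);
        }
        clock_t t3 = clock();

        printf("predictable keys: %.2fs, random keys: %.2fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t3 - t2) / CLOCKS_PER_SEC);
        return 0;
    }

It's a blunt stopwatch-style measurement, but it gives a figure to sanity-check any per-cycle estimate you dig out of the docs.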

Michael Dorgan
+4  A: 

You're losing 0.2 * N cycles per iteration, where N is the number of cycles it takes to flush the pipeline after a mispredicted branch. Suppose N = 10; that means you are losing 2 clocks per iteration in aggregate. Unless you have a very tight inner loop, this is probably not going to be a significant performance hit.
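
A tiny sketch of that formula in C, with a few illustrative guesses for N (these are not figures for any particular MIPS core):

    #include <stdio.h>

    int main(void)
    {
        double miss_rate = 0.20;                    /* from the question's counters */
        double flush_cost[] = { 5.0, 10.0, 20.0 };  /* illustrative guesses for N   */

        for (int i = 0; i < 3; i++)
            printf("N = %2.0f -> %.1f cycles lost per partly predictable branch\n",
                   flush_cost[i], miss_rate * flush_cost[i]);
        return 0;
    }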

Paul R
Well ... if you have lots of 20%-mispredicted branches per iteration, the time could start to add up ;)
Goz
@Goz: yes, good point - I guess when I say *loop* or *iteration* I really mean a block with one partly predictable branch in it, but there could be more than one such block in a loop.
Paul R
That's the thing: cycles add up fast and are more precious when you only have 300 million per second.
Matt Wamboldt
+1  A: 

Look it up in the docs for your CPU. If you can't find this information specifically, the length of the CPU's pipeline is a fairly good estimate.

Given that it's MIPS and it's a 300 MHz system, I'm going to guess that it's a fairly short pipeline, probably 4-5 stages, so a cost of 3-4 cycles per mispredict is a reasonable guess.
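
Plugging that guess, the 20% rate from the question, and the 300 MHz clock into the 0.2 * N estimate above gives a rough per-lookup figure. A sketch in C, where the array size is a made-up placeholder and everything else comes from this thread:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double miss_rate  = 0.20;      /* measured by the asker               */
        double flush_cost = 3.5;       /* midpoint of the 3-4 cycle guess     */
        double clock_hz   = 300e6;     /* 300 MHz MIPS core                   */
        double elements   = 1 << 20;   /* hypothetical array size             */

        double branches = log2(elements);               /* ~20 compares/lookup */
        double cycles   = branches * miss_rate * flush_cost;
        double seconds  = cycles / clock_hz;

        printf("~%.0f cycles (~%.0f ns) lost per lookup\n",
               cycles, seconds * 1e9);
        return 0;
    }

Whether that ~14 cycles per lookup matters depends entirely on how many lookups per second the code does.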

jalf