views:

502

answers:

6

When I'm writing a tight loop that needs to run fast, I am often bothered by thoughts about how the processor's branch prediction is going to behave. For instance, I try my best to avoid having an if statement in the innermost loop, especially one whose outcome is not somewhat uniform (say, one that evaluates to true or false at random).

I tend to do that because of the fairly common knowledge that the processor prefetches instructions along the predicted path, and if it turns out that it mispredicted a branch, then the prefetched work is wasted.

My question is - is this really an issue with modern processors? How good can branch prediction be expected to be?
What coding patterns can be used to make it better?

(For the sake of the discussion, assume that I am beyond the "early-optimization is the root of all evil" phase)

A: 

My answer is:

The reason AMD has been as fast as or better than Intel at some points in the past is simply that they had better branch prediction.

If your code has no branches, then there is no prediction to get wrong, and it can be expected to run faster.

So, conclusion: avoid branches if they're not necessary. If they are, try to make it so that one branch is evaluated 95% of the time.

Claudiu
+3  A: 

If we're beyond the "early optimization" phase, then surely we're beyond the "I can measure that" phase as well? With the crazy complexities of modern CPU architecture, the only way to know for sure is to try it and measure. Surely there can't be that many circumstances where you will have a choice of two ways to implement something, one of which requires a branch and one which doesn't.

Mark Ransom
+2  A: 

"(For the sake of the discussion, assume that I am beyond the "early-optimization is the root of all evil" phase)"

Excellent. Then you can profile your application's performance, use gcc's branch-prediction hints to predict one way and profile again, then hint the opposite prediction and profile again.

Now imagine, theoretically, a CPU that prefetches both branch paths. And for subsequent if statements in both paths, it will prefetch four paths, etc. The CPU doesn't magically grow four times the cache space, so it's going to prefetch a shorter portion of each path than it would for a single path.

If you find half of your prefetches being wasted, losing say 5% of your CPU time, then you do want to look for a solution that doesn't branch.

Windows programmer
Nuts. Those who wrote shorter answers got their answers written faster.
Windows programmer
A: 

Not exactly an answer, but you can find here an applet that demonstrates the finite-state machine often used for table-based branch prediction in modern microprocessors.

It illustrates the use of extra logic to generate a fast (but possibly wrong) estimate of the branch condition and target address.
The processor fetches and executes the predicted instructions at full speed, but has to revert all intermediate results when the prediction turns out to have been wrong.

VonC
+5  A: 

Branch prediction is pretty darned good these days. But that doesn't mean the penalty of branches can be eliminated.

In typical code, you probably get well over 99% correct predictions, and yet the performance hit can still be significant. There are several factors at play in this.

One is the simple branch latency. On a common PC CPU, that might be on the order of 12 cycles for a mispredict, or 1 cycle for a correctly predicted branch. For the sake of argument, let's assume that all your branches are correctly predicted; then you're home free, right? Not quite.

The simple existence of a branch inhibits a lot of optimizations. The compiler is unable to reorder code efficiently across branches. Within a basic block (that is, a block of code that is executed sequentially, with no branches, one entry point and one exit), it can reorder instructions as it likes, as long as the meaning of the code is preserved, because they'll all be executed sooner or later. Across branches, it gets trickier. We could move these instructions down to execute after this branch, but then how do we guarantee they get executed? Put them in both branches? That's extra code size, that's messy too, and it doesn't scale if we want to reorder across more than one branch.

And a very similar issue occurs on the CPU. CPUs reorder instructions too, executing them out of order, and again, this becomes trickier in the presence of branches. (There are ways to do some reordering still, but it gets more complicated.)

Branches can still be expensive, even with the best branch prediction. Not just because of mispredicts, but because instruction scheduling becomes so much harder.

This also implies that rather than the number of branches, the important factor is how much code goes in the block between them. A branch on every other line is bad, but if you can get a dozen lines into a block between branches, it's probably possible to get those instructions scheduled reasonably well, so the branch won't restrict the CPU or compiler too much.

But in typical code, branches are essentially free, because typical code doesn't have that many branches clustered closely together in its performance-critical parts.

jalf
The part about CPUs is wrong. The CPU doesn't care if an instruction is after a branch or not. It simply fetches and executes the path that is predicted to be taken, exactly like the "sure" part before the branch. The only difference is that instructions after a predicted branch are marked as speculative (so that the CPU can flush them if the prediction turns out to be wrong), and are held after execution before the retirement stage, until the prediction is confirmed to be correct. So the bottom line is, branches which are correctly predicted are in most cases essentially free.
slacker
A: 

One thing I recently found (on a TI DSP) is that trying to avoid branches can sometimes generate more code, costing more than the branch itself would.

I had something like the following in a tight loop:

if (var >= limit) { otherVar = 0;}

I wanted to get rid of the potential branch, and tried changing it to:

otherVar *= (var<limit)&1;

But the 'optimization' generated twice as much assembly and was actually slower.

AShelly