views:

570

answers:

9

I've been trying to optimize some extremely performance-critical code (a quicksort that's being called millions and millions of times inside a Monte Carlo simulation) by loop unrolling. Here's the inner loop I'm trying to speed up:

// Search for elements to swap.
while(myArray[++index1] < pivot) {}
while(pivot < myArray[--index2]) {}

I tried unrolling to something like:

while(true) {
    if(myArray[++index1] >= pivot) break;
    if(myArray[++index1] >= pivot) break;
    // More unrolling
}


while(true) {
    if(pivot >= myArray[--index2]) break;
    if(pivot >= myArray[--index2]) break;
    // More unrolling
}

This made absolutely no difference so I changed it back to the more readable form. I've had similar experiences other times I've tried loop unrolling. Given the quality of branch predictors on modern hardware, when, if ever, is loop unrolling still a useful optimization?

+11  A: 

Those wouldn't make any difference because you're doing the same number of comparisons. Here's a better example. Instead of:

for (int i=0; i<200; i++) {
  doStuff();
}

write:

for (int i=0; i<50; i++) {
  doStuff();
  doStuff();
  doStuff();
  doStuff();
}

Even then it almost certainly won't matter, but you are now doing 50 comparisons instead of 200 (imagine if the comparison were more complex).

Manual loop unrolling, however, is largely an artifact of history. It's another item on the growing list of things a good compiler will do for you when it matters. For example, most people don't bother to write x << 1 or x += x instead of x *= 2. You just write x *= 2 and the compiler optimizes it to whatever is best.

Basically there's increasingly less need to second-guess your compiler.

cletus
*Hand* loop unrolling is largely an artifact of history. Good compilers do it for you now.
dmckee
I agree; the days when you could tweak a loop here and there and expect a huge benefit are over. Compilers are that advanced now.
fastcodejava
I like it when the compiler optimizes `x *= 2` for me. I don't like it when it tries to reorganize my code. That includes loop unrolling, code lifting, eliding code that it thinks will never be reached, stuff like that. I'm perfectly capable of deciding when or when not to do those things.
Mike Dunlavey
@Mike Certainly turning optimization off is a good idea when puzzled, but it is worth reading the link that Poita_ posted. Compilers are getting *painfully* good at that business.
dmckee
@dmckee: I read it. They're getting painfully good at navel-gazing. Here's what real optimization is about: http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort/927773#927773
Mike Dunlavey
@Mike "I'm perfectly capable of deciding when or when not to do those things"... I doubt it, unless you're superhuman.
John
@John: I don't know why you say that; folks seem to think optimization is some kind of black art only compilers and good guessers know how to do. It all comes down to instructions and cycles and the reasons why they are spent. As I've explained many times on SO, it is easy to tell how and why those are being spent. If I've got a loop that has to use a significant percent of time, and it spends too many cycles in the loop overhead, compared to the content, I can see that and unroll it. Same for code hoisting. It doesn't take a genius.
Mike Dunlavey
I'm sure it's not that hard, but I still doubt you can do it as fast as the compiler does. What's the problem with the compiler doing it for you anyway? If you don't like it just turn optimizations off and burn your time away like it's 1990!
John
@John: Hey, why 1990? There's a relay computer that runs around 5 hz :-) Now that's when you really count cycles! Anyone who's spent many years writing assembler, as I have, is happy to have a compiler managing registers and optimizing simple stuff as well or better than they could. But scrambling code? in a way that confuses the debugger but usually doesn't help? They're doing it because they can and it's fun, and because one of the myths of performance is that it makes a significant difference.
Mike Dunlavey
A: 

Loop unrolling depends entirely on your problem size. It only pays off if your algorithm lets you reduce the work into smaller groups that can be processed together. What you did above does not look like that. I am not sure a Monte Carlo simulation can even be unrolled.

A good scenario for loop unrolling would be rotating an image, since there you can process separate groups of pixels per iteration. To get that to work, you would have to reduce the number of iterations.

jwendl
I was unrolling a quick sort that gets called from the inner loop of my simulation, not the main loop of the simulation.
dsimcha
+4  A: 

Regardless of branch prediction on modern hardware, most compilers do loop unrolling for you anyway.

It would be worthwhile finding out how much optimization your compiler does for you.

I found Felix von Leitner's presentation very enlightening on the subject. I recommend you read it. Summary: Modern compilers are VERY clever, so hand optimizations are almost never effective.

Peter Alexander
Nice read. Thanks.
dsimcha
A: 

Loop unrolling is still useful if there are a lot of local variables both inside and around the loop, because it lets you put those registers to work instead of dedicating one to the loop index.

In your example, you use only a small number of local variables, so you are not under register pressure.

The comparison against the loop-end condition is also a major cost if it is heavy (i.e. not a simple test instruction), especially if it depends on an external function.

Loop unrolling also reduces the number of branches the CPU's branch predictor has to get right, although prediction happens either way.

LiraNuna
A: 

As far as I understand it, modern compilers already unroll loops where appropriate. An example is gcc: if passed the optimisation flag `-funroll-loops`, the manual says it will:

Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.

So, in practice it's likely that your compiler will do the trivial cases for you. It's up to you therefore to make sure that as many as possible of your loops are easy for the compiler to determine how many iterations will be needed.

Rich Bradshaw
+9  A: 

Loop unrolling makes sense if you can break dependency chains. This gives an out-of-order or superscalar CPU the possibility to schedule things better and thus run faster.

A simple example:

for (int i=0; i<n; i++)
{
  sum += data[i];
}

Here the dependency chain of the arguments is very short. If you get a stall because of a cache miss on the data array, the CPU cannot do anything but wait.

On the other hand, this code:

int i;
for (i=0; i+3<n; i+=4)
{
  sum1 += data[i+0];
  sum2 += data[i+1];
  sum3 += data[i+2];
  sum4 += data[i+3];
}
for (; i<n; i++)  // leftover elements when n isn't a multiple of 4
  sum1 += data[i];
sum = sum1 + sum2 + sum3 + sum4;

could run faster. If you get a cache miss or other stall in one calculation, there are still three other dependency chains that don't depend on the stall, and an out-of-order CPU can execute them.

Nils Pipenbrinck
Thanks. I've tried loop unrolling in this style in several other places in the library where I'm calculating sums and stuff, and in these places it works wonders. I'm almost sure the reason is that it increases instruction level parallelism, as you suggest.
dsimcha
+1  A: 

Loop unrolling, whether it's hand unrolling or compiler unrolling, can often be counter-productive, particularly with more recent x86 CPUs (Core 2, Core i7). Bottom line: benchmark your code with and without loop unrolling on whatever CPUs you plan to deploy this code on.

Paul R
A: 

Trying things blindly without measuring first is not the way to do it.
Does this sort actually take a high percentage of overall time?

All loop unrolling does is reduce the loop overhead of incrementing/decrementing, comparing for the stop condition, and jumping. If what you're doing in the loop takes more instruction cycles than the loop overhead itself, you're not going to see much improvement percentage-wise.

Here's an example of how to get maximum performance.

Mike Dunlavey
A: 

Loop unrolling can be helpful in specific cases, and skipping a few tests isn't the only gain!

It can, for instance, enable scalar replacement, efficient insertion of software prefetching, and so on. You would actually be surprised how useful aggressive unrolling can be: you can easily get a 10% speedup on most loops, even with -O3.

As was said before, though, it depends a lot on the loop and on the compiler, and experimenting is necessary. It's hard to make a rule (otherwise the compiler's unrolling heuristic would be perfect).

Kamchatka