views: 1094
answers: 10

In a program I wrote, 20% of the time is spent finding the minimum of 3 numbers in an inner loop, in this routine:

static inline unsigned int
min(unsigned int a, unsigned int b, unsigned int c)
{
    unsigned int m = a;
    if (m > b) m = b;
    if (m > c) m = c;
    return m;
}

Is there any way to speed this up? I am ok with assembly code too for x86/x86_64.

Edit: In reply to some of the comments:
* Compiler being used is gcc 4.3.3
* As far as assembly is concerned, I am only a beginner there. I asked for assembly here, to learn how to do this. :)
* I have a quad-core Intel 64 running, so MMX/SSE etc. are supported.
* It's hard to post the loop here, but I can tell you it's a heavily optimized implementation of the Levenshtein algorithm.
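For context, here is a hypothetical sketch (not the asker's actual code, which isn't posted) of the kind of Levenshtein inner loop where such a min() call dominates: each cell of the dynamic-programming row is the minimum of three neighboring costs.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The three-way min in question. */
static unsigned int min3(unsigned int a, unsigned int b, unsigned int c)
{
    unsigned int m = a;
    if (m > b) m = b;
    if (m > c) m = c;
    return m;
}

/* Two-row Levenshtein sketch; assumes strings shorter than 128 chars. */
unsigned int levenshtein(const char *s, const char *t)
{
    size_t n = strlen(s), m = strlen(t);
    unsigned int prev[128], curr[128];

    for (size_t j = 0; j <= m; j++) prev[j] = (unsigned int)j;
    for (size_t i = 1; i <= n; i++) {
        curr[0] = (unsigned int)i;
        for (size_t j = 1; j <= m; j++) {
            unsigned int cost = (s[i-1] == t[j-1]) ? 0 : 1;
            /* The hot call: deletion, insertion, substitution. */
            curr[j] = min3(prev[j] + 1, curr[j-1] + 1, prev[j-1] + cost);
        }
        memcpy(prev, curr, (m + 1) * sizeof prev[0]);
    }
    return prev[m];
}
```

With 2.88 million strings, this inner loop runs once per cell of every DP matrix, which is why min() shows up so prominently in the profile.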

This is what the compiler is giving me for the non-inlined version of min:

.globl min
    .type   min, @function
min:
    pushl   %ebp
    movl    %esp, %ebp
    movl    8(%ebp), %edx
    movl    12(%ebp), %eax
    movl    16(%ebp), %ecx
    cmpl    %edx, %eax
    jbe .L2
    movl    %edx, %eax
.L2:
    cmpl    %ecx, %eax
    jbe .L3
    movl    %ecx, %eax
.L3:
    popl    %ebp
    ret
    .size   min, .-min
    .ident  "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
    .section    .note.GNU-stack,"",@progbits

The inlined version is within the -O2-optimized code; even my markers (mrk = 0xfefefefe, placed before and after the call to min()) are getting optimized away by gcc, so I couldn't get hold of it.

Update: I tested the changes suggested by Nils and ephemient; however, I get no perceptible performance boost from the assembly versions of min(). I do get a 12.5% boost by compiling the program with -march=i686, though, which I guess is because the whole program benefits from the newer, faster instructions gcc generates with that option. Thanks for your help, guys.

P.S. - I used the Ruby profiler to measure performance (my C program is a shared library loaded by a Ruby program), so I could only get the time spent in the top-level C function called by the Ruby program, which ends up calling min() down the stack. Please see this question.

+10  A: 

Assuming your compiler isn't out to lunch, this should compile down to two compares and two conditional moves. It isn't possible to do much better than that.

If you post the assembly that your compiler is actually generating, we can see if there's anything unnecessary that's slowing it down.

The number one thing to check is that the routine is actually getting inlined. The compiler isn't obligated to do so, and if it's generating a function call, that will be hugely expensive for such a simple operation.

If the call really is getting inlined, then loop unrolling may be beneficial, as DigitalRoss said, or vectorization may be possible.

Edit: If you want to vectorize the code, and are using a recent x86 processor, you will want to use the SSE4.1 pminud instruction (intrinsic: _mm_min_epu32), which takes two vectors of four unsigned ints each, and produces a vector of four unsigned ints. Each element of the result is the minimum of the corresponding elements in the two inputs.
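As an illustration of that intrinsic, here is a small sketch (my own, not from the answer) that takes the element-wise unsigned minimum of two arrays of four 32-bit ints. It needs a CPU and compiler with SSE4.1 support; the GCC/Clang `target` attribute below lets it compile even without a global -msse4.1 flag.

```c
#include <immintrin.h>  /* SSE4.1 intrinsics */

/* out[i] = min(a[i], b[i]) for four unsigned 32-bit ints, via pminud.
 * Note pminud is an *unsigned* min; _mm_min_epi32 is the signed one. */
__attribute__((target("sse4.1")))
void min4(const unsigned int *a, const unsigned int *b, unsigned int *out)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_min_epu32(va, vb));
}
```

To use this in the Levenshtein loop you would need to restructure the computation so that four cells' candidate costs are available at once, which is the hard part of vectorizing a DP with a loop-carried dependency.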

I also note that your compiler used branches instead of conditional moves; you should probably try a version that uses conditional moves first and see if that gets you any speedup before you go off to the races on a vector implementation.
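For illustration, a ternary-based variant gives the compiler an easy pattern to lower to cmov (whether it actually does depends on the compiler, target, and flags, so check the generated assembly):

```c
#include <assert.h>

/* Three-way unsigned min written with ternaries; compilers commonly
 * (but not always) emit cmov for this instead of branches. */
static inline unsigned int
min_cmov(unsigned int a, unsigned int b, unsigned int c)
{
    unsigned int m = (a < b) ? a : b;
    return (m < c) ? m : c;
}
```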

Stephen Canon
+1 My guess is that any gains will come from the outer context, versus this function.
Michael Easter
The outer context is heavily optimized. It does computations over a database of 2.88 million strings. Before optimizations it used to give results in 4 seconds. After a week of heavy optimizations, this is down to 150 ms. Latest profile run turns up "min" on top with 20% time spent there.
Sudhanshu
My only comment would be looking into what's calling min all the time, and see if you can save calls to min itself.
sharth
Loop unrolling is one of the optimizations already there, along with several others. The routine is getting inlined, I can't find the "min" symbol in disassembled code. I am intrigued about the vectorization bit - maybe should go and read up on that. Thanks.
Sudhanshu
A: 

If you are only doing one comparison you might want to unroll the loop manually.

First, see if you can get the compiler to unroll the loop for you, and if you can't, do it yourself. This will at least reduce the loop control overhead...
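As a generic illustration of manual unrolling (a hypothetical running-minimum loop, not the asker's actual Levenshtein loop), processing four elements per iteration cuts the loop-control overhead to a quarter:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Running minimum over an array, unrolled by 4. */
unsigned int min_array(const unsigned int *v, size_t n)
{
    unsigned int m = UINT_MAX;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {   /* four elements per loop test */
        if (v[i]   < m) m = v[i];
        if (v[i+1] < m) m = v[i+1];
        if (v[i+2] < m) m = v[i+2];
        if (v[i+3] < m) m = v[i+3];
    }
    for (; i < n; i++)             /* leftover elements */
        if (v[i] < m) m = v[i];
    return m;
}
```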

DigitalRoss
A: 

You could try something like this to save on declaration and unnecessary comparisons:

static inline unsigned int
min(unsigned int a, unsigned int b, unsigned int c)
{ 
    if (a < b)
    {
        if (a < c) 
             return a; 
        else 
             return c;
    }

    if (b < c)
        return b;
    else return c;
}
Paul Sasik
I doubt this'll be much better - the initial assignment is going to be turned into a rename in the compiler anyway, and now there's three branches taking up space in the branch predictor, not two.
bdonlan
This is two comparisons either way. The difference now is that you're branching instead of using conditional moves - I would guess this is likely to be slower. Even ignoring that you're tubing the pipeline.
Anon.
I think this computes the max of 3 inputs, not the min. At least for a = 5, b = 2, c = 3
Michael Easter
Be careful here. Now there are extra branches and the resulting code is larger, both of which have their own downsides. (Also, this is max but it's clear what you meant.)
Jason
@Anon: The compiler's not guaranteed to use conditional moves in the former case - but I agree, it's certainly going to have difficulty doing it here.
bdonlan
@Anon: the compiler is free to branch or use conditional moves in either case. (I suspect most compilers will emit conditional moves for both)
Stephen Canon
The original code always ends up with at least two comparisons and two assignments (possibly three assignments). My version should result in at most two comparisons. Now... I realize it's actually more code. This snippet is an alternative that MIGHT execute faster, or not. It's only a few lines of code that might be worth testing...
Paul Sasik
You always need two comparisons, since you have to examine at least two pairs of variables. If you look closely, both versions actually always do exactly two comparisons as well. And modern compilers will rename variables to eliminate the cost of local assignments in cases like this - I highly doubt it'll be any better...
bdonlan
Assignments are cheap. Seriously. Unless you have to hit memory, they're far cheaper than a missed branch.
Anon.
A: 

Yes, post assembly, but my naive optimization is:

static inline unsigned int
min(unsigned int a, unsigned int b, unsigned int c)
{
    unsigned int m = a;
    if (m > b) m = b;
    if (m > c) return c;
    return m;
}
Hamish Grubijan
Transformations of this nature can be done by just about any compiler (and it's non-trivial to say which form would be more efficient!)
bdonlan
+1  A: 

First, look at the disassembly. That'll tell you a lot. For example, as written, there are 2 if-statements (which means there are 2 possible branch mispredictions), but my guess is that a decent modern C compiler will have some clever optimization that can do it without branching. I'd be curious to find out.

Second, if your libc has special built-in min/max functions, use them. GNU libc has fmin/fmax for floating-point, for example, and they claim that "On some processors these functions can use special machine instructions to perform these operations faster than the equivalent C code". Maybe there's something similar for uints.

Finally, if you're doing this to a bunch of numbers in parallel, there are probably vector instructions to do this, which could provide significant speedup. But I've even seen non-vector code be faster when using vector units. Something like "load one uint into a vector register, call vector min function, get result out" looks dumb but might actually be faster.

Ken
Thanks for your pointers Ken - I'll definitely check out the vector instructions, that I think Mark and Stephen are referring to as well.
Sudhanshu
+4  A: 

The SSE2 instruction extensions contain an integer min instruction that can choose 8 minimums at a time. See _mm_mulhi_epu16 in http://www.intel.com/software/products/compilers/clin/docs/ug_cpp/comm1046.htm

Mark Ransom
`_mm_mulhi_epu16` is an intrinsic for a vector 16 bit multiply high instruction -- not useful for computing a minimum of 32 bit integers. The intrinsic you actually want is `_mm_min_epu32`.
Stephen Canon
+5  A: 

My take on an x86 assembler implementation, GCC syntax. Should be trivial to translate to another inline assembler syntax:

int inline least (int a, int b, int c)
{
  int result;
  __asm__ ("mov     %1, %0\n\t"
           "cmp     %0, %2\n\t" 
           "cmovle  %2, %0\n\t"
           "cmp     %0, %3\n\t"
           "cmovle  %3, %0\n\t" 
          : "=r"(result) : 
            "r"(a), "r"(b), "r"(c)
          );
  return result;
}

New and improved version:

int inline least (int a, int b, int c)
{
  __asm__ (
           "cmp     %0, %1\n\t" 
           "cmovle  %1, %0\n\t"
           "cmp     %0, %2\n\t"
           "cmovle  %2, %0\n\t" 
          : "+r"(a) : 
            "%r"(b), "r"(c)
          );
  return a;
}

NOTE: It may or may not be faster than C code.

This depends on a lot of factors. Usually cmov wins if the branches are not predictable (on some x86 architectures); OTOH, inline assembler is always a problem for the optimizer, so the optimization penalty on the surrounding code may outweigh all the gains...

Btw, Sudhanshu, it would be interesting to hear how this code performs with your test data.

Nils Pipenbrinck
Does this work for unsigned integer comparisons as well? Sorry, if this sounds naive.
Sudhanshu
Whoops, I didn't see this before writing my own. Yes, you can do this unsigned; just change `cmovle` to `cmovbe`.
ephemient
As mentioned in my reply below, GCC does this optimization automatically once you pass in an appropriate `-march` flag - it's just that it's not in the instruction set of the original 80386, and GCC errs on the side of (extreme) caution :)
bdonlan
Nils, ephemient, bdonlan - all of these suggestions look good. Lemme get back to you with the results by tomorrow. Thanks for the help.
Sudhanshu
GCC does not do this optimization anymore. The optimization is still in GCC, but it is disabled; the branching version is used instead. Reason: the compiler has a hard time guessing whether a branch is predictable, and to make sure branch prediction gets used, it does not emit cmovcc.
Nils Pipenbrinck
No, bdonlan is right. I didn't use `-march=` when I was testing earlier, but on my AMD Phenom, `-march=native` does use CMOV. (On the other hand, `-march=core2` doesn't.)
ephemient
+4  A: 

This drop-in replacement clocks in about 1.5% faster on my AMD Phenom:

static inline unsigned int
min(unsigned int a, unsigned int b, unsigned int c)
{
    asm("cmp   %1,%0\n"
        "cmova %1,%0\n"
        "cmp   %2,%0\n"
        "cmova %2,%0\n"
        : "+r" (a) : "r" (b), "r" (c));
    return a;
}

Results may vary; some x86 processors don't handle CMOV very well.

ephemient
Nice.. better than my example. You can add a %-modifier for the b for some extra flexibility in the register allocation.
Nils Pipenbrinck
GCC will do this automatically with a proper `-march` setting, which will also help in other parts of the code.
bdonlan
+5  A: 

Make sure you are using an appropriate -march setting, first off. GCC defaults to not using any instructions that were not supported on the original i386 - allowing it to use newer instruction sets can make a BIG difference at times! On -march=core2 -O2 I get:

min:
    pushl   %ebp
    movl    %esp, %ebp
    movl    8(%ebp), %edx
    movl    12(%ebp), %ecx
    movl    16(%ebp), %eax
    cmpl    %edx, %ecx
    leave
    cmovbe  %ecx, %edx
    cmpl    %eax, %edx
    cmovbe  %edx, %eax
    ret

The use of cmov here may help you avoid branch delays - and you get it without any inline asm just by passing in -march. When inlined into a larger function this is likely to be even more efficient, possibly just four assembly operations. If you need something faster than this, see if you can get the SSE vector operations to work in the context of your overall algorithm.

bdonlan
+1 for the -march suggestion. I get a boost of 12.5% by just using that. :)
Sudhanshu
A: 

These are all good answers. At the risk of being accused of not answering the question, I would also look at the other 80% of the time. Stackshots are my favorite way to find code worth optimizing, especially if it is function calls that you find out you don't absolutely need.

Mike Dunlavey