+4  A: 

How about this:

if (y & 16) x <<= 16;
if (y & 8) x <<= 8;
if (y & 4) x <<= 4;
if (y & 2) x <<= 2;
if (y & 1) x <<= 1;

This will probably take longer yet to execute, but it is easier to interleave if you have other code to go in between.
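
One rung of that ladder can be made branch-free with the sign-mask select trick that shows up later in the thread; here is a minimal sketch, assuming 32-bit ints (the local mask is just for illustration):

int mask = -((y >> 4) & 1);             // all ones if bit 4 of y is set, zero otherwise
x = ((x << 16) & mask) | (x & ~mask);   // same effect as: if (y & 16) x <<= 16;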

Joshua
The question specified no branches!
Norman Ramsey
Yeah but what he's saying can be achieved with a branchless conditional-move op -- I get what he's trying to communicate.
Crashworks
Oh, the predicated instructions are a total game-changer. Joshua is saved from a nasty downvote! How does it perform?
Norman Ramsey
I don't think that code is right. Those bit tests should be with powers of 2.
MSN
+1  A: 

This one breaks my head. I've now discarded a half dozen ideas. All of them exploit the notion that adding a thing to itself shifts left 1, doing the same to the result shifts left 2, and so on. If you keep all the partial results for shift left 0, 1, 2, 4, 8, and 16, then by testing bits 0 to 4 of the shift variable you can select your initial shift. Now do it again, once for each 1 bit in the shift variable. Frankly, you might as well send your processor out for coffee.

The one place I'd look for real help is Hank Warren's Hacker's Delight (which is the only useful part of this answer).

Norman Ramsey
Yeah, I ran into the same wall that you did. However, I find the phrase "you might as well send your processor out for coffee" utterly delightful and will use it at every possible excuse henceforth. =)
Crashworks
A: 

How about this:

int multiplicands[] = { 1, 2, 4, 8, 16, 32, /* ... etc ... */ };

int ShiftByVar( int x, int y )
{
    //return x << y;
    return x * multiplicands[y];
}
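
A fleshed-out sketch of the same idea, assuming 32-bit values; the table has to be unsigned because the top entry doesn't fit in a signed int, and the names multiplicands32 / ShiftByTable are just illustrative:

static const unsigned multiplicands32[32] = {
    1u<<0,  1u<<1,  1u<<2,  1u<<3,  1u<<4,  1u<<5,  1u<<6,  1u<<7,
    1u<<8,  1u<<9,  1u<<10, 1u<<11, 1u<<12, 1u<<13, 1u<<14, 1u<<15,
    1u<<16, 1u<<17, 1u<<18, 1u<<19, 1u<<20, 1u<<21, 1u<<22, 1u<<23,
    1u<<24, 1u<<25, 1u<<26, 1u<<27, 1u<<28, 1u<<29, 1u<<30, 1u<<31
};

unsigned ShiftByTable( unsigned x, unsigned y )
{
    return x * multiplicands32[y & 31];   // one table lookup plus one multiply
}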
ChrisW
sadly, multiply is pretty slow too. =(
Crashworks
+3  A: 

Let's assume that your max shift is 31, so the shift amount is a 5-bit number. Because shifts compose (shifting by a and then by b is the same as shifting by a+b), we can break this into five constant shifts. The obvious version uses branching, but you ruled that out.

Let N be a number between 0 and 4. You want to shift x by 2**N if the bit whose value is 2**N is set in y, and otherwise leave x intact. Here is one way to do it:

#define SHIFT(N) x = isel(((y >> N) & 1) - 1, x << (1 << N), x);

The macro assigns to x either x << 2**N or x, depending on whether the Nth bit is set in y or not.

And then the driver:

SHIFT(0); SHIFT(1); SHIFT(2); SHIFT(3); SHIFT(4)

Note that N is a macro parameter, so every shift amount ends up being a compile-time constant.

I don't know, though, whether this is actually going to be faster than the variable shift. If it were, one wonders why the microcode wouldn't just do this instead...
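
For completeness, a minimal self-contained sketch of this approach, using the isel() that appears further down in the thread (the wrapper name ShiftLeftVar is just for illustration):

static inline int isel( int a, int x, int y )
{
    int mask = a >> 31;           // all ones if a < 0, zero otherwise
    return x + ((y - x) & mask);  // x if a >= 0, else y
}

#define SHIFT(N) x = isel(((y >> N) & 1) - 1, x << (1 << N), x)

int ShiftLeftVar( int x, int y )  // computes x << y for 0 <= y <= 31
{
    SHIFT(0); SHIFT(1); SHIFT(2); SHIFT(3); SHIFT(4);
    return x;
}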

antti.huima
That's interesting -- I'll try it out in the simulator. The microcoded op definitely works by substituting itself with some sequence of other not-microcoded ops and then running them instead; the problem is it's not pipelined, so I'm trying to figure out what that magic sequence of μops is.
Crashworks
A: 

There is some good stuff here regarding bit manipulation black magic: Advanced bit manipulation fu (Christer Ericson's blog)

I don't know if any of it is directly applicable, but if there is a way to do this, there are likely some hints to it in there somewhere.

smcameron
A: 

Here's something that is trivially unrollable; it computes value << k for shift amounts 0 through 31:

int result= value;

for (int i= 0; i<5; ++i)
{
    int shifted= result << (1 << i);           // constant shift by 2^i once the loop is unrolled
    result += (shifted - result) & (-(k & 1)); // keeps the shifted value only when bit i of k is set; replace with isel if appropriate
    k >>= 1;
}
MSN
+2  A: 

Here you go...

I decided to try these out as well, since Mike Acton claimed on his CellPerformance site (where he suggests avoiding the indirect shift) that this would be faster than using the CELL/PS3 microcoded shift. However, in all my tests, the microcoded version was not only faster than a full generic branch-free replacement for the indirect shift, it also took far less code memory (one instruction).

The only reason I did these as templates was to get the right output for both signed (usually arithmetic) and unsigned (logical) shifts.

template <typename T> FORCEINLINE T VariableShiftLeft(T nVal, int nShift)
{   // 31-bit shift capability (Rolls over at 32-bits)
    const int bMask1=-(1&nShift);
    const int bMask2=-(1&(nShift>>1));
    const int bMask3=-(1&(nShift>>2));
    const int bMask4=-(1&(nShift>>3));
    const int bMask5=-(1&(nShift>>4));
    nVal=(nVal&bMask1) + nVal;   //nVal=((nVal<<1)&bMask1) | (nVal&(~bMask1));
    nVal=((nVal<<(1<<1))&bMask2) | (nVal&(~bMask2));
    nVal=((nVal<<(1<<2))&bMask3) | (nVal&(~bMask3));
    nVal=((nVal<<(1<<3))&bMask4) | (nVal&(~bMask4));
    nVal=((nVal<<(1<<4))&bMask5) | (nVal&(~bMask5));
    return(nVal);
}
template <typename T> FORCEINLINE T VariableShiftRight(T nVal, int nShift)
{   // 31-bit shift capability (Rolls over at 32-bits)
    const int bMask1=-(1&nShift);
    const int bMask2=-(1&(nShift>>1));
    const int bMask3=-(1&(nShift>>2));
    const int bMask4=-(1&(nShift>>3));
    const int bMask5=-(1&(nShift>>4));
    nVal=((nVal>>1)&bMask1) | (nVal&(~bMask1));
    nVal=((nVal>>(1<<1))&bMask2) | (nVal&(~bMask2));
    nVal=((nVal>>(1<<2))&bMask3) | (nVal&(~bMask3));
    nVal=((nVal>>(1<<3))&bMask4) | (nVal&(~bMask4));
    nVal=((nVal>>(1<<4))&bMask5) | (nVal&(~bMask5));
    return(nVal);
}
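
A quick usage sketch of the signed/unsigned point (the values are just illustrative, and FORCEINLINE is assumed to be defined elsewhere in the project):

int      a = VariableShiftRight<int>( -256, 4 );              // arithmetic shift on typical targets: -16
unsigned b = VariableShiftRight<unsigned>( 0xFFFFFF00u, 4 );  // logical shift: 0x0FFFFFF0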

EDIT: A note on isel(): I saw your isel() code on your website.

// if a >= 0, return x, else y
int isel( int a, int x, int y )
{
    int mask = a >> 31; // arithmetic shift right, splat out the sign bit
    // mask is 0xFFFFFFFF if (a < 0) and 0x00 otherwise.
    return x + ((y - x) & mask);
}

FWIW, if you rewrite your isel() to use a mask and its complement, it will be faster on your PowerPC target, since the compiler is smart enough to generate an 'andc' opcode. It's the same number of opcodes, but there is one fewer result-to-input-register dependency, and the two mask operations can be issued in parallel on a superscalar processor. It can be 2-3 cycles faster if everything lines up correctly. You just need to change the return to this for the PowerPC versions:

return (x & (~mask)) + (y & mask);
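
Put together, that variant looks like this (the name isel_ppc is just for illustration; the semantics are identical to the isel() above):

// if a >= 0, return x, else y
int isel_ppc( int a, int x, int y )
{
    int mask = a >> 31;                // all ones if (a < 0), zero otherwise
    return (x & (~mask)) + (y & mask); // the mask / mask-complement form maps to 'andc' on PowerPC
}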
Adisak
Thanks! Yeah, after blundering around for a while I concluded there wasn't a way to beat the microcode here. I suppose it uses micro-ops for which there are no corresponding opcodes in the ISA. Thanks for the improved isel() -- I just used Dawson's, never even thought it could be improved on!
Crashworks
When I first read your post, I thought you had found a magic isel() intrinsic/asm-op I had somehow missed (as opposed to the mask function), which would have been very nice. FWIW, you can do this quite quickly on PC as well with the CMOVcc asm-ops, so it's worth keeping in mind that you may want different versions of isel() for different target platforms.
Adisak
Oh, and it's probably obvious, but the nVal= lines are basically an isel() that has been expanded.
Adisak