At least one C++ compiler will recognize a shift by zero (when the 0 is known at compile time) and turn it into a no-op:
Source
inline int shift(int what, int bitcount)
{
    return what >> bitcount;
}

int f() {
    return shift(42, 0);
}
Compiler switches
icpc -S -O3 -mssse3 -fp-model fast=2 bitsh.C
Intel C++ 11.0 assembly
# -- Begin _Z1fv
# mark_begin;
.align 16,0x90
.globl _Z1fv
_Z1fv:
..B1.1: # Preds ..B1.0
movl $42, %eax #7.10
ret #7.10
.align 16,0x90
# LOE
# mark_end;
.type _Z1fv,@function
.size _Z1fv,.-_Z1fv
.data
# -- End _Z1fv
.data
.section .note.GNU-stack, ""
# End
As you can see at ..B1.1, Intel compiles "return shift(42, 0)" to "return 42."
Intel 11 also culls the shift for these two variations, where constant propagation proves the shift count is zero:
int g() {
    int a = 5;
    int b = 5;
    return shift(42, a - b);
}

int h(int k) {
    return shift(42, k * 0);
}
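If you'd rather not rely on the optimizer at all, C++11's constexpr lets you demand the fold. A minimal sketch, assuming a C++11 compiler; the constexpr qualifier and the shift_cx name are mine, not part of the original code:

// Hypothetical C++11 variant: in a constant expression the
// evaluation happens at compile time, so the zero shift never
// reaches the generated code at any optimization level.
constexpr int shift_cx(int what, int bitcount)
{
    return what >> bitcount;
}

static_assert(shift_cx(42, 0) == 42, "shift by zero is the identity");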
For the case when the shift count is unknowable at compile time ...
int egad(int m, int n) {
    return shift(42, m - n);
}
... the shift cannot be avoided ...
# -- Begin _Z4egadii
# mark_begin;
.align 16,0x90
.globl _Z4egadii
_Z4egadii:
# parameter 1: 4 + %esp
# parameter 2: 8 + %esp
..B1.1: # Preds ..B1.0
movl 4(%esp), %ecx #20.5
subl 8(%esp), %ecx #21.21
movl $42, %eax #21.10
shrl %cl, %eax #21.10
ret #21.10
.align 16,0x90
# LOE
# mark_end;
... but at least it's inlined, so there's no call overhead.
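If variable-count shifts were actually costly on your target, you could special-case zero yourself. A sketch only (the shift_guarded name is mine, and on mainstream x86 the branch would likely cost more than the shrl it avoids):

inline int shift_guarded(int what, int bitcount)
{
    // Hypothetical variant: skip the shift entirely when the
    // count is zero. Only worth considering on hardware where a
    // variable shift is slower than a predictable branch.
    return bitcount == 0 ? what : what >> bitcount;
}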
Bonus assembly: volatile is expensive. The source ...
int g() {
    int a = 5;
    volatile int b = 5;
    return shift(42, a - b);
}
... instead of a no-op, compiles to a real shift, because volatile forces every read of b to come from memory, so a - b can no longer be folded at compile time ...
..B3.1: # Preds ..B3.0
pushl %esi #10.9
movl $5, (%esp) #12.18
movl (%esp), %ecx #13.21
negl %ecx #13.21
addl $5, %ecx #13.21
movl $42, %eax #13.10
shrl %cl, %eax #13.10
popl %ecx #13.10
ret #13.10
.align 16,0x90
# LOE
# mark_end;
... so if you're working on a machine where values you push on the stack might not be the same when you pop them, well, this missed optimization is likely the least of your troubles.