Here are two ways to set an individual bit in C on x86-64:
inline void SetBitC(long *array, int bit) {
    // Pure C version
    *array |= 1<<bit;
}

inline void SetBitASM(long *array, int bit) {
    // Using inline x86 assembly
    asm("bts %1,%0" : "+r" (*array) : "g" (bit));
}
Using GCC 4.3 with the -O3 -march=core2 options, the C version takes about 90% more time when used with a constant bit. (Both versions compile to exactly the same assembly code, except that the C version uses an or [1<<num],%rax instruction instead of a bts [num],%rax instruction.)
When used with a variable bit, the C version performs better, but it is still significantly slower than the inline assembly.
Resetting, toggling and checking bits have similar results.
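For reference, the reset and toggle variants follow exactly the same pattern, just with btr/btc in place of bts (a minimal sketch; ClearBit*/ToggleBit* are just the names I'm using here, and the check variant additionally has to read the carry flag after bt, so I've left it out):

inline void ClearBitC(long *array, int bit) {
    // Pure C version: mask the bit off
    *array &= ~(1<<bit);
}
inline void ClearBitASM(long *array, int bit) {
    // btr = bit test and reset
    asm("btr %1,%0" : "+r" (*array) : "g" (bit));
}
inline void ToggleBitC(long *array, int bit) {
    // Pure C version: flip the bit
    *array ^= 1<<bit;
}
inline void ToggleBitASM(long *array, int bit) {
    // btc = bit test and complement
    asm("btc %1,%0" : "+r" (*array) : "g" (bit));
}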
Why does GCC optimize so poorly for such a common operation? Am I doing something wrong with the C version?
Edit: Sorry for the long wait; here is the code I used to benchmark. It actually started as a simple programming problem...
#include <stdio.h>

// SetBit is either SetBitC or SetBitASM from above
int main() {
    // Get the sum of all integers from 1 to 2^28 with bit 10 always set
    unsigned long i, j, c = 0;
    for (i = 1; i < (1 << 28); i++) {
        j = i;
        SetBit(&j, 10);
        c += j;
    }
    printf("Result: %lu\n", c);
    return 0;
}
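For the variable-bit case mentioned above, the constant index is simply replaced with something the compiler cannot fold at compile time; for example (the exact expression isn't important, this is just an illustration):

for (i = 1; i < (1 << 28); i++) {
    j = i;
    // i & 15 is an arbitrary non-constant index, kept small so that
    // 1<<bit never overflows an int in the C version
    SetBit(&j, i & 15);
    c += j;
}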
Compiled and profiled with:

gcc -O3 -march=core2 -pg test.c
./a.out
gprof

The gprof flat profile (% time, cumulative seconds, self seconds, name):

with ASM:  101.12  0.08  0.08  main
with C:    101.12  0.16  0.16  main

time ./a.out also gives similar results.
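If you'd rather not use -pg/gprof, the loop can also be timed directly with clock_gettime; a minimal standalone sketch (not the exact code I profiled, and on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

// SetBit is again either SetBitC or SetBitASM from above
int main(void) {
    struct timespec start, end;
    unsigned long i, j, c = 0;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 1; i < (1 << 28); i++) {
        j = i;
        SetBit(&j, 10);
        c += j;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    // Convert the two timestamps into elapsed seconds
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Result: %lu in %.3f s\n", c, secs);
    return 0;
}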