While looking at a micro-optimization question that I asked yesterday (here), I found something strange: a chain of || comparisons in Java is running slightly faster than looking up a boolean value in an array of booleans.
In my tests, running the algorithms below on long values from 0 to 1 billion, alg1 is about 2% faster. (I have also swapped the order in which the algorithms are tested, and I get the same results; a simplified sketch of the timing loop is included after the two methods.) My question is: why is alg1 faster? I would have expected alg2 to be slightly faster, since it uses a lookup table, whereas alg1 has to execute 4 comparisons and 3 || operations for 75% of inputs.
private static final boolean alg1(long n)
{
    // A perfect square's last hex digit (n mod 16) can only be 0, 1, 4, or 9.
    int h = (int)(n & 0xF);
    if(h == 0 || h == 1 || h == 4 || h == 9)
    {
        long tst = (long)Math.sqrt(n);
        return tst*tst == n;
    }
    return false;
}
private static final boolean[] lookup = new boolean[16];

static
{
    lookup[0] = lookup[1] = lookup[4] = lookup[9] = true;
}
private static final boolean alg2(long n)
{
    // Same residue test as alg1, but done with a table lookup instead of comparisons.
    if(lookup[(int)(n & 0xF)])
    {
        long tst = (long)Math.sqrt(n);
        return tst*tst == n;
    }
    else
        return false;
}
If you're curious, this code tests whether a number is a perfect square, using the fact that a perfect square's last hexadecimal digit must be 0, 1, 4, or 9.
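To see why, note that (16k + r)^2 mod 16 equals r^2 mod 16, so squaring the sixteen residues 0 through 15 covers every possible last hex digit of a square. A quick check (class name is just for illustration) prints only [0, 1, 4, 9]:

import java.util.Set;
import java.util.TreeSet;

public class HexResidues
{
    public static void main(String[] args)
    {
        // Square each residue 0-15 and keep only the low hex digit of the result.
        Set<Integer> residues = new TreeSet<Integer>();
        for(int i = 0; i < 16; i++)
            residues.add((i * i) & 0xF);

        // The only possible last hex digits of a perfect square.
        System.out.println(residues); // prints [0, 1, 4, 9]
    }
}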