Beware the perils of micro-benchmarking!!!
I took the code, wrapped it in a method, and called that method 10 times in a loop. Results:
50, 3,
3, 0,
0, 0,
0, 0,
....
Without some actual code in the loops, the compilers are able to figure out that the loops do no useful work and optimize them away completely. Given the measured performance, I suspect that this optimization might have been done by javac.
Lesson 1: Compilers will often optimize away code that does useless "work". The smarter the compiler, the more likely this is to happen. If you don't allow for this when you write a benchmark, its results can be meaningless.
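To illustrate, here is a minimal sketch of the usual defence: make the loop produce a result and then consume it, so neither javac nor the JIT can prove the loop is dead. The loop body and all names here are my own assumptions, not the code from the question.

```java
public class SinkBenchmark {
    // Hypothetical loop under test; the body is an assumption,
    // not the original code.
    static long run(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;      // work whose result we will actually use
        }
        return sum;        // returning the value keeps the loop alive
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = run(1_000_000);
        long elapsed = System.nanoTime() - start;
        // Printing the result makes it observable, so the compiler
        // cannot eliminate the computation that produced it.
        System.out.println("result=" + result + " elapsed(ns)=" + elapsed);
    }
}
```

If `run` ignored `sum` and returned nothing, an optimizing compiler would be entitled to delete the whole loop, and you would be timing nothing at all.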
So I then added the following simple calculation in both loops:

    if (i < 2 * j) longK++;

and made the test method return the final value of longK. Results:
32267, 33382,
34542, 30136,
12893, 12900,
12897, 12889,
12904, 12891,
12880, 12891,
....
We have obviously stopped the compilers optimizing the loop away. But now we see the effects of JVM warmup in (in this case) the first two pairs of loop iterations. The first two pairs of iterations (one method call) are probably run purely in interpreted mode, and it looks as if the third iteration might actually be running in parallel with the JIT. By the third pair of iterations, we are most likely running pure native code. From then on, the difference between the timings of the two versions of the loop is simply noise.
Lesson 2: Always take into account the effect of JVM warmup. It can seriously distort benchmark results, both micro and macro.
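A common way to allow for warmup is to call the measured method a number of times before starting the clock, so that the JIT has (most likely) compiled the hot code by the time measurement begins. This is only a sketch; the iteration counts and the loop body (using the guard from above) are my own assumptions:

```java
public class WarmupBenchmark {
    // The measured nested loop, with the guard so the compiler
    // cannot optimize the work away.
    static long test(int n) {
        long longK = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i < 2 * j) longK++;
            }
        }
        return longK;
    }

    public static void main(String[] args) {
        // Warmup phase: results are discarded, but these calls drive
        // the method through interpretation, profiling, and JIT compilation.
        for (int w = 0; w < 10; w++) {
            test(1_000);
        }
        // Measurement phase: by now we are probably timing native code.
        long start = System.nanoTime();
        long result = test(1_000);
        long elapsed = System.nanoTime() - start;
        System.out.println("result=" + result + " elapsed(ns)=" + elapsed);
    }
}
```

Even this is crude; a serious harness (e.g. JMH) also deals with things like on-stack replacement and run-to-run variance, but the principle is the same: discard the early, pre-compilation timings.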
Conclusion - once the JVM has warmed up, there is no measurable difference between the two versions of the loop.