Inspired by this question (now visible only for users with more than 10k rep), I came up with the following code:
$ cat loop.c
int main( int argc, char ** argv )
{
    int i = 0;
    while( i++ < 2147483647 );
}
$ cc -o loop loop.c
$ time ./loop
real 0m11.161s
user 0m10.393s
sys 0m0.012s
$ cat Loop.java
class Loop {
    public static void main( String [] args ) {
        int i = 0;
        while( i++ < 2147483647 );
    }
}
$ javac Loop.java
$ time java Loop
real 0m4.578s
user 0m3.980s
sys 0m0.048s
Why does the Java version run almost 3x faster than the C version? What am I missing here?
This was run on Ubuntu 9.04 with:
Intel(R) Pentium(R) M @ 1.73GHz
32-bit
EDIT
This is amazing. Using the -O3 option in C optimizes the loop away, and using -server in Java does the same. These are the "optimized times".
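As an aside, if you want a version of the loop that -O3 cannot simply delete, one common trick (just a sketch, not something I used above) is to declare the counter volatile, so every read and write of i really has to happen:

int main( int argc, char ** argv )
{
    volatile int i = 0;          /* volatile: every read and write of i must actually be performed */
    while( i < 2147483647 )      /* compare first, then increment, so i never overflows */
        i++;
    return 0;
}

Built with cc -O3 -o loop loop.c, this variant still has to do the counting, whereas the original empty loop has no observable effect and the optimizer is free to remove it entirely.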