views: 363
answers: 8

I realize the benefits of bytecode vs. native code (portability).

But say you know that your code will always run on an x86 architecture, why not then compile for x86 and get the performance benefit?

Note that I am assuming there is a performance gain from native code compilation. Some folks have answered that there may in fact be no gain, which is news to me.

+7  A: 

The bytecode runs in a Java Virtual Machine that is compiled for (for example) Solaris, and that JVM will be optimised like heck for that operating system.

In real-world cases, you often see equal or better performance from Java code at runtime, by virtue of building on the virtual machine's code for things like memory management - that code will have been evolving and maturing for years.

There are more benefits to building for the JVM than just portability - for example, every time a new JVM is released, your compiled bytecode picks up any optimisations, algorithmic improvements etc. that come from the best in the business. On the other hand, once you've compiled your C code, that's it.

Brabster
Right, but the Solaris JVM compiles the Java bytecode instructions to native Solaris instructions. Which takes time.
Marcus
Do you really think that compile time is a bottleneck?
splix
Can you actually prove a performance gain with native code you would write yourself?
Brabster
I am just assuming there would be at least some performance gain. This is a hypothetical question - no specific use case exists (in my case).
Marcus
+5  A: 

Because with Just-In-Time compilation, any performance benefit from compiling to native code ahead of time is trivial.

In fact, there are many things a JIT can actually do faster.

Milan Ramaiya
Why, then, do so many desktop apps written in Java seem slower?
Jenea
Because the UI layer in Java is a bit slow, not because the code runs slowly.
nos
UI (Swing) development is difficult to do correctly, and most programmers don't know what the hell they're doing. Swing apps -can- be made extremely responsive, but most programmers don't understand the Swing threading and event system, resulting in horribly inefficient code.
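To make that concrete: the usual fix is to keep slow work off the Event Dispatch Thread with a SwingWorker and only touch components from the EDT. A minimal sketch (the class name and the simulated two-second task are invented for illustration):

    import javax.swing.*;

    public class ResponsiveUi {
        public static void main(String[] args) {
            // Swing components must be created and updated on the Event Dispatch Thread.
            SwingUtilities.invokeLater(ResponsiveUi::createAndShowGui);
        }

        private static void createAndShowGui() {
            JFrame frame = new JFrame("SwingWorker sketch");
            JButton button = new JButton("Do slow work");
            JLabel status = new JLabel("idle");

            button.addActionListener(e -> {
                button.setEnabled(false);
                status.setText("working...");
                // The slow part runs on a background thread, so the UI stays responsive.
                new SwingWorker<String, Void>() {
                    @Override
                    protected String doInBackground() throws Exception {
                        Thread.sleep(2000); // stand-in for a slow computation or I/O
                        return "done";
                    }

                    @Override
                    protected void done() {
                        // done() is called back on the EDT, so updating components here is safe.
                        try {
                            status.setText(get());
                        } catch (Exception ex) {
                            status.setText("failed: " + ex.getMessage());
                        }
                        button.setEnabled(true);
                    }
                }.execute();
            });

            JPanel panel = new JPanel();
            panel.add(button);
            panel.add(status);
            frame.add(panel);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }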
Milan Ramaiya
Well, I disagree. Look here: http://reverseblade.blogspot.com/2009/02/c-versus-c-versus-java-performance.html. There was also another example with C# and C++. But even on this chart there is up to a 100% difference in performance between managed and unmanaged code.
Jenea
+4  A: 

It will already be compiled by the JIT into Solaris native code once it is running. You won't get any additional benefit by compiling it before deploying it to the target machine.

splix
+12  A: 

Because the performance gain (if any) is not worth the trouble.

Also, garbage collection is very important for performance. Chances are that the GC of the JVM is better than the one embedded in the compiled executable, say with GCJ.

And just-in-time compilation can even result in better performance, because the JIT has more information available at run-time to optimize the compilation than the compiler has at compile-time. See the Wikipedia page on JIT.
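For example, the JIT can observe that a call site only ever sees one implementation during this particular run and devirtualize and inline it, which a static compiler working in isolation cannot safely assume. A rough sketch of that situation (class names are invented for illustration):

    // Shape is an interface that could, in principle, have many implementations.
    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        @Override public double area() { return Math.PI * r * r; }
    }

    public class Devirtualization {
        // A static compiler must emit a true virtual call here, because any Shape could arrive.
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                sum += s.area();
            }
            return sum;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[100_000];
            for (int i = 0; i < shapes.length; i++) {
                shapes[i] = new Circle(i % 10);
            }
            // At run time the JIT observes that only Circle ever reaches total(),
            // so it can inline Circle.area() directly into the loop (and deoptimize
            // later if another Shape implementation shows up).
            double sum = 0;
            for (int i = 0; i < 1_000; i++) {
                sum += total(shapes);
            }
            System.out.println(sum);
        }
    }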

ewernli
The second point is important. The JIT can produce better compilation because it has runtime information not available during static compilation.
Steve Kuo
It can, or it does? Implementing a highly optimizing dynamic compiler is way more difficult than a static one, especially if you are targeting devices with low computing power (mobile, embedded, etc.). Not all Java apps run for months on high-end servers with an abundance of CPU and RAM.
Dmitry Leskov
Another point is that static compilers can take profile information as input. Ours does that to optimize startup time.
Dmitry Leskov
What happens is not just-in-time compilation in the sense of code being compiled just before it is run. HotSpot is actually much more advanced and does a form of incremental optimization: instead of compiling everything at the last moment, it looks at the code paths that are actually used (the hot spots) and optimizes those based on the actual runtime profile. JIT in general is not faster than static compilation because it has less time available for compilation; HotSpot and other compile-and-recompile-as-you-go technologies can be faster.
Paul de Vrieze
+10  A: 

"Solaris" is an operating system, not a CPU architecture. The JVM installed on the actual machine will compile to the native CPU instructions. Solaris could be SPARC, x86, or x86-64 architecture.

Also, the JIT compiler can make processor-specific optimisations depending on which actual CPU family you have. For example, different instruction sequences are faster on Intel CPUs than on AMD CPUs, and a JIT compiler for your exact platform can take advantage of this information to produce highly optimised code.
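As a trivial illustration, the very same class file can report at run time which platform it actually landed on; the JIT knows far more than this about the hardware and uses it when emitting native code. A minimal sketch:

    public class WhereAmI {
        public static void main(String[] args) {
            // The same bytecode reports the concrete platform it ended up on:
            System.out.println("os.name = " + System.getProperty("os.name"));  // e.g. SunOS, Linux, Windows
            System.out.println("os.arch = " + System.getProperty("os.arch"));  // e.g. sparcv9, x86, amd64
            System.out.println("cores   = " + Runtime.getRuntime().availableProcessors());
            // The JIT compiler itself knows far more (exact CPU family, supported
            // instruction-set extensions) and exploits it when generating native code.
        }
    }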

Greg Hewgill
Thanks, I updated the post re: CPU architecture.
Marcus
@Marcus Also note that the JIT compiler optimizes the code while running; frequently followed code paths can (and will) therefore be optimized. And as the JIT has more / higher-level information, it can optimize better than a CPU can optimize assembly.
extraneon
+4  A: 

You may or may not get a performance benefit. But more likely you would get a performance penalty: JIT optimization is not possible with static compilation, so the performance would only be as good as the compiler can make it "blindfolded" (without actually profiling the program and optimizing it accordingly, which is what JIT compilers such as HotSpot do).

It's intuitively quite surprising how cheap (resource-wise) compiling is, and how much can be automatically optimized by just observing the running program. Black magic, but good for us :-)
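You can watch this happen with a crude warm-up experiment; this is a sketch only - absolute numbers depend entirely on your machine and JVM (run it with -XX:+PrintCompilation if you want to see HotSpot's decisions):

    public class WarmUp {
        // Deliberately small method so HotSpot will want to compile and inline it.
        static long mix(long x) {
            return x * 31 + (x >>> 7);
        }

        static long work(int iterations) {
            long acc = 0;
            for (int i = 0; i < iterations; i++) {
                acc = mix(acc + i);
            }
            return acc;
        }

        public static void main(String[] args) {
            // Early rounds run interpreted (or lightly compiled); later rounds
            // run code that HotSpot has compiled and optimized after profiling it.
            for (int round = 0; round < 10; round++) {
                long start = System.nanoTime();
                long result = work(5_000_000);
                long millis = (System.nanoTime() - start) / 1_000_000;
                System.out.println("round " + round + ": " + millis + " ms (result " + result + ")");
            }
        }
    }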

Joonas Pulakka
+1  A: 

All this talk of JITs is about seven years out of date BTW. The technology concerned now is called HotSpot and it isn't just a JIT.

EJP
A: 

I guess because JIT (just in time) compilation is very advanced.

fastcodejava