views: 657
answers: 5

The canonical JVM implementation from Sun applies some pretty sophisticated optimizations to bytecode to obtain near-native execution speeds after the code has been run a few times. The question is: why isn't this compiled code cached to disk and reused on subsequent runs of the same function/class? As it stands, every time a program is executed, the JIT compiler kicks in afresh rather than using a pre-compiled version of the code. Wouldn't adding this feature give a significant boost to the initial run time of the program, when the bytecode is essentially being interpreted?
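To make the warm-up cost concrete, here is a minimal, hypothetical benchmark sketch: the first few batches of calls typically run interpreted (or only lightly compiled), and the per-batch time usually drops once HotSpot's JIT has compiled the hot method. The method name and iteration counts are made up, and actual timings vary by machine and JVM, so none are claimed here.

```java
public class WarmupDemo {
    // A small hot method the JIT will eventually compile.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        // Time several batches; later batches are usually faster once the
        // JIT has kicked in (exact numbers are machine-dependent).
        for (int batch = 0; batch < 5; batch++) {
            long start = System.nanoTime();
            long result = sumOfSquares(1_000_000);
            long elapsed = System.nanoTime() - start;
            System.out.println("batch " + batch + ": " + elapsed + " ns, result=" + result);
        }
    }
}
```
Running with `-XX:+PrintCompilation` (a real HotSpot flag) shows when the method actually gets compiled, which is the work the question proposes caching across runs.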

+6  A: 

Oracle's JVM -- the one embedded in the Oracle Database -- is indeed documented to do so. Quoting Oracle:

the compiler can take advantage of Oracle JVM's class resolution model to optionally persist compiled Java methods across database calls, sessions, or instances. Such persistence avoids the overhead of unnecessary recompilations across sessions or instances, when it is known that semantically the Java code has not changed.

I don't know why all sophisticated VM implementations don't offer similar options.

Alex Martelli
Because other sophisticated JVMs don't have a honking great enterprise RDBMS handy to store stuff in :)
skaffman
+10  A: 

Without resorting to cut'n'paste of the link that @MYYN posted, I suspect this is because the optimisations that the JVM performs are not static, but rather dynamic, based on the data patterns as well as code patterns. It's likely that these data patterns will change during the application's lifetime, rendering the cached optimisations less than optimal.

So you'd need a mechanism to establish whether the saved optimisations were still optimal, at which point you might as well just re-optimise on the fly.
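One concrete instance of the data-dependence described above: HotSpot profiles the receiver types observed at a virtual call site and can speculate and inline while the site sees only one type; if the dominant type changes later in the run, the compiled code is deoptimised and recompiled. Native code cached from an earlier run would bake in the old assumption. A hypothetical sketch (class and method names are invented for illustration):

```java
public class ProfileSensitivity {
    interface Shape { double area(); }
    static class Circle implements Shape {
        public double area() { return Math.PI; } // unit circle
    }
    static class Square implements Shape {
        public double area() { return 1.0; }     // unit square
    }

    // The virtual call s.area() is the profile-sensitive site: while the
    // list holds only Circles, the JIT can speculate that the receiver is
    // always a Circle and inline Circle.area(); once Squares show up, that
    // speculation is invalidated and the method is deoptimised.
    static double totalArea(java.util.List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        java.util.List<Shape> shapes = new java.util.ArrayList<>();
        for (int i = 0; i < 100_000; i++) shapes.add(new Circle()); // monomorphic phase
        for (int i = 0; i < 100_000; i++) shapes.add(new Square()); // site becomes bimorphic
        System.out.println(totalArea(shapes));
    }
}
```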

skaffman
...or you could just offer persistence as an *option*, like Oracle's JVM does -- empower *advanced programmers* to optimize their application's performance when and where they just **know** the patterns are **not** changing, under their responsibility. Why not?!
Alex Martelli
Because it's probably not worth it. If none of Sun, IBM or BEA considered it worthwhile for their high-performance JVMs, there's going to be a good reason for it. Maybe their runtime optimisation is faster than Oracle's, which is why Oracle caches it.
skaffman
+1  A: 

I do not know the actual reasons, not being in any way involved in the JVM implementation, but I can think of some plausible ones:

  • The idea of Java is to be a write-once-run-anywhere language, and putting precompiled stuff into the class file is kind of violating that (only "kind of" because of course the actual byte code would still be there)
  • It would increase class file sizes, because the same code would be stored multiple times, especially if you run the same program under multiple different JVMs (which is not really uncommon, once you consider different versions of a JVM to be different JVMs, which you really have to do)
  • The class files themselves might not be writable (though it would be pretty easy to check for that)
  • The JVM optimizations are partially based on run-time information and on other runs they might not be as applicable (though they should still provide some benefit)

But I really am guessing, and as you can see, I don't think any of my reasons are actual show-stoppers. I figure Sun just doesn't consider this support a priority, and maybe my first reason is close to the truth: doing this habitually might also lead people to think that Java class files really need a separate version for each VM instead of being cross-platform.

My preferred way would actually be to have a separate bytecode-to-native translator that you could use to do something like this explicitly beforehand, creating class files that are explicitly built for a specific VM, with possibly the original bytecode in them so that you can run with different VMs too. But that probably comes from my experience: I've been mostly doing Java ME, where it really hurts that the Java compiler isn't smarter about compilation.
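For what it's worth, later JDKs experimented with exactly this idea: JDK 9 through 16 shipped an experimental `jaotc` tool (since removed again) that compiled bytecode ahead of time into a platform-specific shared library the JVM could load alongside the original, still-portable class files. A command-line sketch of how it was invoked (`Hello.java` is a hypothetical class, and these flags only work on the JDK versions that shipped the tool):

```shell
# Compile source to ordinary, portable bytecode as usual.
javac Hello.java

# Ahead-of-time compile the bytecode into a platform-specific shared
# library (jaotc was experimental in JDK 9-16 and has been removed since).
jaotc --output libHello.so Hello.class

# Run with the precompiled library; the original .class file is still
# present, so other JVMs and platforms can fall back to normal JIT execution.
java -XX:AOTLibrary=./libHello.so Hello
```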

jk
There is a spot in the class file for such things; in fact, that was the original intent (storing the JITed code as an attribute in the class file).
TofuBeer
@TofuBeer: Thanks for the confirmation. I suspected that might be the case (that's what I would have done), but wasn't sure. Edited to remove that as a possible reason.
jk
I think you hit the nail on the head with your last bullet point. The others could be worked around, but that last part is, I think, the main reason JITed code is not persisted.
musicfreak
The last paragraph about the explicit bytecode-to-native compiler is what you currently have in .NET with NGEN (http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.71).aspx).
Martinho Fernandes
+2  A: 

I can think of a couple of reasons why you wouldn't want to do this:

  • bytecode instrumentation on class loading
  • the concept of an application is rather wooly in Java - you might have a main entry point, but there is no limit to how far class loading can extend - in many applications, code can be loaded and disposed of never to be seen again (e.g. in a Servlet container)

I'd say that specialised rather than general cases would benefit from such a feature. Still, good question.
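The second point above can be made concrete: classes can be loaded from arbitrary locations at run time, so the set of classes an "application" comprises is open-ended. A minimal, hypothetical sketch using `URLClassLoader` (the JAR path and plugin class name are made up):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DynamicLoading {
    public static void main(String[] args) throws Exception {
        // Load a class from a JAR that was not on the original classpath;
        // the path and class name here are hypothetical.
        URL[] urls = { new URL("file:///path/to/plugin.jar") };
        try (URLClassLoader loader = new URLClassLoader(urls)) {
            Class<?> plugin = loader.loadClass("com.example.Plugin");
            Object instance = plugin.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
        // Once the loader becomes unreachable, its classes can be unloaded --
        // which is what a Servlet container does on webapp redeploy -- so any
        // native code cached to disk for those classes would be orphaned.
    }
}
```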

McDowell
+2  A: 

Excelsior JET has had a caching JIT compiler since version 2.0, released back in 2001. Moreover, its AOT compiler can recompile the cache into a single DLL/shared object with all optimizations applied.

Dmitry Leskov
Yes, but the question was about the canonical JVM, i.e., Sun's JVM. I'm well aware that there are several AOT compilers for Java as well as other caching JVMs.
Chinmay Kanchi