views: 141
answers: 6

Possible Duplicate:
JIT compiler vs offline compilers

So until a few minutes ago I didn't really understand what the difference between a JIT compiler and an interpreter is. Browsing through SO, I found the answer, which brought up the question in the title. As far as I've found, JIT compilers have the benefit of being able to target the specific processor they're running on, and can thus produce better-optimized programs. Could somebody please give me a comparison of the pros and cons of each?

+7  A: 

Interpreter, JIT Compiler and "Offline" Compiler

Difference between a JIT compiler and an interpreter

To keep it simple, let's just say that an interpreter runs the bytecode (intermediate code/language) directly. When the VM/interpreter decides it is better to do so, the JIT compilation mechanism translates that same bytecode into native code targeted at the hardware in question, with focus on the type of optimizations requested.
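To make the distinction concrete, here is a minimal sketch (the class name HotLoop and the numbers are just examples) showing that the same bytecode can be run either purely interpreted or with the JIT enabled, using standard HotSpot switches:

    // Compile once to bytecode, then choose how the VM executes it:
    //   javac HotLoop.java
    //   java -Xint HotLoop   <- interpreter only, bytecode is never compiled to native code
    //   java HotLoop         <- default "mixed" mode, hot bytecode is JIT compiled at runtime
    public class HotLoop {
        static long sum(int n) {
            long s = 0;
            for (int i = 0; i < n; i++) {
                s += i;                     // simple, hot work the JIT can optimize
            }
            return s;
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 10000; i++) {
                result += sum(100000);      // called often enough to become a "hot spot"
            }
            System.out.println(result + " in "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }

Running the -Xint version will typically be noticeably slower; that gap is exactly what the JIT is meant to close.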

So basically a JIT might produce a faster executable but take way longer to compile?

I think what you are missing is that JIT compilation happens at runtime, not at compile time (unlike with an "offline" compiler).

JIT Compilation has overhead

Compiling code is not free; it takes time too. If the VM invests time compiling a piece of code and then only runs it a few times, it may not have made a good trade. So the VM still has to decide what to treat as a "hot spot" and JIT-compile only that.

Allow me to give examples on the Java virtual machine (JVM):

The JVM can take switches that let you define the threshold after which code will be JIT compiled, e.g. -XX:CompileThreshold=10000

To illustrate the cost of the JIT compilation time, suppose you set that threshold to 20 and have a piece of code that needs to run 21 times. After it runs 20 times, the VM will invest some time into JIT compiling it. You now have native code from the JIT compilation, but it will only run one more time (the 21st), which may not bring enough of a performance boost to make up for the time spent on the JIT process.
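If you want to watch this happen, here is a rough sketch (the class and method names are just examples, and the exact behavior varies between JVM versions, especially once tiered compilation is involved). -XX:+PrintCompilation logs each method as the JIT compiles it:

    // java -XX:CompileThreshold=20 -XX:+PrintCompilation ThresholdDemo
    //
    // Note: compilation happens in the background, so a program this short may
    // exit before the log line for hotMethod appears; raise the iteration count
    // if you want to see it reliably.
    public class ThresholdDemo {
        static int hotMethod(int x) {
            return x * 31 + 7;             // trivial work, invoked repeatedly
        }

        public static void main(String[] args) {
            int acc = 0;
            for (int i = 0; i < 21; i++) { // 21 calls: compiled after ~20, used once more
                acc = hotMethod(acc);
            }
            System.out.println(acc);
        }
    }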

I hope this illustrates it.

Here is a JVM switch that shows the time spent on JIT compilation: -XX:+CITime ("Prints time spent in JIT Compiler"; note the +, the flag is off by default).

Side Note: I don't think it's a "big deal", just something I wanted to point out since you brought up the question.

Bakkal
So basically a JIT might produce a faster executable but take way longer to compile?
Maulrus
I wouldn't say that it is way longer, but there is overhead while the program is running.
TofuBeer
Oh, I think I misunderstood what a JIT compiler is. I had assumed it would just fully compile the program when it was first run. Does it also function like an interpreter?
Maulrus
@Maulrus: it depends on the goals of the JIT compiler and the kind of optimizations the designers wanted to support. Some JITs do a full recompilation at startup; others compile parts as they determine what needs optimization the most.
Ken Bloom
Okay, I think I understand well enough now. Thanks!
Maulrus
A: 

I would say one real disadvantage of using a JIT compiler (more of a side effect, really) is that it is easy to disassemble the IL into human-readable code.

Mitch Wheat
Not so! The Java language and Java bytecodes demonstrate this property, but JRuby programs compiled to Java bytecode can't be comprehensibly decompiled. Ditto for PowerPC programs JITted into x86 machine code using Apple's Rosetta.
Ken Bloom
It's also not the *only* disadvantage.
Ken Bloom
Scala programs go through several levels of syntactic desugaring (which makes them harder, though not impossible, to read) before being compiled into Java bytecode.
Ken Bloom
@Ken Bloom: sure, there are obfuscating programs for .NET, but not as vanilla.
Mitch Wheat
JITs don't have to be for IL. HP Labs' Dynamo project was basically a JIT for HP/UX machine code. The JIT and the underlying architecture are completely independent.
Ken
@Ken: I don't recall saying that "The JIT and the underlying architecture were dependent"? If you are JIT'ing then you have an intermediate language (IL)
Mitch Wheat
Mitch: I can't figure out what you're implying. When Dynamo was created, PA-RISC machine code *became* an IL? Or Dynamo is not a JIT because PA-RISC chips exist? (Though hardware JVM chips exist, and I don't think anybody claims they make the Hotspot JIT not-a-JIT.) Or something else?
Ken
+1  A: 

JIT compilation has a lot more memory overhead, since the program needs to load a compiler and an interpreter in addition to the runtime libraries and compiled code that an AOT (ahead-of-time) compiled program already requires.

Ken Bloom
A: 

JIT compilers are harder to write (not the whole story, but worth mentioning).

Hogan
+1  A: 

JIT compilation doesn't inherently mean it is easy to disassemble. That is more implementation-dependent, such as with Java binaries. Note, however, that JIT can be applied to any kind of executable, whether it is Java, Python or even an already-compiled binary from C++ or similar. (IIRC, the Dynamo project involved re-compiling such binaries on-the-fly to increase performance.)

The trade-off for JIT compilation is that while its goal is to increase runtime performance, the process itself also occurs at runtime, and so it incurs overhead while analyzing, compiling, and validating code fragments. If the implementation is inefficient or the optimizations don't pay off, JIT compilation can actually degrade performance.
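A rough way to see that runtime cost (and the payoff once it's done) is to time the same work in batches. This is only a sketch, not a rigorous benchmark, and the exact numbers depend heavily on the JVM and hardware:

    // The first batches typically run slower while the code is still being
    // interpreted and analyzed; later batches speed up once the JIT has
    // produced optimized native code.
    public class WarmupDemo {
        static double work(double x) {
            return Math.sqrt(x) * 0.5 + x;  // some arithmetic for the JIT to optimize
        }

        public static void main(String[] args) {
            double sink = 0;
            for (int batch = 0; batch < 10; batch++) {
                long start = System.nanoTime();
                for (int i = 0; i < 1000000; i++) {
                    sink += work(i);
                }
                System.out.println("batch " + batch + ": "
                        + (System.nanoTime() - start) / 1000000 + " ms");
            }
            System.out.println(sink);       // keep the result live so the loop isn't eliminated
        }
    }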

The other trade-off is that in some cases the JIT compilation can be very wasteful. For example, consider a self-modifying executable. If you compile a fragment of code, and then the executable modifies that fragment, you have to throw away the compiled fragment and then re-analyze that segment to determine if it is worth re-compiling. If this happens frequently, there is a significant performance hit.

Finally, there is a hit in memory consumption, as compiled code fragments must reside in memory in order to be effective. This can make it impractical for devices with limited amounts of memory, or else extremely difficult to implement well.

Zac
Apple's Rosetta JIT-compiles PowerPC code to x86.
Ken Bloom
Apple's Mac 68K emulator (on PCI PowerMacs) also uses JIT compilation.
Ken Bloom
Both examples are a special form of JIT compilation known as binary translation (see http://en.wikipedia.org/wiki/Binary_translation ). I'm not particularly familiar with either ("I'm a PC"), but I imagine both employ Dynamic Binary Translation.
Zac
+1  A: 

For me, at least, the lack of inline ASM is a big one. Once in a while, you just want complete control over every detail of the CPU for some small part of your program. Even when I don't need it for the task at hand, I like the idea that everything my computer is capable of can, in principle, be done within my language.

dsimcha