JIT vs Interpreters

Manoj · Sep 15, 2010 · Viewed 24.8k times

I can't figure out the difference between a JIT and an interpreter.

A JIT is an intermediary between interpreters and compilers. At runtime, it converts bytecode to machine code (for the JVM or the actual machine?). The next time, it takes the compiled code from its cache and runs it. Am I right?

Interpreters directly execute bytecode without transforming it into machine code. Is that right?

How does the real processor in our PC understand the instructions?

Please clear my doubts.

Answer

KGhatak · Nov 29, 2016

First things first:
With the JVM, both the interpreter and the compiler (the JVM's internal compiler, not a source-code compiler like javac) produce native code (i.e., machine-language code for the underlying physical CPU, such as x86) from bytecode.
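
To make "bytecode" concrete, here is a trivial class together with the standard JDK commands to compile it and dump its bytecode (the class and file names are just examples I've picked for illustration):

    // Add.java - a trivial class whose bytecode we can inspect
    public class Add {
        static int add(int a, int b) {
            return a + b;
        }
        public static void main(String[] args) {
            System.out.println(add(2, 3));
        }
    }

    $ javac Add.java    # source-code compiler: .java -> .class (bytecode)
    $ javap -c Add      # disassembles the bytecode; add() shows iload_0, iload_1, iadd, ireturn

It is that .class bytecode, not the .java source, that the interpreter and the JIT compiler inside the JVM turn into native instructions.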

What's the difference then:
The difference is in how they generate the native code, how optimized it is, and how costly the optimization is. Informally, an interpreter pretty much converts each bytecode instruction into the corresponding native instructions by looking it up in a predefined mapping from JVM instructions to machine instructions. Interestingly, a further speedup in execution can be achieved if we take a whole section of bytecode and convert it into machine code, because considering a whole logical section often provides room for optimization, as opposed to converting (interpreting) each instruction in isolation. This act of converting a section of bytecode into (presumably optimized) machine instructions is called compiling (in the current context). When the compilation is done at run time, the compiler is called a JIT compiler.
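
To illustrate what that per-instruction lookup looks like, here is a toy dispatch loop written in Java with a made-up four-opcode instruction set. It is only a sketch of the idea: a real JVM interpreter handles the actual JVM opcodes and, in HotSpot, is not written in Java at all.

    // Toy sketch of the "one instruction at a time" dispatch an interpreter does.
    // The opcodes and stack handling are invented and greatly simplified.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class ToyInterpreter {
        // A tiny made-up instruction set standing in for real JVM opcodes
        static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

        static void run(int[] code) {
            Deque<Integer> stack = new ArrayDeque<>();
            int pc = 0;                       // program counter into the bytecode
            while (true) {
                int op = code[pc++];          // fetch the next instruction
                switch (op) {                 // look up its predefined meaning
                    case PUSH:  stack.push(code[pc++]); break;
                    case ADD:   stack.push(stack.pop() + stack.pop()); break;
                    case PRINT: System.out.println(stack.peek()); break;
                    case HALT:  return;
                }
            }
        }

        public static void main(String[] args) {
            // Equivalent of "print(2 + 3)" expressed as our toy bytecode
            run(new int[]{PUSH, 2, PUSH, 3, ADD, PRINT, HALT});
        }
    }

The point is that each instruction is handled on its own, with no view of its neighbours; looking at a whole section at once is exactly the optimization opportunity the JIT compiler exploits.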

The correlation and coordination:
Since the Java designers went for (hardware and OS) portability, they chose an interpreter architecture (as opposed to C-style compiling, assembling, and linking). However, to gain more speed, a compiler is also optionally added to a JVM. Nonetheless, as a program is being interpreted (and executed on the physical CPU), "hotspots" are detected by the JVM and statistics are gathered. Using those statistics from the interpreter, such sections become candidates for compilation into optimized native code. This is in fact done on the fly (hence "JIT compiler"), and the compiled machine instructions are used from then on (rather than being interpreted). Naturally, the JVM also caches such compiled pieces of code.
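
You can watch this interpret-then-compile behavior on a stock HotSpot JVM with its standard diagnostic flags (the class below and its name are just an example I've made up):

    // HotLoop.java - sum() is called often enough to be detected as a hotspot
    public class HotLoop {
        static long sum(int n) {
            long s = 0;
            for (int i = 0; i < n; i++) s += i;
            return s;
        }
        public static void main(String[] args) {
            long total = 0;
            // Repeated calls make sum() "hot", so the JIT compiles it on the fly
            for (int i = 0; i < 20_000; i++) total += sum(10_000);
            System.out.println(total);
        }
    }

    $ javac HotLoop.java
    $ java -XX:+PrintCompilation HotLoop    # logs methods as the JIT compiles them
    $ java -Xint HotLoop                    # interpreter only, no JIT (noticeably slower)

With -XX:+PrintCompilation you should see sum() show up in the compilation log once it becomes hot, while -Xint forces pure interpretation for comparison.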

Words of caution:
These are pretty much the fundamental concepts. If an actual JVM implementation does things a bit differently, don't be surprised. The same may be true of VMs for other languages.

One more word of caution:
Statements like "the interpreter executes bytecode on a virtual processor" or "the interpreter executes bytecode directly" are all correct, as long as you understand that in the end there is a set of machine instructions that has to run on physical hardware.

Some good references (I haven't done an extensive search, though):

  • [paper] Instruction Folding in a Hardware-Translation Based Java Virtual Machine, by Hitoshi Oi
  • [book] Computer Organization and Design, 4th ed., by D. A. Patterson and J. L. Hennessy (see Fig. 2.23)
  • [web article] JVM performance optimization, Part 2: Compilers, by Eva Andreasson (JavaWorld)

PS: I've used the following terms interchangeably: 'native code', 'machine-language code', 'machine instructions', etc.