I am confused about JavaScript engines right now. I know that V8 was a big deal because it compiled JavaScript to native code.
Then I started reading about Mozilla SpiderMonkey, which, from what I understand, is written in C and can also compile JavaScript. So how is this different from V8, and if it is true, why doesn't Firefox do this?
Finally, does Rhino literally compile JavaScript to Java bytecode, so that you get all the speed advantages of Java? If not, why don't people run V8 when writing scripts on their desktops?
There are various approaches to JavaScript execution, even among engines that JIT-compile. V8 and Nitro (formerly known as SquirrelFish Extreme) do a whole-method JIT: they compile all JavaScript code down to native instructions when they first encounter the script, and then simply execute it as if it were compiled C code. SpiderMonkey uses a "tracing" JIT instead, which first compiles the script to bytecode and interprets it, but monitors the execution, looking for "hot spots" such as loops. When it detects one, it compiles just that hot path to machine code and executes that from then on.
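To make the difference concrete, here is a hypothetical snippet (the function name `sumArray` is my own, not something from either engine) showing the kind of "hot spot" a tracing JIT looks for; a whole-method JIT would instead compile the entire function to machine code the first time it sees it:

```js
// Hypothetical example: a tight loop like this is the sort of "hot spot" a
// tracing JIT detects after interpreting it for a while, at which point it
// compiles just the loop body to machine code. A whole-method JIT compiles
// all of sumArray up front instead.
function sumArray(values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];          // executed many times -> becomes "hot"
  }
  return total;
}

var data = [];
for (var j = 0; j < 100000; j++) {
  data.push(j);
}
console.log(sumArray(data));     // 4999950000
```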
Both approaches have upsides and downsides. A whole-method JIT ensures that all JavaScript that is executed will be compiled and run as machine code rather than interpreted, which in general should be faster. However, depending on the implementation, it may mean the engine spends time compiling code that will never be executed, or will only be executed once and isn't performance-critical. In addition, all of this compiled code must be stored in memory, which can lead to higher memory usage.
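A rough illustration of that trade-off (`parseConfig` is just an invented name, not tied to any engine): under a whole-method JIT, even a branch that is rarely or never taken gets compiled to machine code and kept around.

```js
// Sketch: a whole-method JIT compiles both branches the moment it compiles
// the function, even though the error path may never run in practice; that
// compilation time and machine code are pure overhead for the common case.
function parseConfig(text) {
  if (typeof text !== "string") {
    throw new TypeError("expected a string"); // cold path, compiled anyway
  }
  return JSON.parse(text);                    // hot path, the code that actually runs
}

console.log(parseConfig('{"debug": false}'));
```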
The tracing JIT as implemented in SpiderMonkey can produce extremely specialized code compared to a whole-method JIT: because it has already executed the code, it can speculate on the types of variables (for example, treating the index variable in a for loop as a native integer), whereas a whole-method JIT has to treat the variable as an object, since JavaScript is dynamically typed and the type could change at any point (SpiderMonkey simply "falls off" trace if the assumption fails and returns to interpreting bytecode). However, SpiderMonkey's tracing JIT currently does not work efficiently on code with many branches, as traces are optimized for single paths of execution. There is also some overhead in monitoring execution before deciding to compile a trace, and in switching execution over to that trace. And if the tracer makes an assumption that is later violated (such as a variable changing type), the cost of falling off trace and switching back to the interpreter is likely to be higher than with a whole-method JIT.
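Here is a small sketch of the kind of type speculation and "falling off trace" described above (`accumulate` is a made-up name, and the comments describe tracing-JIT behavior in general, not SpiderMonkey's exact heuristics):

```js
// The loop is first traced with `x` and `items[i]` as native numbers, so the
// recorded trace can use numeric arithmetic directly.
function accumulate(items) {
  var x = 0;
  for (var i = 0; i < items.length; i++) {
    x = x + items[i];   // speculated as number + number on the trace
  }
  return x;
}

console.log(accumulate([1, 2, 3, 4]));        // 10 -- stays on the specialized trace
console.log(accumulate([1, 2, "three", 4]));  // "3three4" -- the string violates the
                                              // type assumption, so execution falls
                                              // off trace back to the interpreter
```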