I'm working on building an iPhone app with Titanium Mobile 1.0 and I see that it compiles down to a native iPhone binary. How does this work? Seems like it would take a lot of heavy lifting to analyze the JavaScript code and do a direct translation into Objective-C without having a superset language like 280 North's Objective-J and Cappuccino.
Titanium takes your JavaScript code, analyzes and preprocesses it, and then pre-compiles it into a set of symbols that are resolved based on your application's use of the Titanium APIs. From that symbol hierarchy we build a symbol dependency matrix that maps onto the underlying Titanium library symbols, so we know exactly which APIs (and their related dependencies, frameworks, etc.) your app needs. I'm using the word "symbol" in a semi-generic way, since it's a little different depending on the language: on iPhone, a symbol maps to a true C symbol that ultimately maps to a compiled .o file built for the ARM/i386 architectures; for Java on Android, it's more or less a .class file, and so on. Once the front end understands your dependency matrix, we invoke the SDK compiler (i.e. GCC for iPhone, the Java compiler for Android) to compile your application into the final native binary.
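To make that a little more concrete, here's a minimal sketch of the kind of app code the analyzer sees (assuming the standard 1.x-era `Titanium.UI` factory methods; the file name and property values are just illustrative). The point is that the `Titanium.*` references are statically visible, so they can be resolved to native symbols up front:

```javascript
// app.js (illustrative): each static Titanium.* reference below is a symbol
// the preprocessor can detect at build time and resolve to a native module.
var win = Titanium.UI.createWindow({
    backgroundColor: '#fff'       // window/UI symbols (and their framework deps) get pulled in
});

var label = Titanium.UI.createLabel({
    text: 'Hello from Titanium',
    textAlign: 'center'
});

win.add(label);                   // only the pieces actually referenced end up linked
win.open();
```

An app that never touches, say, the media or geolocation APIs simply never references those symbols, so those native modules don't need to be part of the final binary.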
So, a simple way to think about it is that your JS code is compiled almost one-to-one into the representative symbols in native-land. There's still an interpreter in the runtime, otherwise things like dynamic code wouldn't work. However, it's much faster, much more compact, and about as close to a pure native mapping as you can get.
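By "dynamic code" I mean things that can't be resolved until the app is actually running. A hedged illustration (the factory and logging names are standard Titanium APIs, the rest is made up):

```javascript
// The property name here isn't known until run time, so it can't be mapped
// to a fixed native symbol ahead of time; this is where the interpreter steps in.
var kind = (new Date().getSeconds() % 2 === 0) ? 'createLabel' : 'createButton';
var view = Titanium.UI[kind]({ text: 'dynamic', title: 'dynamic' });

// Code built and eval'd on the fly is the extreme case: it doesn't exist
// at compile time at all.
eval("Titanium.API.info('generated at runtime')");
```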
We've obviously still got plenty of room to improve this, and we're working on it. So far in our latest 1.0 testing, performance is almost indistinguishable from the equivalent hand-written Objective-C code (since in most cases it maps exactly to that). From a CompSci standpoint, we can now start to optimize things that a human really couldn't easily do by hand, much like the GCC compiler already does today.