Android provides both the rotation vector sensor and the orientation sensor. I know they return different data: the rotation vector sensor reports quaternion-style components (each scaled by the sine of a half-angle), while the orientation sensor reports angles directly. But what's the conceptual difference? I can't work it out from the docs. Which one provides the orientation of the device in three-dimensional space? I'm confused!
The older ORIENTATION sensors report orientation using three angles. The problem with this coordinate system is that it suffers from "gimbal lock": when the device's orientation gets close to vertical, one of the coordinates goes to +90 or -90 degrees, and the remaining two become either uninterpretable or dangerously denormalized.
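To make the gimbal-lock failure concrete, here is a standard sketch of the degeneracy, using the common Z-Y-X yaw/pitch/roll convention (not necessarily Android's exact axis convention). Evaluating the composed rotation at the singular pitch of 90 degrees:

```latex
R = R_z(\psi)\,R_y\!\left(\tfrac{\pi}{2}\right)\,R_x(\varphi)
  = \begin{pmatrix}
      0 & -\sin(\psi-\varphi) & \cos(\psi-\varphi) \\
      0 &  \cos(\psi-\varphi) & \sin(\psi-\varphi) \\
     -1 & 0 & 0
    \end{pmatrix}
```

Only the difference ψ − φ survives: at pitch ±90° yaw and roll collapse into a single degree of freedom, which is exactly why the reported angles become unstable near vertical.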
The newer ROTATION sensors report orientation using quaternion coordinates, which are more complicated to work with but don't suffer from gimbal lock. When orientation is reported as a quaternion, you can determine the precise orientation of the device no matter how it is held.
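Here's a minimal sketch of the usual way to consume the rotation vector sensor (the class name `RotationVectorReader` is mine; `getRotationMatrixFromVector` and `getOrientation` are the real `SensorManager` APIs). Note that the quaternion is converted to a matrix first, and angles are derived only at the very end, if you need them at all:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class RotationVectorReader implements SensorEventListener {
    private final float[] rotationMatrix = new float[9];
    private final float[] orientationAngles = new float[3];

    public void start(Context context) {
        SensorManager sm =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor rotationVector = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sm.registerListener(this, rotationVector, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Convert the quaternion-style rotation vector into a rotation matrix...
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // ...and only then derive angles, if the app really needs angles.
        SensorManager.getOrientation(rotationMatrix, orientationAngles);
        // orientationAngles = {azimuth, pitch, roll}, in radians.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```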
Quaternions are also more computationally efficient. You don't need to call expensive trig functions to apply a quaternion rotation to a vector. And if the w component isn't supplied, you can still recover it with a single sqrt call, compared to the three sin and three cos calls needed to build a rotation matrix from three Euler angles.
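To illustrate both points, here's a small self-contained sketch (the class and method names are mine, not part of any Android API): recovering w takes one sqrt, and rotating a vector by a unit quaternion q = (w, x, y, z) uses only multiplies and adds, via the identity v' = v + 2w(u × v) + 2u × (u × v), where u = (x, y, z):

```java
public final class QuatRotate {
    /** Recover w from the x, y, z components of a unit quaternion: one sqrt, no trig. */
    static double recoverW(double x, double y, double z) {
        double t = 1.0 - (x * x + y * y + z * z);
        return t > 0 ? Math.sqrt(t) : 0; // clamp against rounding error
    }

    /** Rotate vector v by the unit quaternion (w, x, y, z); no trig calls needed. */
    static double[] rotate(double w, double x, double y, double z, double[] v) {
        // t = 2 * (u cross v)
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        // v' = v + w*t + (u cross t)
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx),
        };
    }
}
```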
Short story: the ORIENTATION-style sensors were done wrong. API 9 fixed that by replacing them with the ROTATION_VECTOR sensor.