Sampling from a depth buffer in a shader returns values between 0 and 1, as expected. Given the near and far clip planes of the camera, how do I calculate the true z value at this point, i.e. the distance from the camera?
From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;  // raw depth-buffer value in [0,1]
    float z_n = 2.0 * z_b - 1.0;                       // back to NDC z in [-1,1]
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));  // eye-space distance
}
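As written, the shader only computes z_e and writes nothing; to actually see the result you can write it out as a color. A minimal sketch (the helper name linearizeDepth is my own, and the division by zFar is only there to bring the value into a displayable [0,1] range):

uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;

// Convert a [0,1] depth-buffer value to eye-space distance
float linearizeDepth(float z_b)
{
    float z_n = 2.0 * z_b - 1.0;  // back to NDC [-1,1]
    return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}

void main(void)
{
    float z_e = linearizeDepth(texture2D(depthBuffTex, vTexCoord).x);
    gl_FragColor = vec4(vec3(z_e / zFar), 1.0);  // near = black, far = white
}

Note that z_e is the distance along the camera's view axis, not the Euclidean distance from the camera to the fragment.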
[edit] So here's the explanation (with the two mistakes Christian pointed out in the comments fixed):
An OpenGL perspective matrix looks like this:
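(the standard glFrustum / gluPerspective form, which I'm assuming here; A and B are the two entries built from the near and far planes)

[ f/aspect   0    0    0 ]
[ 0          f    0    0 ]
[ 0          0    A    B ]
[ 0          0   -1    0 ]

with f = cotangent(fovy / 2), A = -(zFar + zNear) / (zFar - zNear) and B = -2 * zFar * zNear / (zFar - zNear).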
When you multiply this matrix by a homogeneous point [x, y, z, 1], it gives you [don't care, don't care, A*z + B, -z] (with A and B the two components defined above).
OpenGL next does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can't control it. Since w = -z, the Z value becomes (A*z + B) / -z = -A - B/z.
We are now in Normalized Device Coordinates, where Z is in the [-1, 1] range (just like x and y). To be stored in the depth buffer it is mapped to [0, 1] with a scale and an offset: z_b = 0.5 * z_n + 0.5 (assuming the default glDepthRange).
This final value is then stored in the buffer.
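A quick numeric example to make the nonlinearity concrete (my own numbers): with zNear = 0.1 and zFar = 100, A ≈ -1.002 and B ≈ -0.2002. A point at eye-space z = -10 gives z_n = -A - B/z ≈ 1.002 - 0.020 = 0.982, so z_b = 0.5 * 0.982 + 0.5 ≈ 0.991. A point only 10% of the way to the far plane already lands at about 0.991 in the buffer, which is why a raw depth buffer looks almost uniformly white when displayed directly.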
The above code does the exact opposite:
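Reading it backwards: z_n = 2.0 * z_b - 1.0 undoes the [0,1] -> [-1,1] mapping, and solving z_n = -A - B/z for z gives z = -B / (A + z_n), i.e. -z = B / (A + z_n). Substituting A and B from above:

-z = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear))

which is exactly the z_e line of the shader (the sign flip makes the result positive, since eye-space z is negative in front of the camera).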
The opposite function, going from a linear eye-space depth back to the stored depth-buffer value, is:
varying float depth; // linear eye-space depth, in world units
void main(void)
{
    // GLSL matrices are column-major: [2].z is row 3 of column 3, etc.
    float A = gl_ProjectionMatrix[2].z;  // = -(zFar + zNear) / (zFar - zNear)
    float B = gl_ProjectionMatrix[3].z;  // = -2.0 * zFar * zNear / (zFar - zNear)
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}
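For completeness, the depth varying would be fed by a vertex shader along these lines (a sketch using the same fixed-function built-ins):

varying float depth; // linear eye-space depth, in world units
void main(void)
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    depth = -eyePos.z;  // eye space looks down -z, so negate to get a positive distance
    gl_Position = gl_ProjectionMatrix * eyePos;
}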