I have a pair of GLSL shaders that give me the depth map of the objects in my scene. What I get now is the distance from each pixel to the camera. What I need is to get the distance from the pixel to the camera plane. Let me illustrate with a little drawing
    *        |--*
   /         |
  /          |
 C-----*     C-----*
  \          |
   \         |
    *        |--*
The asterisks are pixels and C is the camera. The lines from the asterisks are the "depth". In the first case, I get the distance from each pixel to the camera. In the second, I wish to get the distance from each pixel to the camera plane.
There must be a way to do this by using some projection matrix, but I'm stumped.
Here are the shaders I'm using. Note that eyePosition is camera_position_object_space.
Vertex Shader:
varying vec3 position;

void main() {
    position = gl_Vertex.xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Pixel Shader:
uniform vec3 eyePosition;
varying vec3 position;
void main(void) {
    // Radial distance from the eye, remapped from the [1, 50]
    // near/far range to [0, 1] for display
    float depth = (length(eyePosition - position) - 1.0) / 49.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
You're really trying to do this the hard way. Simply transform things to camera space, and work from there.
varying float distToCamera;

void main()
{
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}
In camera space (the space where everything is relative to the position and orientation of the camera), the planar distance to a vertex is just the negation of its Z coordinate (more negative Z means farther away, since visible geometry lies along the negative Z axis).
So your fragment shader doesn't even need eyePosition; the "depth" comes directly from the vertex shader.
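For completeness, a matching fragment shader might look something like this sketch. It reuses the `(x - 1.0) / 49.0` remapping from the question's code, which assumes a near/far range of roughly 1 to 50; those constants come from the original shader, not from anything required by this technique:

```glsl
varying float distToCamera;  // planar distance, interpolated per fragment

void main()
{
    // Remap from the assumed [1, 50] range to [0, 1] for display
    float depth = (distToCamera - 1.0) / 49.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
```

Note that interpolating the camera-space Z linearly across the triangle is exactly right here, because planar distance varies linearly in screen space under perspective-correct interpolation of varyings.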