Can someone help me with the depth-of-field implementation in my ray tracer, please?
I am using a simple pinhole camera model, as shown below. How can I generate a DOF effect with a pinhole camera model? (image taken from Wikipedia)
My basic ray tracer is working fine.
I have the eye at (0, 0, 0, 1) with ray direction (dx, dy, 1.0f, 0.0f), where
float dx = (x * (1.0 / Imgwidth) ) - 0.5;
float dy = (y * (1.0 / Imgheight) ) - 0.5;
Everywhere I read, people talk about sampling a lens placed between the image plane and the scene, for example as shown below (image taken from Wikipedia):
How can I introduce a lens in front of the image plane when all rays originate from a single point (the camera/eye)?
If someone can help, that would be great!
Thank you
There are three ways to do this:
Physically correct DOF requires multiple renders of the scene. Real cameras have depth of field because they are not pinhole cameras: they have an aperture that admits light over a certain diameter. This is equivalent to taking a pinhole camera, taking many pictures with the pinhole placed at different positions within that aperture, and averaging them.
So, basically: rotate your camera slightly multiple times around your focus point (so it stays aimed at the focal point), render the entire scene each time, accumulate the output colour in a buffer, and divide all values by the number of renders.
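A per-ray variant of approach (1) is the thin-lens model: instead of re-rendering the whole scene from shifted cameras, jitter each primary ray's origin over the aperture disk while keeping its point on the focal plane fixed. The sketch below assumes the eye at the origin looking down +z, as in the question; the names `thinLensRay`, `sampleAperture`, `aperture`, and `focusDist` are illustrative, not from the question.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Uniform point on a disk of the given radius (the lens aperture),
// via simple rejection sampling; the lens lies in the z = 0 plane.
static Vec3 sampleAperture(float radius) {
    float px, py;
    do {
        px = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
        py = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
    } while (px * px + py * py > 1.0f);
    return { px * radius, py * radius, 0.0f };
}

// Turn a pinhole ray (eye at the origin, direction pinholeDir with
// pinholeDir.z > 0) into a depth-of-field ray: every sample still passes
// through the same point on the focal plane z = focusDist, so geometry at
// that distance stays sharp while everything else blurs.
static void thinLensRay(Vec3 pinholeDir, float aperture, float focusDist,
                        Vec3 &origin, Vec3 &dir) {
    float t = focusDist / pinholeDir.z;            // reach the focal plane
    Vec3 focal = { pinholeDir.x * t, pinholeDir.y * t, focusDist };
    origin = sampleAperture(aperture);             // jittered lens point
    dir = { focal.x - origin.x, focal.y - origin.y, focal.z - origin.z };
    float len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
    dir = { dir.x / len, dir.y / len, dir.z / len };
}
```

Average the colours of several such rays per pixel; the aperture radius controls how strong the blur is, and `focusDist` picks which plane stays sharp.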
A simple post-processing effect: render not only the scene colour but also its depth, then use that depth to control the blur strength. Note that this technique requires some tricks to get seamless transitions between objects at different blur levels.
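For approach (2), a common way to map depth to blur strength is a per-pixel circle-of-confusion (CoC) radius; a gather blur then averages neighbours within that radius. A minimal sketch on one grayscale scanline, assuming a hypothetical linear depth buffer; `focusDepth`, `strength`, and `maxRadius` are illustrative tuning parameters:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Circle-of-confusion radius in pixels: zero at the focus depth, growing
// with distance from it; clamped so far objects don't blur without bound.
static float cocRadius(float depth, float focusDepth,
                       float strength, float maxRadius) {
    float r = strength * std::fabs(depth - focusDepth) / depth;
    return std::min(r, maxRadius);
}

// Naive gather blur over one scanline of a grayscale image: each output
// pixel averages the inputs within its own CoC radius. A real version needs
// the extra tricks mentioned above to avoid bleeding between objects at
// different blur levels.
static void blurScanline(const float *color, const float *depth, float *out,
                         int width, float focusDepth,
                         float strength, float maxRadius) {
    for (int x = 0; x < width; ++x) {
        int r = (int)cocRadius(depth[x], focusDepth, strength, maxRadius);
        float sum = 0.0f;
        int n = 0;
        for (int i = std::max(0, x - r); i <= std::min(width - 1, x + r); ++i) {
            sum += color[i];
            ++n;
        }
        out[x] = sum / n;
    }
}
```

Pixels exactly at the focus depth get radius 0 and pass through unchanged; everything else is averaged over a widening neighbourhood.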
A more complex post-processing effect: create a depth buffer as before, then render an aperture-shaped particle for every pixel of the original scene, using the depth to control the particle size just as you would the blur strength.
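The scatter idea in (3), reduced to one grayscale scanline: every source pixel is splatted as a flat "particle" whose footprint comes from its CoC radius, and the accumulated colour is normalised by the accumulated weight. A real implementation would draw textured, aperture-shaped sprites on the GPU and deal with occlusion and draw order; this sketch only shows the accumulate-and-normalise structure, and all names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scatter-style depth of field on one grayscale scanline: each source pixel
// splats its colour over [x - r, x + r], where r stands in for the particle
// size derived from the depth buffer. Contributions are accumulated with
// equal weight and normalised at the end.
static std::vector<float> scatterDof(const std::vector<float> &color,
                                     const std::vector<int> &radius) {
    int width = (int)color.size();
    std::vector<float> acc(width, 0.0f), weight(width, 0.0f);
    for (int x = 0; x < width; ++x) {
        int lo = std::max(0, x - radius[x]);
        int hi = std::min(width - 1, x + radius[x]);
        for (int i = lo; i <= hi; ++i) {   // splat the "particle"
            acc[i] += color[x];
            weight[i] += 1.0f;
        }
    }
    for (int x = 0; x < width; ++x)
        acc[x] = weight[x] > 0.0f ? acc[x] / weight[x] : color[x];
    return acc;
}
```

With all radii zero the image passes through untouched; with nonzero radii each bright pixel's energy spreads into its neighbours, which is exactly the bokeh-like look this technique is after.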
(1) gives the best results but is the most expensive technique; (2) is the cheapest; (3) is quite tricky but provides a good cost/quality balance.