OpenCV: get perspective matrix from translation & rotation

Dave · Apr 24, 2014 · Viewed 9k times

I'm trying to verify my camera calibration, so I'd like to rectify the calibration images. I expect this will involve a call to warpPerspective, but I do not see an obvious function that takes the camera matrix and the rotation and translation vectors and generates the perspective matrix for that call.

Essentially I want to do the process described here (see especially the images towards the end) but starting with a known camera model and pose.

Is there a straightforward function call that takes the camera intrinsic and extrinsic parameters and computes the perspective matrix for use in warpPerspective?

I'll be calling warpPerspective after having called undistort on the image.

In principle, I could derive the solution by solving the system of equations defined at the top of the OpenCV camera calibration documentation after imposing the constraint Z=0, but I figure there must be a canned routine that will let me orthorectify my test images.

In my searches, I'm finding it hard to wade through all of the stereo calibration results -- I only have one camera, but want to rectify the image under the constraint that I'm only looking at a planar test pattern.

Answer

BConic · Apr 25, 2014

Actually there is no need to involve an orthographic camera. Here is how you can get the appropriate perspective transform.

If you calibrated the camera using cv::calibrateCamera, you obtained a camera matrix K, a vector of lens distortion coefficients D for your camera and, for each image that you used, a rotation vector rvec (which you can convert to a 3x3 matrix R using cv::Rodrigues) and a translation vector T. Consider one of these images and the associated R and T. After you call cv::undistort using the distortion coefficients, the image will be as if it had been acquired by a camera with projection matrix K * [ R | T ].
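For concreteness, here is a minimal C++ sketch of that setup (the pattern correspondences, the image, and the index of the view to rectify are assumed inputs; the function name is made up):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Recover K, R, T for one calibration view and undistort it.
    // objectPoints/imagePoints are the planar-pattern correspondences used for
    // calibration, and `view` is the index of the image we want to rectify.
    cv::Mat setupView(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                      const std::vector<std::vector<cv::Point2f>>& imagePoints,
                      const cv::Mat& image, int view,
                      cv::Mat& K, cv::Mat& R, cv::Mat& T)
    {
        cv::Mat D;
        std::vector<cv::Mat> rvecs, tvecs;
        cv::calibrateCamera(objectPoints, imagePoints, image.size(), K, D, rvecs, tvecs);

        cv::Rodrigues(rvecs[view], R);   // 3x1 rotation vector -> 3x3 rotation matrix
        T = tvecs[view];                 // 3x1 translation vector

        cv::Mat undistorted;
        cv::undistort(image, undistorted, K, D);  // now modeled by K * [R | T]
        return undistorted;
    }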

Basically (as @DavidNilosek intuited), you want to cancel the rotation and get the image as if it had been acquired by a projection matrix of the form K * [ I | -C ], where C = -R.inv()*T is the camera position. For that, you have to apply the following transformation:

Hr = K * R.inv() * K.inv()
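In OpenCV code this is a single line (a sketch; K and R are the CV_64F matrices obtained from the calibration above):

    // Cancels the rotation: maps the undistorted image to the view of a camera
    // with the same centre but projection matrix K * [I | -C].
    cv::Mat Hr = K * R.inv() * K.inv();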

The only potential problem is that the warped image might go outside the visible part of the image plane. Hence, you can use an additional translation to solve that issue, as follows:

     [ 1  0  |         ]
Ht = [ 0  1  | -K*C/Cz ]
     [ 0  0  |         ]

where Cz is the component of C along the Oz axis.
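A sketch of building Ht, reading the blank entries above as meaning the last row stays [ 0 0 1 ], i.e. Ht is a pure pixel translation (this reading is an assumption; C, Cz and the other names follow the snippets above):

    // Camera position C = -R^-1 * T, and its component along Oz.
    cv::Mat C = -R.inv() * T;
    double Cz = C.at<double>(2);

    // Use the x and y components of -K*C/Cz as the pixel translation.
    cv::Mat t = -K * C / Cz;
    cv::Mat Ht = cv::Mat::eye(3, 3, CV_64F);
    Ht.at<double>(0, 2) = t.at<double>(0);
    Ht.at<double>(1, 2) = t.at<double>(1);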

Finally, with the definitions above, H = Ht * Hr is a rectifying perspective transform for the considered image.
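Putting it together with the snippets above, the rectification itself might look like this (keeping the input image size for the output is an arbitrary choice):

    cv::Mat H = Ht * Hr;   // full rectifying perspective transform

    cv::Mat rectified;
    cv::warpPerspective(undistorted, rectified, H, undistorted.size());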