How do I retrieve the rotation matrix, the translation vector, and possibly a scaling factor for each camera using OpenCV, given pictures of an object taken from each camera's viewpoint? For every picture I have the image coordinates of several feature points, but not all feature points are visible in every picture. I want to map the computed 3D coordinates of the object's feature points onto a slightly different object, to align the second object's shape with the first.
I heard it is possible using cv::calibrateCamera(...), but I can't quite get through it...
Does anyone have experience with this kind of problem?
I was confronted with the same problem as you, in OpenCV. I had a stereo image pair and I wanted to compute the external parameters of the cameras and the world coordinates of all observed points. This problem has been treated here:
Berthold K. P. Horn. Relative Orientation Revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700
However, I wasn't able to find a suitable implementation of this problem (perhaps you will find one). Due to time limitations I could not work through all the maths in this paper and implement it myself, so I came up with a quick-and-dirty solution that works for me. I will explain what I did to solve it:
Assume we have two cameras, where the first camera has the external parameters RT = Matx::eye(), so its frame coincides with the world frame. Now make a guess about the rotation R of the second camera. For every pair of image points observed in both images, we compute the directions of their corresponding rays and store them in a 2D array dirs, each in its own camera's coordinate frame (EDIT: the internal camera parameters are assumed to be known; a sketch of how such a direction can be computed follows below). The first camera's rays are then already in world coordinates, and buildA rotates the second camera's rays by the guessed R. Now we build an overdetermined linear system AC = 0, where C is the centre of the second camera. I provide you with the function to compute A:
// Builds the matrix A of the linear system A*C = 0, one row per point pair.
// R is the guessed rotation of the second camera; dirs is a 2 x pointCount
// array of ray directions (Array<Vec3d, 2> is a custom 2D array type, not
// part of OpenCV; toVec converts the Matx product back to a Vec3d).
Mat buildA(Matx<double, 3, 3> &R, Array<Vec3d, 2> dirs)
{
    CV_Assert(dirs.size(0) == 2);
    int pointCount = dirs.size(1);
    Mat A(pointCount, 3, DataType<double>::type);
    Vec3d *a = (Vec3d *)A.data;
    for (int i = 0; i < pointCount; i++)
    {
        // Row i is the normalized cross product of the two ray directions,
        // so a[i].dot(C) is the perpendicular distance between the rays
        // when the second camera centre is at C.
        a[i] = dirs(0, i).cross(toVec(R * dirs(1, i)));
        double length = norm(a[i]);
        CV_Assert(length != 0.0); // parallel rays would produce a zero row
        a[i] *= (1.0 / length);
    }
    return A;
}
Then calling cv::SVD::solveZ(A) will give you the least-squares solution of norm 1 to this system. This way you obtain the rotation and the translation of the second camera (the translation only up to scale, since C is constrained to unit norm). However, since I just made a guess about the rotation of the second camera, I make several guesses about it (parameterized as a 3x1 vector omega, from which I compute the rotation matrix using cv::Rodrigues) and then refine each guess by solving the system AC = 0 repeatedly in a Levenberg-Marquardt optimizer with a numeric Jacobian. It works for me, but it is a bit dirty, so if you have time, I encourage you to implement what is explained in the paper instead.
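For reference, here is a minimal sketch of how such a ray direction can be computed from a pixel coordinate, assuming the 3x3 intrinsic matrix K of the observing camera is known (the function and its name pixelToRayDir are illustrative choices of mine, not taken from my actual code):

#include <opencv2/core.hpp>

using namespace cv;

// Back-project a pixel through the inverse intrinsics to obtain the
// direction of its viewing ray in the camera's own coordinate frame.
Vec3d pixelToRayDir(const Matx33d &K, const Point2d &pixel)
{
    Vec3d p(pixel.x, pixel.y, 1.0); // homogeneous pixel coordinates (u, v, 1)
    Vec3d d = K.inv() * p;          // ray from the camera centre through the pixel
    return d / norm(d);             // buildA expects unit-length directions
}

For the first camera these directions are already in world coordinates; for the second camera, buildA rotates them by the guessed R.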
EDIT:
Here is the routine in the Levenberg-Marquardt optimizer for evaluating the vector of residues:
void Stereo::eval(Mat &X, Mat &residues, Mat &weights)
{
    Matx<double, 3, 3> R2Ref = getRot(X); // Map the 3x1 rotation vector to a rotation matrix
    Mat A = buildA(R2Ref, _dirs);         // Compute the A matrix that measures the distance between ray pairs
    Vec3d c;
    Mat cMat(c, false);                   // Mat header over c, no data copy
    SVD::solveZ(A, cMat);                 // Find the optimal centre of the second camera at distance 1 from the first camera
    residues = A * cMat;                  // Output vector whose length the optimizer minimizes
    weights.setTo(1.0);                   // Weight all point pairs equally
}
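To make the refinement step concrete, below is a minimal sketch of the kind of hand-rolled Levenberg-Marquardt loop with a forward-difference numeric Jacobian that can drive a residue function like the one above. The names refineOmega and evalResidues are my illustrative choices, not my actual optimizer, and the sketch ignores the weights since eval sets them all to 1:

#include <functional>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp> // cv::Rodrigues

using namespace cv;

// evalResidues maps a 3x1 CV_64F rotation vector omega to a residue vector,
// e.g. by running Rodrigues + buildA + SVD::solveZ as in eval() above.
Mat refineOmega(Mat omega, const std::function<Mat(const Mat &)> &evalResidues,
                int maxIter = 50)
{
    double lambda = 1e-3;                 // LM damping factor
    Mat r = evalResidues(omega);
    for (int iter = 0; iter < maxIter; iter++)
    {
        const double h = 1e-6;            // step for the numeric Jacobian
        Mat J(r.rows, 3, CV_64F);
        for (int j = 0; j < 3; j++)       // forward differences, one column per parameter
        {
            Mat omegaPlus = omega.clone();
            omegaPlus.at<double>(j) += h;
            Mat col = (evalResidues(omegaPlus) - r) / h;
            col.copyTo(J.col(j));
        }
        // Damped normal equations: (J^T J + lambda*I) * delta = -J^T r
        Mat JtJ = J.t() * J + lambda * Mat::eye(3, 3, CV_64F);
        Mat rhs = -(J.t() * r);
        Mat delta;
        solve(JtJ, rhs, delta, DECOMP_CHOLESKY);
        Mat omegaNew = omega + delta;
        Mat rNew = evalResidues(omegaNew);
        if (norm(rNew) < norm(r))         // step improved the fit: accept, relax damping
        {
            omega = omegaNew;
            r = rNew;
            lambda *= 0.5;
        }
        else                              // step made it worse: reject, increase damping
        {
            lambda *= 10.0;
        }
    }
    return omega;                         // recover the rotation with cv::Rodrigues(omega, R)
}

Calling refineOmega from several initial guesses for omega and keeping the result with the smallest residue norm corresponds to the multi-guess strategy described above.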
By the way, I searched a little more on the internet and found some other code that could be useful for computing the relative orientation between cameras. I haven't tried it yet, but it looks useful:
http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html