Testing a fundamental matrix

noisecapella · Aug 23, 2012

My questions are:

  • How do I figure out if my fundamental matrix is correct?
  • Is the code I posted below a good effort toward that?

My end goal is to do some sort of 3D reconstruction. Right now I'm trying to calculate the fundamental matrix so that I can estimate the relative pose (rotation and translation) between the two camera positions. I'm doing this within openFrameworks, using the ofxCv addon, but for the most part it's just pure OpenCV. It's difficult to post code which isolates the problem since ofxCv is also in development.

My code reads in two 640x480 frames taken by my webcam from slightly different positions (basically just sliding the laptop a little bit horizontally). I already have a calibration matrix for the camera, obtained from ofxCv's calibration code, which uses findChessboardCorners; the undistortion example code seems to indicate that the calibration matrix is accurate. My code then calculates the optical flow between the pictures (with either calcOpticalFlowPyrLK or calcOpticalFlowFarneback) and feeds those point pairs to findFundamentalMat.
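
In plain OpenCV terms (without the ofxCv wrapper), that flow-to-F step looks roughly like the sketch below; the function name estimateF and the parameter values are just placeholders for illustration, not what my actual code uses:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: track corners from the first frame into the second with pyramidal LK,
// keep only the pairs that survived tracking, and estimate F robustly with RANSAC.
// Assumes both frames are single-channel 8-bit (grayscale) images.
cv::Mat estimateF(const cv::Mat& prevGray, const cv::Mat& nextGray)
{
    std::vector<cv::Point2f> pts1, pts2;
    cv::goodFeaturesToTrack(prevGray, pts1, 500, 0.01, 8);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, nextGray, pts1, pts2, status, err);

    // Drop the points the tracker lost before estimating F.
    std::vector<cv::Point2f> good1, good2;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) {
            good1.push_back(pts1[i]);
            good2.push_back(pts2[i]);
        }
    }

    // RANSAC tolerates the bad flow vectors that are inevitable with webcam frames;
    // "mask" marks which point pairs were kept as inliers.
    std::vector<uchar> mask;
    return cv::findFundamentalMat(good1, good2, cv::FM_RANSAC, 3.0, 0.99, mask);
}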

To test if the fundamental matrix is valid, I decomposed it into a rotation matrix and a translation matrix. I then multiplied the rotation matrix by the points of the second image to see what the rotation difference between the cameras was. I figured that any difference should be small, but I'm getting big differences.

Here are the fundamental, rotation, and translation matrices from my last run, if it helps:

fund: [-8.413948689969405e-07, -0.0001918870646474247, 0.06783422344973795;
    0.0001877654679452431, 8.522397812179886e-06, 0.311671691674232;
    -0.06780237856576941, -0.3177275967586101, 1]
R: [0.8081771697692786, -0.1096128431920695, -0.5786490187247098;
    -0.1062963539438068, -0.9935398408215166, 0.03974506055610323;
    -0.5792674230456705, 0.02938723035105822, -0.8146076621848839]
t: [0, 0.3019063882496216, -0.05799044915951077;
    -0.3019063882496216, 0, -0.9515721940769112;
    0.05799044915951077, 0.9515721940769112, 0]

Here's my portion of the code, which occurs after the second picture is taken:

const ofImage& image1 = images[images.size() - 2];
const ofImage& image2 = images[images.size() - 1];

std::vector<cv::Point2f> points1 = flow->getPointsPrev();
std::vector<cv::Point2f> points2 = flow->getPointsNext();

std::vector<cv::KeyPoint> keyPoints1 = convertFrom(points1);
std::vector<cv::KeyPoint> keyPoints2 = convertFrom(points2);

std::cout << "points1: " << points1.size() << std::endl;
std::cout << "points2: " << points2.size() << std::endl;


fundamentalMatrix = (cv::Mat)cv::findFundamentalMat(points1, points2);
cv::Mat cameraMatrix = (cv::Mat)calibration.getDistortedIntrinsics().getCameraMatrix();
cv::Mat cameraMatrixInv = cameraMatrix.inv();
std::cout << "fund: " << fundamentalMatrix << std::endl;

essentialMatrix = cameraMatrix.t() * fundamentalMatrix * cameraMatrix; // E = K^T * F * K, same camera for both views

cv::SVD svd(essentialMatrix);
Matx33d W(0,-1,0,   //HZ 9.13
          1,0,0,
          0,0,1);

cv::Mat_<double> R = svd.u * Mat(W).inv() * svd.vt; //HZ 9.19

std::cout << "R: " << (cv::Mat)R << std::endl;
Matx33d Z(0, -1, 0,
          1, 0, 0,
          0, 0, 0);
cv::Mat_<double> t = svd.vt.t() * Mat(Z) * svd.vt;
std::cout << "t: " << (cv::Mat)t << std::endl;

Vec3d tVec = Vec3d(t(1,2), t(2,0), t(0,1)); // pull the translation components out of the skew-symmetric matrix t

Matx34d P1 = Matx34d(R(0,0),    R(0,1), R(0,2), tVec(0),
                     R(1,0),    R(1,1), R(1,2), tVec(1),
                     R(2,0),    R(2,1), R(2,2), tVec(2));
ofMatrix4x4 ofR(R(0,0),    R(0,1), R(0,2), 0,
                R(1,0),    R(1,1), R(1,2), 0,
                R(2,0),    R(2,1), R(2,2), 0,
                0, 0, 0, 1);
ofRs.push_back(ofR);

cv::Matx34d P(1,0,0,0,
              0,1,0,0,
              0,0,1,0);

// sample a grid of pixels, rotate each grid point by R, and add the original and
// rotated points (with the colors from the two images) to the mesh
for (int y = 0; y < image1.height; y += 10) {
    for (int x = 0; x < image1.width; x += 10) {
        Vec3d vec(x, y, 0);

        Point3d point1(vec.val[0], vec.val[1], vec.val[2]);
        Vec3d result = (cv::Mat)((cv::Mat)R * (cv::Mat)vec);
        Point3d point2 = result;


        mesh.addColor(image1.getColor(x, y));
        mesh.addVertex(ofVec3f(point1.x, point1.y, point1.z));

        mesh.addColor(image2.getColor(x, y));
        mesh.addVertex(ofVec3f(point2.x, point2.y, point2.z));
    }
}

Any ideas? Does my fundamental matrix look correct, or do I have the wrong idea in testing it?

Answer

Ankur · Aug 24, 2012

If you want to find out whether your fundamental matrix is correct, you should compute the error it produces. Using the epipolar constraint equation x2^T * F * x1 = 0, you can check how close the detected features in one image lie to the epipolar lines induced by their matches in the other image. Ideally each of these dot products should be 0, so the error is computed as the sum of absolute distances (SAD) between the points and their corresponding epipolar lines, and the mean of the SAD is reported as the stereo calibration error. In other words, you are summing the distances of the computed features in image_left (these could be chessboard corners) from their corresponding epipolar lines. This error is measured in pixels; anything below 1 is generally acceptable.
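
Here is a rough sketch of that check in code, assuming points1/points2 are the matched pixel coordinates you fed to findFundamentalMat and F is the resulting fundamental matrix (the function name meanEpipolarError is just for illustration):

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Average distance (in pixels) of each point from the epipolar line induced by its
// match in the other image. computeCorrespondEpilines returns lines normalized so
// that a^2 + b^2 = 1, so |a*x + b*y + c| is the point-to-line distance.
double meanEpipolarError(const std::vector<cv::Point2f>& points1,
                         const std::vector<cv::Point2f>& points2,
                         const cv::Mat& F)
{
    std::vector<cv::Vec3f> linesIn2, linesIn1;
    cv::computeCorrespondEpilines(points1, 1, F, linesIn2); // lines in image 2 for points of image 1
    cv::computeCorrespondEpilines(points2, 2, F, linesIn1); // lines in image 1 for points of image 2

    double err = 0.0;
    for (size_t i = 0; i < points1.size(); ++i) {
        err += std::fabs(linesIn2[i][0] * points2[i].x + linesIn2[i][1] * points2[i].y + linesIn2[i][2]);
        err += std::fabs(linesIn1[i][0] * points1[i].x + linesIn1[i][1] * points1[i].y + linesIn1[i][2]);
    }
    return err / (2.0 * points1.size());
}

If this mean error stays around a pixel or less, the fundamental matrix is consistent with your point matches; a large value means either F or the matches are bad.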

OpenCV has code examples for this: look at the stereo_calib.cpp sample, which shows you how to compute this error. https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/c/stereo_calib.cpp?rev=2614 See "avgErr", lines 260-269.

Ankur