Finding the translation and scale between two sets of points that minimize the least-squares error in their distances?

user1476710 · Nov 17, 2012 · Viewed 17.8k times

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point from one set corresponds to which point in the other. I need to find the 3D translation and scaling factor that transform the reconstructed set so that the sum of squared distances is smallest (rotation would be nice too, but the points are rotated similarly, so it is not the main priority and may be omitted for the sake of simplicity and speed). So my question is: is this already solved and available somewhere on the Internet? Personally, I would use a least-squares method, but I don't have much time (and although I'm reasonably good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.

If there is no such solution, I will work it out myself; I don't want to bother you too much.
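If rotation really is omitted, as described above, the least-squares fit of scale and translation alone has a simple closed form: s = sum_i (x_i - mean_x) . (y_i - mean_y) / sum_i |x_i - mean_x|^2 and t = mean_y - s * mean_x. A minimal sketch of this using Eigen (the function and variable names are illustrative, not taken from any library):

#include <utility>
#include <vector>
#include <Eigen/Core>

// least-squares scale s and translation t so that y[i] ~ s * x[i] + t
// (illustrative sketch only; no rotation is estimated here)
static std::pair<double, Eigen::Vector3d> Fit_Scale_Translation(
    const std::vector<Eigen::Vector3d> &x, const std::vector<Eigen::Vector3d> &y)
{
    const size_t n = x.size();
    Eigen::Vector3d mx = Eigen::Vector3d::Zero(), my = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) { mx += x[i]; my += y[i]; }
    mx /= double(n); my /= double(n);
    // centroids of both point sets

    double num = 0, den = 0;
    for(size_t i = 0; i < n; ++ i) {
        num += (x[i] - mx).dot(y[i] - my); // cross term
        den += (x[i] - mx).squaredNorm();  // spread of the source set
    }
    const double s = num / den;            // least-squares scale
    const Eigen::Vector3d t = my - s * mx; // least-squares translation
    return std::make_pair(s, t);
}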

SOLUTION: (from your answers)
For me it's the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
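For reference, a condensed sketch of the rotation-only Kabsch steps described in the links above, using Eigen (the function and variable names here are illustrative, not taken from the linked pages; the answer below gives a complete implementation that also handles scale):

#include <vector>
#include <Eigen/Core>
#include <Eigen/SVD>

// rotation-only Kabsch: finds R that best maps the centered x[i] onto the centered y[i]
static Eigen::Matrix3d Kabsch_Rotation(
    const std::vector<Eigen::Vector3d> &x, const std::vector<Eigen::Vector3d> &y)
{
    const size_t n = x.size();
    Eigen::Vector3d mx = Eigen::Vector3d::Zero(), my = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) { mx += x[i]; my += y[i]; }
    mx /= double(n); my /= double(n);
    // centroids of both sets

    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i)
        H += (x[i] - mx) * (y[i] - my).transpose();
    // cross-covariance of the centered points

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if(R.determinant() < 0) { // correct a reflection into a proper rotation
        Eigen::Vector3d e(1, 1, -1);
        R = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    }
    return R;
}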

STILL NOT SOLVED: I also need the scale. The scale values from the SVD are not understandable to me; when I need a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which does not help at all.

Answer

the swine · Aug 27, 2015

Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to also recover the scale. All you need to do is compute the variance of your points and calculate the scale as:

(1/sigma^2)*trace(D*S)

where D is the diagonal matrix of singular values from the SVD used in the rotation estimation and S is either the identity matrix or a diag(1, 1, -1) matrix, depending on the sign of the determinant of U*V^T (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +1 or -1 (depending on the sign of the determinant of U*V^T), sum all three, and divide by the variance of your points (the sigma^2 in the formula) to get the scale. Note that in the code below neither the covariance matrix nor the accumulated squared deviations are divided by the number of points, so the factors of n cancel.
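Concretely, with the example values above: if the determinant of U*V^T is positive the scale is (2000 + 200 + 20) / sigma^2, and if it is negative it is (2000 + 200 - 20) / sigma^2. A small sketch of just this step in Eigen (cov and f_sigma_sq are placeholder names, not identifiers from the code below):

#include <Eigen/Core>
#include <Eigen/SVD>

// scale = (1/sigma^2) * trace(D*S), with S flipping the smallest singular value on reflections
static double f_Scale_From_SVD(const Eigen::Matrix3d &cov, double f_sigma_sq)
{
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Vector3d D = svd.singularValues(); // e.g. [2000, 200, 20]
    double f_det = (svd.matrixU() * svd.matrixV().transpose()).determinant();
    Eigen::Vector3d S(1, 1, (f_det < 0)? -1.0 : 1.0); // diagonal of the S matrix
    return D.dot(S) / f_sigma_sq;
    // pass the variance if cov is normalized by n, or the plain sum of squared
    // deviations if it is not (as in the code below); the factors of n must match
}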

You can recycle the following code, which uses the Eigen library:

#include <vector>
#include <utility>
#include <cassert>
#include <Eigen/Core>
#include <Eigen/SVD>
#include <Eigen/Geometry>

typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // Microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector; for other compilers or for 64-bit builds, feel free to replace this with Eigen::Vector3d

/**
 *  @brief aligns two sets of points, estimating rotation, translation and scale
 *
 *  This calculates a relative pose <tt>R, t</tt> such that:
 *
 *  @code
 *  _TyVector v_pose = R * r_vertices[i] + t;
 *  double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 *  @endcode
 *
 *  The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 *  @param[in] r_vertices is a set of vertices to be aligned
 *  @param[in] r_tar_vertices is a set of vertices to align to
 *
 *  @return Returns a relative pose (with the scale folded into <tt>R</tt>) that aligns the two given sets of points.
 *
 *  @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    assert(r_tar_vertices.size() == r_vertices.size()); // the two sets must contain corresponding points
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate the centroids of both point sets

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
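A quick usage sketch (everything here, including the sample data and the column-norm trick to read the scale back out of R, is an illustrative assumption on top of the answer, not part of it):

#include <iostream> // in addition to the headers included above

int main()
{
    std::vector<Vector3d_U> src, tar;
    for(int i = 0; i < 10; ++ i) {
        Eigen::Vector3d p = Eigen::Vector3d::Random();
        src.push_back(p);
        tar.push_back(2.5 * p + Eigen::Vector3d(1, 2, 3)); // known scale 2.5, translation (1, 2, 3)
    }
    // build two corresponding point sets

    std::pair<Eigen::Matrix3d, Eigen::Vector3d> Rt = t_Align_Points(src, tar);
    // estimate the (scaled) rotation and the translation

    double f_scale = Rt.first.col(0).norm(); // R contains the scale, so the norm of any column recovers it
    Eigen::Vector3d v_aligned = Rt.first * Eigen::Vector3d(src[0]) + Rt.second; // should be close to tar[0]

    std::cout << "scale: " << f_scale << std::endl; // expect roughly 2.5
    std::cout << "error: " << (v_aligned - Eigen::Vector3d(tar[0])).norm() << std::endl; // expect roughly 0

    return 0;
}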