I want to match feature points in stereo images. I've already detected and extracted the feature points with different algorithms, and now I need a good matching. In this case I'm using the FAST algorithm for detection and extraction and the BruteForceMatcher for matching the feature points.
The matching code:
vector< vector<DMatch> > matches;
//using either FLANN or BruteForce
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 1 );

//just some temporary code to get the right data structure
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    good_matches2.push_back(matches[i][0]);
}
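As a side note, when only the single best match per descriptor is needed, DescriptorMatcher::match() already returns a flat vector<DMatch>, so the flattening loop above could be dropped. A minimal sketch, reusing the names from the snippet above:

//alternative: let the matcher return the best match per query descriptor directly
vector< DMatch > best_matches;
matcher->match( descriptors_1, descriptors_2, best_matches );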
Because there are a lot of false matches, I calculated the min and max distance and removed all matches that are too bad:
//calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    double dist = good_matches2[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

//find the "good" matches
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    if( good_matches2[i].distance <= 5*min_dist )
    {
        good_matches.push_back( good_matches2[i] );
    }
}
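To check the result visually, the surviving matches can be drawn with cv::drawMatches. A minimal sketch, where leftImageGrey and rightImageGrey are assumed names for the two grayscale input images:

//draw and display the filtered matches (image variable names are placeholders)
Mat img_matches;
drawMatches( leftImageGrey, keypoints_1, rightImageGrey, keypoints_2, good_matches, img_matches );
imshow( "Good matches", img_matches );
waitKey(0);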
The problem is that I either get a lot of false matches or only a few correct ones (see the images below).
[example images of the matching results] (source: codemax.de)
I think it's not a programming problem but rather a matching issue. As far as I understand it, the BruteForceMatcher only considers the visual distance of feature points (i.e. the distance between the descriptors computed by the FeatureExtractor), not the spatial distance (x/y position), which in my case is important too. Does anybody have experience with this problem, or a good idea to improve the matching results?
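One way to bring the spatial position into the filtering, if the stereo pair is rectified so that corresponding points lie on (almost) the same image row, is to reject matches whose keypoints differ too much in y. This is only a sketch under that rectification assumption; maxRowDiff is a made-up tolerance:

//sketch: drop matches violating the (assumed) rectified-stereo row constraint
const float maxRowDiff = 2.0f; //hypothetical tolerance in pixels
vector< DMatch > row_filtered;
for (size_t i = 0; i < good_matches2.size(); ++i)
{
    Point2f p1 = keypoints_1[good_matches2[i].queryIdx].pt;
    Point2f p2 = keypoints_2[good_matches2[i].trainIdx].pt;
    if (fabs(p1.y - p2.y) <= maxRowDiff)
    {
        row_filtered.push_back(good_matches2[i]);
    }
}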
EDIT
I changed the code so that it gives me the 50 best matches. After this I go through the matches in order, checking whether each one lies inside a specified area. If it doesn't, I take the next match, until I have found one inside the given area.
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );

//keep a match only if it lies inside a defined area of the image
double thresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));

vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    for (size_t j = 0; j < matches[i].size(); ++j)
    {
        //calculate local distance for each possible match
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
        //save as best match if local distance is in specified area
        if (dist < thresholdDist)
        {
            good_matches2.push_back(matches[i][j]);
            break; //stop at the first (best) match inside the area
        }
    }
}
I don't think I get more matches this way, but I'm able to remove more false matches:
[matching result after the distance filter] (source: codemax.de)
An alternate method of determining high-quality feature matches is the ratio test proposed by David Lowe in his paper on SIFT (see page 20 for an explanation). This test rejects poor matches by comparing the distance of the best match to that of the second-best one: a match is kept only if the best distance is clearly smaller than the second-best (i.e. the ratio is below some threshold); otherwise it is discarded as ambiguous.
std::vector<std::vector<cv::DMatch>> matches;
cv::BFMatcher matcher;
matcher.knnMatch(descriptors_1, descriptors_2, matches, 2);  // Find the two nearest matches

const float ratio = 0.8f; // As in Lowe's paper; can be tuned
std::vector<cv::DMatch> good_matches;
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i][0].distance < ratio * matches[i][1].distance)
    {
        good_matches.push_back(matches[i][0]);
    }
}
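If false matches survive the ratio test, one further option is to verify the remaining matches geometrically: estimate the fundamental matrix with RANSAC via cv::findFundamentalMat and keep only the inlier matches. A rough sketch, assuming keypoints_1/keypoints_2 are the detected keypoints and enough matches (at least 8) are available:

// Sketch: geometric verification of the surviving matches via RANSAC
std::vector<cv::Point2f> pts1, pts2;
for (size_t i = 0; i < good_matches.size(); ++i)
{
    pts1.push_back(keypoints_1[good_matches[i].queryIdx].pt);
    pts2.push_back(keypoints_2[good_matches[i].trainIdx].pt);
}

std::vector<uchar> inlier_mask;
cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlier_mask);

std::vector<cv::DMatch> inlier_matches;
for (size_t i = 0; i < good_matches.size(); ++i)
{
    if (inlier_mask[i]) // keep only matches consistent with the epipolar geometry
    {
        inlier_matches.push_back(good_matches[i]);
    }
}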