How to calculate % score from ORB algorithm?

user93 · Sep 16, 2016 · Viewed 7.7k times

I am using the ORB algorithm in OpenCV 2.4.9 with Python to compare images. ORB does not return a similarity score as a percentage. Is there any way to get one?

My code to compare images using ORB is as follows:

img1 = cv2.imread("img11.jpg",0) 
img2 = cv2.imread("img2.jpg",0)
# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)

good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
if len(good) > 20:
   print "similar image"

I did find a solution on Stack Overflow that does this for the SIFT algorithm in MATLAB, but is there an external library that can easily be used with Python and OpenCV to do the same?

Answer

bfris · May 3, 2018

I don't think keypoint matching lends itself to a percentage score, regardless of whether you use ORB or SIFT.

I think the OP was referring to this post, which does give hints on how to arrive at a score for each match. The score is the sum of the squared distances of the two items in a match pair, i.e.

m.distance**2 + n.distance**2

where m and n are from the code the OP posted. However, this score bears no resemblance to a percentage, and I'm not sure you're going to find one. The magic number of 0.75 in the OP's code is known in some places as the Lowe ratio, first proposed in [D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision 60(2), 91-110, 2004]. It's as good a figure of merit as any, but it needs to be adjusted for the keypoint detection algorithm (e.g. ORB, SIFT, etc.). To determine whether you've found a good match, it's common to tweak the Lowe ratio and then count the number of good matches. The Homography tutorial (for OpenCV 2.4 or 3.4.1) is a good example of this.
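
For concreteness, here is a minimal sketch (not part of the original answer) of how that per-pair score and a rough "fraction of matches passing the ratio test" figure could be computed from the matches list in the OP's code. The normalization by the total number of knn pairs is an arbitrary heuristic of mine, not a true similarity percentage:

# Sketch only: reuses the matches list from bf.knnMatch(des1, des2, k=2) above.
lowe_ratio = 0.75
good = []
pair_scores = []
for m, n in matches:
    if m.distance < lowe_ratio * n.distance:
        good.append(m)
        pair_scores.append(m.distance**2 + n.distance**2)  # per-pair score from the linked post

# Heuristic only: share of knn pairs surviving the ratio test, scaled to 0-100.
pct = 100.0 * len(good) / max(len(matches), 1)
print('%d of %d matches passed the ratio test (%.1f%%)' % (len(good), len(matches), pct))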

I'm using OpenCV 3.4, and ORB does return matches, just not as many as SIFT. Using the tutorial images "box.png" and "box_in_scene.png", I get 79 "good" matches with SIFT and 7(!) "good" matches with ORB.

However, if I crank the magic number up from 0.75 to 0.89 for ORB, I get 79 "good" matches.

Full code using Python 3.4.4 and OpenCV 3.4. Syntax and operation should be very similar for OpenCV 2.4.9:

# This time, we will use BFMatcher.knnMatch() to get k best matches. 
# In this example, we will take k=2 so that we can apply ratio test 
# explained by D.Lowe in his paper. 

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img1 = cv.imread('box.png',0)          # queryImage
img2 = cv.imread('box_in_scene.png',0) # trainImage

method = 'ORB'  # 'SIFT'
lowe_ratio = 0.89

if method == 'ORB':
    finder = cv.ORB_create()
elif method == 'SIFT':
    finder = cv.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with the chosen detector
kp1, des1 = finder.detectAndCompute(img1,None)
kp2, des2 = finder.detectAndCompute(img2,None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []

for m,n in matches:
    if m.distance < lowe_ratio*n.distance:
        good.append([m])

msg1 = 'using %s with lowe_ratio %.2f' % (method, lowe_ratio)
msg2 = 'there are %d good matches' % (len(good))

img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,good, None, flags=2)

font = cv.FONT_HERSHEY_SIMPLEX
cv.putText(img3,msg1,(10, 250), font, 0.5,(255,255,255),1,cv.LINE_AA)
cv.putText(img3,msg2,(10, 270), font, 0.5,(255,255,255),1,cv.LINE_AA)
fname = 'output_%s_%.2f.png' % (method, lowe_ratio)
cv.imwrite(fname, img3)

plt.imshow(img3),plt.show()
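
Since the count of "good" matches is so sensitive to the ratio (7 at 0.75 versus 79 at 0.89 for ORB), it can be handy to sweep it. A small sketch, reusing des1, des2 and bf from the code above:

# Sweep the Lowe ratio and report how many matches survive at each value.
matches = bf.knnMatch(des1, des2, k=2)
for ratio in (0.70, 0.75, 0.80, 0.85, 0.89, 0.95):
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    print('lowe_ratio=%.2f -> %d good matches' % (ratio, len(good)))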

Using these images for input:

[input images: box.png and box_in_scene.png]

I get these results:

[output images: keypoint matches drawn for SIFT and for ORB]

However, it's worth noting that ORB gives many more bogus matches that are off the Bastoncini box.
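
One way to weed out those bogus matches, following the Homography tutorial mentioned above, is to fit a homography with RANSAC and keep only the inlier matches. A sketch, reusing kp1, kp2 and the good list from the code above (each entry of good is a one-element list [m]):

import numpy as np

# Need at least 4 point pairs to estimate a homography.
if len(good) >= 4:
    src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    print('%d of %d good matches are RANSAC inliers' % (inliers, len(good)))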