I'm trying to use the ORB keypoint detector, and it seems to be returning far fewer points than the SIFT detector and the FAST detector.
This image shows the keypoints found by the ORB detector:
and this image shows the keypoints found by the SIFT detection stage (FAST returns a similar number of points).
Having so few points is resulting in very poor feature matching across images. For now I'm only asking about the detection stage of ORB, because these results look incorrect. I've tried the ORB detector with default parameters and also with the custom parameters detailed below.
Why such a big difference?
Code:
import cv2
from matplotlib import pyplot as plt

# ORB with explicit parameters (the commented-out line uses the defaults)
orb = cv2.ORB_create(edgeThreshold=15, patchSize=31, nlevels=8,
                     fastThreshold=20, scaleFactor=1.2, WTA_K=2,
                     scoreType=cv2.ORB_HARRIS_SCORE, firstLevel=0,
                     nfeatures=500)
# orb = cv2.ORB_create()

kp2 = orb.detect(img2, None)

img2_kp = cv2.drawKeypoints(img2, kp2, None, color=(0, 255, 0),
                            flags=cv2.DrawMatchesFlags_DEFAULT)
plt.figure()
plt.imshow(img2_kp)
plt.show()
Increasing nfeatures increases the number of detected corners, and the choice of scoreType (Harris vs. FAST) seems irrelevant. I'm not sure how this parameter is passed down to the FAST or Harris stages, but it works:
orb = cv2.ORB_create(scoreType=cv2.ORB_FAST_SCORE)
orb = cv2.ORB_create(nfeatures=100000, scoreType=cv2.ORB_FAST_SCORE)