How to use OpenCV cornerSubPix() in Python?

Carlos · Nov 7, 2016 · Viewed 9.7k times

I'm trying to detect and paint corner points in an image. Right now I have a list of tuples in the format (row, column, scale) (the scale is there because I'm using a Gaussian pyramid), obtained by running a Harris corner detector and non-maximum suppression manually. This list is featuresy1.

My code is the following:

r,g,b=cv2.split(image)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
cv2.cornerSubPix( r, featuresy1, (5,5), (-1,1), criteria )

Where image is a grayscale image containing three identical shapes. As you can see, I'm passing cornerSubPix a structure like this as its second parameter: [(x1,y1,scale1),(x2,y2,scale2),...,(xn,yn,scalen)].

This is throwing the following error:

cv2.cornerSubPix( r, featuresy1, (5,5), (-1,1), criteria )
TypeError: corners is not a numpy array, neither a scalar

So I wonder what type, format, or structure featuresy1 should have to make cornerSubPix() work. Is this the only thing I'm doing wrong? There isn't much documentation about this.

Thanks!

Answer

Yonatan Simson · May 9, 2017

You need to make sure corners is a NumPy array with 3 dimensions, shaped (n, 1, 2), where n is the number of corners. It also has to be of type float32.

Just type

corners.shape

to verify this.

Type in

corners.dtype

to check that you are using float32.
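
For example, a valid array of two corners might look like this (the coordinate values are just illustrative):

import numpy as np

# Two corners as (x, y) pairs, shaped (n, 1, 2) and typed float32
corners = np.array([[[10.0, 20.0]],
                    [[30.0, 40.0]]], dtype=np.float32)

print(corners.shape)   # (2, 1, 2)
print(corners.dtype)   # float32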

To me it looks like your corners argument featuresy1 is a list where it should be a NumPy array. Convert it to a NumPy array first:

featuresy1 = np.array(featuresy1, dtype=np.float32)
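
Note that this alone still gives an array of shape (n, 3), because each tuple carries the scale value. A fuller sketch, assuming featuresy1 holds (row, column, scale) tuples and r is the single-channel image from your code, could look like this:

import cv2
import numpy as np

# Keep only (x, y) = (column, row); cornerSubPix takes no scale value.
corners = np.array([(col, row) for (row, col, scale) in featuresy1],
                   dtype=np.float32).reshape(-1, 1, 2)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# zeroZone is usually (-1, -1), meaning no dead region inside the search window.
# cornerSubPix refines the points in place and also returns them.
corners = cv2.cornerSubPix(r, corners, (5, 5), (-1, -1), criteria)

If your points come from a coarser pyramid level, remember to rescale them to the resolution of r before refining.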

OpenCV has an easy-to-understand example that might help you.