I am writing a facial recognition program, and I keep getting this error when I try to train my recognizer:
TypeError: Expected cv::UMat for argument 'labels'
My code is:
def detect_face(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    (x, y, w, h) = faces[0]
    return gray[y:y+w, x:x+h], faces[0]
def prepare_training_data():
    faces = []
    labels = []
    for img in photo_name_list:  # a collection of file locations as strings
        image = cv2.imread(img)
        face, rect = detect_face(image)
        if face is not None:
            faces.append(face)
            labels.append("me")
    return faces, labels
def test_photos():
    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    faces, labels = prepare_training_data()
    face_recognizer.train(faces, np.ndarray(labels))
labels is the list of labels, one for each photo in the image list returned from prepare_training_data, and I convert it to a numpy array because I read that is what train() needs.
Solution - labels should be a list of integers, and you should convert it with numpy.array(labels) (or np.array(labels)). Note that np.ndarray is the low-level array constructor, which expects a shape rather than your data, so np.ndarray(labels) is not the right call here.
A dummy example to check that the error is gone:
labels = [0] * len(faces)
face_recognizer.train(faces, np.array(labels))
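Applied to the code in the question, that means two changes: append an integer label instead of the string "me", and build the array with np.array rather than np.ndarray. A rough sketch, assuming photo_name_list and detect_face are defined as in the question:

import cv2
import numpy as np

def prepare_training_data():
    faces = []
    labels = []
    for img in photo_name_list:  # a collection of file locations as strings
        image = cv2.imread(img)
        face, rect = detect_face(image)
        if face is not None:
            faces.append(face)
            labels.append(0)  # integer label standing in for "me"
    return faces, labels

def test_photos():
    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    faces, labels = prepare_training_data()
    # train() expects a list of grayscale face images and an integer label array
    face_recognizer.train(faces, np.array(labels))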
I haven't found any Python documentation for the OpenCV face recognizers, so I started looking through the C++ documentation and examples. According to that documentation, the library takes the labels input for train as a std::vector<int>. The C++ example provided in the OpenCV docs also uses vector<int> labels. The library even has a dedicated error for non-integer input.
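Since the recognizer only stores the integer, a common pattern is to keep your own mapping from label to person name and translate back after predict(). A minimal sketch, where name_for_label and identify are just illustrative names, assuming face_recognizer has already been trained as above:

# illustrative mapping from integer label to person name
name_for_label = {0: "me"}

def identify(face_img, face_recognizer):
    # predict() returns the integer label plus a confidence value
    # (for LBPH, a lower confidence means a closer match)
    label, confidence = face_recognizer.predict(face_img)
    return name_for_label.get(label, "unknown"), confidence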