Python TypeError: UMat() missing required argument 'ranges' (pos 2)

Tyler Strouth · Jan 21, 2019 · Viewed 11.6k times

I am writing a facial recognition program and I keep getting this error. I am very confused, because I see no other examples on the web where people include ranges when converting to a UMat:

Traceback (most recent call last):
  File "test.py", line 48, in <module>
    test_photos()
  File "test.py", line 40, in test_photos
    face, rect = detect_face(test_photo)
  File "test.py", line 15, in detect_face
    imgUMat = cv2.UMat(img)
TypeError: UMat() missing required argument 'ranges' (pos 2)

My code is:

def detect_face(img):   
    imgUMat = cv2.UMat(img)
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if (len(faces)==0):
        return None, None
    (x, y, w, h) = faces[0]
    gray = gray.get()
    return gray[y:y+h,x:x+w], faces[0]

def prepare_training_data():
    faces = []
    labels = []
    for img in photo_name_list: #a collection of file locations as strings
        image = cv2.imread(img)
        face, rect = detect_face(image)
        if face is not None:
            faces.append(face)
            labels.append(me)
    return faces, labels

def test_photos():
    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    faces, labels = prepare_training_data()
    face_recognizer.train(np.array(faces), np.array(labels))
    face, rect = detect_face(test_photo)
    label = face_recognizer.predict(face)
    if label == me:
        print("it's me")
    else:
        print("it's not me")


test_photos()

If I do not use UMat(), then I get this error:

Traceback (most recent call last):
  File "test.py", line 48, in <module>
    test_photos()
  File "test.py", line 40, in test_photos
    face, rect = detect_face(test_photo)
  File "test.py", line 16, in detect_face
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_BGR2GRAY)
TypeError: Expected cv::UMat for argument 'src'

I am using OpenCV 4.0.0. To be honest, I am just very confused, because from what I have seen no one else has had to use UMat to call cvtColor(), let alone pass ranges inside UMat(). Any help would be greatly appreciated.
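
For reference, this is the kind of minimal usage I see everywhere else, where cvtColor is called directly on the NumPy array returned by cv2.imread, with no UMat conversion at all (the file name below is just a placeholder):

import cv2

img = cv2.imread("photo.jpg")  # placeholder path; imread returns None if the file can't be read
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # works on a plain uint8 ndarray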

Answer

Varun Mathur · Apr 23, 2019

Instead of converting to a UMat with cv2.UMat(), just pass the image in as np.float32(img); for this purpose the two work the same way.

Your code would look like this:

def detect_face(img):   
    imgUMat = np.float32(img)
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_BGR2GRAY)
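
As a rough sketch, the whole function might end up like this (imports included). Two side effects of the float32 conversion are worth noting: detectMultiScale expects an 8-bit image, so the grayscale result is converted back to uint8 before detection, and the gray.get() call from the question is no longer needed because gray is already a plain NumPy array:

import cv2
import numpy as np

def detect_face(img):
    # pass a NumPy float32 array instead of wrapping the image in cv2.UMat
    img32 = np.float32(img)
    gray = cv2.cvtColor(img32, cv2.COLOR_BGR2GRAY)
    # the cascade detector expects an 8-bit image, so convert back before detecting
    gray = np.uint8(gray)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    (x, y, w, h) = faces[0]
    # no .get() needed here, since gray is already a NumPy array
    return gray[y:y+h, x:x+w], faces[0]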