How to get points coordinate position in the face landmark detection program of dlib?

randy Pen · Sep 30, 2016 · Viewed 17.4k times

There is an example Python program in dlib, face_landmark_detection.py, that detects face landmark positions.

The program detects facial features and marks the landmarks with dots and lines on the original photo.

I wonder if it is possible to obtain each point's coordinate position, e.g. a(10, 25), where 'a' denotes a corner of the mouth.

After slightly modifying the program to process one picture at a time, I tried to print the values of dets and shape, without success.

>>>print(dets)
<dlib.dlib.rectangles object at 0x7f3eb74bf950>
>>>print(dets[0])
[(1005, 563) (1129, 687)]
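
For reference, dets is a dlib.rectangles container whose elements are dlib.rectangle objects, so the corner coordinates can be read as plain integers. A minimal sketch, continuing the interactive session above:

r = dets[0]
# Each rectangle exposes its corners through integer accessors.
print(r.left(), r.top(), r.right(), r.bottom())   # e.g. 1005 563 1129 687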

The attributes that expose the face landmark points, and their data types, are still unknown to me. Here is the simplified code:

import dlib
from skimage import io

# shape_predictor_68_face_landmarks.dat is the trained model file, in the same directory
predictor_path = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
win = dlib.image_window()

#FDT.jpg is the picture file to be processed in the same directory
img = io.imread("FDT.jpg")

win.set_image(img)

dets = detector(img)

print("Number of faces detected: {}".format(len(dets)))
for k, d in enumerate(dets):
    print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
        k, d.left(), d.top(), d.right(), d.bottom()))
    # Get the landmarks/parts for the face in box d.
    shape = predictor(img, d)
    #print(shape)
    print("Part 0: {}, Part 1: {} ...".format(shape.part(0),
                                              shape.part(1)))
    # Draw the face landmarks on the screen.
    win.add_overlay(shape)

win.add_overlay(dets)
dlib.hit_enter_to_continue()

--------------------------- Update on Oct 3, 2016 ---------------------------

Today I remembered Python's help() function and gave it a try.

>>>help(predictor)

Help on shape_predictor in module dlib.dlib object:

class shape_predictor(Boost.Python.instance)
 |  This object is a tool that takes in an image region containing 
some object and outputs a set of point locations that define the pose 
of the object. The classic example of this is human face pose 
prediction, where you take an image of a human face as input and are
expected to identify the locations of important facial landmarks such
as the corners of the mouth and eyes, tip of the nose, and so forth.

In the original code, the variable shape is the output of the predictor.

>>>help(shape)

The description of shape

class full_object_detection(Boost.Python.instance)
 |  This object represents the location of an object in an image along 
with the positions of each of its constituent parts.
----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  num_parts
 |      The number of parts of the object.
 |  
 |  rect
 |      The bounding box of the parts.
 |  
 |  ----------------------------------------------------------------------

It seems that the variable shape holds the point coordinate positions.

>>>print(shape.num_parts)
68
>>>print(shape.rect)
[(1005, 563) (1129, 687)]

I assume that there are 68 marked face landmark points.

>>> print(shape.part(68))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: Index out of range
>>> print(shape.part(65))
(1072, 645)
>>> print(shape.part(66))
(1065, 647)
>>> print(shape.part(67))
(1059, 646)
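
Each part looks like a dlib.point, so its coordinates can be read through the x and y attributes. A minimal sketch, continuing the same session:

# List every landmark index together with its (x, y) coordinates.
for i in range(shape.num_parts):
    p = shape.part(i)
    print(i, p.x, p.y)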

If that is true, the remaining problem is which part corresponds to which face landmark point.

Answer

randy Pen · Oct 4, 2016

I slightly modified the code.

import dlib
import numpy as np
from skimage import io

predictor_path = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)

img = io.imread("FDT.jpg")

dets = detector(img)

# Output the face landmark points inside the detected rectangle.
# Each part of shape is a dlib.point: http://dlib.net/python/#dlib.point
for k, d in enumerate(dets):
    shape = predictor(img, d)

# Collect all 68 landmark coordinates into a 68x2 integer array.
vec = np.empty([68, 2], dtype=int)
for b in range(68):
    vec[b][0] = shape.part(b).x
    vec[b][1] = shape.part(b).y

print(vec)

Here is the output

[[1003  575]
 [1005  593]
 [1009  611]
 [1014  627]
 [1021  642]
 [1030  655]
 [1041  667]
 [1054  675]
 [1069  677]
 [1083  673]
 [1095  664]
 [1105  651]
 [1113  636]
 [1120  621]
 [1123  604]
 [1124  585]
 [1124  567]
 [1010  574]
 [1020  570]
 [1031  571]
 [1042  574]
 [1053  578]
 [1070  577]
 [1081  572]
 [1092  568]
 [1104  566]
 [1114  569]
 [1063  589]
 [1063  601]
 [1063  613]
 [1063  624]
 [1050  628]
 [1056  630]
 [1064  632]
 [1071  630]
 [1077  627]
 [1024  587]
 [1032  587]
 [1040  586]
 [1048  588]
 [1040  590]
 [1031  590]
 [1078  587]
 [1085  585]
 [1093  584]
 [1101  584]
 [1094  588]
 [1086  588]
 [1045  644]
 [1052  641]
 [1058  640]
 [1064  641]
 [1070  639]
 [1078  640]
 [1086  641]
 [1080  651]
 [1073  655]
 [1066  656]
 [1059  656]
 [1052  652]
 [1048  645]
 [1059  645]
 [1065  646]
 [1071  644]
 [1083  642]
 [1072  645]
 [1065  647]
 [1059  646]]
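
To check which index belongs to which facial feature, the numbered points can be drawn over the photo, for example with matplotlib. This is only a sketch: it assumes matplotlib is installed and reuses img and vec from the code above, and the marker style and font size are arbitrary choices.

import matplotlib.pyplot as plt

# Overlay each landmark index on the photo to see where it lands.
plt.imshow(img)
for i, (x, y) in enumerate(vec):
    plt.plot(x, y, 'r.')
    plt.annotate(str(i), (x, y), color='yellow', fontsize=6)
plt.show()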

There is also another open-source project, OpenFace, which is built on dlib and documents which facial feature each landmark point corresponds to.

[Image describing which facial feature each landmark point corresponds to]
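
For quick reference, the commonly cited grouping of the 68 landmark indices (the model is trained on the iBUG 300-W annotation scheme) is roughly the following. This is a convention summary rather than official dlib documentation, and the dictionary name LANDMARK_GROUPS is my own:

# Inclusive index ranges for the 68-point annotation scheme.
# "left"/"right" refer to the subject's own left and right.
LANDMARK_GROUPS = {
    "jaw":           (0, 16),
    "right_eyebrow": (17, 21),
    "left_eyebrow":  (22, 26),
    "nose_bridge":   (27, 30),
    "lower_nose":    (31, 35),
    "right_eye":     (36, 41),
    "left_eye":      (42, 47),
    "outer_lip":     (48, 59),
    "inner_lip":     (60, 67),
}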