Face Encodings from Image
We can compute face encoding values for each face found in an image. This method returns 128 values per face, and these values are the features used to identify the face.
My Configuration:
Language: Python 2 or later
Library: dlib
OS: Ubuntu 16.04
Definitions for dlib classes and methods:
dlib.get_frontal_face_detector
() → dlib::object_detector<dlib::scan_fhog_pyramid<dlib::pyramid_down<6u>, dlib::default_fhog_feature_extractor> >
Returns the default face detector
class dlib.face_recognition_model_v1
This object maps human faces into 128D vectors where pictures of the same person are mapped near to each other and pictures of different people are mapped far apart. The constructor loads the face recognition model from a file. The model file is available here: http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2
__init__
(self: dlib.face_recognition_model_v1, arg0: unicode) → None
compute_face_descriptor
(*args, **kwargs)
Overloaded function.
- compute_face_descriptor(self: dlib.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),uint8], face: dlib.full_object_detection, num_jitters: int=0L) -> dlib.vector
(Takes an image and a full_object_detection that references a face in that image and converts it into a 128D face descriptor. If num_jitters>1 then each face will be randomly jittered slightly num_jitters times, each run through the 128D projection, and the average used as the face descriptor.)
- compute_face_descriptor(self: dlib.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),uint8], faces: std::vector<dlib::full_object_detection, std::allocator<dlib::full_object_detection> >, num_jitters: int=0L) -> dlib.vectors
(Takes an image and an array of full_object_detections that reference faces in that image and converts them into 128D face descriptors. If num_jitters>1 then each face will be randomly jittered slightly num_jitters times, each run through the 128D projection, and the average used as the face descriptor.)
For more details, please visit the dlib home page.
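The jittering behaviour described above can be sketched numerically: with num_jitters>1, each jittered crop is projected to a 128D vector and the final descriptor is the element-wise average. A minimal sketch with NumPy, using synthetic random vectors as stand-ins for the per-jitter projections (real values would come from compute_face_descriptor):

```python
import numpy as np

def average_descriptor(jittered_descriptors):
    """Average a set of 128D descriptors element-wise, mirroring what
    the num_jitters>1 path does with the per-jitter projections."""
    stacked = np.stack(jittered_descriptors)  # shape: (num_jitters, 128)
    return stacked.mean(axis=0)               # shape: (128,)

# Synthetic stand-ins for per-jitter 128D projections.
rng = np.random.default_rng(0)
jittered = [rng.normal(size=128) for _ in range(5)]
descriptor = average_descriptor(jittered)
print(descriptor.shape)  # (128,)
```

Averaging over jitters trades speed for a slightly more stable descriptor, which is why higher num_jitters is slower but a little more accurate.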
Code Details:
Load the dlib model files:
# Models Loaded
face_detector = dlib.get_frontal_face_detector()
pose_predictor_68_point = dlib.shape_predictor('../models/shape_predictor_68_face_landmarks.dat')
face_encoder = dlib.face_recognition_model_v1('../models/dlib_face_recognition_resnet_model_v1.dat')
Face Detectors from Image:
def whirldata_face_detectors(img, number_of_times_to_upsample=1):
return face_detector(img, number_of_times_to_upsample)
Note:
(i):param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
(ii):return: A list of dlib ‘rect’ objects of found face locations
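To see why higher upsample counts find smaller faces: dlib's default HOG face detector scans with a window of roughly 80x80 pixels, and each upsample doubles the image's dimensions, so the smallest detectable face (measured in original-image pixels) roughly halves per round. A small sketch of this arithmetic, assuming the ~80-pixel window:

```python
def min_detectable_face(upsamples, window=80):
    """Approximate smallest face size (in original-image pixels) the
    detector can find after `upsamples` rounds of 2x upsampling.
    window=80 assumes dlib's default ~80x80 scanning window."""
    return window / (2 ** upsamples)

for n in range(3):
    print(n, min_detectable_face(n))  # 0 -> 80.0, 1 -> 40.0, 2 -> 20.0
```

The trade-off is cost: each upsample quadruples the number of pixels the detector has to scan.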
Find the Face Encodings from Image:
def whirldata_face_encodings(face_image,num_jitters=1):
face_locations = whirldata_face_detectors(face_image)
pose_predictor = pose_predictor_68_point
predictors = [pose_predictor(face_image, face_location) for face_location in face_locations]
    return [np.array(face_encoder.compute_face_descriptor(face_image, predictor, num_jitters)) for predictor in predictors]
Note:
(i):param num_jitters: How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)
(ii):return: A list of 128-dimensional face encodings (one for each face in the image)
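Once you have these encodings, two faces can be compared by the Euclidean distance between their 128D vectors; dlib's documentation suggests that pairs of images of the same person usually fall under a distance of about 0.6. A minimal sketch using NumPy, with synthetic encodings standing in for real outputs of whirldata_face_encodings:

```python
import numpy as np

def is_same_person(enc_a, enc_b, tolerance=0.6):
    """Compare two 128D encodings by Euclidean distance; the dlib model
    was trained so same-person pairs usually fall under ~0.6."""
    return np.linalg.norm(np.asarray(enc_a) - np.asarray(enc_b)) <= tolerance

# Synthetic encodings for illustration only.
enc_a = np.zeros(128)
enc_b = enc_a + 0.01          # a near-identical encoding
enc_c = np.full(128, 0.2)     # a clearly different one
print(is_same_person(enc_a, enc_b))  # True
print(is_same_person(enc_a, enc_c))  # False
```

The 0.6 threshold is a starting point; lowering it makes matching stricter (fewer false positives, more false negatives).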
Full Source Code:
import dlib
import numpy as np
from scipy import misc

# Models Loaded
face_detector = dlib.get_frontal_face_detector()
pose_predictor_68_point = dlib.shape_predictor('../models/shape_predictor_68_face_landmarks.dat')
face_encoder = dlib.face_recognition_model_v1('../models/dlib_face_recognition_resnet_model_v1.dat')

def whirldata_face_detectors(img, number_of_times_to_upsample=1):
    return face_detector(img, number_of_times_to_upsample)

def whirldata_face_encodings(face_image, num_jitters=1):
    face_locations = whirldata_face_detectors(face_image)
    pose_predictor = pose_predictor_68_point
    predictors = [pose_predictor(face_image, face_location) for face_location in face_locations]
    return [np.array(face_encoder.compute_face_descriptor(face_image, predictor, num_jitters)) for predictor in predictors]

known_image = misc.imread("ano.jpeg")
print(known_image.shape)
enc = whirldata_face_encodings(known_image)
print(enc)
OUTPUT: