Face recognition is a technology that enables the identification and verification of individuals by analyzing and matching their facial features. It utilizes artificial intelligence (AI) and computer vision techniques to extract distinct facial characteristics, such as the distance between the eyes, the shape of the nose, and the contour of the jawline. These unique attributes are then converted into mathematical representations known as face templates, which serve as the basis for comparison and identification.
Face detection is the process of locating faces within an image or video frame, while face recognition goes a step further: it matches a detected face against a database to identify or verify the individual.
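Concretely, a face template can be thought of as a fixed-length numeric vector (the face_recognition library used below produces 128-dimensional encodings). As a rough sketch with made-up vectors, comparing two faces then reduces to measuring the distance between their templates:

```python
import numpy as np

# Hypothetical 128-dimensional face templates (real encodings come from a model)
rng = np.random.default_rng(0)
template_a = rng.normal(size=128)
template_b = template_a + rng.normal(scale=0.01, size=128)  # near-duplicate face
template_c = rng.normal(size=128)                           # an unrelated face

def template_distance(t1, t2):
    """Euclidean distance between two face templates; smaller means more similar."""
    return float(np.linalg.norm(t1 - t2))

# The near-duplicate face is far closer to template_a than the unrelated one
print(template_distance(template_a, template_b) < template_distance(template_a, template_c))  # True
```

The vectors here are random stand-ins; real templates cluster by identity, which is what makes this distance comparison meaningful.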
Let's walk through a step-by-step process of implementing face recognition using OpenCV.
Note: You can first read about how faces are recognized in detail here.
First of all, import the necessary libraries:
```python
import cv2
import face_recognition
import random
import numpy as np
```
We import the necessary libraries:
- cv2 for working with images,
- face_recognition to detect and encode faces,
- random for selecting a random image for recognition, and
- numpy for numerical operations such as finding the best match.
We begin by loading images of known individuals and encoding their facial features. For this, we need images of the people to be recognized, along with their names. Following is the reference gallery that we will be using in our code.
The OpenCV library, coupled with face_recognition, helps us detect and encode faces. We create a list of known face names and iterate through it, loading the respective image for each name. If a face is detected, its encoding is calculated and appended to a list, known_face_encodings.
```python
# Load images and encode known faces
known_face_names = ["Emma", "Laura", "Abigail", "Sophia", "Amelia", "Nora", "Alice", "Luna"]
known_face_encodings = []
for name in known_face_names:
    image_path = f'Gallery/{name}.png'
    known_image = face_recognition.load_image_file(image_path)
    known_face_locations = face_recognition.face_locations(known_image)
    if known_face_locations:  # Check if any face is found
        known_encoding = face_recognition.face_encodings(known_image, known_face_locations)[0]
        known_face_encodings.append(known_encoding)
    else:
        print(f"No face found in the image: {name}")
```
We initialize a list with all the names and then process each one in turn. For each name, we construct the path to the corresponding image file in the "Gallery" folder and load it using the face_recognition library.
Next, we locate the face within the loaded image using the face_recognition library's face_locations function. If a face is detected, we encode it with the face_encodings function, which captures the facial features that make each individual unique, and store the result in our collection of known_face_encodings.
In cases where no face is detected in the image, we display a message indicating that no face was found for the particular person.
Next, we prepare an unknown image for recognition. We specify the path to the input image and load it. The face_recognition library helps locate the faces within the image and compute their encodings.
```python
# Load the image for recognition
random_face_name = random.choice(known_face_names)
image_path = f'Gallery/{random_face_name}.png'
unknown_image = face_recognition.load_image_file(image_path)
unknown_face_locations = face_recognition.face_locations(unknown_image)
unknown_face_encodings = face_recognition.face_encodings(unknown_image, unknown_face_locations)

# Convert to BGR for OpenCV
cv2_image = cv2.cvtColor(unknown_image, cv2.COLOR_RGB2BGR)
```
In the code above, we choose a random face from the same gallery using the random.choice function. Using the face_locations function, we identify the locations of any faces present within the image. With the face locations determined, we encode the unknown face using the face_encodings function from the face_recognition library. Finally, we convert the image from RGB to BGR, the channel order that OpenCV expects.
The core of our face recognition process involves comparing the unknown face encodings with the known ones. We utilize the compare_faces function to determine matches. If a match is found, we calculate face distances to identify the best match and assign the corresponding name. This step allows us to recognize the individual in the unknown image.
```python
# Loop through each face in the unknown image
for (top, right, bottom, left), face_encoding in zip(unknown_face_locations, unknown_face_encodings):
    # Compare face with known faces
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    name = "Unknown"
    if any(matches):
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        name = known_face_names[best_match_index]
```
In the code above, we iterate through the coordinates and encodings of each detected face in the input image. The coordinates describe the face's position within the image (top, right, bottom, left), and the face_encoding captures its unique facial features.
We compare the detected face's encoding with the encodings of our known faces. The face_recognition library's compare_faces function assists us in this comparison. If a match is found (i.e., if the detected face is similar to a known face), we proceed to determine the name of the recognized individual.
Initially, we assign the name "Unknown" to the detected face. However, if a match is found, we calculate the distances between the detected face and each known face using the face_recognition library's face_distance function; a smaller distance means a closer match. This helps us identify the best match among the known faces.
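Under assumed distances, picking the best match boils down to np.argmin, optionally with a cutoff so that poor matches still fall back to "Unknown". The distances and the 0.6 cutoff below are illustrative assumptions (0.6 mirrors face_recognition's default tolerance):

```python
import numpy as np

known_face_names = ["Emma", "Laura", "Abigail"]
# Hypothetical distances between the unknown face and each known face
face_distances = np.array([0.72, 0.31, 0.55])

best_match_index = int(np.argmin(face_distances))
# Only accept the closest face if it is within the tolerance
name = known_face_names[best_match_index] if face_distances[best_match_index] <= 0.6 else "Unknown"
print(name)  # Laura: index 1 has the smallest distance, and 0.31 <= 0.6
```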
The index of the best match is determined using NumPy's argmin function, and we extract the corresponding name from our list of known_face_names.
To visualize the recognition results, we employ OpenCV's drawing functions. We draw rectangles around the detected faces and label them with their respective names. The chosen font and text size enhance the clarity of the labels. The final result is an image with recognized faces highlighted and labeled.
```python
# Draw rectangle and label
cv2.rectangle(cv2_image, (left, top), (right, bottom), (0, 255, 0), 2)
font = cv2.FONT_HERSHEY_SIMPLEX
text_size = cv2.getTextSize(name, font, 3, 5)[0]
cv2.rectangle(cv2_image, (left, bottom + text_size[1] + 30), (left + text_size[0] + 10, bottom), (173, 216, 230), cv2.FILLED)
cv2.putText(cv2_image, name, (left + 6, bottom + 80), font, 3, (2, 48, 32), 5)
```
Using the OpenCV library, we draw a green rectangle around the detected face. This rectangle is defined by the coordinates (left, top) and (right, bottom), with a thickness of 2 pixels.
To label the recognized face, we employ the font FONT_HERSHEY_SIMPLEX and calculate the size of the text using the cv2.getTextSize function. The calculated text size helps us position the label accurately.
Next, we draw a filled light blue rectangle below the face's bounding box. This rectangle provides a background for the text label. The coordinates of this rectangle are determined to ensure proper alignment and readability.
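The label-box geometry is plain arithmetic, so it can be sketched without OpenCV. The helper below is a hypothetical illustration of the coordinate math from the snippet above; the 10 px and 30 px paddings mirror that snippet but are assumptions, not library defaults:

```python
def label_box(left, bottom, text_size, pad_x=10, pad_y=30):
    """Two opposite corners of the filled background box drawn below the face box.

    text_size is the (width, height) pair returned by cv2.getTextSize;
    cv2.rectangle accepts any pair of opposite corners.
    """
    text_w, text_h = text_size
    return (left, bottom), (left + text_w + pad_x, bottom + text_h + pad_y)

# e.g. a 200x40 px label under a face whose box ends at left=50, bottom=300
print(label_box(50, 300, (200, 40)))  # ((50, 300), (260, 370))
```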
Finally, we place the recognized person's name on the image using the cv2.putText function. The name is positioned within the blue rectangle, slightly indented from the left side and a bit below the face's bounding box. The text is displayed in a shade of greenish-blue, with a font scale of 3 and a thickness of 5 pixels.
Finally, we write the output image, with recognized faces highlighted and labeled, to disk. This image serves as a visual record of our face recognition process and can be kept for future reference or integrated into larger applications.
```python
# Save the result
cv2.imwrite('output/recognizedFace.png', cv2_image)
```
Here is the complete face recognition code, combining all the snippets above. It loads the images for the listed names, picks one at random, and recognizes the person in it.
```python
import cv2
import face_recognition
import random
import numpy as np

# Load images and encode known faces
known_face_names = ["Emma", "Laura", "Abigail", "Sophia", "Amelia", "Nora", "Alice", "Luna"]
known_face_encodings = []
for name in known_face_names:
    image_path = f'Gallery/{name}.png'
    known_image = face_recognition.load_image_file(image_path)
    known_face_locations = face_recognition.face_locations(known_image)
    if known_face_locations:  # Check if any face is found
        known_encoding = face_recognition.face_encodings(known_image, known_face_locations)[0]
        known_face_encodings.append(known_encoding)
    else:
        print(f"No face found in the image: {name}")

# Load the image for recognition
random_face_name = random.choice(known_face_names)
image_path = f'Gallery/{random_face_name}.png'
unknown_image = face_recognition.load_image_file(image_path)
unknown_face_locations = face_recognition.face_locations(unknown_image)
unknown_face_encodings = face_recognition.face_encodings(unknown_image, unknown_face_locations)

# Convert to BGR for OpenCV
cv2_image = cv2.cvtColor(unknown_image, cv2.COLOR_RGB2BGR)

# Loop through each face in the unknown image
for (top, right, bottom, left), face_encoding in zip(unknown_face_locations, unknown_face_encodings):
    # Compare face with known faces
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    name = "Unknown"
    if any(matches):
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        name = known_face_names[best_match_index]

    # Draw rectangle and label
    cv2.rectangle(cv2_image, (left, top), (right, bottom), (0, 255, 0), 2)
    font = cv2.FONT_HERSHEY_SIMPLEX
    text_size = cv2.getTextSize(name, font, 3, 5)[0]
    cv2.rectangle(cv2_image, (left, bottom + text_size[1] + 30), (left + text_size[0] + 10, bottom), (173, 216, 230), cv2.FILLED)
    cv2.putText(cv2_image, name, (left + 6, bottom + 80), font, 3, (2, 48, 32), 5)

# Save the result
cv2.imwrite('output/recognizedFace.png', cv2_image)
```