
Facial Detection and Recognition with OpenCV and AWS Rekognition

Facial recognition for authentication (e.g. unlocking a door) is generally considered insecure, as many implementations are easily fooled. However, it is acceptable for monitoring. In the future, I’d like to monitor movement at the front door for two reasons: detecting suspicious characters, and detecting whether a family member or friend is at the door (i.e. presence detection).

A couple of months ago, AWS announced Amazon Rekognition, an image-recognition-as-a-service offering. I was keen to try it out, so I decided to spend a couple of hours putting together a project: a video camera that is always on, detects passers-by, and tries to match their faces against a database of known faces.

As image processing is often compute- or memory-intensive, we first ensure there is a significant change between frames before capturing an image for processing. And since we don’t want to query Amazon Rekognition too frequently, we run a local facial detection first. In short: if the scene has changed (e.g. someone or something has appeared) and a human face is detected, only then do we perform facial recognition.

The following setup has been tested on Raspbian Jessie with Python 2.7, together with a low-resolution Gsou webcam I acquired many years ago.

Step 1: Detecting change

The simplest way to do this is to measure the magnitude of the difference between the previous and current frames, which we can obtain quickly by computing the Mean Squared Error (MSE).

import numpy as np

def image_diff(imgA, imgB):
    # Mean Squared Error between two frames of the same dimensions:
    # sum of squared per-pixel differences, normalised by frame size
    mse = np.sum((imgA.astype("float") - imgB.astype("float")) ** 2)
    mse /= float(imgA.shape[0] * imgA.shape[1])
    return mse > 10000

The function accepts two images as input, computes the squared difference pixel by pixel, and returns True if the MSE exceeds a defined threshold, indicating that there is a significant difference.
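To see the threshold in action, here is a quick check with two synthetic frames (the function is repeated so the snippet is self-contained; the frame size and threshold are the ones used above, the frame values are arbitrary):

```python
import numpy as np

def image_diff(imgA, imgB):
    mse = np.sum((imgA.astype("float") - imgB.astype("float")) ** 2)
    mse /= float(imgA.shape[0] * imgA.shape[1])
    return mse > 10000

# Two identical dark frames: no significant change
frameA = np.zeros((120, 160, 3), dtype=np.uint8)
frameB = frameA.copy()
print(image_diff(frameA, frameB))   # False

# A bright frame against a dark one: large MSE, change detected
frameC = np.full((120, 160, 3), 255, dtype=np.uint8)
print(image_diff(frameA, frameC))   # True
```

Note that the threshold of 10000 is something you would tune for your own camera, lighting, and resolution.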

Step 2: Facial Detection

It’s easy for a human to determine whether there is a face in an image, but for a machine the same job is not as straightforward. Computer vision works by learning from sample input images (the training set) via various algorithms; the output is a classifier (e.g. a feature map). The model is then used to evaluate new images (which are usually not part of the training set). There are plenty of calculations involved, but we will avoid them by using libraries.

OpenCV is likely the most popular computer-vision library, and fortunately there are Python bindings to its C/C++ core. We will use Haar feature-based cascade classifiers for facial detection.

You’ll need to install OpenCV, and ensure the environment is properly set up if you are using a Python virtualenv. Refer to this guide if you are setting up on Raspbian.

import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_detect(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # Draw a rectangle around each detected face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    return len(faces) > 0

The function accepts an image as input and determines whether it contains any human faces. If so, it draws a rectangular box around each face and returns True.

Step 3: Training the classifier (AWS Rekognition)

Here, we need to train the classifier with our input images. Prepare a list of images you’d like the system to “learn”, and name them so that they are easily identifiable. Upload the images to an AWS S3 bucket; you can do this via the AWS management console.
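If you’d rather upload programmatically, a minimal boto3 sketch might look like the following (the helper names are my own, and the bucket name and key prefix are assumptions chosen to match the CLI commands later in this step):

```python
import os

def s3_key_for(path, prefix='profile_photos'):
    # Build the S3 object key from the local filename,
    # e.g. 'photos/calvin_01.jpg' -> 'profile_photos/calvin_01.jpg'
    return '%s/%s' % (prefix, os.path.basename(path))

def upload_profile_photos(paths, bucket='bladebucket'):
    # Hypothetical helper: requires boto3 and configured AWS credentials
    import boto3
    s3 = boto3.client('s3')
    for path in paths:
        s3.upload_file(path, bucket, s3_key_for(path))
```

Keeping the `name_number.jpg` convention in the key matters, because the recognition step later strips the suffix to recover the person’s name.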

You’ll need to have AWS CLI installed and configured with the necessary credentials.

Create a facial collection with the id “profile_photos” (or name it however you like) by running this in the terminal.

aws rekognition create-collection \
 --collection-id "profile_photos"

Next, run the following command for each image you have, making sure you refer to the correct collection and S3 location.

aws rekognition index-faces \
 --collection-id "profile_photos" \
 --image '{"S3Object":{"Bucket":"bladebucket","Name":"profile_photos/calvin_01.jpg"}}' \
 --external-image-id "calvin_01.jpg"

Step 4: Facial Recognition

Test searching for faces by providing an input image. Use a new image that isn’t part of the training set.

aws rekognition search-faces-by-image \
 --collection-id "profile_photos" \
 --image '{"S3Object":{"Bucket":"bladebucket","Name":"profile_photos/test.jpg"}}'

Now, we will use the AWS SDK for Python. Ensure that you have the boto3 library installed.

import re

import boto3
import cv2

def face_recog(img):
    # Write the frame to disk, then read it back as raw bytes
    cv2.imwrite('/tmp/face_recog.jpg', img)
    with open('/tmp/face_recog.jpg', 'rb') as imageFile:
        buf = bytearray(imageFile.read())

    client = boto3.client('rekognition')
    response = client.search_faces_by_image(
        CollectionId='profile_photos',
        Image={'Bytes': buf}
    )
    if len(response['FaceMatches']) == 0:
        return False

    # ExternalImageId was set during indexing, e.g. "calvin_01.jpg";
    # strip the numeric suffix and extension to recover the name
    res = response['FaceMatches'][0]['Face']['ExternalImageId']
    matchObj = re.match(r'(.*)_([0-9]+)\.jpg', res, re.M | re.I)
    if matchObj:
        return matchObj.group(1)
    return res
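Putting the pieces together, the three steps can be wired into a single capture loop. Below is a pure-Python sketch with the frame source and step functions passed in as parameters so the control flow can be exercised without a camera; the name `monitor` and its signature are my own, not from the original project:

```python
def monitor(frames, changed, has_face, recognise):
    """Run the three-step pipeline over an iterable of frames.

    frames    -- iterable of image frames (e.g. read from cv2.VideoCapture)
    changed   -- Step 1: function(prev, cur) -> bool, e.g. image_diff
    has_face  -- Step 2: function(frame) -> bool, e.g. face_detect
    recognise -- Step 3: function(frame) -> name or False, e.g. face_recog
    """
    prev = None
    matches = []
    for frame in frames:
        # Only pay for recognition when the scene changed AND a face is present
        if prev is not None and changed(prev, frame) and has_face(frame):
            name = recognise(frame)
            if name:
                matches.append(name)
        prev = frame
    return matches

# Exercise the control flow with stand-in step functions
frames = [1, 2, 2, 3]
result = monitor(
    frames,
    changed=lambda a, b: a != b,              # stand-in for the MSE check
    has_face=lambda f: f % 2 == 1,            # stand-in for the Haar cascade
    recognise=lambda f: 'calvin' if f == 3 else False,
)
print(result)   # ['calvin']
```

In the real setup you would pass `image_diff`, `face_detect`, and `face_recog` from the previous steps, with frames read off the webcam.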

Moving forward

In the near future, I plan to improve the system in the following ways.

  1. Recognising my friends’ faces by learning from their Facebook profiles (i.e. crawling profile pictures).
  2. Maybe: use OpenCV entirely, even for facial recognition, to save on cost.
  3. Sending a push notification to my mobile phone when somebody is at the front door and I’m not at home.


