
Detect Faces Using the Microsoft Azure Face API With Python


Need to build your own facial recognition system for an application? Don’t do it! There are already robust and affordable options in the cloud, backed by massively scaled machine learning. In this article we will learn how to detect faces with the Azure Face API from Python. We will use pictures of the cast of Friends to build a small data set of faces, then take a new image and match it to one of the cast members. You can apply the same logic to other facial recognition applications, such as attendance management, visitor management, and access control.

How Advanced is this Tutorial?

This tutorial lets you quickly start exploring and developing applications with the Microsoft Azure Face API. Anyone familiar with basic programming should be able to follow along, and even readers without much programming experience should manage. Once you have completed it, you should be able to use the Face API reference documentation to create your own basic applications.

However, if you prefer Typescript over Python, check out our Typescript Face Detection post.

This tutorial steps through a Face API application using Python code. The purpose here is not to explain the Python client libraries, but to explain how to make calls to the Face API.

Tutorial Prerequisites

You will need the following before moving forward:

  • A Face API subscription key. You can get a free trial subscription key from Try Cognitive Services in Microsoft Azure. Or, follow the instructions in Create a Cognitive Services account on Azure to subscribe to the Face API service and get your own key.
  • You’ve set up your environment so that your Face API endpoint and subscription key are available to your code.
  • You have basic familiarity with Python programming. Don’t worry; everything is pretty much self-explanatory.
  • You have set up your Python environment on your system. We recommend having the latest versions of Python, pip, and virtualenv installed. We used the PyCharm IDE for code development.
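
All of the calls in this tutorial share the same base endpoint and request headers. As a minimal sketch (the westus region and the key below are placeholders; substitute the values from your own Cognitive Services account), you can centralize them once at the top of your script:

```python
# Shared setup for every Face API call in this tutorial.
# NOTE: the region (westus) and the subscription key are placeholders;
# substitute the values from your own Cognitive Services account.
FACE_API_BASE = 'https://westus.api.cognitive.microsoft.com/face/v1.0'

def make_headers(subscription_key):
    # Every request sends JSON and authenticates with the subscription key
    return {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': subscription_key,
    }

headers = make_headers('XXXad1b1f6564XXXXed71a61f7XXXb44')  # placeholder key
```

Building the headers in one place keeps the subscription key out of every later snippet.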

Quick Code – Detect Faces With the Face API

Before defining our project, we will get our feet wet by starting with the simplest API call in the bunch: Face/Detect. To clarify, you will use this API call to detect faces with the Face API.

# import modules
import requests

# Request headers. The subscription key provides access to this API;
# you can find it in your Cognitive Services account.
headers = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': 'XXXad1b1f6564XXXXed71a61f7XXXb44',
}

body = dict()
body["url"] = "http://www.imagozone.com/var/albums/vedete/Matthew%20Perry/Matthew%20Perry.jpg?m=1355670659"

body = str(body)

# Request URL
FaceApiDetect = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceAttributes=age,gender,headPose,smile,facialHair'

try:
    # REST Call
    response = requests.post(FaceApiDetect, data=body, headers=headers)
    print("RESPONSE:" + str(response.json()))
except Exception as e:
    print(e)

(Image: Matthew Perry, who plays Chandler)

A successful call returns an array of face entries ranked by face rectangle size in descending order; an empty response indicates that no faces were detected. A face entry may contain the following values depending on the input parameters: faceId, faceRectangle, faceAttributes, etc. Matthew Perry, AKA Chandler, seems to be smiling in this photo; the API got that right, but I am not so sure it guessed his age correctly.

A typical response from Face/Detect is as follows:

[{
    'faceId': '2bed5571-71e8-4930-9fae-b2ba46db077d',
    'faceRectangle': {
        'top': 335,
        'left': 249,
        'width': 319,
        'height': 319
    },
    'faceAttributes': {
        'smile': 1.0,
        'headPose': {
            'pitch': 0.0,
            'roll': -3.7,
            'yaw': -2.6
        },
        'gender': 'male',
        'age': 43.0,
        'facialHair': {
            'moustache': 0.1,
            'beard': 0.1,
            'sideburns': 0.1
        }
    }
}]
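
Once you have JSON like the above in hand, picking out fields is just dictionary access. A small sketch (the helper name `summarize_face` is ours, not part of the API):

```python
def summarize_face(face):
    """Return (age, smile, face-rectangle area in pixels) for one entry
    of a Face/Detect response, such as the sample above."""
    rect = face['faceRectangle']
    attrs = face['faceAttributes']
    return (attrs['age'], attrs['smile'], rect['width'] * rect['height'])

# Typical use with response.json() from Face/Detect:
# for face in response.json():
#     print(face['faceId'], summarize_face(face))
```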

Let’s Detect Some Faces!

The first step is to create a database of face images of all the Friends characters and train a model, which we will later use to identify a character from any given face image.

Step 1: Create a Person Group Using Face API – PersonGroup/Create

Firstly, create a new PersonGroup with a specified personGroupId, name, and user-provided userData. A PersonGroup is the container for the uploaded person data, including face images and face recognition features. You can find more about person groups and the other container options on the Cognitive Services API documentation page. Let’s create a person group with the ID “friends”.

personGroupId="friends"

body = dict()
body["name"] = "F.R.I.E.N.D.S"
body["userData"] = "All friends cast"
body = str(body)

#Request URL
FaceApiCreatePersonGroup = 'https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/' + personGroupId

try:
    # REST Call
    response = requests.put(FaceApiCreatePersonGroup, data=body, headers=headers)
    print("RESPONSE:" + str(response.status_code))
except Exception as e:
    print(e)

Step 2: Create a Person Using Face API

Secondly, create a new person in the specified PersonGroup. This returns a personId, which we will use in the next step to add training face images to this person. We can also save additional metadata for each person, such as a name. We will do Chandler first because he is, after all, everyone’s favorite character.

# Request Body
body = dict()
body["name"] = "Chandler"
body["userData"] = "Friends"
body = str(body)

# Request URL
FaceApiCreatePerson = 'https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/' + personGroupId + '/persons'

try:
    # REST Call
    response = requests.post(FaceApiCreatePerson, data=body, headers=headers)
    responseJson = response.json()
    personId = responseJson["personId"]
    print("PERSONID: " + str(personId))
except Exception as e:
    print(e)

Step 3: Add Face Images to the Person – PersonGroup/Person/Add Face

Thirdly, add a face image to a person in a PersonGroup for face identification or verification. If an image contains multiple faces, the input face can be specified with a targetFace rectangle. The call returns a persistedFaceId representing the added face. The extracted face features, rather than the actual image, are stored on the server until PersonGroup/PersonFace/Delete, PersonGroup/Person/Delete, or PersonGroup/Delete is called.

# 5 random images of Chandler
chandlerImageList = [
    "http://www.imagozone.com/var/albums/vedete/Matthew%20Perry/Matthew%20Perry.jpg?m=1355670659",
    "https://i.pinimg.com/236x/b0/57/ff/b057ff0d16bd5205e4d3142e10f03394--muriel-matthew-perry.jpg",
    "https://qph.fs.quoracdn.net/main-qimg-74677a162a39c79d6a9aa2b11cc195b1",
    "https://pbs.twimg.com/profile_images/2991381736/e2160154f215a325b0fc73f866039311_400x400.jpeg",
    "https://i.pinimg.com/236x/f2/9f/45/f29f45049768ddf5c5d75ff37ffbfb3f--hottest-actors-matthew-perry.jpg"]

# Request URL
FaceApiAddFace = 'https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/' + personGroupId + '/persons/' + personId + '/persistedFaces'

for image in chandlerImageList:
    body = dict()
    body["url"] = image
    body = str(body)

    try:
        # REST Call (inside the loop, so each image is uploaded)
        response = requests.post(FaceApiAddFace, data=body, headers=headers)
        responseJson = response.json()
        persistedFaceId = responseJson["persistedFaceId"]
        print("PERSISTED FACE ID: " + str(persistedFaceId))
    except Exception as e:
        print(e)

Note: Repeat steps 2 and 3 to add face images of the remaining characters.
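
The note above boils down to a loop. Here is a hedged sketch of steps 2 and 3 for the whole cast; the helper names and the injected `post` argument are ours, not part of any SDK. In real use, pass `requests.post` for `post` and a dict mapping character names to lists of image URLs:

```python
BASE = 'https://westus.api.cognitive.microsoft.com/face/v1.0'
personGroupId = 'friends'

def person_url(group_id):
    # PersonGroup/Person Create endpoint
    return BASE + '/persongroups/' + group_id + '/persons'

def add_face_url(group_id, person_id):
    # PersonGroup/Person Add Face endpoint
    return person_url(group_id) + '/' + person_id + '/persistedFaces'

def enroll_cast(cast, headers, post):
    """cast maps a character name to a list of face-image URLs.
    `post` is the HTTP POST function to use, e.g. requests.post."""
    for name, image_urls in cast.items():
        # Step 2: create the person and read back its personId
        body = str({"name": name, "userData": "Friends"})
        person_id = post(person_url(personGroupId), data=body,
                         headers=headers).json()["personId"]
        # Step 3: register each training image against that person
        for url in image_urls:
            post(add_face_url(personGroupId, person_id),
                 data=str({"url": url}), headers=headers)
```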

Step 4: Train the Model: PersonGroup/Train

Training is a crucial step: here we submit a person group training task. Note that you must complete training before using a given PersonGroup with the Face/Identify API endpoint.

Training is an asynchronous task and can take anywhere from several seconds to a few minutes. To check on its progress, use the PersonGroup Get Training Status method.

#Request Body
body = dict()

#Request URL
FaceApiTrain = 'https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/' + personGroupId + '/train'

try:
    # REST Call
    response = requests.post(FaceApiTrain, data=body, headers=headers)
    print("RESPONSE:" + str(response.status_code))
except Exception as e:
    print(e)
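
Rather than guessing when training is done, you can poll the PersonGroup training status endpoint (a GET on .../persongroups/{personGroupId}/training, which reports notstarted, running, succeeded, or failed). A sketch, with the HTTP function injected so it is easy to try out; pass `requests.get` in real use:

```python
import time

def training_status_url(group_id):
    # PersonGroup/Get Training Status endpoint
    return ('https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/'
            + group_id + '/training')

def wait_for_training(group_id, headers, get, poll_seconds=2, max_tries=30):
    """Poll training status until it reaches a terminal state.
    `get` is the HTTP GET function to use, e.g. requests.get."""
    for _ in range(max_tries):
        status = get(training_status_url(group_id),
                     headers=headers).json().get("status")
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)
    return "timeout"
```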

Step 5: Identify a Face Image Using Three Different Face APIs

5.1 First API: Face/Detect

This returns a faceId that is valid for 24 hours. We will use this faceId in the next API call to identify the character. Let’s give the model a new, random image of Chandler’s face and see if it identifies him accurately.

# Request Body
body = dict()
body["url"] = "https://upload.wikimedia.org/wikipedia/en/thumb/6/6c/Matthew_Perry_as_Chandler_Bing.jpg/220px-Matthew_Perry_as_Chandler_Bing.jpg"
body = str(body)

# Request URL
FaceApiDetect = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true'

try:
    # REST Call
    response = requests.post(FaceApiDetect, data=body, headers=headers)
    responseJson = response.json()
    faceId = responseJson[0]["faceId"]
    print("FACE ID: "+str(faceId))
except Exception as e:
    print(e)

5.2 Second API: Face/Identify

This is a one-to-many identification to find the closest matches of the specific query person face from a person group. For each face in the faceIds array, Face/Identify will compute similarities between the query face and all the faces in the PersonGroup (given by personGroupId), and return candidate person(s) for that face ranked by similarity confidence.

As a result, this returns the personId, which we will use in the next step to look up the metadata stored with that person, such as his name.

faceIdsList = [faceId]

# Request Body
body = dict()
body["personGroupId"] = personGroupId
body["faceIds"] = faceIdsList
body["maxNumOfCandidatesReturned"] = 1
body["confidenceThreshold"] = 0.5
body = str(body)

# Request URL
FaceApiIdentify = 'https://westus.api.cognitive.microsoft.com/face/v1.0/identify'

try:
    # REST Call
    response = requests.post(FaceApiIdentify, data=body, headers=headers)
    responseJson = response.json()
    personId = responseJson[0]["candidates"][0]["personId"]
    confidence = responseJson[0]["candidates"][0]["confidence"]
    print("PERSON ID: " + str(personId) + ", CONFIDENCE :" + str(confidence))
except Exception as e:
    print("Could not detect: " + str(e))

5.3 Third API: PersonGroup/Person

Finally, retrieve the person’s name and userData, along with the persisted faceIds representing the registered face images.

# Request URL
FaceApiGetPerson = 'https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/' + personGroupId + '/persons/' + personId

try:
    response = requests.get(FaceApiGetPerson, headers=headers)
    responseJson = response.json()
    print("This Is "+str(responseJson["name"]))
except Exception as e:
    print(e)
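
The three calls in step 5 chain naturally into one helper. Here is a sketch with the HTTP functions injected (pass `requests.post` and `requests.get` in real use); the helper name `identify_person` is ours, not part of the API:

```python
def identify_person(image_url, group_id, headers, post, get):
    """Run steps 5.1-5.3 end to end: detect, identify, then look up the name.
    `post` and `get` are HTTP functions, e.g. requests.post and requests.get."""
    base = 'https://westus.api.cognitive.microsoft.com/face/v1.0'
    # 5.1 Face/Detect: obtain a (24-hour) faceId for the query image
    face_id = post(base + '/detect?returnFaceId=true',
                   data=str({"url": image_url}),
                   headers=headers).json()[0]["faceId"]
    # 5.2 Face/Identify: match the faceId against the trained person group
    body = str({"personGroupId": group_id, "faceIds": [face_id],
                "maxNumOfCandidatesReturned": 1, "confidenceThreshold": 0.5})
    candidate = post(base + '/identify', data=body,
                     headers=headers).json()[0]["candidates"][0]
    # 5.3 PersonGroup/Person Get: resolve the personId to the stored name
    person = get(base + '/persongroups/' + group_id + '/persons/'
                 + candidate["personId"], headers=headers).json()
    return person["name"], candidate["confidence"]
```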

Conclusion

By now, you should be able to detect faces using the Face API. I hope you had as much fun working through this tutorial as I did. The Microsoft Azure Face APIs are great, but in my opinion they still have some way to go before they are completely stable and accurate.

We will soon link to an accompanying GitHub repository. Stay tuned!
