To recognize people in an image, you first need to train your model. You do this by uploading pictures containing the face of the person you wish to identify and specifying a name. There is no pre-built UI for performing this task, but it can be completed with a couple of lines of code.
NOTE: To complete this portion of the lab, a standard subscription is required for Face API. See the Face API pricing details for more information.
As we've seen, to use a Cognitive Service we need a key. You can create an All-in-One key, which gives you access to almost every Cognitive Service, or create a key for each individual service. The advantage of the latter (creating a key for each service) is that a free pricing tier is available. We're going to create a key for Face API by using the Azure CLI.
In a command or terminal window, execute the following command.
az cognitiveservices account create --resource-group contoso-travel-rg --name face-api --location northcentralus --kind Face --sku S0 --yes
NOTE: As before, we are placing our service in northcentralus. We want to ensure all related services are placed in the same region for performance.
Retrieve the key you just created by using the Azure CLI. Make sure to save this key somewhere, as we'll be using it momentarily.
az cognitiveservices account keys list --resource-group contoso-travel-rg --name face-api --query key1 --output tsv
We're using dotenv to manage our environment variables. We now have a new key to store, the one for Face API. At the end of .env, add the following line:
FACE_KEY=<your_face_api_key>
Replace <your_face_api_key> with the key you retrieved in the prior step.
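Under the hood, dotenv simply reads KEY=value pairs from .env into environment variables, which the app then reads with os.environ. As a rough sketch of that behavior (the load_env_line helper here is illustrative, not part of the python-dotenv library):

```python
import os

def load_env_line(line, env=os.environ):
    """Parse a single KEY=value line, as python-dotenv would, into env."""
    line = line.strip()
    if not line or line.startswith("#"):
        return  # skip blanks and comments
    key, _, value = line.partition("=")
    env[key.strip()] = value.strip()

# Illustrative only: in the app, python-dotenv's load_dotenv() does this for us
load_env_line("FACE_KEY=abc123")
print(os.environ["FACE_KEY"])  # prints abc123
```

In the lab itself you never call anything like this directly; load_dotenv() handles the file for you, and the values come out through os.environ.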
We're going to add the first part of the necessary functionality to recognize faces, which is to train the model. We will start by retrieving our key, then creating our client, then adding the code required to perform the training.
In app.py, at the bottom of the # Load keys section (right below translate_key = os.environ["TRANSLATE_KEY"]), add the following code:
face_key = os.environ["FACE_KEY"]
We're going to use FaceClient to interact with Face API. FaceClient is similar to the ComputerVisionClient we worked with previously.
In app.py, below the line which reads # Create face_client, add the following code:
from azure.cognitiveservices.vision.face import FaceClient
face_credentials = CognitiveServicesCredentials(face_key)
face_client = FaceClient(endpoint, face_credentials)
person_group_id = "reactor"
Similar to the steps performed with Computer Vision, we import the class we'll be using (FaceClient), create an instance of CognitiveServicesCredentials with the key, and then create the client by specifying the endpoint and credentials. The final line, person_group_id = "reactor", sets the name of the Person Group we're going to create. This defines our "closed universe", in which we will only be able to detect people we've already trained on.
We're going to create a helper function named train_person, which will contain all of the code needed to create or update a person in our person group.
At the bottom of app.py, add the following code. You'll notice there's a fair bit of it, as FaceClient doesn't provide all of the convenience functions we might expect. We will break down each section to show what the code is doing.
def train_person(client, person_group_id, name, image):
    # Try to create the group, and just pass if it already exists
    try:
        client.person_group.create(person_group_id, name=person_group_id)
    except:
        pass

    name = name.lower()

    # No get_by_name function, so get all persons
    people = client.person_group_person.list(person_group_id)
    # See if one exists with our name
    people_with_name = list(filter(lambda p: p.name == name, people))
    if len(people_with_name) > 0:
        person = people_with_name[0]
        operation = "Updated"
    else:
        # Create if doesn't exist
        person = client.person_group_person.create(person_group_id, name)
        operation = "Created"

    # Add the face to the person
    client.person_group_person.add_face_from_stream(person_group_id, person.person_id, image)
    # Retrain the model
    client.person_group.train(person_group_id)

    return ["{} {}".format(operation, name)]
# Try to create the group, and just pass if it already exists
try:
    client.person_group.create(person_group_id, name=person_group_id)
except:
    pass
The get function on FaceClient throws an exception if the group you wish to find doesn't exist. Because we need to make a round trip to the server either way, we simply create the group, using the same ID as the name, and then catch and suppress any error (which will be raised if the group already exists) by using pass inside of except.
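This "create and suppress" pattern is a common way to make a setup call idempotent: it's safe to run any number of times. A minimal sketch of the pattern using a stand-in for the service client (the FakeGroupStore class is purely illustrative, not part of the Face SDK):

```python
class FakeGroupStore:
    """Stand-in for client.person_group: create raises if the ID exists."""
    def __init__(self):
        self.groups = {}

    def create(self, group_id, name=None):
        if group_id in self.groups:
            raise ValueError("Group already exists")
        self.groups[group_id] = name

def ensure_group(store, group_id):
    # Try to create the group, and just pass if it already exists
    try:
        store.create(group_id, name=group_id)
    except Exception:
        pass

store = FakeGroupStore()
ensure_group(store, "reactor")
ensure_group(store, "reactor")  # second call is a no-op, not an error
```

The trade-off is that a bare except also hides unrelated failures (bad key, network errors), so in production code you would catch the service's specific "already exists" error instead.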
name = name.lower()

# No get_by_name function, so get all persons
people = client.person_group_person.list(person_group_id)
# See if one exists with our name
people_with_name = list(filter(lambda p: p.name == name, people))
if len(people_with_name) > 0:
    person = people_with_name[0]
    operation = "Updated"
else:
    # Create if doesn't exist
    person = client.person_group_person.create(person_group_id, name)
    operation = "Created"
We start by converting the provided name to lowercase to normalize it. Because the name becomes our key, we want lookups to be case insensitive.
We then retrieve all of the people in our person group by calling person_group_person.list and passing in the ID of our group. Because there is no get function which allows us to load a person by name, we use Python's filter function to find the person matching the given name.
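If filter is unfamiliar, here's the lookup in isolation, using SimpleNamespace as a stand-in for the person objects the service returns (which expose a name attribute):

```python
from types import SimpleNamespace

# Stand-ins for the objects returned by person_group_person.list
people = [SimpleNamespace(name="adam"), SimpleNamespace(name="beth")]

name = "Beth".lower()
people_with_name = list(filter(lambda p: p.name == name, people))
print(people_with_name[0].name)  # prints beth
```

An equivalent, arguably more idiomatic, form is a list comprehension: [p for p in people if p.name == name].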
If there's at least one result, as revealed by len, we know a person with that name already exists. We store that person, and set the operation message to Updated, since we'll be adding a face to an existing person. If the person doesn't exist, we create the person, and then set the operation message to Created.
# Add the face to the person
client.person_group_person.add_face_from_stream(person_group_id, person.person_id, image)
# Retrain the model
client.person_group.train(person_group_id)
return ["{} {}".format(operation, name)]
We close by calling add_face_from_stream to add the face to the person, then call train to retrain the model. Finally, we build an output message from the operation (Created or Updated) and the name of the person.
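Note that train kicks off training asynchronously; the call returns before training actually finishes. For the small groups in this lab that's fine, but in a real application you'd typically poll the group's training status (the SDK exposes person_group.get_training_status for this) before trying to identify anyone. A generic polling helper, sketched here with a stubbed status callable rather than a live client:

```python
import time

def wait_for_training(get_status, interval=1.0, max_checks=30):
    """Poll a status callable until it reports 'succeeded' or 'failed'."""
    for _ in range(max_checks):
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("Training did not finish in time")

# Illustrative stub; against the real service you'd pass something like
#   lambda: client.person_group.get_training_status(person_group_id).status
statuses = iter(["running", "running", "succeeded"])
print(wait_for_training(lambda: next(statuses), interval=0))  # prints succeeded
```

The helper takes any zero-argument callable, which keeps it easy to test and decoupled from the Face SDK.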
Inside of the existing train function, immediately below the comment # TODO: Add code to create or update person, add the following code:
    # TODO: Add code to create or update person
    messages = train_person(face_client, person_group_id, name, image.blob)
NOTE: The indentation at the beginning of the line of code is required. Python uses indentation to define blocks, and we want the call to train_person to be inside train. It should line up with the existing comment.
We call our train_person function, passing in face_client, the ID of our group, the name, and the blob of the image.
After saving all files, the process running our site should automatically reload. If you closed the command or terminal window you used to launch the site previously, you can open a new one, navigate to the directory containing your code, and then execute the following commands:
# Windows
set FLASK_ENV=development
flask run
# macOS and Linux
export FLASK_ENV=development
flask run
In a browser, navigate to http://localhost:5000/train. Type the name of the person you wish to train, and then click Upload to select a picture of the person (such as yourself!). The image you use must contain only one face. There is no UI for seeing the model in action; we're going to create that in the next section.
We could have added additional code to allow someone to choose an existing name from a dropdown list, and to click an additional button to create a new name. This would have meant additional code, which we might not have had time to complete during the workshop. If you'd like, you can experiment with the code and see how you might introduce this functionality.
We've seen how to create the necessary key for Face API, and add the appropriate code to train the model with a person. Next, we'll see how we can detect a person in an image.