Detect and Analyze Faces

The ability to detect and analyze human faces is a core AI capability. In this exercise, you’ll explore two Azure AI Services that you can use to work with faces in images: the Azure AI Vision service, and the Face service.

Important: This lab can be completed without requesting any additional access to restricted features.

Note: From June 21st 2022, capabilities of Azure AI services that return personally identifiable information are restricted to customers who have been granted limited access. Additionally, capabilities that infer emotional state are no longer available. For more details about the changes Microsoft has made, and why - see Responsible AI investments and safeguards for facial recognition.

Clone the repository for this course

If you have not already done so, you must clone the code repository for this course:

  1. Start Visual Studio Code.
  2. Open the palette (SHIFT+CTRL+P) and run the Git: Clone command to clone the https://github.com/MicrosoftLearning/mslearn-ai-vision repository to a local folder (it doesn't matter which folder).
  3. When the repository has been cloned, open the folder in Visual Studio Code.
  4. Wait while additional files are installed to support the C# code projects in the repo.

    Note: If you are prompted to add required assets to build and debug, select Not Now.

Provision an Azure AI Services resource

If you don’t already have one in your subscription, you’ll need to provision an Azure AI Services resource.

  1. Open the Azure portal at https://portal.azure.com, and sign in using the Microsoft account associated with your Azure subscription.
  2. In the top search bar, search for Azure AI services, select Azure AI Services, and create an Azure AI services multi-service account resource with the following settings:
    • Subscription: Your Azure subscription
    • Resource group: Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided)
    • Region: Choose any available region
    • Name: Enter a unique name
    • Pricing tier: Standard S0
  3. Select the required checkboxes and create the resource.
  4. Wait for deployment to complete, and then view the deployment details.
  5. When the resource has been deployed, go to it and view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.

Prepare to use the Azure AI Vision SDK

In this exercise, you’ll complete a partially implemented client application that uses the Azure AI Vision SDK to analyze faces in an image.

Note: You can choose to use the SDK for either C# or Python. In the steps below, perform the actions appropriate for your preferred language.

  1. In Visual Studio Code, in the Explorer pane, browse to the 04-face folder and expand the C-Sharp or Python folder depending on your language preference.
  2. Right-click the computer-vision folder and open an integrated terminal. Then install the Azure AI Vision SDK package by running the appropriate command for your language preference:

    C#

     dotnet add package Azure.AI.Vision.ImageAnalysis -v 0.15.1-beta.1
    

    Python

     pip install azure-ai-vision==0.15.1b1
    
  3. View the contents of the computer-vision folder, and note that it contains a file for configuration settings:
    • C#: appsettings.json
    • Python: .env
  4. Open the configuration file and update the configuration values it contains to reflect the endpoint and an authentication key for your Azure AI services resource. Save your changes. (A sketch showing how the client code typically loads these settings appears at the end of this procedure.)

  5. Note that the computer-vision folder contains a code file for the client application:

    • C#: Program.cs
    • Python: detect-people.py
  6. Open the code file and at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Azure AI Vision SDK:

    C#

     // import namespaces
     using Azure.AI.Vision.Common;
     using Azure.AI.Vision.ImageAnalysis;
    

    Python

     # import namespaces
     import azure.ai.vision as sdk
    

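For reference, the configuration settings you updated in this procedure are loaded by the starter code at run time. The following is a minimal sketch of how that typically works in the Python version, assuming the python-dotenv package; the environment variable names used here are placeholders, so check the .env file in the repo for the actual names.

Python

     # Minimal sketch (not part of the lab steps): load settings from .env
     # NOTE: AI_SERVICE_ENDPOINT and AI_SERVICE_KEY are placeholder names -
     # check the .env file in the repo for the real variable names.
     import os
     from dotenv import load_dotenv

     load_dotenv()
     ai_endpoint = os.getenv('AI_SERVICE_ENDPOINT')
     ai_key = os.getenv('AI_SERVICE_KEY')
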
View the image you will analyze

In this exercise, you will use the Azure AI Vision service to analyze an image of people.

  1. In Visual Studio Code, expand the computer-vision folder and the images folder it contains.
  2. Select the people.jpg image to view it.

Detect faces in an image

Now you’re ready to use the SDK to call the Vision service and detect faces in an image.

  1. In the code file for your client application (Program.cs or detect-people.py), in the Main function, note that the code to load the configuration settings has been provided. Then find the comment Authenticate Azure AI Vision client. Then, under this comment, add the following language-specific code to create and authenticate an Azure AI Vision client object:

    C#

     // Authenticate Azure AI Vision client
     var cvClient = new VisionServiceOptions(
         aiSvcEndpoint,
         new AzureKeyCredential(aiSvcKey));
    

    Python

     # Authenticate Azure AI Vision client
     cv_client = sdk.VisionServiceOptions(ai_endpoint, ai_key)
    
  2. In the Main function, under the code you just added, note that the code specifies the path to an image file and then passes the image path, along with the client object you just created, to a function named AnalyzeImage. This function is not yet fully implemented.

  3. In the AnalyzeImage function, under the comment Specify features to be retrieved (PEOPLE), add the following code:

    C#

     // Specify features to be retrieved (PEOPLE)
     Features =
         ImageAnalysisFeature.People
    

    Python

     # Specify features to be retrieved (PEOPLE)
     analysis_options = sdk.ImageAnalysisOptions()
        
     features = analysis_options.features = (
         sdk.ImageAnalysisFeature.PEOPLE
     )    
    
  4. In the AnalyzeImage function, under the comment Get image analysis, add the following code:

    C#

     // Get image analysis
     using var imageSource = VisionSource.FromFile(imageFile);
        
     using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
        
     var result = analyzer.Analyze();
        
     if (result.Reason == ImageAnalysisResultReason.Analyzed)
     {
         // Get people in the image
         if (result.People != null)
         {
             Console.WriteLine($" People:");
            
             // Prepare image for drawing
             System.Drawing.Image image = System.Drawing.Image.FromFile(imageFile);
             Graphics graphics = Graphics.FromImage(image);
             Pen pen = new Pen(Color.Cyan, 3);
             Font font = new Font("Arial", 16);
             SolidBrush brush = new SolidBrush(Color.WhiteSmoke);
            
             foreach (var person in result.People)
             {
                 // Draw object bounding box if confidence > 50%
                 if (person.Confidence > 0.5)
                 {
                     // Draw object bounding box
                     var r = person.BoundingBox;
                     Rectangle rect = new Rectangle(r.X, r.Y, r.Width, r.Height);
                     graphics.DrawRectangle(pen, rect);
            
                     // Return the confidence of the person detected
                     Console.WriteLine($"   Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}");
                 }
             }
            
             // Save annotated image
             String output_file = "detected_people.jpg";
             image.Save(output_file);
             Console.WriteLine("  Results saved in " + output_file + "\n");
         }
     }
     else
     {
         var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
         Console.WriteLine(" Analysis failed.");
         Console.WriteLine($"   Error reason : {errorDetails.Reason}");
         Console.WriteLine($"   Error code : {errorDetails.ErrorCode}");
         Console.WriteLine($"   Error message: {errorDetails.Message}\n");
     }
        
    

    Python

     # Get image analysis
     image = sdk.VisionSource(image_file)
        
     image_analyzer = sdk.ImageAnalyzer(cv_client, image, analysis_options)
        
     result = image_analyzer.analyze()
        
     if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:
         # Get people in the image
         if result.people is not None:
             print("\nPeople in image:")
            
             # Prepare image for drawing
             image = Image.open(image_file)
             fig = plt.figure(figsize=(image.width/100, image.height/100))
             plt.axis('off')
             draw = ImageDraw.Draw(image)
             color = 'cyan'
            
             for detected_people in result.people:
                 # Draw object bounding box if confidence > 50%
                 if detected_people.confidence > 0.5:
                     # Draw object bounding box
                     r = detected_people.bounding_box
                     bounding_box = ((r.x, r.y), (r.x + r.w, r.y + r.h))
                     draw.rectangle(bounding_box, outline=color, width=3)
                
                     # Return the confidence of the person detected
                     print(" {} (confidence: {:.2f}%)".format(detected_people.bounding_box, detected_people.confidence * 100))
                        
             # Save annotated image
             plt.imshow(image)
             plt.tight_layout(pad=0)
             outputfile = 'detected_people.jpg'
             fig.savefig(outputfile)
             print('  Results saved in', outputfile)
        
     else:
         error_details = sdk.ImageAnalysisErrorDetails.from_result(result)
         print(" Analysis failed.")
         print("   Error reason: {}".format(error_details.reason))
         print("   Error code: {}".format(error_details.error_code))
         print("   Error message: {}".format(error_details.message))
    
  5. Save your changes and return to the integrated terminal for the computer-vision folder, and enter the following command to run the program:

    C#

     dotnet run
    

    Python

     python detect-people.py
    
  6. Observe the output, which should list the bounding box and confidence level for each person detected in the image.
  7. View the detected_people.jpg file that is generated in the same folder as your code file to see the annotated results. In this case, your code has used the bounding box coordinates returned for each detected person to draw a rectangle around them on the image.
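
If you would also like the console output to show an explicit count of the people found, a small addition such as the following could be placed just before the code that saves the annotated image in the Python version. This is only a sketch; it reuses the result.people collection and the same 50% confidence threshold from the code above.

Python

     # Sketch: summarize detections using the same 50% threshold as above
     high_confidence = [p for p in result.people if p.confidence > 0.5]
     print('{} people detected with confidence > 0.5'.format(len(high_confidence)))
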

Prepare to use the Face SDK

While the Azure AI Vision service offers basic face detection (along with many other image analysis capabilities), the Face service provides more comprehensive functionality for facial analysis and recognition.

  1. In Visual Studio Code, in the Explorer pane, browse to the 04-face folder and expand the C-Sharp or Python folder depending on your language preference.
  2. Right-click the face-api folder and open an integrated terminal. Then install the Face SDK package by running the appropriate command for your language preference:

    C#

     dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.8.0-preview.3
    

    Python

     pip install azure-cognitiveservices-vision-face==0.6.0
    
  3. View the contents of the face-api folder, and note that it contains a file for configuration settings:
    • C#: appsettings.json
    • Python: .env
  4. Open the configuration file and update the configuration values it contains to reflect the endpoint and an authentication key for your Azure AI services resource. Save your changes.

  5. Note that the face-api folder contains a code file for the client application:

    • C#: Program.cs
    • Python: analyze-faces.py
  6. Open the code file and at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Face SDK:

    C#

     // Import namespaces
     using Microsoft.Azure.CognitiveServices.Vision.Face;
     using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
    

    Python

     # Import namespaces
     from azure.cognitiveservices.vision.face import FaceClient
     from azure.cognitiveservices.vision.face.models import FaceAttributeType
     from msrest.authentication import CognitiveServicesCredentials
    
  7. In the Main function, note that the code to load the configuration settings has been provided. Then find the comment Authenticate Face client. Then, under this comment, add the following language-specific code to create and authenticate a FaceClient object:

    C#

     // Authenticate Face client
     ApiKeyServiceClientCredentials credentials = new ApiKeyServiceClientCredentials(cogSvcKey);
     faceClient = new FaceClient(credentials)
     {
         Endpoint = cogSvcEndpoint
     };
    

    Python

     # Authenticate Face client
     credentials = CognitiveServicesCredentials(cog_key)
     face_client = FaceClient(cog_endpoint, credentials)
    
  8. In the Main function, under the code you just added, note that the code displays a menu that enables you to call functions in your code to explore the capabilities of the Face service. You will implement these functions in the remainder of this exercise.
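
For reference, the menu in the starter code is a simple console loop along the lines of the sketch below. The prompt wording, option numbers, and image path are placeholders, and the DetectFaces stub here stands in for the function you will implement in the next procedure.

Python

     # Sketch of the kind of console menu provided in the starter code.
     # The prompt text, option numbers, and image path are placeholders.
     import os

     def DetectFaces(image_file):
         # Placeholder - the real implementation is added in the next procedure
         print('Detecting faces in', image_file)

     command = input('1: Detect faces\nAny other key to quit\n\nEnter a number: ')
     if command == '1':
         DetectFaces(os.path.join('images', 'people.jpg'))
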

Detect and analyze faces

One of the most fundamental capabilities of the Face service is to detect faces in an image, and determine their attributes, such as head pose, blur, the presence of spectacles, and so on.

  1. In the code file for your application, in the Main function, examine the code that runs if the user selects menu option 1. This code calls the DetectFaces function, passing the path to an image file.
  2. Find the DetectFaces function in the code file, and under the comment Specify facial features to be retrieved, add the following code:

    C#

     // Specify facial features to be retrieved
     IList<FaceAttributeType> features = new FaceAttributeType[]
     {
         FaceAttributeType.Occlusion,
         FaceAttributeType.Blur,
         FaceAttributeType.Glasses
     };
    

    Python

     # Specify facial features to be retrieved
     features = [FaceAttributeType.occlusion,
                 FaceAttributeType.blur,
                 FaceAttributeType.glasses]
    
  3. In the DetectFaces function, under the code you just added, find the comment Get faces and add the following code:

C#

// Get faces
using (var imageData = File.OpenRead(imageFile))
{    
    var detected_faces = await faceClient.Face.DetectWithStreamAsync(imageData, returnFaceAttributes: features, returnFaceId: false);

    if (detected_faces.Count() > 0)
    {
        Console.WriteLine($"{detected_faces.Count()} faces detected.");

        // Prepare image for drawing
        Image image = Image.FromFile(imageFile);
        Graphics graphics = Graphics.FromImage(image);
        Pen pen = new Pen(Color.LightGreen, 3);
        Font font = new Font("Arial", 4);
        SolidBrush brush = new SolidBrush(Color.White);
        int faceCount=0;

        // Draw and annotate each face
        foreach (var face in detected_faces)
        {
            faceCount++;
            Console.WriteLine($"\nFace number {faceCount}");
            
            // Get face properties
            Console.WriteLine($" - Mouth Occluded: {face.FaceAttributes.Occlusion.MouthOccluded}");
            Console.WriteLine($" - Eye Occluded: {face.FaceAttributes.Occlusion.EyeOccluded}");
            Console.WriteLine($" - Blur: {face.FaceAttributes.Blur.BlurLevel}");
            Console.WriteLine($" - Glasses: {face.FaceAttributes.Glasses}");

            // Draw and annotate face
            var r = face.FaceRectangle;
            Rectangle rect = new Rectangle(r.Left, r.Top, r.Width, r.Height);
            graphics.DrawRectangle(pen, rect);
            string annotation = $"Face number {faceCount}";
            graphics.DrawString(annotation,font,brush,r.Left, r.Top);
        }

        // Save annotated image
        String output_file = "detected_faces.jpg";
        image.Save(output_file);
        Console.WriteLine(" Results saved in " + output_file);   
    }
}

Python

# Get faces
with open(image_file, mode="rb") as image_data:
    detected_faces = face_client.face.detect_with_stream(image=image_data,
                                                          return_face_attributes=features,
                                                          return_face_id=False)

    if len(detected_faces) > 0:
        print(len(detected_faces), 'faces detected.')

        # Prepare image for drawing
        fig = plt.figure(figsize=(8, 6))
        plt.axis('off')
        image = Image.open(image_file)
        draw = ImageDraw.Draw(image)
        color = 'lightgreen'
        face_count = 0

        # Draw and annotate each face
        for face in detected_faces:

            # Get face properties
            face_count += 1
            print('\nFace number {}'.format(face_count))

            detected_attributes = face.face_attributes.as_dict()
            if 'blur' in detected_attributes:
                print(' - Blur:')
                for blur_name in detected_attributes['blur']:
                    print('   - {}: {}'.format(blur_name, detected_attributes['blur'][blur_name]))
                    
            if 'occlusion' in detected_attributes:
                print(' - Occlusion:')
                for occlusion_name in detected_attributes['occlusion']:
                    print('   - {}: {}'.format(occlusion_name, detected_attributes['occlusion'][occlusion_name]))

            if 'glasses' in detected_attributes:
                print(' - Glasses:{}'.format(detected_attributes['glasses']))

            # Draw and annotate face
            r = face.face_rectangle
            bounding_box = ((r.left, r.top), (r.left + r.width, r.top + r.height))
            draw = ImageDraw.Draw(image)
            draw.rectangle(bounding_box, outline=color, width=5)
            annotation = 'Face number {}'.format(face_count)
            plt.annotate(annotation,(r.left, r.top), backgroundcolor=color)

        # Save annotated image
        plt.imshow(image)
        outputfile = 'detected_faces.jpg'
        fig.savefig(outputfile)

        print('\nResults saved in', outputfile)

  4. Examine the code you added to the DetectFaces function. It analyzes an image file and detects any faces it contains, including attributes for occlusion, blur, and the presence of spectacles. The details of each face are displayed in the console, and the location of each face is indicated on the image with a labeled bounding box.
  5. Save your changes and return to the integrated terminal for the face-api folder, and enter the following command to run the program:

    C#

     dotnet run
    

     The C# output may display warnings about asynchronous functions not using the await operator. You can ignore these.

    Python

     python analyze-faces.py
    
  6. When prompted, enter 1 and observe the output, which should include the attributes of each face detected.
  7. View the detected_faces.jpg file that is generated in the same folder as your code file to see the annotated faces.
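
As an aside, if the image you want to analyze is hosted at a URL rather than stored locally, the Face SDK also provides a detect_with_url method that accepts the same attribute parameters. The following is a minimal sketch; the URL is a placeholder, and face_client and features are the objects created in the steps above.

Python

     # Sketch: detect faces in a remote image by URL (placeholder URL).
     # Assumes the face_client and features objects created earlier.
     detected_faces = face_client.face.detect_with_url(
         url='https://example.com/people.jpg',  # placeholder image URL
         return_face_attributes=features,
         return_face_id=False)
     print(len(detected_faces), 'faces detected.')
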

More information

There are several additional features available within the Face service, but in line with the Responsible AI Standard they are restricted behind a Limited Access policy. These features include identifying, verifying, and creating facial recognition models. To learn more and apply for access, see Limited Access for Azure AI Services.

For more information about using the Azure AI Vision service for face detection, see the Azure AI Vision documentation.

To learn more about the Face service, see the Face documentation.