
Integrating Kendo UI with Azure Face API

This tutorial demonstrates how to configure the Kendo UI Upload for Angular to send images directly to the Azure Face API and then consume the received data from the service.

The Kendo UI components support cognitive service integration—for example, Azure Face API. The cloud-based Face API provides access to a set of advanced face algorithms which enable the detection and identification of faces in images.

Prerequisites

To follow this tutorial, you need an active Azure subscription and a Face API subscription key, which is sent with each request through the Ocp-Apim-Subscription-Key header.

Sending Files to Face API with the Upload

The following example demonstrates how to send files to the Azure Face API by using the built-in functionality of the Upload together with a custom HttpInterceptor which transforms the selected image files to the ArrayBuffer that the Face API expects. With this approach, the interceptor reads the selected file with FileReader and then sends the buffer data to the cloud service. In turn, the Face API analyzes the image, detects the available faces, and returns data in JSON format.
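Stripped of the Angular specifics, the request that the interceptor ultimately issues is a plain HTTP POST of raw image bytes, with the subscription key in a header and the detection parameters in the query string. The following minimal sketch assembles such a request; `buildDetectRequest` is a hypothetical helper, and the endpoint region and key are placeholders:

```typescript
// Build the URL and headers for a Face API /detect call.
// The endpoint region and the subscription key are placeholders.
const uriBase = 'https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect';

function buildDetectRequest(subscriptionKey: string, params: Record<string, string>) {
    // Serialize the detection parameters into a query string.
    const query = Object.keys(params)
        .map(key => `${key}=${encodeURIComponent(params[key])}`)
        .join('&');

    return {
        method: 'POST',
        url: `${uriBase}?${query}`,
        headers: {
            'Content-Type': 'application/octet-stream',   // raw image bytes in the body
            'Ocp-Apim-Subscription-Key': subscriptionKey  // Face API credential
        }
    };
}

const req = buildDetectRequest('XXXX', { returnFaceId: 'true', returnFaceLandmarks: 'false' });
console.log(req.url);
// https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false
```

The body itself (the ArrayBuffer with the image bytes) is attached separately, which is exactly what the interceptor in step 2 does.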

  1. Create the HTML markup for initializing the Upload component, an image tag for displaying the uploaded image, and an area that will be used to display the received data from the Face API.

    The following example demonstrates the code for the Upload component.

     import { Component, EventEmitter, Output } from '@angular/core';

     @Component({
         selector: 'my-upload',
         template: `
         <kendo-upload #upload
             [withCredentials]="false"
             [saveUrl]="uploadSaveUrl"
             [removeUrl]="uploadRemoveUrl"
             [showFileList]="false"
             (success)="onSuccess($event)">
         </kendo-upload>
         `
     })
     export class UploadComponent {
         @Output() public imageReceived: EventEmitter<any> = new EventEmitter<any>();
         @Output() public responseReceived: EventEmitter<any> = new EventEmitter<any>();

         // This request will be intercepted by the HttpInterceptor for further processing.
         public uploadSaveUrl = 'saveUrl';
         // Inapplicable in this scenario as images are not actually removed from the Face API endpoint.
         public uploadRemoveUrl = 'removeUrl';

         public onSuccess(e) {
             // Pass the Face API response data to the host component.
             this.responseReceived.emit(e.response.body);

             // Read the uploaded image as a data URL so that it can be previewed.
             const fileData = e.files[0].rawFile;
             const fileReader = new FileReader();

             fileReader.onload = (event) => {
                 this.imageReceived.emit(event.target.result);
             };

             fileReader.readAsDataURL(fileData);
         }
     }

    The following example demonstrates the markup for the host component.

     <h1 class="text-center text-primary">Integrating Kendo UI with Azure Face API</h1>
    <my-upload
       (imageReceived)="onImageReceived($event)"
       (responseReceived)="onResponseReceived($event)"></my-upload>
    <br />
    <div class="main-content row">
       <div class="col-xs-12 col-md-6">
           <h4 class="text-center">Image Preview:</h4>
           <div>
               <img [src]="image" width="300" />
           </div>
       </div>
       <div class="col-xs-12 col-md-6">
            <h4 class="text-center">Face API Response Data:</h4>
           <div style="width: 300px; height: 300px;">
           <pre>{{ faceAPIResponse | json }}</pre>
           </div>
       </div>
    </div>
  2. Create a custom HttpInterceptor which intercepts the request, performs the necessary authentication steps for the Face API service, transforms the selected file to an ArrayBuffer, and passes the Face API response back to the Upload component.

     // Required imports for the interceptor (RxJS 6+ pipeable operators).
     import { Injectable } from '@angular/core';
     import { HttpEvent, HttpHandler, HttpHeaders, HttpInterceptor, HttpRequest } from '@angular/common/http';
     import { Observable } from 'rxjs';
     import { map, switchMap } from 'rxjs/operators';

     // Create an ArrayBuffer from the file that is being uploaded.
     const read = (req: HttpRequest<any>): Observable<ArrayBuffer> => {
         const file = req.body.get('files') as File;
         const reader = new FileReader();

         const obs = new Observable<ArrayBuffer>(observer => {
             reader.onload = () => {
                 observer.next(reader.result as ArrayBuffer);
                 observer.complete();
             };
         });

         reader.readAsArrayBuffer(file);
         return obs;
     };

     @Injectable()
     export class UploadInterceptor implements HttpInterceptor {
         private uriBase = "https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect";
         private params = {
             "returnFaceId": "true",
             "returnFaceLandmarks": "false",
             "returnFaceAttributes": "age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise"
         };
         private headers = new HttpHeaders({
             "Content-Type": "application/octet-stream",
             "Ocp-Apim-Subscription-Key": "XXXX" // replace with your Face API subscription key
         });

         intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
             if (req.url === 'saveUrl') {
                 return read(req).pipe(
                     // Switch the endpoint to the actual Face API endpoint, pass the
                     // detection parameters, and add the authentication headers.
                     map(file =>
                         req.clone({
                             headers: this.headers,
                             body: file,
                             url: this.uriBase,
                             setParams: this.params
                         })
                     ),
                     switchMap(x => next.handle(x))
                 );
             }

             return next.handle(req);
         }
     }
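The interceptor takes effect only after it is registered with Angular's dependency injection through the HTTP_INTERCEPTORS multi-provider token. The following sketch shows a minimal root module wiring; the file names in the relative import paths are assumptions:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { UploadModule } from '@progress/kendo-angular-upload';

// The file names below are assumptions for this tutorial's components.
import { AppComponent } from './app.component';
import { UploadComponent } from './upload.component';
import { UploadInterceptor } from './upload.interceptor';

@NgModule({
    declarations: [AppComponent, UploadComponent],
    imports: [BrowserModule, HttpClientModule, UploadModule],
    providers: [
        // multi: true adds UploadInterceptor to the interceptor chain
        // instead of replacing any existing interceptors.
        { provide: HTTP_INTERCEPTORS, useClass: UploadInterceptor, multi: true }
    ],
    bootstrap: [AppComponent]
})
export class AppModule { }
```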
  3. Consume the response, and display the uploaded image and the data that the Face API returns on the client.

     import { Component } from '@angular/core';

     @Component({
         selector: 'my-app',
         template: `
         <h1 class="text-center text-primary">Integrating Kendo UI with Azure Face API</h1>
         <my-upload
             (imageReceived)="onImageReceived($event)"
             (responseReceived)="onResponseReceived($event)"></my-upload>
         <br />
         <div class="main-content row">
             <div class="col-xs-12 col-md-6">
                 <h4 class="text-center">Image Preview:</h4>
                 <div>
                     <img [src]="image" width="300" />
                 </div>
             </div>
             <div class="col-xs-12 col-md-6">
                 <h4 class="text-center">Face API Response Data:</h4>
                 <div style="width: 300px; height: 300px;">
                     <pre>{{ faceAPIResponse | json }}</pre>
                 </div>
             </div>
         </div>
         `
     })
     export class AppComponent {
         public image = "XXX"; // initial image placeholder (optional)
         public faceAPIResponse;

         public onImageReceived(e) {
             this.image = e;
         }

         public onResponseReceived(e) {
             this.faceAPIResponse = e;
         }
     }

Breakdown of the Sample Implementation

  1. Set up the custom HttpInterceptor in such a way that the request can be captured and customized.
  2. Replace the URL of the endpoint with the actual Face API URL, add credentials to the headers, and (optionally) customize the Face API parameters.
  3. Create an ArrayBuffer from the selected file and send it to the remote endpoint in the body of the POST request.
  4. The data that the Face API returns depends on the parameters initially passed to the service. The service returns the coordinates, gender, age, emotion, and other face attributes of the recognized faces.
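For reference, the detect endpoint responds with a JSON array containing one entry per detected face. The following sketch models a simplified subset of that shape in TypeScript and reads a few attributes from a hypothetical single-face response; the exact attribute set depends on the returnFaceAttributes parameter:

```typescript
// Simplified subset of a Face API /detect response entry.
interface FaceRectangle {
    top: number;
    left: number;
    width: number;
    height: number;
}

interface DetectedFace {
    faceId: string;
    faceRectangle: FaceRectangle;
    faceAttributes: {
        age: number;
        gender: string;
        smile: number;    // 0..1 confidence
        glasses: string;
    };
}

// A hypothetical response for an image with a single detected face.
const response: DetectedFace[] = [{
    faceId: 'c5c24a82-6845-4031-9d5d-978df9175426',
    faceRectangle: { top: 131, left: 177, width: 162, height: 162 },
    faceAttributes: { age: 27, gender: 'female', smile: 0.75, glasses: 'NoGlasses' }
}];

// Summarize the first detected face.
const [face] = response;
const summary = `${face.faceAttributes.gender}, ~${face.faceAttributes.age} years, ` +
    `smile ${Math.round(face.faceAttributes.smile * 100)}%`;
console.log(summary); // "female, ~27 years, smile 75%"
```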