
Create a web app to show the age estimation from the detected human faces

Summary

The IBM Model Asset Exchange (MAX) gives application developers without data science experience easy access to prebuilt machine learning models. This code pattern shows how to create a simple web application that asks the user for permission to access the webcam and then visualizes the output of the Facial Age Estimator MAX model. Specifically, the web app uses the webcam to stream video to the MAX model, receives the estimated ages and bounding boxes, and displays the results in the web UI.
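
As a concrete illustration, a single webcam frame can be posted to the model's REST endpoint and the predictions read back. The following is a minimal sketch, assuming the model container runs locally on port 5000 and exposes the standard MAX `POST /model/predict` route accepting a multipart `image` field; the response field names (`age_estimation`, `detection_box`) are assumptions about the model's output format.

```python
# Minimal sketch: capture one webcam frame and send it to the model's REST API.
# Assumes a local model container at port 5000 and the usual MAX predict route.
import cv2
import requests

MODEL_URL = "http://localhost:5000/model/predict"  # assumed local deployment

cap = cv2.VideoCapture(0)          # open the default webcam
ok, frame = cap.read()             # grab a single frame
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the webcam")

# Encode the frame as JPEG so it can be sent as a file upload
ok, jpeg = cv2.imencode(".jpg", frame)
response = requests.post(
    MODEL_URL,
    files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
)
response.raise_for_status()

# Field names below are assumptions about the prediction payload
for prediction in response.json().get("predictions", []):
    print("Estimated age:", prediction.get("age_estimation"))
    print("Bounding box:", prediction.get("detection_box"))
```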

Description

A person's age often provides useful information for applications such as surveillance or product recommendations. Common consumer devices such as mobile phones and webcams already produce the necessary visual data (images and video). Given faces in that visual data, the Facial Age Estimator model predicts the age of each detected face. The predicted ages can then feed into further processing, for example grouping people into age ranges for statistics or for targeting different activities.

In this code pattern, we use one of the models from the Model Asset Exchange, an exchange where developers can find and experiment with open source deep learning models. Specifically, we use the Facial Age Estimator to create a web application that detects human faces and outputs the estimated age and bounding box for each detected face. The web application provides a user-friendly interface backed by a lightweight Python server. The server takes webcam images from the UI and sends them to the model's REST endpoint, which is set up using the Docker image provided on MAX. The web UI then displays the estimated age and the associated bounding box for each person.
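
A lightweight server of this kind can be sketched in a few lines. The example below is illustrative only, assuming Flask for the server and a model container reachable at `http://localhost:5000`; the `/predict` route and the `image` form field are hypothetical names for this sketch.

```python
# Minimal sketch of a lightweight Python server that relays webcam frames
# from the web UI to the model's REST endpoint. Flask and the route/field
# names are assumptions for illustration.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_URL = "http://localhost:5000/model/predict"  # assumed model endpoint

@app.route("/predict", methods=["POST"])
def predict():
    # The web UI is assumed to POST the current webcam frame as a file upload.
    frame = request.files["image"]
    model_response = requests.post(
        MODEL_URL,
        files={"image": (frame.filename, frame.read(), frame.mimetype)},
    )
    # Relay the model's ages and bounding boxes back to the web UI as JSON.
    return jsonify(model_response.json()), model_response.status_code

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The server simply forwards the frame to the model and returns the model's JSON unchanged, so the web UI can read the ages and bounding boxes directly from the proxied response.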

When you have completed this code pattern, you will understand how to:

  • Build a Docker image of the Facial Age Estimator MAX model
  • Deploy a deep learning model with a REST endpoint (a quick connectivity check is sketched after this list)
  • Generate the estimated ages for an image using the MAX model’s REST API
  • Run a web application that uses the model’s REST API
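
Before running the web app, it can help to confirm that the deployed model container is reachable. The snippet below is a minimal sketch, assuming a local deployment on port 5000 and the usual MAX `GET /model/metadata` route; adjust the base URL for an IBM Cloud or Kubernetes deployment.

```python
# Minimal sketch: verify the model's REST endpoint is up before starting
# the web app. The base URL and metadata route are assumptions.
import requests

MODEL_BASE_URL = "http://localhost:5000"  # assumed local Docker deployment

try:
    response = requests.get(f"{MODEL_BASE_URL}/model/metadata", timeout=5)
    response.raise_for_status()
    print("Model endpoint is up:", response.json())
except requests.RequestException as err:
    print("Model endpoint is not reachable yet:", err)
```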

Flow


  1. The server sends the video captured from the webcam, frame by frame, to the model API.
  2. The web UI requests the age and bounding box data for the frames from the server.
  3. The server receives data from the model's API and updates the web UI with the result (rendering the result is sketched after this list).
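
Rendering the result is the last step of this flow. The sketch below shows one way to draw the returned bounding boxes and ages onto a frame with OpenCV, assuming the response contains a `detection_box` as pixel coordinates `[x_min, y_min, x_max, y_max]` and an `age_estimation` value per face; both field names and the box format are assumptions about the model's output.

```python
# Minimal sketch: draw each predicted bounding box and age onto a frame.
# Field names and box format are assumptions about the model response.
import cv2

def annotate(frame, predictions):
    """Draw each bounding box and estimated age onto the frame."""
    for pred in predictions:
        x_min, y_min, x_max, y_max = [int(v) for v in pred["detection_box"]]
        age = int(pred["age_estimation"])
        cv2.rectangle(frame, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
        cv2.putText(frame, f"Age: {age}", (x_min, y_min - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```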

Instructions

Get the detailed instructions in the README file. These steps will show you how to:

  1. Deploy to IBM Cloud.
  2. Deploy on Kubernetes.
  3. Run locally.