How I created SmileToUnlock with StencilJS - Part 2/2

In Part 1 of the article, I talked about Web Components and how to build them using StencilJS.

In Part 2 I explain how I created Smile to Unlock using HTML5 APIs and the Azure Emotive API.

First I am going to explain how to grab a picture of the user with the help of HTML5 APIs. Then I will use that picture with Azure Emotive APIs to calculate the happiness. To conclude, I will show you how to emit an event from the Web Component to let the consumer know that the user has smiled.


All the source code for this article can be found here:

How to capture a picture from the user’s camera?

smile to unlock flash

You can use the built-in camera API to just grab a picture of the user. For a more dramatic effect, I wanted to show the user what the camera saw via a live feed, give them some time to get ready, and then take a snapshot after three seconds.

In order to do that, the following steps need to be completed:

  1. Start the camera and stream the output to a video tag in your page.

  2. Capture a frame from the video as a PNG file.

To begin with, the template for our component needs to have a video tag like this:

<video id="video">Video stream not available.</video>


Note the video id on the tag. In our component code we then grab a reference to this element via that id, with perhaps some code like this:

this.video = this.el.querySelector("#video") as HTMLVideoElement;

Now that we have a reference to the video element, we can start the camera and feed the output to this video element on the page:

navigator.mediaDevices
	.getUserMedia({ video: true }) (1)
	.then(stream => {
		this.video.src = window.URL.createObjectURL(stream); (2)
		this.video.play(); (3)
	});
1 We request the camera feed.
2 We convert the returned stream into a URL which we can assign to the src property of the video element.
3 We start the camera.

The above code simply starts showing the live feed of the camera in the video tag. Next, we want to grab a still shot of the camera feed. The method I used was to take a still image from the video, draw it into a canvas element and then hide the video element.
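To give the user those three seconds to get ready before the snapshot, a small countdown helper can be used. This is a minimal sketch, not the SmileToUnlock source: the helper name, the tickMs parameter, and the wiring comments are all illustrative.

```typescript
// Hypothetical countdown helper: ticks down once per interval,
// reporting the remaining seconds, and resolves when it reaches zero.
function countdown(
  seconds: number,
  onTick: (remaining: number) => void,
  tickMs: number = 1000
): Promise<void> {
  return new Promise(resolve => {
    let remaining = seconds;
    const timer = setInterval(() => {
      remaining -= 1;
      onTick(remaining);
      if (remaining <= 0) {
        clearInterval(timer);
        resolve();
      }
    }, tickMs);
  });
}

// Inside the component it could be wired up roughly like this
// (takeSnapshot is a hypothetical method doing the canvas copy below):
// await countdown(3, n => (this.secondsLeft = n));
// this.takeSnapshot();
```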

First thing to do is to add the canvas element to our template:

<canvas id="picture">&nbsp;</canvas>

Like before, we grab a reference to this canvas and store it on our component:

this.canvas = this.el.querySelector("canvas") as HTMLCanvasElement;

Then with a few lines of code we can grab a still shot from the video stream and store it in the canvas, like this:

let context = this.canvas.getContext("2d"), (1)
    width = this.video.videoWidth,
    height = this.video.videoHeight;

if (width && height) {
	this.canvas.width = width; (2)
	this.canvas.height = height;

	context.drawImage(this.video, 0, 0, width, height); (3)
}
1 Get the drawing context of the canvas and the width/height of the video.
2 Set the width and height of the canvas the same as the video.
3 On the canvas make a copy of the current frame in the video.

How to detect a smile?

Detecting a smile might sound like the most complex part of this whole component but, in fact, it turned out to be the simplest. I used the Azure Emotive API.

This is part of a collection of APIs on the Azure platform called Cognitive Services: a collection of Artificial Intelligence APIs which either have high-level machine learning built in or are based on pre-trained models. The Emotive APIs are the latter. They are based on a pre-trained model which, when passed an image, returns an array of faces with some data about the detected emotions on those faces.

    "faceRectangle": {
      "top": 114,
      "left": 212,
      "width": 65,
      "height": 65
    "scores": {
      "anger": 1.0570484e-8,
      "contempt": 1.52679547e-9,
      "disgust": 1.60232943e-7,
      "fear": 6.00660363e-12,
      "happiness": 0.9999998,
      "neutral": 9.449728e-9,
      "sadness": 1.23025981e-8,
      "surprise": 9.91396e-10

One of the things I love about the Cognitive Services APIs is that you don’t need a deep understanding of AI to use them. Pretty much all you need to do in order to use them is make a request to a URL.

Request an access key

The easiest way to get started is to grab a trial key from the Cognitive Services page. It provides you with free access for 30 days and requires minimal setup. Even after the free trial period, the API is free for the first 30K calls (yes, I said 30K!).

(1) Head over to and click Get Api Key for the Emotion API.

cog serv step 1

(2) Accept any terms

cog serv step 2

(3) Create an account or login

cog serv step 3

(4) That’s it! Make a note of the keys and the endpoint URL; you will need them later.

cog serv step 4

Call the API

To call the API, we make a POST request to the API endpoint given to us when we went through registration. For me the endpoint was:

The API has a few parameters. You can pass in the URL of an image to examine, or pass in the image as a BLOB in the API call. We use the latter method. Since the picture is already in a canvas element, it’s actually really easy to get the BLOB: we just use the toBlob function on the canvas, like this:

  getImageBlob(): Promise<Blob> { (1)
    return new Promise(resolve => {
      this.canvas.toBlob(blob => resolve(blob)); (2)
    });
  }
1 This is a helper function on our component, which returns a Promise, which resolves to a BLOB.
2 The toBlob function has a callback which we use to resolve the Promise and pass back the BLOB.

We then use this BLOB to call the Emotive API:

let blob = await this.getImageBlob(); (1)
let response = await fetch(this.apiUrl, {
	headers: {
		"Ocp-Apim-Subscription-Key": this.apiKey, (2)
		"Content-Type": "application/octet-stream" (3)
	},
	method: "POST",
	body: blob (4)
});
let faces = await response.json(); (5)
1 We grab the blob from the canvas here.
2 To authenticate, we pass the API our application key, which we got when we registered with Cognitive Services.
3 We need to specify octet-stream, as we are passing in a binary blob.
4 We pass in the blob as the body of the message.
5 If the call succeeds, we then get the response as JSON.
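Since the headers and method never change, the request options are simple enough to pull out into a small helper. This sketch is my own refactoring, not from the source; the function name is hypothetical, and apiKey is whatever subscription key you noted down during registration.

```typescript
// Assemble the fetch options for an Emotive API call.
// The key goes in the Ocp-Apim-Subscription-Key header, and the
// content type must be octet-stream because the body is a binary blob.
function buildEmotiveRequest(apiKey: string, body: unknown) {
  return {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": apiKey,
      "Content-Type": "application/octet-stream"
    },
    body
  };
}

// e.g. fetch(this.apiUrl, buildEmotiveRequest(this.apiKey, blob) as RequestInit)
```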

faces now contains an array with emotional scores for each face that was detected; the scores for a face look like this:

	"anger": 1.0570484e-8,
	"contempt": 1.52679547e-9,
	"disgust": 1.60232943e-7,
	"fear": 6.00660363e-12,
	"happiness": 0.9999998,
	"neutral": 9.449728e-9,
	"sadness": 1.23025981e-8,
	"surprise": 9.91396e-10

I assumed there would only be one user in the picture, so I just grab the happiness score from the first face:

if (faces.length > 0) {
    let face = faces[0];
    this.happiness = face.scores.happiness;
}
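If you wanted to handle more than one person in the shot, one option would be to take the highest happiness score across all detected faces. A sketch of that variation, with a minimal type matching the response shape above (the interface and function names are mine, not from the source):

```typescript
// Minimal shape of a detected face, matching the Emotive API response.
interface Face {
  scores: { happiness: number };
}

// Return the highest happiness score across all detected faces,
// or 0 if no faces were found.
function maxHappiness(faces: Face[]): number {
  return faces.reduce((max, face) => Math.max(max, face.scores.happiness), 0);
}
```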

If the user is happy enough, then we can send out an event, like this:

unlockContent() {
    this.userSmiled.emit({ score: this.happiness });
}
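What counts as "happy enough" is a judgement call. A simple sketch of the check; note that the 0.4 cut-off and the helper name are assumptions for illustration, not values from the source:

```typescript
// The Emotive API returns happiness as a score in [0, 1].
const SMILE_THRESHOLD = 0.4; // assumed cut-off, tune to taste

// Hypothetical check used to decide whether to fire the event.
function isHappyEnough(happiness: number, threshold: number = SMILE_THRESHOLD): boolean {
  return happiness > threshold;
}

// e.g. if (isHappyEnough(this.happiness)) { this.unlockContent(); }
```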

The consumer of the web component then uses this event to unlock the content:

var locker = document.querySelector('smile-to-unlock');

locker.addEventListener("userSmiled", function (ev) {

	// Hide the hider so we show the content

	// End the locker so the camera is shutdown


In this tutorial I explained only the core concepts that are required to get a grasp of the code, which can be found here: If you want to find out more, have a read through the source code.

A few things to note: as of writing, StencilJS is still in development and functionality is being added at a furious rate. You should expect good things to come from that project in the future, but don’t expect this tutorial to stay up to date!

The Cognitive Services APIs from Azure are super easy to get started with: just create an account and you are ready to go. The Emotive APIs are certainly a lot of fun to play around with, but there are loads more. Go take a look and be inspired to create something. If you do, let me know; I’d love to share it out!