Unverified Commit 5a066131 by justadudewhohacks Committed by GitHub

Merge pull request #109 from justadudewhohacks/v-0_15

v0.15
parents 2145e28f 2d92c084
@@ -5,3 +5,4 @@ tmp
proto
weights_uncompressed
weights_unused
docs
\ No newline at end of file
@@ -8,3 +8,4 @@ weights_uncompressed
weights_unused
test
tools
docs
\ No newline at end of file
@@ -4,57 +4,69 @@
**JavaScript API for face detection and face recognition in the browser implemented on top of the tensorflow.js core API ([tensorflow/tfjs-core](https://github.com/tensorflow/tfjs-core))**
Table of Contents:
* **[Resources](#resources)**
* **[Live Demos](#live-demos)**
* **[Tutorials](#tutorials)**
* **[Examples](#examples)**
* **[Running the Examples](#running-the-examples)**
* **[Available Models](#models)**
* **[Face Detection Models](#models-face-detection)**
* **[68 Point Face Landmark Detection Models](#models-face-landmark-detection)**
* **[Face Recognition Model](#models-face-recognition)**
* **[Usage](#usage)**
* **[Loading the Models](#usage-loading-models)**
* **[High Level API](#usage-high-level-api)**
* **[Displaying Detection Results](#usage-displaying-detection-results)**
* **[Face Detection Options](#usage-face-detection-options)**
* **[Utility Classes](#usage-utility-classes)**
* **[Other Useful Utility](#other-useful-utility)**
<a name="resources"></a>
# Resources
<a name="live-demos"></a>
## Live Demos
**[Check out the live demos!](https://justadudewhohacks.github.io/face-api.js/)**
<a name="tutorials"></a>
## Tutorials
Check out my face-api.js tutorials:
* **[face-api.js — JavaScript API for Face Recognition in the Browser with tensorflow.js](https://itnext.io/face-api-js-javascript-api-for-face-recognition-in-the-browser-with-tensorflow-js-bcc2a6c4cf07)**
* **[Realtime JavaScript Face Tracking and Face Recognition using face-api.js’ MTCNN Face Detector](https://itnext.io/realtime-javascript-face-tracking-and-face-recognition-using-face-api-js-mtcnn-face-detector-d924dd8b5740)**
<a name="examples"></a>
# Examples
## Face Recognition
![preview_face-detection-and-recognition](https://user-images.githubusercontent.com/31125521/41526995-1a90e4e6-72e6-11e8-96d4-8b2ccdee5f79.gif)
![preview_face-recognition_gif](https://user-images.githubusercontent.com/31125521/40313021-c3afdfec-5d14-11e8-86df-cf89a00668e2.gif)
![face-recognition-preview](https://user-images.githubusercontent.com/31125521/47384002-41e36f80-d706-11e8-8cd9-b3102c1bee67.png)
## Face Similarity
![preview_face-similarity](https://user-images.githubusercontent.com/31125521/40316573-0a1190c0-5d1f-11e8-8797-f6deaa344523.gif)
## Face Landmark Detection
![face_landmarks_boxes_1](https://user-images.githubusercontent.com/31125521/46063403-fff9f480-c16c-11e8-900f-e4b7a3828d1d.jpg)
![face_landmarks_boxes_2](https://user-images.githubusercontent.com/31125521/46063404-00928b00-c16d-11e8-8f29-e9c50afd2bc8.jpg)
![preview_face_landmarks](https://user-images.githubusercontent.com/31125521/41507950-e121b05e-723c-11e8-89f2-d8f9348a8e86.png)
## Realtime Face Tracking
![preview_video-facedetection](https://user-images.githubusercontent.com/31125521/41238649-bbf10046-6d96-11e8-9041-1de46c6adccd.jpg)
![output](https://user-images.githubusercontent.com/31125521/47383860-ea450400-d705-11e8-9880-d5d15d952661.gif)
## MTCNN
![mtcnn-preview](https://user-images.githubusercontent.com/31125521/42756818-0a41edaa-88fe-11e8-9033-8cd141b0fa09.gif)
@@ -63,115 +75,92 @@ Table of Contents:
## Running the Examples
``` bash
git clone https://github.com/justadudewhohacks/face-api.js.git
cd face-api.js/examples
npm i
npm start
```
Browse to http://localhost:3000/.
<a name="about-the-package"></a>
<a name="models"></a>
## About the Package
# Available Models
<a name="about-face-detection-ssd"></a>
<a name="models-face-detection"></a>
### Face Detection - SSD Mobilenet v1
## Face Detection Models
### SSD Mobilenet V1
For face detection, this project implements an SSD (Single Shot Multibox Detector) based on MobileNetV1. The neural net computes the locations of each face in an image and returns the bounding boxes together with a probability for each face. This face detector aims at high accuracy in detecting face bounding boxes rather than low inference time. The size of the quantized model is about 5.4 MB (**ssd_mobilenetv1_model**).
The face detection model has been trained on the [WIDERFACE dataset](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/) and the weights are provided by [yeephycho](https://github.com/yeephycho) in [this](https://github.com/yeephycho/tensorflow-face-detection) repo.
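As a minimal sketch, running this detector through the high level API described below (assuming the model files are served under /models and **input** is one of the media elements shown in the Usage section):
``` javascript
// load the SSD Mobilenet V1 model once, then detect all faces in the input
// with a custom minimum confidence (see Face Detection Options below)
await faceapi.loadSsdMobilenetv1Model('/models')
const detections = await faceapi.detectAllFaces(input, new faceapi.SsdMobilenetv1Options({ minConfidence: 0.8 }))
```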
<a name="about-face-detection-yolo"></a>
### Tiny Face Detector
### Face Detection - Tiny Yolo v2
The Tiny Face Detector is a very performant, realtime face detector, which is much faster, smaller and less resource consuming compared to the SSD Mobilenet V1 face detector, in return it performs slightly less well on detecting small faces. This model is extremely mobile and web friendly, thus it should be your GO-TO face detector on mobile devices and resource limited clients. The size of the quantized model is only 190 KB (**tiny_face_detector_model**).
The Tiny Yolo v2 implementation is a very performant face detector, which can easily adapt to different input image sizes, thus can be used as an alternative to SSD Mobilenet v1 to trade off accuracy for performance (inference time). In general the models ability to locate smaller face bounding boxes is not as accurate as SSD Mobilenet v1.
The face detector has been trained on a custom dataset of ~14K images labeled with bounding boxes. Furthermore the model has been trained to predict bounding boxes, which entirely cover facial feature points, thus it in general produces better results in combination with subsequent face landmark detection than SSD Mobilenet V1.
The face detector has been trained on a custom dataset of ~10K images labeled with bounding boxes and uses depthwise separable convolutions instead of regular convolutions, which ensures very fast inference and allows to have a quantized model size of only 1.7MB making the model extremely mobile and web friendly. Thus, the Tiny Yolo v2 face detector should be your GO-TO face detector on mobile devices.
This model is basically an even tinier version of Tiny Yolo V2, replacing the regular convolutions of Yolo with depthwise separable convolutions. Yolo is fully convolutional, thus can easily adapt to different input image sizes to trade off accuracy for performance (inference time).
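As a sketch, a typical realtime setup trades accuracy for speed via a small input size (assuming **videoEl** is an HTML video element streaming the webcam; the option parameters are described under Face Detection Options below):
``` javascript
// a small inputSize (must be divisible by 32) keeps inference fast enough
// for realtime face tracking
await faceapi.loadTinyFaceDetectorModel('/models')
const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 160, scoreThreshold: 0.5 })
const detections = await faceapi.detectAllFaces(videoEl, options)
```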
<a name="about-face-detection-mtcnn"></a>
### MTCNN
### Face Detection & 5 Point Face Landmarks - MTCNN
**Note, this model is mostly kept in this repo for experimental reasons. In general the other face detectors should perform better, but of course you are free to play around with MTCNN.**
MTCNN (Multi-task Cascaded Convolutional Neural Networks) represents an alternative face detector to SSD Mobilenet v1 and Tiny Yolo v2, which offers much more room for configuration. By tuning the input parameters, MTCNN should be able to detect a wide range of face bounding box sizes. MTCNN is a 3 stage cascaded CNN, which simultaneously returns 5 face landmark points along with the bounding boxes and scores for each face. Additionally the model size is only 2MB.
MTCNN has been presented in the paper [Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks](https://kpzhang93.github.io/MTCNN_face_detection_alignment/paper/spl.pdf) by Zhang et al. and the model weights are provided in the official [repo](https://github.com/kpzhang93/MTCNN_face_detection_alignment) of the MTCNN implementation.
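As a sketch, the minimum face size is the main lever for realtime use: limiting it skips the smaller scales of the image pyramid, which considerably reduces inference time (see Face Detection Options below for all parameters):
``` javascript
// only search for faces of at least 200 x 200 px, e.g. for webcam input
await faceapi.loadMtcnnModel('/models')
const results = await faceapi.detectAllFaces(input, new faceapi.MtcnnOptions({ minFaceSize: 200 }))
```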
<a name="about-face-recognition"></a>
### Face Recognition
<a name="models-face-landmark-detection"></a>
For face recognition, a ResNet-34 like architecture is implemented to compute a face descriptor (a feature vector with 128 values) from any given face image, which is used to describe the characteristics of a persons face. The model is **not** limited to the set of faces used for training, meaning you can use it for face recognition of any person, for example yourself. You can determine the similarity of two arbitrary faces by comparing their face descriptors, for example by computing the euclidean distance or using any other classifier of your choice.
The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognition.js](https://github.com/justadudewhohacks/face-recognition.js) and the net used in the [dlib](https://github.com/davisking/dlib/blob/master/examples/dnn_face_recognition_ex.cpp) face recognition example. The weights have been trained by [davisking](https://github.com/davisking) and the model achieves a prediction accuracy of 99.38% on the LFW (Labeled Faces in the Wild) benchmark for face recognition.
## 68 Point Face Landmark Detection Models
<a name="about-face-landmark-detection"></a>
This package implements a very lightweight and fast, yet accurate 68 point face landmark detector. The default model has a size of only 350kb (**face_landmark_68_model**) and the tiny model is only 80kb (**face_landmark_68_tiny_model**). Both models employ the ideas of depthwise separable convolutions as well as densely connected blocks. The models have been trained on a dataset of ~35k face images labeled with 68 face landmark points.
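For instance, a minimal sketch of opting into the tiny model (assuming a face detection model is already loaded, see Usage below):
``` javascript
// load the 80kb tiny landmark model and request it via the useTinyModel flag
await faceapi.loadFaceLandmarkTinyModel('/models')
const useTinyModel = true
const results = await faceapi.detectAllFaces(input).withFaceLandmarks(useTinyModel)
```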
<a name="models-face-recognition"></a>
## Face Recognition Model
For face recognition, a ResNet-34 like architecture is implemented to compute a face descriptor (a feature vector with 128 values) from any given face image, which is used to describe the characteristics of a person's face. The model is **not** limited to the set of faces used for training, meaning you can use it for face recognition of any person, for example yourself. You can determine the similarity of two arbitrary faces by comparing their face descriptors, for example by computing the euclidean distance or using any other classifier of your choice.
The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognition.js](https://github.com/justadudewhohacks/face-recognition.js) and the net used in the [dlib](https://github.com/davisking/dlib/blob/master/examples/dnn_face_recognition_ex.cpp) face recognition example. The weights have been trained by [davisking](https://github.com/davisking) and the model achieves a prediction accuracy of 99.38% on the LFW (Labeled Faces in the Wild) benchmark for face recognition.
The size of the quantized model is roughly 6.2 MB (**face_recognition_model**).
<a name="usage"></a>
# Usage
Get the latest build from dist/face-api.js or dist/face-api.min.js and include the script:
``` html
<script src="face-api.js"></script>
```
Or install the package:
``` bash
npm i face-api.js
```
<a name="usage-load-models"></a>
<a name="usage-loading-models"></a>
### Loading the Models
## Loading the Models
To load a model, you have to provide the corresponding manifest.json file as well as the model weight files (shards) as assets. Simply copy them to your public or assets folder. The manifest.json and shard files of a model have to be located in the same directory / be accessible under the same route.
Assuming the models reside in **public/models**:
``` javascript
await faceapi.loadSsdMobilenetv1Model('/models')
// accordingly for the other models:
// await faceapi.loadTinyFaceDetectorModel('/models')
// await faceapi.loadMtcnnModel('/models')
// await faceapi.loadFaceLandmarkModel('/models')
// await faceapi.loadFaceLandmarkTinyModel('/models')
// await faceapi.loadFaceRecognitionModel('/models')
```
Alternatively, you can also create instances of the neural nets:
``` javascript
const net = new faceapi.SsdMobilenetv1()
// accordingly for the other models:
// const net = new faceapi.FaceLandmark68Net()
// const net = new faceapi.FaceLandmark68TinyNet()
// const net = new faceapi.FaceRecognitionNet()
// const net = new faceapi.Mtcnn()
// const net = new faceapi.TinyYolov2()
await net.load('/models')
```
Using instances, you can also load the weights as a Float32Array (in case you want to use the uncompressed models):
``` javascript
// using fetch
net.load(await faceapi.fetchNetWeights('/models/face_detection_model.weights'))
// using axios
const res = await axios.get('/models/face_detection_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
net.load(weights)
```
<a name="usage-face-detection-ssd"></a>
## High Level API
### Face Detection - SSD Mobilenet v1
In the following **input** can be an HTML img, video or canvas element or the id of that element.
Detect faces and get the bounding boxes and scores:
``` html
<img id="myImg" src="images/example.png" />
<video id="myVideo" src="media/example.mp4" />
<canvas id="myCanvas" />
```
``` javascript
const input = document.getElementById('myImg')
// const input = document.getElementById('myVideo')
// const input = document.getElementById('myCanvas')
// or simply:
// const input = 'myImg'
```
### Detecting Faces
Detect all faces in an image. Returns **Array<[FaceDetection](#interface-face-detection)>**:
``` javascript
const detections = await faceapi.detectAllFaces(input)
```
Detect the face with the highest confidence score in an image. Returns **[FaceDetection](#interface-face-detection) | undefined**:
``` javascript
const detection = await faceapi.detectSingleFace(input)
```
By default **detectAllFaces** and **detectSingleFace** utilize the SSD Mobilenet V1 Face Detector. You can specify the face detector by passing the corresponding options object:
``` javascript
const detections1 = await faceapi.detectAllFaces(input, new SsdMobilenetv1Options())
const detections2 = await faceapi.detectAllFaces(input, new TinyFaceDetectorOptions())
const detections3 = await faceapi.detectAllFaces(input, new MtcnnOptions())
```
You can tune the options of each face detector as shown [here](#usage-face-detection-options).
### Detecting 68 Face Landmark Points
**After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**
Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks)>**:
``` javascript
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
```
Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks for that face. Returns **[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks) | undefined**:
``` javascript
const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
```
You can also specify to use the tiny model instead of the default model:
``` javascript
const useTinyModel = true
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks(useTinyModel)
```
### Computing Face Descriptors
**After face detection and facial landmark prediction the face descriptors for each face can be computed as follows:**
Detect all faces in an image + compute 68 Point Face Landmarks and face descriptor for each detected face. Returns **Array<[FullFaceDescription](#interface-full-face-description)>**:
``` javascript
const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
```
Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks and face descriptor for that face. Returns **[FullFaceDescription](#interface-full-face-description) | undefined**:
``` javascript
const fullFaceDescription = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
```
### Face Recognition by Matching Descriptors
To perform face recognition, one can use faceapi.FaceMatcher to compare reference face descriptors to query face descriptors.
First, we initialize the FaceMatcher with the reference data, for example we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to faces of subsequent images:
``` javascript
const fullFaceDescriptions = await faceapi
.detectAllFaces(referenceImage)
.withFaceLandmarks()
.withFaceDescriptors()
if (!fullFaceDescriptions.length) {
return
}
// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
const faceMatcher = new faceapi.FaceMatcher(fullFaceDescriptions)
```
Now we can recognize a person's face shown in **queryImage1**:
``` javascript
const singleFullFaceDescription = await faceapi
.detectSingleFace(queryImage1)
.withFaceLandmarks()
.withFaceDescriptor()
if (singleFullFaceDescription) {
const bestMatch = faceMatcher.findBestMatch(singleFullFaceDescription.descriptor)
console.log(bestMatch.toString())
}
```
Or we can recognize all faces shown in **queryImage2**:
``` javascript
const fullFaceDescriptions = await faceapi
.detectAllFaces(queryImage2)
.withFaceLandmarks()
.withFaceDescriptors()
fullFaceDescriptions.forEach(fd => {
const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
console.log(bestMatch.toString())
})
```
You can also create labeled reference descriptors as follows:
``` javascript
const labeledDescriptors = [
  new faceapi.LabeledFaceDescriptors(
    'obama',
    [descriptorObama1, descriptorObama2]
  ),
  new faceapi.LabeledFaceDescriptors(
    'trump',
    [descriptorTrump]
  )
]
const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors)
```
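A query face whose descriptor is too far from every reference descriptor is reported as unknown. If I read the API correctly, this maximum descriptor distance defaults to 0.6 and can be overridden via a second constructor argument; treat the exact signature as an assumption:
``` javascript
// assumption: FaceMatcher accepts the maximum descriptor distance as 2nd argument;
// query descriptors beyond this distance are labeled 'unknown'
const maxDescriptorDistance = 0.5
const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors, maxDescriptorDistance)
```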
<a name="usage-displaying-detection-results"></a>
## Displaying Detection Results
Drawing the detected faces into a canvas:
``` javascript
const detections = await faceapi.detectAllFaces(input)
// resize the detected boxes in case your displayed image has a different size than the original
const detectionsForSize = detections.map(det => det.forSize(input.width, input.height))
// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
canvas.height = input.height
faceapi.drawDetection(canvas, detectionsForSize, { withScore: true })
```
Drawing face landmarks into a canvas:
``` javascript
const detectionsWithLandmarks = await faceapi
.detectAllFaces(input)
.withFaceLandmarks()
// resize the detected boxes and landmarks in case your displayed image has a different size than the original
const detectionsWithLandmarksForSize = detectionsWithLandmarks.map(det => det.forSize(input.width, input.height))
// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
canvas.height = input.height
faceapi.drawLandmarks(canvas, detectionsWithLandmarks, { drawLines: true })
```
<a name="usage-face-detection-yolo"></a>
Finally you can also draw boxes with custom text:
### Face Detection - Tiny Yolo v2
``` javascript
const boxesWithText = [
new faceapi.BoxWithText(new faceapi.Rect(x, y, width, height), text))
new faceapi.BoxWithText(new faceapi.Rect(0, 0, 50, 50), 'some text'))
]
Detect faces and get the bounding boxes and scores:
const canvas = document.getElementById('overlay')
faceapi.drawDetection(canvas, boxesWithText)
```
<a name="usage-face-detection-options"></a>
## Face Detection Options
### SsdMobilenetv1Options
``` javascript
export interface ISsdMobilenetv1Options {
// minimum confidence threshold
// default: 0.5
minConfidence?: number
// maximum number of faces to return
// default: 100
maxResults?: number
}
// example
const options = new SsdMobilenetv1Options({ minConfidence: 0.8 })
```
<a name="usage-face-detection-mtcnn"></a>
### TinyFaceDetectorOptions
### Face Detection & 5 Point Face Landmarks - MTCNN
``` javascript
export interface ITinyFaceDetectorOptions {
// size at which image is processed, the smaller the faster,
// but less precise in detecting smaller faces, must be divisible
// by 32, common sizes are 128, 160, 224, 320, 416, 512, 608,
// for face tracking via webcam I would recommend using smaller sizes,
// e.g. 128, 160, for detecting smaller faces use larger sizes, e.g. 512, 608
// default: 416
inputSize?: number
// minimum confidence threshold
// default: 0.5
scoreThreshold?: number
}
// example
const options = new TinyFaceDetectorOptions({ inputSize: 320 })
```
### MtcnnOptions
``` javascript
export interface IMtcnnOptions {
// minimum face size to expect, the higher the faster processing will be,
// but smaller faces won't be detected
// default: 20
minFaceSize?: number
// the score threshold values used to filter the bounding
// boxes of stage 1, 2 and 3
// default: [0.6, 0.7, 0.7]
scoreThresholds?: number[]
// scale factor used to calculate the scale steps of the image
// pyramid used in stage 1
// default: 0.709
scaleFactor?: number
// number of scaled versions of the input image passed through the CNN
// of the first stage, lower numbers will result in lower inference time,
// but will also be less accurate
// default: 10
maxNumScales?: number
// instead of specifying scaleFactor and maxNumScales you can also
// set the scaleSteps manually
scaleSteps?: number[]
}
// example
const options = new MtcnnOptions({ minFaceSize: 100, scaleFactor: 0.8 })
```
<a name="usage-utility-classes"></a>
## Utility Classes
### IBox
``` javascript
export interface IBox {
x: number
y: number
width: number
height: number
}
```
<a name="interface-face-detection"></a>
### IFaceDetection
``` javascript
export interface IFaceDetection {
score: number
box: Box
}
```
<a name="interface-face-landmarks"></a>
### IFaceLandmarks
``` javascript
export interface IFaceLandmarks {
positions: Point[]
shift: Point
}
```
<a name="usage-face-recognition"></a>
### Face Recognition
<a name="interface-face-detection-with-landmarks"></a>
Compute and compare the descriptors of two face images:
### IFaceDetectionWithLandmarks
``` javascript
export interface IFaceDetectionWithLandmarks {
detection: FaceDetection
landmarks: FaceLandmarks
}
```
<a name="interface-full-face-description"></a>
### IFullFaceDescription
``` javascript
export interface IFullFaceDescription extends IFaceDetectionWithLandmarks {
descriptor: Float32Array
}
```
<a name="other-useful-utility"></a>
## Other Useful Utility
### Using the Low Level API
Instead of using the high level API, you can directly use the forward methods of each neural network:
``` javascript
const detections1 = await faceapi.ssdMobilenetv1(input, options)
const detections2 = await faceapi.tinyFaceDetector(input, options)
const detections3 = await faceapi.mtcnn(input, options)
const landmarks1 = await faceapi.detectFaceLandmarks(faceImage)
const landmarks2 = await faceapi.detectFaceLandmarksTiny(faceImage)
const descriptor = await faceapi.computeFaceDescriptor(alignedFaceImage)
```
<a name="usage-face-landmark-detection"></a>
All global neural network instances are exported via faceapi.nets:
### Face Landmark Detection
``` javascript
console.log(faceapi.nets)
```
### Extracting a Canvas for an Image Region
``` javascript
const regionsToExtract = [
new faceapi.Rect(0, 0, 100, 100)
]
// actually extractFaces is meant to extract face regions from bounding boxes
// but you can also use it to extract any other region
const canvases = await faceapi.extractFaces(input, regionsToExtract)
```
### Euclidean Distance
``` javascript
// meant to be used for computing the euclidean distance between two face descriptors
const dist = faceapi.euclideanDistance([0, 0], [0, 10])
console.log(dist) // 10
```
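In practice, the distance between two face descriptors serves as the match criterion; a threshold of roughly 0.6 is a common choice (a sketch, assuming descriptor1 and descriptor2 were computed as shown above):
``` javascript
const distance = faceapi.euclideanDistance(descriptor1, descriptor2)
if (distance < 0.6) {
  console.log('match')
} else {
  console.log('no match')
}
```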
### Retrieve the Face Landmark Points and Contours
``` javascript
const landmarkPositions = landmarks.getPositions()
// or get the positions of individual contours,
// only available for 68 point face landmarks (FaceLandmarks68)
const jawOutline = landmarks.getJawOutline()
const nose = landmarks.getNose()
const mouth = landmarks.getMouth()
const leftEyeBrow = landmarks.getLeftEyeBrow()
const rightEyeBrow = landmarks.getRightEyeBrow()
```
### Fetch and Display Images from an URL
``` html
<img id="myImg" src="">
```
<a name="shortcut-functions"></a>
``` javascript
const image = await faceapi.fetchImage('/images/example.png')
console.log(image instanceof HTMLImageElement) // true
// displaying the fetched image content
const myImg = document.getElementById('myImg')
myImg.src = image.src
```
### Fetching JSON
``` javascript
const json = await faceapi.fetchJson('/files/example.json')
```
### Creating an Image Picker
``` html
<img id="myImg" src="">
<input id="myFileUpload" type="file" onchange="uploadImage()" accept=".jpg, .jpeg, .png">
```
``` javascript
async function uploadImage() {
const imgFile = document.getElementById('myFileUpload').files[0]
// create an HTMLImageElement from a Blob
const img = await faceapi.bufferToImage(imgFile)
document.getElementById('myImg').src = img.src
}
```
### Creating a Canvas Element from an Image or Video Element
``` html
<img id="myImg" src="images/example.png" />
<video id="myVideo" src="media/example.mp4" />
```
``` javascript
const canvas1 = faceapi.createCanvasFromMedia(document.getElementById('myImg'))
const canvas2 = faceapi.createCanvasFromMedia(document.getElementById('myVideo'))
```
\ No newline at end of file
const classes = ['amy', 'bernadette', 'howard', 'leonard', 'penny', 'raj', 'sheldon', 'stuart']
function getImageUri(imageName) {
return `images/${imageName}`
}
function getFaceImageUri(className, idx) {
return `images/${className}/${className}${idx}.png`
}
async function fetchImage(uri) {
return (await fetch(uri)).blob()
}
async function requestExternalImage(imageUrl) {
const res = await fetch('fetch_external_image', {
method: 'post',
headers: {
'content-type': 'application/json'
},
body: JSON.stringify({ imageUrl })
})
if (!(res.status < 400)) {
console.error(res.status + ' : ' + await res.text())
throw new Error('failed to fetch image from url: ' + imageUrl)
}
let blob
try {
blob = await res.blob()
return await faceapi.bufferToImage(blob)
} catch (e) {
console.error('received blob:', blob)
console.error('error:', e)
throw new Error('failed to load image from url: ' + imageUrl)
}
}
// fetch first image of each class and compute their descriptors
async function initTrainDescriptorsByClass(net, numImagesForTraining = 1) {
const maxAvailableImagesPerClass = 5
numImagesForTraining = Math.min(numImagesForTraining, maxAvailableImagesPerClass)
return Promise.all(classes.map(
async className => {
const descriptors = []
for (let i = 1; i < (numImagesForTraining + 1); i++) {
const img = await faceapi.bufferToImage(
await fetchImage(getFaceImageUri(className, i))
)
descriptors.push(await net.computeFaceDescriptor(img))
}
return {
descriptors,
className
}
}
))
}
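// getBestMatch below implements a simple nearest-mean classifier: for each class
// it averages the euclidean distances between the query descriptor and that
// class's reference descriptors, then picks the class with the smallest mean.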
function getBestMatch(descriptorsByClass, queryDescriptor) {
function computeMeanDistance(descriptorsOfClass) {
return faceapi.round(
descriptorsOfClass
.map(d => faceapi.euclideanDistance(d, queryDescriptor))
.reduce((d1, d2) => d1 + d2, 0)
/ (descriptorsOfClass.length || 1)
)
}
return descriptorsByClass
.map(
({ descriptors, className }) => ({
distance: computeMeanDistance(descriptors),
className
})
)
.reduce((best, curr) => best.distance < curr.distance ? best : curr)
}
function renderNavBar(navbarId, exampleUri) {
const examples = [
{
uri: 'face_detection',
name: 'Face Detection'
},
{
uri: 'face_detection_video',
name: 'Face Detection Video'
},
{
uri: 'face_recognition',
name: 'Face Recognition'
},
{
uri: 'face_similarity',
name: 'Face Similarity'
},
{
uri: 'face_landmarks',
name: 'Face Landmarks'
},
{
uri: 'detect_and_draw_landmarks',
name: 'Detect and Draw Landmarks'
},
{
uri: 'detect_and_draw_faces',
name: 'Detect and Draw Faces'
},
{
uri: 'face_alignment',
name: 'Face Alignment'
},
{
uri: 'detect_and_recognize_faces',
name: 'Detect and Recognize Faces'
},
{
uri: 'mtcnn_face_detection',
name: 'MTCNN Face Detection'
},
{
uri: 'mtcnn_face_detection_video',
name: 'MTCNN Face Detection Video'
},
{
uri: 'mtcnn_face_detection_webcam',
name: 'MTCNN Face Detection Webcam'
},
{
uri: 'mtcnn_face_recognition',
name: 'MTCNN Face Recognition'
},
{
uri: 'mtcnn_face_recognition_webcam',
name: 'MTCNN Face Recognition Webcam'
},
{
uri: 'tiny_yolov2_face_detection',
name: 'Tiny Yolov2 Face Detection'
},
{
uri: 'tiny_yolov2_face_detection_video',
name: 'Tiny Yolov2 Face Detection Video'
},
{
uri: 'tiny_yolov2_face_detection_webcam',
name: 'Tiny Yolov2 Face Detection Webcam'
},
{
uri: 'tiny_yolov2_face_recognition',
name: 'Tiny Yolov2 Face Recognition'
},
{
uri: 'batch_face_landmarks',
name: 'Batch Face Landmarks'
},
{
uri: 'batch_face_recognition',
name: 'Batch Face Recognition'
}
]
const navbar = $(navbarId).get(0)
const pageContainer = $('.page-container').get(0)
const header = document.createElement('h3')
header.innerHTML = examples.find(ex => ex.uri === exampleUri).name
pageContainer.insertBefore(header, pageContainer.children[0])
const menuContent = document.createElement('ul')
menuContent.id = 'slide-out'
menuContent.classList.add('side-nav', 'fixed')
navbar.appendChild(menuContent)
const menuButton = document.createElement('a')
menuButton.href='#'
menuButton.classList.add('button-collapse', 'show-on-large')
menuButton.setAttribute('data-activates', 'slide-out')
const menuButtonIcon = document.createElement('img')
menuButtonIcon.src = 'menu_icon.png'
menuButton.appendChild(menuButtonIcon)
navbar.appendChild(menuButton)
const li = document.createElement('li')
const githubLink = document.createElement('a')
githubLink.classList.add('waves-effect', 'waves-light', 'side-by-side')
githubLink.id = 'github-link'
githubLink.href = 'https://github.com/justadudewhohacks/face-api.js'
const h5 = document.createElement('h5')
h5.innerHTML = 'face-api.js'
githubLink.appendChild(h5)
const githubLinkIcon = document.createElement('img')
githubLinkIcon.src = 'github_link_icon.png'
githubLink.appendChild(githubLinkIcon)
li.appendChild(githubLink)
menuContent.appendChild(li)
examples
.forEach(ex => {
const li = document.createElement('li')
if (ex.uri === exampleUri) {
li.style.background='#b0b0b0'
}
const a = document.createElement('a')
a.classList.add('waves-effect', 'waves-light')
a.href = ex.uri
const span = document.createElement('span')
span.innerHTML = ex.name
span.style.whiteSpace = 'nowrap'
a.appendChild(span)
li.appendChild(a)
menuContent.appendChild(li)
})
$('.button-collapse').sideNav({
menuWidth: 280
})
}
function renderSelectList(selectListId, onChange, initialValue, renderChildren) {
const select = document.createElement('select')
$(selectListId).get(0).appendChild(select)
renderChildren(select)
$(select).val(initialValue)
$(select).on('change', (e) => onChange(e.target.value))
$(select).material_select()
}
function renderOption(parent, text, value) {
const option = document.createElement('option')
option.innerHTML = text
option.value = value
parent.appendChild(option)
}
function renderFaceImageSelectList(selectListId, onChange, initialValue) {
const indices = [1, 2, 3, 4, 5]
function renderChildren(select) {
classes.forEach(className => {
const optgroup = document.createElement('optgroup')
optgroup.label = className
select.appendChild(optgroup)
indices.forEach(imageIdx =>
renderOption(
optgroup,
`${className} ${imageIdx}`,
getFaceImageUri(className, imageIdx)
)
)
})
}
renderSelectList(
selectListId,
onChange,
getFaceImageUri(initialValue.className, initialValue.imageIdx),
renderChildren
)
}
function renderImageSelectList(selectListId, onChange, initialValue) {
const images = [1, 2, 3, 4, 5].map(idx => `bbt${idx}.jpg`)
function renderChildren(select) {
images.forEach(imageName =>
renderOption(
select,
imageName,
getImageUri(imageName)
)
)
}
renderSelectList(
selectListId,
onChange,
getImageUri(initialValue),
renderChildren
)
}
\ No newline at end of file
const classes = ['amy', 'bernadette', 'howard', 'leonard', 'penny', 'raj', 'sheldon', 'stuart']
function getFaceImageUri(className, idx) {
return `images/${className}/${className}${idx}.png`
}
function renderFaceImageSelectList(selectListId, onChange, initialValue) {
const indices = [1, 2, 3, 4, 5]
function renderChildren(select) {
classes.forEach(className => {
const optgroup = document.createElement('optgroup')
optgroup.label = className
select.appendChild(optgroup)
indices.forEach(imageIdx =>
renderOption(
optgroup,
`${className} ${imageIdx}`,
getFaceImageUri(className, imageIdx)
)
)
})
}
renderSelectList(
selectListId,
onChange,
getFaceImageUri(initialValue.className, initialValue.imageIdx),
renderChildren
)
}
// fetch first image of each class and compute their descriptors
async function createBbtFaceMatcher(numImagesForTraining = 1) {
const maxAvailableImagesPerClass = 5
numImagesForTraining = Math.min(numImagesForTraining, maxAvailableImagesPerClass)
const labeledFaceDescriptors = await Promise.all(classes.map(
async className => {
const descriptors = []
for (let i = 1; i < (numImagesForTraining + 1); i++) {
const img = await faceapi.fetchImage(getFaceImageUri(className, i))
descriptors.push(await faceapi.computeFaceDescriptor(img))
}
return new faceapi.LabeledFaceDescriptors(
className,
descriptors
)
}
))
return new faceapi.FaceMatcher(labeledFaceDescriptors)
}
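// example usage (sketch): build the matcher from one reference image per class,
// then label query descriptors with it:
//   const matcher = await createBbtFaceMatcher(1)
//   console.log(matcher.findBestMatch(queryDescriptor).toString())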
\ No newline at end of file
function getImageUri(imageName) {
return `images/${imageName}`
}
async function requestExternalImage(imageUrl) {
const res = await fetch('fetch_external_image', {
method: 'post',
headers: {
'content-type': 'application/json'
},
body: JSON.stringify({ imageUrl })
})
if (!(res.status < 400)) {
console.error(res.status + ' : ' + await res.text())
throw new Error('failed to fetch image from url: ' + imageUrl)
}
let blob
try {
blob = await res.blob()
return await faceapi.bufferToImage(blob)
} catch (e) {
console.error('received blob:', blob)
console.error('error:', e)
throw new Error('failed to load image from url: ' + imageUrl)
}
}
function renderNavBar(navbarId, exampleUri) {
const examples = [
{
uri: 'face_and_landmark_detection',
name: 'Face And Landmark Detection'
},
{
uri: 'face_recognition',
name: 'Face Recognition'
},
{
uri: 'face_extraction',
name: 'Face Extraction'
},
{
uri: 'video_face_tracking',
name: 'Video Face Tracking'
},
{
uri: 'webcam_face_tracking',
name: 'Webcam Face Tracking'
},
{
uri: 'bbt_face_landmark_detection',
name: 'BBT Face Landmark Detection'
},
{
uri: 'bbt_face_similarity',
name: 'BBT Face Similarity'
},
{
uri: 'bbt_face_matching',
name: 'BBT Face Matching'
},
{
uri: 'bbt_face_recognition',
name: 'BBT Face Recognition'
},
{
uri: 'batch_face_landmarks',
name: 'Batch Face Landmark Detection'
},
{
uri: 'batch_face_recognition',
name: 'Batch Face Recognition'
}
]
const navbar = $(navbarId).get(0)
const pageContainer = $('.page-container').get(0)
const header = document.createElement('h3')
header.innerHTML = examples.find(ex => ex.uri === exampleUri).name
pageContainer.insertBefore(header, pageContainer.children[0])
const menuContent = document.createElement('ul')
menuContent.id = 'slide-out'
menuContent.classList.add('side-nav', 'fixed')
navbar.appendChild(menuContent)
const menuButton = document.createElement('a')
menuButton.href='#'
menuButton.classList.add('button-collapse', 'show-on-large')
menuButton.setAttribute('data-activates', 'slide-out')
const menuButtonIcon = document.createElement('img')
menuButtonIcon.src = 'menu_icon.png'
menuButton.appendChild(menuButtonIcon)
navbar.appendChild(menuButton)
const li = document.createElement('li')
const githubLink = document.createElement('a')
githubLink.classList.add('waves-effect', 'waves-light', 'side-by-side')
githubLink.id = 'github-link'
githubLink.href = 'https://github.com/justadudewhohacks/face-api.js'
const h5 = document.createElement('h5')
h5.innerHTML = 'face-api.js'
githubLink.appendChild(h5)
const githubLinkIcon = document.createElement('img')
githubLinkIcon.src = 'github_link_icon.png'
githubLink.appendChild(githubLinkIcon)
li.appendChild(githubLink)
menuContent.appendChild(li)
examples
.forEach(ex => {
const li = document.createElement('li')
if (ex.uri === exampleUri) {
li.style.background='#b0b0b0'
}
const a = document.createElement('a')
a.classList.add('waves-effect', 'waves-light')
a.href = ex.uri
const span = document.createElement('span')
span.innerHTML = ex.name
span.style.whiteSpace = 'nowrap'
a.appendChild(span)
li.appendChild(a)
menuContent.appendChild(li)
})
$('.button-collapse').sideNav({
menuWidth: 240
})
}
function renderSelectList(selectListId, onChange, initialValue, renderChildren) {
const select = document.createElement('select')
$(selectListId).get(0).appendChild(select)
renderChildren(select)
$(select).val(initialValue)
$(select).on('change', (e) => onChange(e.target.value))
$(select).material_select()
}
function renderOption(parent, text, value) {
const option = document.createElement('option')
option.innerHTML = text
option.value = value
parent.appendChild(option)
}
\ No newline at end of file
function resizeCanvasAndResults(dimensions, canvas, results) {
const { width, height } = dimensions instanceof HTMLVideoElement
? faceapi.getMediaDimensions(dimensions)
: dimensions
canvas.width = width
canvas.height = height
// resize detections (and landmarks) in case displayed image is smaller than
// original size
return results.map(res => res.forSize(width, height))
}
function drawDetections(dimensions, canvas, detections) {
const resizedDetections = resizeCanvasAndResults(dimensions, canvas, detections)
faceapi.drawDetection(canvas, resizedDetections)
}
function drawLandmarks(dimensions, canvas, results, withBoxes = true) {
const resizedResults = resizeCanvasAndResults(dimensions, canvas, results)
if (withBoxes) {
faceapi.drawDetection(canvas, resizedResults.map(det => det.detection))
}
const faceLandmarks = resizedResults.map(det => det.landmarks)
const drawLandmarksOptions = {
lineWidth: 2,
drawLines: true,
color: 'green'
}
faceapi.drawLandmarks(canvas, faceLandmarks, drawLandmarksOptions)
}
\ No newline at end of file
const SSD_MOBILENETV1 = 'ssd_mobilenetv1'
const TINY_FACE_DETECTOR = 'tiny_face_detector'
const MTCNN = 'mtcnn'
let selectedFaceDetector = SSD_MOBILENETV1
// ssd_mobilenetv1 options
let minConfidence = 0.5
// tiny_face_detector options
let inputSize = 512
let scoreThreshold = 0.5
// mtcnn options
let minFaceSize = 20
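// builds the options object for whichever face detector is currently selected,
// forwarding the control values defined above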
function getFaceDetectorOptions() {
return selectedFaceDetector === SSD_MOBILENETV1
? new faceapi.SsdMobilenetv1Options({ minConfidence })
: (
selectedFaceDetector === TINY_FACE_DETECTOR
? new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThreshold })
: new faceapi.MtcnnOptions({ minFaceSize })
)
}
function onIncreaseMinConfidence() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseMinConfidence() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onInputSizeChanged(e) {
changeInputSize(e.target.value)
updateResults()
}
function changeInputSize(size) {
inputSize = parseInt(size)
const inputSizeSelect = $('#inputSize')
inputSizeSelect.val(inputSize)
inputSizeSelect.material_select()
}
function onIncreaseScoreThreshold() {
scoreThreshold = Math.min(faceapi.round(scoreThreshold + 0.1), 1.0)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onDecreaseScoreThreshold() {
scoreThreshold = Math.max(faceapi.round(scoreThreshold - 0.1), 0.1)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onIncreaseMinFaceSize() {
minFaceSize = Math.min(faceapi.round(minFaceSize + 20), 300)
$('#minFaceSize').val(minFaceSize)
}
function onDecreaseMinFaceSize() {
minFaceSize = Math.max(faceapi.round(minFaceSize - 20), 50)
$('#minFaceSize').val(minFaceSize)
}
function getCurrentFaceDetectionNet() {
if (selectedFaceDetector === SSD_MOBILENETV1) {
return faceapi.nets.ssdMobilenetv1
}
if (selectedFaceDetector === TINY_FACE_DETECTOR) {
return faceapi.nets.tinyFaceDetector
}
if (selectedFaceDetector === MTCNN) {
return faceapi.nets.mtcnn
}
}
function isFaceDetectionModelLoaded() {
return !!getCurrentFaceDetectionNet().params
}
async function changeFaceDetector(detector) {
['#ssd_mobilenetv1_controls', '#tiny_face_detector_controls', '#mtcnn_controls']
.forEach(id => $(id).hide())
selectedFaceDetector = detector
const faceDetectorSelect = $('#selectFaceDetector')
faceDetectorSelect.val(detector)
faceDetectorSelect.material_select()
$('#loader').show()
if (!isFaceDetectionModelLoaded()) {
await getCurrentFaceDetectionNet().load('/')
}
$(`#${detector}_controls`).show()
$('#loader').hide()
}
async function onSelectedFaceDetectorChanged(e) {
selectedFaceDetector = e.target.value
await changeFaceDetector(e.target.value)
updateResults()
}
function initFaceDetectionControls() {
const faceDetectorSelect = $('#selectFaceDetector')
faceDetectorSelect.val(selectedFaceDetector)
faceDetectorSelect.on('change', onSelectedFaceDetectorChanged)
faceDetectorSelect.material_select()
const inputSizeSelect = $('#inputSize')
inputSizeSelect.val(inputSize)
inputSizeSelect.on('change', onInputSizeChanged)
inputSizeSelect.material_select()
}
\ No newline at end of file
async function onSelectedImageChanged(uri) {
const img = await faceapi.fetchImage(uri)
$(`#inputImg`).get(0).src = img.src
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
function renderImageSelectList(selectListId, onChange, initialValue) {
const images = [1, 2, 3, 4, 5].map(idx => `bbt${idx}.jpg`)
function renderChildren(select) {
images.forEach(imageName =>
renderOption(
select,
imageName,
getImageUri(imageName)
)
)
}
renderSelectList(
selectListId,
onChange,
getImageUri(initialValue),
renderChildren
)
}
function initImageSelectionControls() {
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectedImageChanged(uri)
},
'bbt1.jpg'
)
onSelectedImageChanged($('#selectList select').val())
}
\ No newline at end of file
@@ -3,7 +3,7 @@
right: 0;
margin: auto;
margin-top: 20px;
padding-left: 260px;
display: inline-flex !important;
}
@@ -61,13 +61,7 @@
margin-bottom: 10px;
}
#overlay, .overlay {
position: absolute;
top: 0;
left: 0;
......
@@ -11,28 +11,20 @@ const viewsDir = path.join(__dirname, 'views')
app.use(express.static(viewsDir))
app.use(express.static(path.join(__dirname, './public')))
app.use(express.static(path.join(__dirname, '../weights')))
app.use(express.static(path.join(__dirname, '../weights_uncompressed')))
app.use(express.static(path.join(__dirname, '../dist')))
app.use(express.static(path.join(__dirname, './node_modules/axios/dist')))
app.get('/', (req, res) => res.redirect('/face_and_landmark_detection'))
app.get('/face_and_landmark_detection', (req, res) => res.sendFile(path.join(viewsDir, 'faceAndLandmarkDetection.html')))
app.get('/face_extraction', (req, res) => res.sendFile(path.join(viewsDir, 'faceExtraction.html')))
app.get('/face_recognition', (req, res) => res.sendFile(path.join(viewsDir, 'faceRecognition.html')))
app.get('/video_face_tracking', (req, res) => res.sendFile(path.join(viewsDir, 'videoFaceTracking.html')))
app.get('/webcam_face_tracking', (req, res) => res.sendFile(path.join(viewsDir, 'webcamFaceTracking.html')))
app.get('/bbt_face_landmark_detection', (req, res) => res.sendFile(path.join(viewsDir, 'bbtFaceLandmarkDetection.html')))
app.get('/bbt_face_similarity', (req, res) => res.sendFile(path.join(viewsDir, 'bbtFaceSimilarity.html')))
app.get('/bbt_face_matching', (req, res) => res.sendFile(path.join(viewsDir, 'bbtFaceMatching.html')))
app.get('/bbt_face_recognition', (req, res) => res.sendFile(path.join(viewsDir, 'bbtFaceRecognition.html')))
app.get('/batch_face_landmarks', (req, res) => res.sendFile(path.join(viewsDir, 'batchFaceLandmarks.html')))
app.get('/batch_face_recognition', (req, res) => res.sendFile(path.join(viewsDir, 'batchFaceRecognition.html')))
......
@@ -2,7 +2,8 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
@@ -97,7 +98,7 @@
.reduce((flat, arr) => flat.concat(arr))
images = await Promise.all(allImgUris.map(
async uri => faceapi.fetchImage(uri)
))
// warmup
await measureTimings()
......
@@ -2,7 +2,8 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
@@ -28,7 +29,7 @@
<div class="row side-by-side">
<div>
<label for="numImages">Num Images:</label>
<input id="numImages" type="text" class="bold" value="32"/>
<input id="numImages" type="text" class="bold" value="16"/>
</div>
<button
class="waves-effect waves-light btn"
@@ -47,13 +48,12 @@
<script>
let images = []
let faceMatcher = null
let numImages = 16
let maxDistance = 0.6
function onNumImagesChanged(e) {
const val = parseInt(e.target.value) || 16
numImages = Math.min(Math.max(val, 0), 32)
e.target.value = numImages
}
@@ -67,15 +67,12 @@
const canvas = faceapi.createCanvasFromMedia(img)
$('#faceContainer').append(canvas)
const x = 20, y = canvas.height - 20
faceapi.drawText(
canvas.getContext('2d'),
x,
y,
faceMatcher.findBestMatch(descriptor).toString(),
Object.assign(faceapi.getDefaultDrawOptions(), { color: 'red', fontSize: 16 })
)
}
@@ -105,7 +102,7 @@
async function run() {
await faceapi.loadFaceRecognitionModel('/')
faceMatcher = await createBbtFaceMatcher(1)
$('#loader').hide()
const imgUris = classes
@@ -114,7 +111,7 @@
.reduce((flat, arr) => flat.concat(arr))
images = await Promise.all(imgUris.map(
async uri => faceapi.fetchImage(uri)
))
// warmup
......
......@@ -2,7 +2,8 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
......@@ -46,8 +47,7 @@
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
currentImg = await faceapi.bufferToImage(imgBuf)
currentImg = await faceapi.fetchImage(uri)
landmarks = await faceapi.detectLandmarks(currentImg)
redraw()
}
......@@ -59,7 +59,7 @@
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_landmarks')
renderNavBar('#navbar', 'bbt_face_landmark_detection')
renderFaceImageSelectList(
'#selectList',
onSelectionChanged,
......
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="js/commons.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div>
<div class="row center-content" id="loader">
<input disabled value="" id="status" type="text" class="bold">
<div class="progress">
<div class="indeterminate"></div>
</div>
</div>
<div class="row center-content">
<img id="face" src=""/>
</div>
<div class="row">
<label for="prediction">Prediction:</label>
<input disabled value="-" id="prediction" type="text" class="bold">
</div>
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
<div class="row">
<button
class="waves-effect waves-light btn"
id="stop"
onclick="onToggleStop()"
>
Stop
</button>
<button
class="waves-effect waves-light btn"
onclick="onSlower()"
>
<i class="material-icons left">-</i> Slower
</button>
<button
class="waves-effect waves-light btn"
onclick="onFaster()"
>
<i class="material-icons left">+</i> Faster
</button>
</div>
<div class="row">
<label for="interval">Interval:</label>
<input disabled value="2000" id="interval" type="text" class="bold">
</div>
</div>
</div>
<script>
let interval = 2000
let isStop = false
let faceMatcher = null
let currImageIdx = 2, currClassIdx = 0
let to = null
function onSlower() {
interval = Math.min(interval + 100, 2000)
$('#interval').val(interval)
}
function onFaster() {
interval = Math.max(interval - 100, 0)
$('#interval').val(interval)
}
function onToggleStop() {
clearTimeout(to)
isStop = !isStop
document.getElementById('stop').innerHTML = isStop ? 'Continue' : 'Stop'
setStatusText(isStop ? 'stopped' : 'running face recognition:')
if (!isStop) {
runFaceRecognition()
}
}
function setStatusText(text) {
$('#status').val(text)
}
function displayTimeStats(timeInMs) {
$('#time').val(`${timeInMs} ms`)
$('#fps').val(`${faceapi.round(1000 / timeInMs)}`)
}
function displayImage(src) {
getImg().src = src
}
async function runFaceRecognition() {
async function next() {
const input = await faceapi.fetchImage(getFaceImageUri(classes[currClassIdx], currImageIdx))
const imgEl = $('#face').get(0)
imgEl.src = input.src
const ts = Date.now()
const descriptor = await faceapi.computeFaceDescriptor(input)
displayTimeStats(Date.now() - ts)
const bestMatch = faceMatcher.findBestMatch(descriptor)
$('#prediction').val(bestMatch.toString())
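// the lines below cycle round-robin through the classes and, once per full
// round, advance the image index, which wraps back to 2 after reaching 5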
currImageIdx = currClassIdx === (classes.length - 1)
? currImageIdx + 1
: currImageIdx
currClassIdx = (currClassIdx + 1) % classes.length
currImageIdx = (currImageIdx % 6) || 2
to = setTimeout(next, interval)
}
await next()
}
async function run() {
try {
setStatusText('loading model file...')
await faceapi.loadFaceRecognitionModel('/')
setStatusText('computing initial descriptors...')
faceMatcher = await createBbtFaceMatcher(1)
$('#loader').hide()
runFaceRecognition()
} catch (err) {
console.error(err)
}
}
$(document).ready(function() {
renderNavBar('#navbar', 'bbt_face_matching')
run()
})
</script>
</body>
</html>
\ No newline at end of file
......@@ -2,7 +2,11 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/drawing.js"></script>
<script src="js/faceDetectionControls.js"></script>
<script src="js/imageSelectionControls.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
......@@ -11,6 +15,7 @@
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
......@@ -18,7 +23,22 @@
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field" style="margin-right: 20px;">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- image_selection_control -->
<div id="image_selection_control"></div>
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
......@@ -30,166 +50,145 @@
>
Ok
</button>
<div id="image_selection_control"></div>
<!-- image_selection_control -->
</div>
<div class="row">
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="40" id="minFaceSize" type="text" class="bold">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn button-sm"
onclick="onDecreaseMinConfidence()"
class="waves-effect waves-light btn"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn button-sm"
onclick="onIncreaseMinConfidence()"
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="maxDistance">Max Descriptor Distance:</label>
<input disabled value="0.6" id="maxDistance" type="text" class="bold">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn button-sm"
onclick="onDecreaseMaxDistance()"
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn button-sm"
onclick="onIncreaseMaxDistance()"
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
</div>
</span>
<!-- mtcnn_controls -->
<script>
let maxDistance = 0.6
let minConfidence = 0.7
let minFaceSize = 40
let trainDescriptorsByClass = []
function onIncreaseMinFaceSize() {
minFaceSize = Math.min(faceapi.round(minFaceSize + 20), 200)
$('#minFaceSize').val(minFaceSize)
}
</body>
function onDecreaseMinFaceSize() {
minFaceSize = Math.max(faceapi.round(minFaceSize - 20), 20)
$('#minFaceSize').val(minFaceSize)
}
function onIncreaseMinConfidence() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
<script>
let faceMatcher = null
function onDecreaseMinConfidence() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
async function updateResults() {
if (!isFaceDetectionModelLoaded()) {
return
}
function onIncreaseMaxDistance() {
maxDistance = Math.min(faceapi.round(maxDistance + 0.1), 1.0)
$('#maxDistance').val(maxDistance)
updateResults()
}
const inputImgEl = $('#inputImg').get(0)
function onDecreaseMaxDistance() {
maxDistance = Math.max(faceapi.round(maxDistance - 0.1), 0.1)
$('#maxDistance').val(maxDistance)
updateResults()
}
const options = getFaceDetectorOptions()
const results = await faceapi
.detectAllFaces(inputImgEl, options)
.withFaceLandmarks()
.withFaceDescriptors()
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
drawFaceRecognitionResults(results)
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
function drawFaceRecognitionResults(results) {
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const mtcnnParams = {
minFaceSize
}
const fullFaceDescriptions = (await faceapi.allFacesMtcnn(inputImgEl, mtcnnParams))
.map(fd => fd.forSize(width, height))
fullFaceDescriptions.forEach(({ detection, landmarks, descriptor }) => {
faceapi.drawDetection('overlay', [detection], { withScore: false })
faceapi.drawLandmarks('overlay', landmarks, { lineWidth: 4, color: 'red' })
const bestMatch = getBestMatch(trainDescriptorsByClass, descriptor)
const text = `${bestMatch.distance < maxDistance ? bestMatch.className : 'unknown'} (${bestMatch.distance})`
const { x, y, height: boxHeight } = detection.getBox()
faceapi.drawText(
canvas.getContext('2d'),
x,
y + boxHeight,
text,
Object.assign(faceapi.getDefaultDrawOptions(), { color: 'red', fontSize: 16 })
// resize detection and landmarks in case displayed image is smaller than
// original size
const resizedResults = resizeCanvasAndResults($('#inputImg').get(0), canvas, results)
const boxesWithText = resizedResults.map(({ detection, descriptor }) =>
new faceapi.BoxWithText(
detection.box,
faceMatcher.findBestMatch(descriptor).toString()
)
})
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
)
faceapi.drawDetection(canvas, boxesWithText)
}
async function run() {
await faceapi.loadMtcnnModel('/')
// load face detection, face landmark model and face recognition models
await changeFaceDetector(selectedFaceDetector)
await faceapi.loadFaceLandmarkModel('/')
await faceapi.loadFaceRecognitionModel('/')
trainDescriptorsByClass = await initTrainDescriptorsByClass(faceapi.recognitionNet, 1)
// initialize face matcher with 1 reference descriptor per bbt character
faceMatcher = await createBbtFaceMatcher(1)
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
// start processing image
updateResults()
}
$(document).ready(function() {
renderNavBar('#navbar', 'mtcnn_face_recognition')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
renderNavBar('#navbar', 'bbt_face_recognition')
initImageSelectionControls()
initFaceDetectionControls()
run()
})
</script>
......
......@@ -2,7 +2,8 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/bbt.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
......@@ -52,8 +53,7 @@
}
async function onSelectionChanged(which, uri) {
const imgBuf = await fetchImage(uri)
const input = await faceapi.bufferToImage(imgBuf)
const input = await faceapi.fetchImage(uri)
const imgEl = $(`#face${which}`).get(0)
imgEl.src = input.src
descriptors[`desc${which}`] = await faceapi.computeFaceDescriptor(input)
......@@ -68,7 +68,7 @@
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_similarity')
renderNavBar('#navbar', 'bbt_face_similarity')
renderFaceImageSelectList(
'#selectList1',
async (uri) => {
......
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div id="facesContainer"></div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let minConfidence = 0.7
function onIncreaseThreshold() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseThreshold() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const input = await faceapi.toNetInput(inputImgEl)
const detections = await faceapi.locateFaces(input, minConfidence)
faceapi.drawDetection('overlay', detections.map(det => det.forSize(width, height)))
const faceImages = await faceapi.extractFaces(inputImgEl, detections)
$('#facesContainer').empty()
faceImages.forEach(canvas => $('#facesContainer').append(canvas))
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadFaceDetectionModel('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'detect_and_draw_faces')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let minConfidence = 0.7
let drawLines = true
function onIncreaseMinConfidence() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseMinConfidence() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const input = await faceapi.toNetInput(inputImgEl)
const locations = await faceapi.locateFaces(input, minConfidence)
const faces = await faceapi.extractFaces(input, locations)
let landmarksByFace = await Promise.all(faces.map(face => faceapi.detectLandmarks(face)))
// shift and scale the face landmarks to the face image position in the canvas
landmarksByFace = landmarksByFace.map((landmarks, i) => {
const box = locations[i].forSize(width, height).getBox()
return landmarks.forSize(box.width, box.height).shift(box.x, box.y)
})
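// the landmarks were detected on each cropped face canvas in face-local
// coordinates, so they are rescaled to the face box size and shifted by the
// box offset to line up with the full image on the overlay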
faceapi.drawLandmarks(canvas, landmarksByFace, { lineWidth: drawLines ? 2 : 4, drawLines, color: 'red' })
faceapi.drawDetection('overlay', locations.map(det => det.forSize(width, height)))
}
async function run() {
await faceapi.loadFaceDetectionModel('/')
await faceapi.loadFaceLandmarkModel('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'detect_and_draw_landmarks')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
<p>
<input type="checkbox" id="useBatchProcessing" onchange="onChangeUseBatchProcessing(event)" />
<label for="useBatchProcessing">Use Batch Processing</label>
</p>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn button-sm"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn button-sm"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
<div class="row">
<label for="maxDistance">Max Descriptor Distance:</label>
<input disabled value="0.6" id="maxDistance" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn button-sm"
onclick="onDecreaseMaxDistance()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn button-sm"
onclick="onIncreaseMaxDistance()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let maxDistance = 0.6
let minConfidence = 0.7
let useBatchProcessing = false
let detectionNet, recognitionNet, landmarkNet
let trainDescriptorsByClass = []
function onChangeUseBatchProcessing(e) {
useBatchProcessing = $(e.target).prop('checked')
}
function onIncreaseMinConfidence() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseMinConfidence() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onIncreaseMaxDistance() {
maxDistance = Math.min(faceapi.round(maxDistance + 0.1), 1.0)
$('#maxDistance').val(maxDistance)
updateResults()
}
function onDecreaseMaxDistance() {
maxDistance = Math.max(faceapi.round(maxDistance - 0.1), 0.1)
$('#maxDistance').val(maxDistance)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const fullFaceDescriptions = (await faceapi.allFaces(inputImgEl, minConfidence, useBatchProcessing))
.map(fd => fd.forSize(width, height))
fullFaceDescriptions.forEach(({ detection, descriptor }) => {
faceapi.drawDetection('overlay', [detection], { withScore: false })
const bestMatch = getBestMatch(trainDescriptorsByClass, descriptor)
const text = `${bestMatch.distance < maxDistance ? bestMatch.className : 'unknown'} (${bestMatch.distance})`
const { x, y, height: boxHeight } = detection.getBox()
faceapi.drawText(
canvas.getContext('2d'),
x,
y + boxHeight,
text,
Object.assign(faceapi.getDefaultDrawOptions(), { color: 'red', fontSize: 16 })
)
})
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadModels('/')
trainDescriptorsByClass = await initTrainDescriptorsByClass(faceapi.recognitionNet, 1)
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'detect_and_recognize_faces')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div id="facesContainer"></div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row">
<p>
<input type="checkbox" id="drawLinesCheckbox" onchange="onChangeUseMtcnn(event)" />
<label for="drawLinesCheckbox">Use Mtcnn</label>
</p>
</div>
</div>
<script>
let minConfidence = 0.7
let useMtcnn = false
function onIncreaseMinConfidence() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseMinConfidence() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onChangeUseMtcnn(e) {
useMtcnn = $(e.target).prop('checked')
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function locateAndAlignFacesWithMtcnn(inputImgEl) {
const input = await faceapi.toNetInput(inputImgEl)
const results = await faceapi.mtcnn(input, { minFaceSize: 100 })
const unalignedFaceImages = await faceapi.extractFaces(input.getInput(0), results.map(res => res.faceDetection))
const alignedFaceBoxes = results
.filter(res => res.faceDetection.score > minConfidence)
.map(res => res.faceLandmarks.align())
const alignedFaceImages = await faceapi.extractFaces(input.getInput(0), alignedFaceBoxes)
return {
unalignedFaceImages,
alignedFaceImages
}
}
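// faceLandmarks.align() estimates a bounding box around the aligned face
// region; extracting faces from these boxes tends to produce crops that are
// better suited as input to the face recognition net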
async function locateAndAlignFacesWithSSD(inputImgEl) {
const input = await faceapi.toNetInput(inputImgEl)
const locations = await faceapi.locateFaces(input, minConfidence)
const unalignedFaceImages = await faceapi.extractFaces(input.getInput(0), locations)
// detect landmarks and get the aligned face image bounding boxes
const alignedFaceBoxes = await Promise.all(unalignedFaceImages.map(
async (faceCanvas, i) => {
const faceLandmarks = await faceapi.detectLandmarks(faceCanvas)
return faceLandmarks.align(locations[i])
}
))
const alignedFaceImages = await faceapi.extractFaces(input.getInput(0), alignedFaceBoxes)
return {
unalignedFaceImages,
alignedFaceImages
}
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const {
unalignedFaceImages,
alignedFaceImages
} = useMtcnn
? await locateAndAlignFacesWithMtcnn(inputImgEl)
: await locateAndAlignFacesWithSSD(inputImgEl)
$('#facesContainer').empty()
unalignedFaceImages.forEach(async (faceCanvas, i) => {
$('#facesContainer').append(faceCanvas)
$('#facesContainer').append(alignedFaceImages[i])
})
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadFaceDetectionModel('/')
await faceapi.loadFaceLandmarkModel('/')
await faceapi.loadMtcnnModel('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_alignment')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="js/commons.js"></script>
<script src="js/drawing.js"></script>
<script src="js/faceDetectionControls.js"></script>
<script src="js/imageSelectionControls.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<!-- image_selection_control -->
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
<!-- image_selection_control -->
</div>
<div class="row side-by-side">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field" style="margin-right: 20px;">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- check boxes -->
<div class="row" style="width: 220px;">
<input type="checkbox" id="withFaceLandmarksCheckbox" onchange="onChangeWithFaceLandmarks(event)" />
<label for="withFaceLandmarksCheckbox">Detect Face Landmarks</label>
<input type="checkbox" id="hideBoundingBoxesCheckbox" onchange="onChangeHideBoundingBoxes(event)" />
<label for="hideBoundingBoxesCheckbox">Hide Bounding Boxes</label>
</div>
<!-- check boxes -->
</div>
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- mtcnn_controls -->
<script>
let withFaceLandmarks = false
let withBoxes = true
function onChangeWithFaceLandmarks(e) {
withFaceLandmarks = $(e.target).prop('checked')
updateResults()
}
function onChangeHideBoundingBoxes(e) {
withBoxes = !$(e.target).prop('checked')
updateResults()
}
async function updateResults() {
if (!isFaceDetectionModelLoaded()) {
return
}
const inputImgEl = $('#inputImg').get(0)
const options = getFaceDetectorOptions()
const faceDetectionTask = faceapi.detectAllFaces(inputImgEl, options)
const results = withFaceLandmarks
? await faceDetectionTask.withFaceLandmarks()
: await faceDetectionTask
const drawFunction = withFaceLandmarks
? drawLandmarks
: drawDetections
drawFunction(inputImgEl, $('#overlay').get(0), results, withBoxes)
}
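// detectAllFaces returns a composable task, so further steps can be chained
// before awaiting, as the other examples in this commit do; sketch:
// const fullResults = await faceapi
//   .detectAllFaces(inputImgEl, options)
//   .withFaceLandmarks()
//   .withFaceDescriptors()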
async function run() {
// load face detection and face landmark models
await changeFaceDetector(SSD_MOBILENETV1)
await faceapi.loadFaceLandmarkModel('/')
// start processing image
updateResults()
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_and_landmark_detection')
initImageSelectionControls()
initFaceDetectionControls()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let minConfidence = 0.7
let result
function onIncreaseThreshold() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseThreshold() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
result = await faceapi.locateFaces(inputImgEl, minConfidence)
faceapi.drawDetection('overlay', result.map(det => det.forSize(width, height)))
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadFaceDetectionModel('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_detection')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video src="media/bbt.mp4" onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let minConfidence = 0.7
let modelLoaded = false
let result
let forwardTimes = []
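// updateTimeStats keeps a rolling window of the 30 most recent forward
// times and displays their average, which smooths the fps estimate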
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
function onIncreaseThreshold() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
}
function onDecreaseThreshold() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
}
async function onPlay(videoEl) {
if (videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const ts = Date.now()
result = await faceapi.locateFaces(videoEl, minConfidence)
updateTimeStats(Date.now() - ts)
faceapi.drawDetection('overlay', result.map(det => det.forSize(width, height)))
setTimeout(() => onPlay(videoEl))
}
async function run() {
await faceapi.loadFaceDetectionModel('/')
modelLoaded = true
onPlay($('#inputVideo').get(0))
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_detection_video')
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="js/commons.js"></script>
<script src="js/faceDetectionControls.js"></script>
<script src="js/imageSelectionControls.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div id="facesContainer"></div>
<div class="row side-by-side">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field" style="margin-right: 20px;">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- image_selection_control -->
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
<!-- image_selection_control -->
</div>
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- mtcnn_controls -->
</div>
<script>
async function updateResults() {
if (!isFaceDetectionModelLoaded()) {
return
}
const inputImgEl = $('#inputImg').get(0)
const options = getFaceDetectorOptions()
const detections = await faceapi.detectAllFaces(inputImgEl, options)
const faceImages = await faceapi.extractFaces(inputImgEl, detections)
displayExtractedFaces(faceImages)
}
function displayExtractedFaces(faceImages) {
const canvas = $('#overlay').get(0)
const { width, height } = $('#inputImg').get(0)
canvas.width = width
canvas.height = height
$('#facesContainer').empty()
faceImages.forEach(canvas => $('#facesContainer').append(canvas))
}
async function run() {
// load face detection model
await changeFaceDetector(selectedFaceDetector)
// start processing image
updateResults()
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_extraction')
initImageSelectionControls()
initFaceDetectionControls()
run()
})
</script>
</body>
</html>
\ No newline at end of file
......@@ -2,7 +2,9 @@
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<script src="js/commons.js"></script>
<script src="js/drawing.js"></script>
<script src="js/faceDetectionControls.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
......@@ -11,148 +13,271 @@
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div>
<div class="row center-content" id="loader">
<input disabled value="" id="status" type="text" class="bold">
<div class="progress">
<p> Reference Image: </p>
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="refImg" src="" style="max-width: 800px;" />
<canvas id="refImgOverlay" class="overlay"/>
</div>
<div class="row side-by-side">
<!-- image_selection_control -->
<div class="row">
<label>Upload Image:</label>
<div>
<input id="refImgUploadInput" type="file" class="bold" onchange="uploadRefImage()" accept=".jpg, .jpeg, .png">
</div>
<div class="row center-content">
<img id="face" src=""/>
</div>
<div class="row">
<label for="prediction">Prediction:</label>
<input disabled value="-" id="prediction" type="text" class="bold">
<label for="refImgUrlInput">Get image from URL:</label>
<input id="refImgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadRefImageFromUrl()"
>
Ok
</button>
<!-- image_selection_control -->
</div>
<p> Query Image: </p>
<div style="position: relative" class="margin">
<img id="queryImg" src="" style="max-width: 800px;" />
<canvas id="queryImgOverlay" class="overlay"/>
</div>
<div class="row side-by-side">
<!-- image_selection_control -->
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
<label>Upload Image:</label>
<div>
<input id="queryImgUploadInput" type="file" class="bold" onchange="uploadQueryImage()" accept=".jpg, .jpeg, .png">
</div>
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
<label for="queryImgUrlInput">Get image from URL:</label>
<input id="queryImgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadQueryImageFromUrl()"
>
Ok
</button>
<!-- image_selection_control -->
</div>
<div class="center-content">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
id="stop"
onclick="onToggleStop()"
onclick="onDecreaseMinConfidence()"
>
Stop
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onSlower()"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">-</i> Slower
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onFaster()"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">+</i> Faster
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="interval">Interval:</label>
<input disabled value="2000" id="interval" type="text" class="bold">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- mtcnn_controls -->
</div>
<script>
// for 150 x 150 sized face images 0.6 is a good threshold to
// judge whether two face descriptors are similar or not
const threshold = 0.6
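// a minimal sketch of applying that rule to two descriptors (using
// faceapi.euclideanDistance, the distance function exposed by the library):
// const isSameFace =
//   faceapi.euclideanDistance(descriptor1, descriptor2) < threshold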
let interval = 2000
let isStop = false
let trainDescriptorsByClass = []
let currImageIdx = 2, currClassIdx = 0
let to = null
function onSlower() {
interval = Math.min(interval + 100, 2000)
$('#interval').val(interval)
}
</body>
function onFaster() {
interval = Math.max(interval - 100, 0)
$('#interval').val(interval)
}
<script>
let faceMatcher = null
function onToggleStop() {
clearTimeout(to)
isStop = !isStop
document.getElementById('stop').innerHTML = isStop ? 'Continue' : 'Stop'
setStatusText(isStop ? 'stopped' : 'running face recognition:')
if (!isStop) {
runFaceRecognition()
}
async function uploadRefImage(e) {
const imgFile = $('#refImgUploadInput').get(0).files[0]
const img = await faceapi.bufferToImage(imgFile)
$('#refImg').get(0).src = img.src
updateReferenceImageResults()
}
function setStatusText(text) {
$('#status').val(text)
async function loadRefImageFromUrl(url) {
const img = await requestExternalImage($('#refImgUrlInput').val())
$('#refImg').get(0).src = img.src
updateReferenceImageResults()
}
function displayTimeStats(timeInMs) {
$('#time').val(`${timeInMs} ms`)
$('#fps').val(`${faceapi.round(1000 / timeInMs)}`)
async function uploadQueryImage(e) {
const imgFile = $('#queryImgUploadInput').get(0).files[0]
const img = await faceapi.bufferToImage(imgFile)
$('#queryImg').get(0).src = img.src
updateQueryImageResults()
}
function displayImage(src) {
getImg().src = src
async function loadQueryImageFromUrl(url) {
const img = await requestExternalImage($('#queryImgUrlInput').val())
$('#queryImg').get(0).src = img.src
updateQueryImageResults()
}
async function runFaceRecognition() {
async function next() {
const imgBuf = await fetchImage(getFaceImageUri(classes[currClassIdx], currImageIdx))
const input = await faceapi.bufferToImage(imgBuf)
const imgEl = $('#face').get(0)
imgEl.src = input.src
async function updateReferenceImageResults() {
const imgEl = $('#refImg').get(0)
const canvas = $('#refImgOverlay').get(0)
const ts = Date.now()
const descriptor = await faceapi.computeFaceDescriptor(input)
displayTimeStats(Date.now() - ts)
const fullFaceDescriptions = await faceapi
.detectAllFaces(imgEl, getFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptors()
const bestMatch = getBestMatch(trainDescriptorsByClass, descriptor)
$('#prediction').val(`${bestMatch.distance < threshold ? bestMatch.className : 'unknown'} (${bestMatch.distance})`)
if (!fullFaceDescriptions.length) {
return
}
currImageIdx = currClassIdx === (classes.length - 1)
? currImageIdx + 1
: currImageIdx
currClassIdx = (currClassIdx + 1) % classes.length
// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
faceMatcher = new faceapi.FaceMatcher(fullFaceDescriptions)
currImageIdx = (currImageIdx % 6) || 2
to = setTimeout(next, interval)
}
await next()
// resize detection and landmarks in case displayed image is smaller than
// original size
const resizedResults = resizeCanvasAndResults(imgEl, canvas, fullFaceDescriptions)
// draw boxes with the corresponding label as text
const labels = faceMatcher.labeledDescriptors
.map(ld => ld.label)
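// boxes and labels correspond by index here, since the matcher was created
// from these same detection results (unlabeled descriptors are auto-labeled
// "person 1", "person 2", ...)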
const boxesWithText = resizedResults
.map(res => res.detection.box)
.map((box, i) => new faceapi.BoxWithText(box, labels[i]))
faceapi.drawDetection(canvas, boxesWithText)
}
async function run() {
try {
setStatusText('loading model file...')
async function updateQueryImageResults() {
if (!faceMatcher) {
return
}
await faceapi.loadFaceRecognitionModel('/')
const imgEl = $('#queryImg').get(0)
const canvas = $('#queryImgOverlay').get(0)
setStatusText('computing initial descriptors...')
const results = await faceapi
.detectAllFaces(imgEl, getFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptors()
trainDescriptorsByClass = await initTrainDescriptorsByClass(faceapi.recognitionNet)
$('#loader').hide()
// resize detection and landmarks in case displayed image is smaller than
// original size
const resizedResults = resizeCanvasAndResults(imgEl, canvas, results)
// draw boxes with the corresponding label as text
const boxesWithText = resizedResults.map(({ detection, descriptor }) =>
new faceapi.BoxWithText(
detection.box,
// match each face descriptor to the reference descriptor
// with lowest euclidean distance and display the result as text
faceMatcher.findBestMatch(descriptor).toString()
)
)
faceapi.drawDetection(canvas, boxesWithText)
}
runFaceRecognition()
} catch (err) {
console.error(err)
async function updateResults() {
await updateReferenceImageResults()
await updateQueryImageResults()
}
async function run() {
// load face detection, face landmark model and face recognition models
await changeFaceDetector(selectedFaceDetector)
await faceapi.loadFaceLandmarkModel('/')
await faceapi.loadFaceRecognitionModel('/')
}
$(document).ready(function() {
renderNavBar('#navbar', 'face_recognition')
initFaceDetectionControls()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.7" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let minFaceSize = 50
let scaleFactor = 0.709
let maxNumScales = 10
let stage1Threshold = 0.7
let stage2Threshold = 0.7
let stage3Threshold = 0.7
function onIncreaseThreshold() {
minConfidence = Math.min(faceapi.round(minConfidence + 0.1), 1.0)
$('#minConfidence').val(minConfidence)
updateResults()
}
function onDecreaseThreshold() {
minConfidence = Math.max(faceapi.round(minConfidence - 0.1), 0.1)
$('#minConfidence').val(minConfidence)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const mtcnnParams = {
minFaceSize,
scaleFactor,
maxNumScales,
scoreThresholds: [stage1Threshold, stage2Threshold, stage3Threshold]
}
const results = await faceapi.mtcnn(inputImgEl, mtcnnParams)
if (results) {
results.forEach(({ faceDetection, faceLandmarks }) => {
faceapi.drawDetection('overlay', faceDetection.forSize(width, height))
faceapi.drawLandmarks('overlay', faceLandmarks.forSize(width, height), { lineWidth: 4, color: 'red' })
})
}
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadMtcnnModel('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'mtcnn_face_detection')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
run()
})
</script>
</body>
</html>
\ No newline at end of file
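For reference, the full set of MTCNN parameters wired to the controls above, collected into one direct call (a sketch; assumes the model is loaded and img is an image element):

const results = await faceapi.mtcnn(img, {
  minFaceSize: 50,                    // smallest face size (px) to search for
  scaleFactor: 0.709,                 // scale step between image pyramid levels
  maxNumScales: 10,                   // upper bound on the number of pyramid levels
  scoreThresholds: [0.7, 0.7, 0.7]    // confidence thresholds for stages 1-3
})
results.forEach(({ faceDetection, faceLandmarks }) => {
  // draw or inspect the results here, as updateResults() does above
})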
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video src="media/bbt.mp4" onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="80" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let modelLoaded = false
let minFaceSize = 80
let minConfidence = 0.9
let forwardTimes = []
function onIncreaseMinFaceSize() {
minFaceSize = Math.min(faceapi.round(minFaceSize + 20), 200)
$('#minFaceSize').val(minFaceSize)
}
function onDecreaseMinFaceSize() {
minFaceSize = Math.max(faceapi.round(minFaceSize - 20), 20)
$('#minFaceSize').val(minFaceSize)
}
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
async function onPlay(videoEl) {
if(videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const mtcnnParams = {
minFaceSize
}
const ts = Date.now()
const results = await faceapi.mtcnn(videoEl, mtcnnParams)
updateTimeStats(Date.now() - ts)
if (results) {
results.forEach(({ faceDetection, faceLandmarks }) => {
if (faceDetection.score < minConfidence) {
return
}
faceapi.drawDetection('overlay', faceDetection.forSize(width, height))
faceapi.drawLandmarks('overlay', faceLandmarks.forSize(width, height), { lineWidth: 4, color: 'red' })
})
}
setTimeout(() => onPlay(videoEl))
}
async function run() {
await faceapi.loadMtcnnModel('/')
modelLoaded = true
onPlay($('#inputVideo').get(0))
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'mtcnn_face_detection_video')
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="200" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let modelLoaded = false
let minFaceSize = 200
let minConfidence = 0.9
let forwardTimes = []
function onIncreaseMinFaceSize() {
minFaceSize = Math.min(faceapi.round(minFaceSize + 50), 300)
$('#minFaceSize').val(minFaceSize)
}
function onDecreaseMinFaceSize() {
minFaceSize = Math.max(faceapi.round(minFaceSize - 50), 50)
$('#minFaceSize').val(minFaceSize)
}
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
async function onPlay(videoEl) {
if(videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const mtcnnParams = {
minFaceSize
}
const { results, stats } = await faceapi.nets.mtcnn.forwardWithStats(videoEl, mtcnnParams)
updateTimeStats(stats.total)
if (results) {
results.forEach(({ faceDetection, faceLandmarks }) => {
if (faceDetection.score < minConfidence) {
return
}
faceapi.drawDetection('overlay', faceDetection.forSize(width, height))
faceapi.drawLandmarks('overlay', faceLandmarks.forSize(width, height), { lineWidth: 4, color: 'red' })
})
}
setTimeout(() => onPlay(videoEl))
}
async function run() {
await faceapi.loadMtcnnModel('/')
modelLoaded = true
const videoEl = $('#inputVideo').get(0)
navigator.getUserMedia(
{ video: {} },
stream => videoEl.srcObject = stream,
err => console.error(err)
)
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'mtcnn_face_detection_webcam')
run()
})
</script>
</body>
</html>
\ No newline at end of file
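Note that navigator.getUserMedia, as called in run() above, is the legacy callback API. The promise-based form, used by the webcam face tracking page further down in this commit, would be (sketch):

// promise-based equivalent of the getUserMedia call in run() above
const stream = await navigator.mediaDevices.getUserMedia({ video: {} })
videoEl.srcObject = stream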
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="200" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let modelLoaded = false
let minFaceSize = 200
let maxDistance = 0.6
let minConfidence = 0.9
let forwardTimes = []
function onIncreaseMinFaceSize() {
minFaceSize = Math.min(faceapi.round(minFaceSize + 50), 300)
$('#minFaceSize').val(minFaceSize)
}
function onDecreaseMinFaceSize() {
minFaceSize = Math.max(faceapi.round(minFaceSize - 50), 50)
$('#minFaceSize').val(minFaceSize)
}
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
async function onPlay(videoEl) {
if(videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const mtcnnParams = {
minFaceSize
}
const ts = Date.now()
const fullFaceDescriptions = (await faceapi.allFacesMtcnn(videoEl, mtcnnParams))
.map(fd => fd.forSize(width, height))
updateTimeStats(Date.now() - ts)
fullFaceDescriptions.forEach(({ detection, landmarks, descriptor }) => {
faceapi.drawDetection('overlay', [detection], { withScore: false })
faceapi.drawLandmarks('overlay', landmarks.forSize(width, height), { lineWidth: 4, color: 'red' })
const bestMatch = getBestMatch(trainDescriptorsByClass, descriptor)
const text = `${bestMatch.distance < maxDistance ? bestMatch.className : 'unknown'} (${bestMatch.distance})`
const { x, y, height: boxHeight } = detection.getBox()
faceapi.drawText(
canvas.getContext('2d'),
x,
y + boxHeight,
text,
Object.assign(faceapi.getDefaultDrawOptions(), { color: 'red', fontSize: 16 })
)
})
setTimeout(() => onPlay(videoEl))
}
async function run() {
await faceapi.loadMtcnnModel('/')
await faceapi.loadFaceRecognitionModel('/')
// init reference data, e.g. compute a face descriptor for each class
trainDescriptorsByClass = await initTrainDescriptorsByClass(faceapi.recognitionNet)
modelLoaded = true
// try to access the user's webcam and stream the images
// to the video element
const videoEl = $('#inputVideo').get(0)
navigator.getUserMedia(
{ video: {} },
stream => videoEl.srcObject = stream,
err => console.error(err)
)
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'mtcnn_face_recognition_webcam')
run()
})
</script>
</body>
</html>
\ No newline at end of file
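The bestMatch.distance < maxDistance test above thresholds the euclidean distance between face descriptors. A self-contained sketch of that metric (plain JS, not a library call):

// euclidean distance between two descriptors (Float32Array),
// the quantity that getBestMatch minimizes above
function euclideanDistance(d1, d2) {
  let sum = 0
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i]
    sum += diff * diff
  }
  return Math.sqrt(sum)
}
// faces count as the same person while the distance stays below
// the maxDistance threshold (0.6 in this example)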
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
</div>
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="sizeType">
<option value="" disabled selected>Input Size:</option>
<option value="xs">XS: 224 x 224</option>
<option value="sm">SM: 320 x 320</option>
<option value="md">MD: 416 x 416</option>
<option value="lg">LG: 608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let scoreThreshold = 0.5
let sizeType = 'lg'
function onIncreaseThreshold() {
scoreThreshold = Math.min(faceapi.round(scoreThreshold + 0.1), 1.0)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onDecreaseThreshold() {
scoreThreshold = Math.max(faceapi.round(scoreThreshold - 0.1), 0.1)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onSizeTypeChanged(e, c) {
sizeType = e.target.value
$('#sizeType').val(sizeType)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const forwardParams = {
inputSize: sizeType,
scoreThreshold
}
const detections = await faceapi.tinyYolov2(inputImgEl, forwardParams)
faceapi.drawDetection('overlay', detections.map(det => det.forSize(width, height)))
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadTinyYolov2Model('/')
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'tiny_yolov2_face_detection')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
const sizeTypeSelect = $('#sizeType')
sizeTypeSelect.val(sizeType)
sizeTypeSelect.on('change', onSizeTypeChanged)
sizeTypeSelect.material_select()
run()
})
</script>
</body>
</html>
\ No newline at end of file
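The size presets in the select box map to fixed square input sizes (XS: 224 up to LG: 608). The equivalent direct call, as a sketch assuming the model is loaded:

const detections = await faceapi.tinyYolov2(img, {
  inputSize: 'md',      // 'xs' | 'sm' | 'md' | 'lg', or a number such as 416 (see the webcam page)
  scoreThreshold: 0.5
})
faceapi.drawDetection('overlay', detections.map(det => det.forSize(width, height)))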
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video src="media/bbt.mp4" onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="sizeType">
<option value="" disabled selected>Input Size:</option>
<option value="xs">XS: 224 x 224</option>
<option value="sm">SM: 320 x 320</option>
<option value="md">MD: 416 x 416</option>
<option value="lg">LG: 608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let scoreThreshold = 0.5
let sizeType = 'md'
let modelLoaded = false
let forwardTimes = []
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
function onIncreaseThreshold() {
scoreThreshold = Math.min(faceapi.round(scoreThreshold + 0.1), 1.0)
$('#scoreThreshold').val(scoreThreshold)
}
function onDecreaseThreshold() {
scoreThreshold = Math.max(faceapi.round(scoreThreshold - 0.1), 0.1)
$('#scoreThreshold').val(scoreThreshold)
}
function onSizeTypeChanged(e, c) {
sizeType = e.target.value
$('#sizeType').val(sizeType)
}
async function onPlay(videoEl) {
if(videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const forwardParams = {
inputSize: sizeType,
scoreThreshold
}
const ts = Date.now()
const result = await faceapi.tinyYolov2(videoEl, forwardParams)
updateTimeStats(Date.now() - ts)
faceapi.drawDetection('overlay', result.map(det => det.forSize(width, height)))
setTimeout(() => onPlay(videoEl))
}
async function loadNetWeights(uri) {
return new Float32Array(await (await fetch(uri)).arrayBuffer())
}
async function run() {
await faceapi.loadTinyYolov2Model('/')
modelLoaded = true
onPlay($('#inputVideo').get(0))
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'tiny_yolov2_face_detection_video')
const sizeTypeSelect = $('#sizeType')
sizeTypeSelect.val(sizeType)
sizeTypeSelect.on('change', onSizeTypeChanged)
sizeTypeSelect.material_select()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="sizeType">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
</div>
<div class="row">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
</div>
<script>
let scoreThreshold = 0.5
let sizeType = '160'
let modelLoaded = false
let forwardTimes = []
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
function onIncreaseThreshold() {
scoreThreshold = Math.min(faceapi.round(scoreThreshold + 0.1), 1.0)
$('#scoreThreshold').val(scoreThreshold)
}
function onDecreaseThreshold() {
scoreThreshold = Math.max(faceapi.round(scoreThreshold - 0.1), 0.1)
$('#scoreThreshold').val(scoreThreshold)
}
function onSizeTypeChanged(e, c) {
sizeType = e.target.value
$('#sizeType').val(sizeType)
}
async function onPlay(videoEl) {
if(videoEl.paused || videoEl.ended || !modelLoaded)
return false
const { width, height } = faceapi.getMediaDimensions(videoEl)
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const forwardParams = {
inputSize: parseInt(sizeType),
scoreThreshold
}
const ts = Date.now()
const result = await faceapi.tinyYolov2(videoEl, forwardParams)
updateTimeStats(Date.now() - ts)
faceapi.drawDetection('overlay', result.map(det => det.forSize(width, height)))
setTimeout(() => onPlay(videoEl))
}
async function loadNetWeights(uri) {
return new Float32Array(await (await fetch(uri)).arrayBuffer())
}
async function run() {
await faceapi.loadTinyYolov2Model('/')
modelLoaded = true
const videoEl = $('#inputVideo').get(0)
navigator.getUserMedia(
{ video: {} },
stream => videoEl.srcObject = stream,
err => console.error(err)
)
onPlay($('#inputVideo').get(0))
$('#loader').hide()
}
$(document).ready(function() {
renderNavBar('#navbar', 'tiny_yolov2_face_detection_webcam')
const sizeTypeSelect = $('#sizeType')
sizeTypeSelect.val(sizeType)
sizeTypeSelect.on('change', onSizeTypeChanged)
sizeTypeSelect.material_select()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="commons.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<img id="inputImg" src="" style="max-width: 800px;" />
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<div id="selectList"></div>
<div class="row">
<label for="imgUrlInput">Get image from URL:</label>
<input id="imgUrlInput" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="loadImageFromUrl()"
>
Ok
</button>
<p>
<input type="checkbox" id="useBatchProcessing" onchange="onChangeUseBatchProcessing(event)" />
<label for="useBatchProcessing">Use Batch Processing</label>
</p>
</div>
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="sizeType">
<option value="" disabled selected>Input Size:</option>
<option value="xs">XS: 224 x 224</option>
<option value="sm">SM: 320 x 320</option>
<option value="md">MD: 416 x 416</option>
<option value="lg">LG: 608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
<div class="row side-by-side">
<div class="row">
<label for="maxDistance">Max Descriptor Distance:</label>
<input disabled value="0.6" id="maxDistance" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn button-sm"
onclick="onDecreaseMaxDistance()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn button-sm"
onclick="onIncreaseMaxDistance()"
>
<i class="material-icons left">+</i>
</button>
</div>
</div>
<script>
let maxDistance = 0.6
let useBatchProcessing = false
let trainDescriptorsByClass = []
let scoreThreshold = 0.5
let sizeType = 'lg'
function onIncreaseThreshold() {
scoreThreshold = Math.min(faceapi.round(scoreThreshold + 0.1), 1.0)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onDecreaseThreshold() {
scoreThreshold = Math.max(faceapi.round(scoreThreshold - 0.1), 0.1)
$('#scoreThreshold').val(scoreThreshold)
updateResults()
}
function onSizeTypeChanged(e, c) {
sizeType = e.target.value
$('#sizeType').val(sizeType)
updateResults()
}
function onChangeUseBatchProcessing(e) {
useBatchProcessing = $(e.target).prop('checked')
}
function onIncreaseMaxDistance() {
maxDistance = Math.min(faceapi.round(maxDistance + 0.1), 1.0)
$('#maxDistance').val(maxDistance)
updateResults()
}
function onDecreaseMaxDistance() {
maxDistance = Math.max(faceapi.round(maxDistance - 0.1), 0.1)
$('#maxDistance').val(maxDistance)
updateResults()
}
async function loadImageFromUrl(url) {
const img = await requestExternalImage($('#imgUrlInput').val())
$('#inputImg').get(0).src = img.src
updateResults()
}
async function updateResults() {
const inputImgEl = $('#inputImg').get(0)
const { width, height } = inputImgEl
const canvas = $('#overlay').get(0)
canvas.width = width
canvas.height = height
const forwardParams = {
inputSize: sizeType,
scoreThreshold
}
const fullFaceDescriptions = (await faceapi.allFacesTinyYolov2(inputImgEl, forwardParams, useBatchProcessing))
.map(fd => fd.forSize(width, height))
fullFaceDescriptions.forEach(({ detection, descriptor }) => {
faceapi.drawDetection('overlay', [detection], { withScore: false })
const bestMatch = getBestMatch(trainDescriptorsByClass, descriptor)
const text = `${bestMatch.distance < maxDistance ? bestMatch.className : 'unknown'} (${bestMatch.distance})`
const { x, y, height: boxHeight } = detection.getBox()
faceapi.drawText(
canvas.getContext('2d'),
x,
y + boxHeight,
text,
Object.assign(faceapi.getDefaultDrawOptions(), { color: 'red', fontSize: 16 })
)
})
}
async function onSelectionChanged(uri) {
const imgBuf = await fetchImage(uri)
$(`#inputImg`).get(0).src = (await faceapi.bufferToImage(imgBuf)).src
updateResults()
}
async function run() {
await faceapi.loadTinyYolov2Model('/')
await faceapi.loadFaceLandmarkModel('/')
await faceapi.loadFaceRecognitionModel('/')
trainDescriptorsByClass = await initTrainDescriptorsByClass(faceapi.recognitionNet, 1)
$('#loader').hide()
onSelectionChanged($('#selectList select').val())
}
$(document).ready(function() {
renderNavBar('#navbar', 'tiny_yolov2_face_recognition')
renderImageSelectList(
'#selectList',
async (uri) => {
await onSelectionChanged(uri)
},
'bbt1.jpg'
)
const sizeTypeSelect = $('#sizeType')
sizeTypeSelect.val(sizeType)
sizeTypeSelect.on('change', onSizeTypeChanged)
sizeTypeSelect.material_select()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="js/commons.js"></script>
<script src="js/drawing.js"></script>
<script src="js/faceDetectionControls.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video src="media/bbt.mp4" id="inputVideo" autoplay muted loop></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field" style="margin-right: 20px;">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- check boxes -->
<div class="row" style="width: 220px;">
<input type="checkbox" id="withFaceLandmarksCheckbox" onchange="onChangeWithFaceLandmarks(event)" />
<label for="withFaceLandmarksCheckbox">Detect Face Landmarks</label>
<input type="checkbox" id="hideBoundingBoxesCheckbox" onchange="onChangeHideBoundingBoxes(event)" />
<label for="hideBoundingBoxesCheckbox">Hide Bounding Boxes</label>
</div>
<!-- check boxes -->
<!-- fps_meter -->
<div id="fps_meter" class="row side-by-side">
<div>
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
<!-- fps_meter -->
</div>
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- mtcnn_controls -->
<script>
let forwardTimes = []
let withFaceLandmarks = false
let withBoxes = true
function onChangeWithFaceLandmarks(e) {
withFaceLandmarks = $(e.target).prop('checked')
}
function onChangeHideBoundingBoxes(e) {
withBoxes = !$(e.target).prop('checked')
}
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
async function onPlay(videoEl) {
if(!videoEl.currentTime || videoEl.paused || videoEl.ended || !isFaceDetectionModelLoaded())
return setTimeout(() => onPlay(videoEl))
const options = getFaceDetectorOptions()
const ts = Date.now()
const faceDetectionTask = faceapi.detectAllFaces(videoEl, options)
const results = withFaceLandmarks
? await faceDetectionTask.withFaceLandmarks()
: await faceDetectionTask
updateTimeStats(Date.now() - ts)
const drawFunction = withFaceLandmarks
? drawLandmarks
: drawDetections
drawFunction(videoEl, $('#overlay').get(0), results, withBoxes)
setTimeout(() => onPlay(videoEl))
}
async function run() {
// load face detection and face landmark models
await changeFaceDetector(TINY_FACE_DETECTOR)
await faceapi.loadFaceLandmarkModel('/')
changeInputSize(416)
// start processing frames
onPlay($('#inputVideo').get(0))
}
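// no-op stub: the shared controls in js/faceDetectionControls.js presumably
// call updateResults() after a parameter change; this page re-detects on
// every frame in onPlay() anyway, so there is nothing to recompute here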
function updateResults() {}
$(document).ready(function() {
renderNavBar('#navbar', 'video_face_tracking')
initFaceDetectionControls()
run()
})
</script>
</body>
</html>
\ No newline at end of file
<!DOCTYPE html>
<html>
<head>
<script src="face-api.js"></script>
<script src="js/commons.js"></script>
<script src="js/drawing.js"></script>
<script src="js/faceDetectionControls.js"></script>
<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js"></script>
</head>
<body>
<div id="navbar"></div>
<div class="center-content page-container">
<div class="progress" id="loader">
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video onplay="onPlay(this)" id="inputVideo" autoplay muted></video>
<canvas id="overlay" />
</div>
<div class="row side-by-side">
<!-- face_detector_selection_control -->
<div id="face_detector_selection_control" class="row input-field" style="margin-right: 20px;">
<select id="selectFaceDetector">
<option value="ssd_mobilenetv1">SSD Mobilenet V1</option>
<option value="tiny_face_detector">Tiny Face Detector</option>
<option value="mtcnn">MTCNN</option>
</select>
<label>Select Face Detector</label>
</div>
<!-- face_detector_selection_control -->
<!-- check boxes -->
<div class="row" style="width: 220px;">
<input type="checkbox" id="withFaceLandmarksCheckbox" onchange="onChangeWithFaceLandmarks(event)" />
<label for="withFaceLandmarksCheckbox">Detect Face Landmarks</label>
<input type="checkbox" id="hideBoundingBoxesCheckbox" onchange="onChangeHideBoundingBoxes(event)" />
<label for="hideBoundingBoxesCheckbox">Hide Bounding Boxes</label>
</div>
<!-- check boxes -->
<!-- fps_meter -->
<div id="fps_meter" class="row side-by-side">
<div>
<label for="time">Time:</label>
<input disabled value="-" id="time" type="text" class="bold">
<label for="fps">Estimated Fps:</label>
<input disabled value="-" id="fps" type="text" class="bold">
</div>
</div>
<!-- fps_meter -->
</div>
<!-- ssd_mobilenetv1_controls -->
<span id="ssd_mobilenetv1_controls">
<div class="row side-by-side">
<div class="row">
<label for="minConfidence">Min Confidence:</label>
<input disabled value="0.5" id="minConfidence" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinConfidence()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinConfidence()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- ssd_mobilenetv1_controls -->
<!-- tiny_face_detector_controls -->
<span id="tiny_face_detector_controls">
<div class="row side-by-side">
<div class="row input-field" style="margin-right: 20px;">
<select id="inputSize">
<option value="" disabled selected>Input Size:</option>
<option value="128">128 x 128</option>
<option value="160">160 x 160</option>
<option value="224">224 x 224</option>
<option value="320">320 x 320</option>
<option value="416">416 x 416</option>
<option value="512">512 x 512</option>
<option value="608">608 x 608</option>
</select>
<label>Input Size</label>
</div>
<div class="row">
<label for="scoreThreshold">Score Threshold:</label>
<input disabled value="0.5" id="scoreThreshold" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseScoreThreshold()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseScoreThreshold()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- tiny_face_detector_controls -->
<!-- mtcnn_controls -->
<span id="mtcnn_controls">
<div class="row side-by-side">
<div class="row">
<label for="minFaceSize">Minimum Face Size:</label>
<input disabled value="20" id="minFaceSize" type="text" class="bold">
</div>
<button
class="waves-effect waves-light btn"
onclick="onDecreaseMinFaceSize()"
>
<i class="material-icons left">-</i>
</button>
<button
class="waves-effect waves-light btn"
onclick="onIncreaseMinFaceSize()"
>
<i class="material-icons left">+</i>
</button>
</div>
</span>
<!-- mtcnn_controls -->
<script>
let forwardTimes = []
let withFaceLandmarks = false
let withBoxes = true
function onChangeWithFaceLandmarks(e) {
withFaceLandmarks = $(e.target).prop('checked')
}
function onChangeHideBoundingBoxes(e) {
withBoxes = !$(e.target).prop('checked')
}
function updateTimeStats(timeInMs) {
forwardTimes = [timeInMs].concat(forwardTimes).slice(0, 30)
const avgTimeInMs = forwardTimes.reduce((total, t) => total + t) / forwardTimes.length
$('#time').val(`${Math.round(avgTimeInMs)} ms`)
$('#fps').val(`${faceapi.round(1000 / avgTimeInMs)}`)
}
async function onPlay() {
const videoEl = $('#inputVideo').get(0)
if(videoEl.paused || videoEl.ended || !isFaceDetectionModelLoaded())
return setTimeout(() => onPlay())
const options = getFaceDetectorOptions()
const ts = Date.now()
const faceDetectionTask = faceapi.detectSingleFace(videoEl, options)
const result = withFaceLandmarks
? await faceDetectionTask.withFaceLandmarks()
: await faceDetectionTask
updateTimeStats(Date.now() - ts)
const drawFunction = withFaceLandmarks
? drawLandmarks
: drawDetections
if (result) {
drawFunction(videoEl, $('#overlay').get(0), [result], withBoxes)
}
setTimeout(() => onPlay())
}
async function run() {
// load face detection and face landmark models
await changeFaceDetector(TINY_FACE_DETECTOR)
await faceapi.loadFaceLandmarkModel('/')
changeInputSize(128)
// try to access the user's webcam and stream the images
// to the video element
const stream = await navigator.mediaDevices.getUserMedia({ video: {} })
const videoEl = $('#inputVideo').get(0)
videoEl.srcObject = stream
}
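// no-op stub, as in the video face tracking page: the shared face detection
// controls presumably call updateResults(), but detection already runs per frame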
function updateResults() {}
$(document).ready(function() {
renderNavBar('#navbar', 'webcam_face_tracking')
initFaceDetectionControls()
run()
})
</script>
</body>
</html>
\ No newline at end of file
......@@ -14,6 +14,20 @@ const dataFiles = [
nocache: false
}))
const exclude = process.env.UUT
? [
'dom',
'faceLandmarkNet',
'faceRecognitionNet',
'ssdMobilenetv1',
'tinyFaceDetector',
'mtcnn',
'tinyYolov2'
]
.filter(ex => ex !== process.env.UUT)
.map(ex => `test/tests/${ex}/*.ts`)
: []
module.exports = function(config) {
config.set({
frameworks: ['jasmine', 'karma-typescript'],
......@@ -21,6 +35,7 @@ module.exports = function(config) {
'src/**/*.ts',
'test/**/*.ts'
].concat(dataFiles),
exclude,
preprocessors: {
'**/*.ts': ['karma-typescript']
},
......
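The UUT (unit under test) switch above runs a single suite in isolation by adding every other suite directory to karma's exclude list. Evaluated by hand (a sketch):

// with process.env.UUT === 'mtcnn', the filter/map above yields:
// [
//   'test/tests/dom/*.ts',
//   'test/tests/faceLandmarkNet/*.ts',
//   'test/tests/faceRecognitionNet/*.ts',
//   'test/tests/ssdMobilenetv1/*.ts',
//   'test/tests/tinyFaceDetector/*.ts',
//   'test/tests/tinyYolov2/*.ts'
// ]
// leaving only test/tests/mtcnn/*.ts in the run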
......@@ -4903,21 +4903,21 @@
}
},
"tfjs-image-recognition-base": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/tfjs-image-recognition-base/-/tfjs-image-recognition-base-0.1.2.tgz",
"integrity": "sha512-+mnRdQ6IxA2q2nTIRmwImmUidgE9qhhWOXVCPvgnA/g52CXeydgT4NpOHHCcteWgqwg5q6ju1p0epWKgP1k/dg==",
"version": "0.1.3",
"resolved": "https://registry.npmjs.org/tfjs-image-recognition-base/-/tfjs-image-recognition-base-0.1.3.tgz",
"integrity": "sha512-Vo1arsSkOxtlBedWxw7w2V/mbpp70izAJPu0Cl6WE62ZJ0kLL6TmFphGAr3zKaqrZ0VOyADVedDqFic3aH84RQ==",
"requires": {
"@tensorflow/tfjs-core": "0.13.2",
"tslib": "1.9.3"
}
},
"tfjs-tiny-yolov2": {
"version": "0.1.3",
"resolved": "https://registry.npmjs.org/tfjs-tiny-yolov2/-/tfjs-tiny-yolov2-0.1.3.tgz",
"integrity": "sha512-oY77SnFFzOizXRYPfRcHO9LuqB+dJkXvOBZz6b04E5HrJvVkCJu/MDJBIHuMdbe2TuVmPIhcX+Yr1bqLp3gXKw==",
"version": "0.2.1",
"resolved": "https://registry.npmjs.org/tfjs-tiny-yolov2/-/tfjs-tiny-yolov2-0.2.1.tgz",
"integrity": "sha512-HSdBu6dMyQdtueY32wSO+5IajXDrvu7MufvUBaLD0CubKbfJWM1JogsONUkvp3N948UAI2/K35p9+eEP2woXpw==",
"requires": {
"@tensorflow/tfjs-core": "0.13.2",
"tfjs-image-recognition-base": "0.1.2",
"tfjs-image-recognition-base": "0.1.3",
"tslib": "1.9.3"
}
},
......
......@@ -11,7 +11,14 @@
"tsc": "tsc",
"tsc-es6": "tsc --p tsconfig.es6.json",
"build": "npm run rollup && npm run rollup-min && npm run tsc && npm run tsc-es6",
"test": "karma start"
"test": "karma start",
"test-facelandmarknets": "set UUT=faceLandmarkNet&& karma start",
"test-facerecognitionnet": "set UUT=faceRecognitionNet&& karma start",
"test-ssdmobilenetv1": "set UUT=ssdMobilenetv1&& karma start",
"test-tinyfacedetector": "set UUT=tinyFaceDetector&& karma start",
"test-mtcnn": "set UUT=mtcnn&& karma start",
"test-tinyyolov2": "set UUT=tinyYolov2&& karma start",
"docs": "typedoc --options ./typedoc.config.js ./src"
},
"keywords": [
"face",
......@@ -24,8 +31,8 @@
"license": "MIT",
"dependencies": {
"@tensorflow/tfjs-core": "^0.13.2",
"tfjs-image-recognition-base": "^0.1.2",
"tfjs-tiny-yolov2": "^0.1.3",
"tfjs-image-recognition-base": "^0.1.3",
"tfjs-tiny-yolov2": "^0.2.1",
"tslib": "^1.9.3"
},
"devDependencies": {
......
import { Point, Rect, TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { TinyYolov2 } from '.';
import { FaceDetection } from './classes/FaceDetection';
import { FaceLandmarks68 } from './classes/FaceLandmarks68';
import { FullFaceDescription } from './classes/FullFaceDescription';
import { extractFaces } from './dom';
import { FaceDetectionNet } from './faceDetectionNet/FaceDetectionNet';
import { FaceLandmark68Net } from './faceLandmarkNet/FaceLandmark68Net';
import { FaceRecognitionNet } from './faceRecognitionNet/FaceRecognitionNet';
import { Mtcnn } from './mtcnn/Mtcnn';
import { MtcnnForwardParams } from './mtcnn/types';
function computeDescriptorsFactory(
recognitionNet: FaceRecognitionNet
) {
return async function(input: TNetInput, alignedFaceBoxes: Rect[], useBatchProcessing: boolean) {
const alignedFaceCanvases = await extractFaces(input, alignedFaceBoxes)
const descriptors = useBatchProcessing
? await recognitionNet.computeFaceDescriptor(alignedFaceCanvases) as Float32Array[]
: await Promise.all(alignedFaceCanvases.map(
canvas => recognitionNet.computeFaceDescriptor(canvas)
)) as Float32Array[]
return descriptors
}
}
function allFacesFactory(
detectFaces: (input: TNetInput) => Promise<FaceDetection[]>,
landmarkNet: FaceLandmark68Net,
recognitionNet: FaceRecognitionNet
) {
const computeDescriptors = computeDescriptorsFactory(recognitionNet)
return async function(
input: TNetInput,
useBatchProcessing: boolean = false
): Promise<FullFaceDescription[]> {
const detections = await detectFaces(input)
const faceCanvases = await extractFaces(input, detections)
const faceLandmarksByFace = useBatchProcessing
? await landmarkNet.detectLandmarks(faceCanvases) as FaceLandmarks68[]
: await Promise.all(faceCanvases.map(
canvas => landmarkNet.detectLandmarks(canvas)
)) as FaceLandmarks68[]
const alignedFaceBoxes = faceLandmarksByFace.map(
(landmarks, i) => landmarks.align(detections[i].getBox())
)
const descriptors = await computeDescriptors(input, alignedFaceBoxes, useBatchProcessing)
return detections.map((detection, i) =>
new FullFaceDescription(
detection,
faceLandmarksByFace[i].shiftByPoint<FaceLandmarks68>(
new Point(detection.box.x, detection.box.y)
),
descriptors[i]
)
)
}
}
export function allFacesSsdMobilenetv1Factory(
ssdMobilenetv1: FaceDetectionNet,
landmarkNet: FaceLandmark68Net,
recognitionNet: FaceRecognitionNet
) {
return async function(
input: TNetInput,
minConfidence: number = 0.8,
useBatchProcessing: boolean = false
): Promise<FullFaceDescription[]> {
const detectFaces = (input: TNetInput) => ssdMobilenetv1.locateFaces(input, minConfidence)
const allFaces = allFacesFactory(detectFaces, landmarkNet, recognitionNet)
return allFaces(input, useBatchProcessing)
}
}
export function allFacesTinyYolov2Factory(
tinyYolov2: TinyYolov2,
landmarkNet: FaceLandmark68Net,
recognitionNet: FaceRecognitionNet
) {
return async function(
input: TNetInput,
forwardParams: TinyYolov2Types.TinyYolov2ForwardParams = {},
useBatchProcessing: boolean = false
): Promise<FullFaceDescription[]> {
const detectFaces = (input: TNetInput) => tinyYolov2.locateFaces(input, forwardParams)
const allFaces = allFacesFactory(detectFaces, landmarkNet, recognitionNet)
return allFaces(input, useBatchProcessing)
}
}
export function allFacesMtcnnFactory(
mtcnn: Mtcnn,
recognitionNet: FaceRecognitionNet
) {
const computeDescriptors = computeDescriptorsFactory(recognitionNet)
return async function(
input: TNetInput,
mtcnnForwardParams: MtcnnForwardParams = {},
useBatchProcessing: boolean = false
): Promise<FullFaceDescription[]> {
const results = await mtcnn.forward(input, mtcnnForwardParams)
const alignedFaceBoxes = results.map(
({ faceLandmarks }) => faceLandmarks.align()
)
const descriptors = await computeDescriptors(input, alignedFaceBoxes, useBatchProcessing)
return results.map(({ faceDetection, faceLandmarks }, i) =>
new FullFaceDescription(
faceDetection,
faceLandmarks,
descriptors[i]
)
)
}
}
\ No newline at end of file
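All three factories share the same trade-off: with useBatchProcessing = true the descriptor computation (and, for the SSD/YOLO paths, landmark detection) runs as a single batched forward pass over all extracted face canvases, otherwise as one pass per face via Promise.all. A usage sketch against the two wrappers visible in the example pages above:

// batched: one forward pass per net over all faces
const a = await faceapi.allFacesTinyYolov2(input, { inputSize: 'md', scoreThreshold: 0.5 }, true)
// unbatched: one forward pass per face
const b = await faceapi.allFacesMtcnn(input, { minFaceSize: 50 }, false)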
import { Dimensions, ObjectDetection, Rect } from 'tfjs-image-recognition-base';
import { Box, IDimensions, ObjectDetection, Rect } from 'tfjs-image-recognition-base';
export class FaceDetection extends ObjectDetection {
export interface IFaceDetection {
score: number
box: Box
}
export class FaceDetection extends ObjectDetection implements IFaceDetection {
constructor(
score: number,
relativeBox: Rect,
imageDims: Dimensions
imageDims: IDimensions
) {
super(score, score, '', relativeBox, imageDims)
}
......
import { FaceDetection } from './FaceDetection';
import { FaceLandmarks } from './FaceLandmarks';
import { FaceLandmarks68 } from './FaceLandmarks68';
export interface IFaceDetectionWithLandmarks<TFaceLandmarks extends FaceLandmarks = FaceLandmarks68> {
detection: FaceDetection,
landmarks: TFaceLandmarks
}
export class FaceDetectionWithLandmarks<TFaceLandmarks extends FaceLandmarks = FaceLandmarks68>
implements IFaceDetectionWithLandmarks<TFaceLandmarks> {
private _detection: FaceDetection
private _unshiftedLandmarks: TFaceLandmarks
constructor(
detection: FaceDetection,
unshiftedLandmarks: TFaceLandmarks
) {
this._detection = detection
this._unshiftedLandmarks = unshiftedLandmarks
}
public get detection(): FaceDetection { return this._detection }
public get unshiftedLandmarks(): TFaceLandmarks { return this._unshiftedLandmarks }
public get alignedRect(): FaceDetection {
const rect = this.landmarks.align()
const { imageDims } = this.detection
return new FaceDetection(this._detection.score, rect.rescale(imageDims.reverse()), imageDims)
}
public get landmarks(): TFaceLandmarks {
const { x, y } = this.detection.box
return this._unshiftedLandmarks.shiftBy(x, y)
}
// aliases for backward compatibility
get faceDetection(): FaceDetection { return this.detection }
get faceLandmarks(): TFaceLandmarks { return this.landmarks }
public forSize(width: number, height: number): FaceDetectionWithLandmarks<TFaceLandmarks> {
const resizedDetection = this._detection.forSize(width, height)
const resizedLandmarks = this._unshiftedLandmarks.forSize<TFaceLandmarks>(resizedDetection.box.width, resizedDetection.box.height)
return new FaceDetectionWithLandmarks<TFaceLandmarks>(resizedDetection, resizedLandmarks)
}
}
\ No newline at end of file
import { Dimensions, getCenterPoint, Point, Rect } from 'tfjs-image-recognition-base';
import { Dimensions, getCenterPoint, IDimensions, Point, Rect } from 'tfjs-image-recognition-base';
import { FaceDetection } from './FaceDetection';
......@@ -7,65 +7,56 @@ const relX = 0.5
const relY = 0.43
const relScale = 0.45
export class FaceLandmarks {
protected _imageWidth: number
protected _imageHeight: number
export interface IFaceLandmarks {
positions: Point[]
shift: Point
}
export class FaceLandmarks implements IFaceLandmarks {
protected _shift: Point
protected _faceLandmarks: Point[]
protected _positions: Point[]
protected _imgDims: Dimensions
constructor(
relativeFaceLandmarkPositions: Point[],
imageDims: Dimensions,
imgDims: IDimensions,
shift: Point = new Point(0, 0)
) {
const { width, height } = imageDims
this._imageWidth = width
this._imageHeight = height
const { width, height } = imgDims
this._imgDims = new Dimensions(width, height)
this._shift = shift
this._faceLandmarks = relativeFaceLandmarkPositions.map(
this._positions = relativeFaceLandmarkPositions.map(
pt => pt.mul(new Point(width, height)).add(shift)
)
}
public getShift(): Point {
return new Point(this._shift.x, this._shift.y)
}
public getImageWidth(): number {
return this._imageWidth
}
public getImageHeight(): number {
return this._imageHeight
}
public getPositions(): Point[] {
return this._faceLandmarks
}
public getRelativePositions(): Point[] {
return this._faceLandmarks.map(
pt => pt.sub(this._shift).div(new Point(this._imageWidth, this._imageHeight))
public get shift(): Point { return new Point(this._shift.x, this._shift.y) }
public get imageWidth(): number { return this._imgDims.width }
public get imageHeight(): number { return this._imgDims.height }
public get positions(): Point[] { return this._positions }
public get relativePositions(): Point[] {
return this._positions.map(
pt => pt.sub(this._shift).div(new Point(this.imageWidth, this.imageHeight))
)
}
public forSize<T extends FaceLandmarks>(width: number, height: number): T {
return new (this.constructor as any)(
this.getRelativePositions(),
this.relativePositions,
{ width, height }
)
}
public shift<T extends FaceLandmarks>(x: number, y: number): T {
public shiftBy<T extends FaceLandmarks>(x: number, y: number): T {
return new (this.constructor as any)(
this.getRelativePositions(),
{ width: this._imageWidth, height: this._imageHeight },
this.relativePositions,
this._imgDims,
new Point(x, y)
)
}
public shiftByPoint<T extends FaceLandmarks>(pt: Point): T {
return this.shift(pt.x, pt.y)
return this.shiftBy(pt.x, pt.y)
}
/**
......@@ -84,10 +75,10 @@ export class FaceLandmarks {
): Rect {
if (detection) {
const box = detection instanceof FaceDetection
? detection.getBox().floor()
? detection.box.floor()
: detection
return this.shift(box.x, box.y).align()
return this.shiftBy(box.x, box.y).align()
}
const centers = this.getRefPointsForAlignment()
......@@ -103,7 +94,7 @@ export class FaceLandmarks {
const x = Math.floor(Math.max(0, refPoint.x - (relX * size)))
const y = Math.floor(Math.max(0, refPoint.y - (relY * size)))
return new Rect(x, y, Math.min(size, this._imageWidth + x), Math.min(size, this._imageHeight + y))
return new Rect(x, y, Math.min(size, this.imageWidth + x), Math.min(size, this.imageHeight + y))
}
protected getRefPointsForAlignment(): Point[] {
......
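This hunk converts the getter methods to property accessors and renames shift to shiftBy, freeing the name for the new shift property. Call sites migrate like so (a sketch; landmarks, detection, x and y assumed in scope):

const pts = landmarks.positions             // was: landmarks.getPositions()
const rel = landmarks.relativePositions     // was: landmarks.getRelativePositions()
const moved = landmarks.shiftBy(x, y)       // was: landmarks.shift(x, y)
const box = detection.box                   // was: detection.getBox()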
......@@ -5,7 +5,7 @@ import { FaceLandmarks } from './FaceLandmarks';
export class FaceLandmarks5 extends FaceLandmarks {
protected getRefPointsForAlignment(): Point[] {
const pts = this.getPositions()
const pts = this.positions
return [
pts[0],
pts[1],
......
......@@ -2,34 +2,33 @@ import { getCenterPoint, Point } from 'tfjs-image-recognition-base';
import { FaceLandmarks } from '../classes/FaceLandmarks';
export class FaceLandmarks68 extends FaceLandmarks {
public getJawOutline(): Point[] {
return this._faceLandmarks.slice(0, 17)
return this.positions.slice(0, 17)
}
public getLeftEyeBrow(): Point[] {
return this._faceLandmarks.slice(17, 22)
return this.positions.slice(17, 22)
}
public getRightEyeBrow(): Point[] {
return this._faceLandmarks.slice(22, 27)
return this.positions.slice(22, 27)
}
public getNose(): Point[] {
return this._faceLandmarks.slice(27, 36)
return this.positions.slice(27, 36)
}
public getLeftEye(): Point[] {
return this._faceLandmarks.slice(36, 42)
return this.positions.slice(36, 42)
}
public getRightEye(): Point[] {
return this._faceLandmarks.slice(42, 48)
return this.positions.slice(42, 48)
}
public getMouth(): Point[] {
return this._faceLandmarks.slice(48, 68)
return this.positions.slice(48, 68)
}
protected getRefPointsForAlignment(): Point[] {
......
import { round } from 'tfjs-image-recognition-base';
export interface IFaceMatch {
label: string
distance: number
}
export class FaceMatch implements IFaceMatch {
private _label: string
private _distance: number
constructor(label: string, distance: number) {
this._label = label
this._distance = distance
}
public get label(): string { return this._label }
public get distance(): number { return this._distance }
public toString(withDistance: boolean = true): string {
return `${this.label}${withDistance ? ` (${round(this.distance)})` : ''}`
}
}
\ No newline at end of file
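A quick sketch of what the formatting above produces (assuming round() keeps two decimal places):

const match = new FaceMatch('sheldon', 0.421357)
match.toString()       // -> 'sheldon (0.42)'
match.toString(false)  // -> 'sheldon'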
import { FaceDetection } from './FaceDetection';
import { FaceDetectionWithLandmarks, IFaceDetectionWithLandmarks } from './FaceDetectionWithLandmarks';
import { FaceLandmarks } from './FaceLandmarks';
import { FaceLandmarks68 } from './FaceLandmarks68';
export class FullFaceDescription {
constructor(
private _detection: FaceDetection,
private _landmarks: FaceLandmarks,
private _descriptor: Float32Array
) {}
export interface IFullFaceDescription<TFaceLandmarks extends FaceLandmarks = FaceLandmarks68>
extends IFaceDetectionWithLandmarks<TFaceLandmarks> {
public get detection(): FaceDetection {
return this._detection
}
detection: FaceDetection,
landmarks: TFaceLandmarks,
descriptor: Float32Array
}
public get landmarks(): FaceLandmarks {
return this._landmarks
export class FullFaceDescription<TFaceLandmarks extends FaceLandmarks = FaceLandmarks68>
extends FaceDetectionWithLandmarks<TFaceLandmarks>
implements IFullFaceDescription<TFaceLandmarks> {
private _descriptor: Float32Array
constructor(
detection: FaceDetection,
unshiftedLandmarks: TFaceLandmarks,
descriptor: Float32Array
) {
super(detection, unshiftedLandmarks)
this._descriptor = descriptor
}
public get descriptor(): Float32Array {
return this._descriptor
}
public forSize(width: number, height: number): FullFaceDescription {
return new FullFaceDescription(
this._detection.forSize(width, height),
this._landmarks.forSize(width, height),
this._descriptor
)
public forSize(width: number, height: number): FullFaceDescription<TFaceLandmarks> {
const { detection, landmarks } = super.forSize(width, height)
return new FullFaceDescription<TFaceLandmarks>(detection, landmarks, this.descriptor)
}
}
\ No newline at end of file
export class LabeledFaceDescriptors {
private _label: string
private _descriptors: Float32Array[]
constructor(label: string, descriptors: Float32Array[]) {
if (typeof label !== 'string') {
throw new Error('LabeledFaceDescriptors - constructor expected label to be a string')
}
if (!Array.isArray(descriptors) || descriptors.some(desc => !(desc instanceof Float32Array))) {
throw new Error('LabeledFaceDescriptors - constructor expected descriptors to be an array of Float32Array')
}
this._label = label
this._descriptors = descriptors
}
public get label(): string { return this._label }
public get descriptors(): Float32Array[] { return this._descriptors }
}
\ No newline at end of file
export * from './FaceDetection';
export * from './FaceDetectionWithLandmarks';
export * from './FaceLandmarks';
export * from './FaceLandmarks5';
export * from './FaceLandmarks68';
export * from './FaceMatch';
export * from './FullFaceDescription';
export * from './LabeledFaceDescriptors';
\ No newline at end of file
......@@ -44,6 +44,6 @@ export function drawLandmarks(
// else draw points
const ptOffset = lineWidth / 2
ctx.fillStyle = color
landmarks.getPositions().forEach(pt => ctx.fillRect(pt.x - ptOffset, pt.y - ptOffset, lineWidth, lineWidth))
landmarks.positions.forEach(pt => ctx.fillRect(pt.x - ptOffset, pt.y - ptOffset, lineWidth, lineWidth))
})
}
\ No newline at end of file
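A usage sketch for the updated `drawLandmarks`; the `drawLines`, `lineWidth` and `color` options reflect the parameters visible in the hunk above, and the canvas and landmarks are assumed inputs:

```ts
import * as faceapi from 'face-api.js'

declare const canvas: HTMLCanvasElement          // assumed: overlay canvas
declare const landmarks: faceapi.FaceLandmarks68 // assumed: a landmark detection result

// with drawLines enabled the contours are stroked,
// otherwise each of the 68 positions is drawn as a point
faceapi.drawLandmarks(canvas, landmarks, { drawLines: true, lineWidth: 2 })
```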
......@@ -27,7 +27,7 @@ export async function extractFaceTensors(
const boxes = detections.map(
det => det instanceof FaceDetection
? det.forSize(imgWidth, imgHeight).getBox()
? det.forSize(imgWidth, imgHeight).box
: det
)
.map(box => box.clipAtImageBorders(imgWidth, imgHeight))
......
......@@ -39,7 +39,7 @@ export async function extractFaces(
const ctx = getContext2dOrThrow(canvas)
const boxes = detections.map(
det => det instanceof FaceDetection
? det.forSize(canvas.width, canvas.height).getBox().floor()
? det.forSize(canvas.width, canvas.height).box.floor()
: det
)
.map(box => box.clipAtImageBorders(canvas.width, canvas.height))
......
import { FaceDetectionNet } from './FaceDetectionNet';
export * from './FaceDetectionNet';
export function createFaceDetectionNet(weights: Float32Array) {
const net = new FaceDetectionNet()
net.extractWeights(weights)
return net
}
export function faceDetectionNet(weights: Float32Array) {
console.warn('faceDetectionNet(weights: Float32Array) will be deprecated in the future, use createFaceDetectionNet instead')
return createFaceDetectionNet(weights)
}
\ No newline at end of file
......@@ -38,7 +38,7 @@ function denseBlock(
export class FaceLandmark68Net extends FaceLandmark68NetBase<NetParams> {
constructor() {
super('FaceLandmark68LargeNet')
super('FaceLandmark68Net')
}
public runNet(input: NetInput): tf.Tensor2D {
......@@ -46,7 +46,7 @@ export class FaceLandmark68Net extends FaceLandmark68NetBase<NetParams> {
const { params } = this
if (!params) {
throw new Error('FaceLandmark68LargeNet - load model before inference')
throw new Error('FaceLandmark68Net - load model before inference')
}
return tf.tidy(() => {
......
import * as tf from '@tensorflow/tfjs-core';
import { isEven, NetInput, NeuralNetwork, Point, TNetInput, toNetInput, Dimensions } from 'tfjs-image-recognition-base';
import { IDimensions, isEven, NetInput, NeuralNetwork, Point, TNetInput, toNetInput } from 'tfjs-image-recognition-base';
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
......@@ -17,7 +17,7 @@ export class FaceLandmark68NetBase<NetParams> extends NeuralNetwork<NetParams> {
throw new Error(`${this.__name} - runNet not implemented`)
}
public postProcess(output: tf.Tensor2D, inputSize: number, originalDimensions: Dimensions[]): tf.Tensor2D {
public postProcess(output: tf.Tensor2D, inputSize: number, originalDimensions: IDimensions[]): tf.Tensor2D {
const inputDimensions = originalDimensions.map(({ width, height }) => {
const scale = inputSize / Math.max(height, width)
......
......@@ -10,8 +10,3 @@ export function createFaceLandmarkNet(weights: Float32Array) {
net.extractWeights(weights)
return net
}
\ No newline at end of file
export function faceLandmarkNet(weights: Float32Array) {
console.warn('faceLandmarkNet(weights: Float32Array) will be deprecated in the future, use createFaceLandmarkNet instead')
return createFaceLandmarkNet(weights)
}
\ No newline at end of file
......@@ -7,8 +7,3 @@ export function createFaceRecognitionNet(weights: Float32Array) {
net.extractWeights(weights)
return net
}
\ No newline at end of file
export function faceRecognitionNet(weights: Float32Array) {
console.warn('faceRecognitionNet(weights: Float32Array) will be deprecated in the future, use createFaceRecognitionNet instead')
return createFaceRecognitionNet(weights)
}
\ No newline at end of file
import * as tf from '@tensorflow/tfjs-core';
import { NetInput, TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { allFacesMtcnnFactory, allFacesSsdMobilenetv1Factory, allFacesTinyYolov2Factory } from './allFacesFactory';
import { FaceDetection } from './classes/FaceDetection';
import { FaceLandmarks68 } from './classes/FaceLandmarks68';
import { FullFaceDescription } from './classes/FullFaceDescription';
import { FaceDetectionNet } from './faceDetectionNet/FaceDetectionNet';
import { FaceLandmark68Net } from './faceLandmarkNet/FaceLandmark68Net';
import { FaceLandmark68TinyNet } from './faceLandmarkNet/FaceLandmark68TinyNet';
import { FaceRecognitionNet } from './faceRecognitionNet/FaceRecognitionNet';
import { Mtcnn } from './mtcnn/Mtcnn';
import { MtcnnForwardParams, MtcnnResult } from './mtcnn/types';
import { TinyYolov2 } from './tinyYolov2/TinyYolov2';
export const detectionNet = new FaceDetectionNet()
export const landmarkNet = new FaceLandmark68Net()
export const recognitionNet = new FaceRecognitionNet()
// nets need more specific names to avoid ambiguity in the future,
// when alternative net implementations are provided
export const nets = {
ssdMobilenetv1: detectionNet,
faceLandmark68Net: landmarkNet,
faceLandmark68TinyNet: new FaceLandmark68TinyNet(),
faceRecognitionNet: recognitionNet,
mtcnn: new Mtcnn(),
tinyYolov2: new TinyYolov2()
}
export function loadSsdMobilenetv1Model(url: string) {
return nets.ssdMobilenetv1.load(url)
}
export function loadFaceLandmarkModel(url: string) {
return nets.faceLandmark68Net.load(url)
}
export function loadFaceLandmarkTinyModel(url: string) {
return nets.faceLandmark68TinyNet.load(url)
}
export function loadFaceRecognitionModel(url: string) {
return nets.faceRecognitionNet.load(url)
}
export function loadMtcnnModel(url: string) {
return nets.mtcnn.load(url)
}
export function loadTinyYolov2Model(url: string) {
return nets.tinyYolov2.load(url)
}
export function loadFaceDetectionModel(url: string) {
return loadSsdMobilenetv1Model(url)
}
export function loadModels(url: string) {
console.warn('loadModels will be deprecated in the future')
return Promise.all([
loadSsdMobilenetv1Model(url),
loadFaceLandmarkModel(url),
loadFaceRecognitionModel(url),
loadMtcnnModel(url),
loadTinyYolov2Model(url)
])
}
export function locateFaces(
input: TNetInput,
minConfidence?: number,
maxResults?: number
): Promise<FaceDetection[]> {
return nets.ssdMobilenetv1.locateFaces(input, minConfidence, maxResults)
}
export const ssdMobilenetv1 = locateFaces
export function detectLandmarks(
input: TNetInput
): Promise<FaceLandmarks68 | FaceLandmarks68[]> {
return nets.faceLandmark68Net.detectLandmarks(input)
}
export function detectLandmarksTiny(
input: TNetInput
): Promise<FaceLandmarks68 | FaceLandmarks68[]> {
return nets.faceLandmark68TinyNet.detectLandmarks(input)
}
export function computeFaceDescriptor(
input: TNetInput
): Promise<Float32Array | Float32Array[]> {
return nets.faceRecognitionNet.computeFaceDescriptor(input)
}
export function mtcnn(
input: TNetInput,
forwardParams: MtcnnForwardParams
): Promise<MtcnnResult[]> {
return nets.mtcnn.forward(input, forwardParams)
}
export function tinyYolov2(
input: TNetInput,
forwardParams: TinyYolov2Types.TinyYolov2ForwardParams
): Promise<FaceDetection[]> {
return nets.tinyYolov2.locateFaces(input, forwardParams)
}
export type allFacesSsdMobilenetv1Function = (
input: tf.Tensor | NetInput | TNetInput,
minConfidence?: number,
useBatchProcessing?: boolean
) => Promise<FullFaceDescription[]>
export const allFacesSsdMobilenetv1: allFacesSsdMobilenetv1Function = allFacesSsdMobilenetv1Factory(
nets.ssdMobilenetv1,
nets.faceLandmark68Net,
nets.faceRecognitionNet
)
export type allFacesTinyYolov2Function = (
input: tf.Tensor | NetInput | TNetInput,
forwardParams?: TinyYolov2Types.TinyYolov2ForwardParams,
useBatchProcessing?: boolean
) => Promise<FullFaceDescription[]>
export const allFacesTinyYolov2: allFacesTinyYolov2Function = allFacesTinyYolov2Factory(
nets.tinyYolov2,
nets.faceLandmark68Net,
nets.faceRecognitionNet
)
export type allFacesMtcnnFunction = (
input: tf.Tensor | NetInput | TNetInput,
mtcnnForwardParams?: MtcnnForwardParams,
useBatchProcessing?: boolean
) => Promise<FullFaceDescription[]>
export const allFacesMtcnn: allFacesMtcnnFunction = allFacesMtcnnFactory(
nets.mtcnn,
nets.faceRecognitionNet
)
export const allFaces = allFacesSsdMobilenetv1
export class ComposableTask<T> {
public async then(
onfulfilled: (value: T) => T | PromiseLike<T>
): Promise<T> {
return onfulfilled(await this.run())
}
public async run(): Promise<T> {
throw new Error('ComposableTask - run is not implemented')
}
}
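Because `ComposableTask` implements `then`, task instances are thenable: `await`-ing a task transparently invokes `run()`. A minimal sketch with a hypothetical subclass:

```ts
import { ComposableTask } from 'face-api.js'

// hypothetical task, just to illustrate the pattern
class AnswerTask extends ComposableTask<number> {
  public async run(): Promise<number> {
    return 42
  }
}

async function example() {
  // awaiting calls then(), which awaits run() and resolves with its result
  const answer = await new AnswerTask()
  console.log(answer) // 42
}
```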
import { TNetInput } from 'tfjs-image-recognition-base';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FullFaceDescription } from '../classes/FullFaceDescription';
import { extractFaces } from '../dom';
import { ComposableTask } from './ComposableTask';
import { nets } from './nets';
export class ComputeFaceDescriptorsTaskBase<TReturn, DetectFaceLandmarksReturnType> extends ComposableTask<TReturn> {
constructor(
protected detectFaceLandmarksTask: ComposableTask<DetectFaceLandmarksReturnType> | Promise<DetectFaceLandmarksReturnType>,
protected input: TNetInput
) {
super()
}
}
export class ComputeAllFaceDescriptorsTask extends ComputeFaceDescriptorsTaskBase<FullFaceDescription[], FaceDetectionWithLandmarks[]> {
public async run(): Promise<FullFaceDescription[]> {
const facesWithLandmarks = await this.detectFaceLandmarksTask
const alignedFaceCanvases = await extractFaces(
this.input,
facesWithLandmarks.map(({ landmarks }) => landmarks.align())
)
return await Promise.all(facesWithLandmarks.map(async ({ detection, landmarks }, i) => {
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaceCanvases[i]) as Float32Array
return new FullFaceDescription(detection, landmarks, descriptor)
}))
}
}
export class ComputeSingleFaceDescriptorTask extends ComputeFaceDescriptorsTaskBase<FullFaceDescription | undefined, FaceDetectionWithLandmarks | undefined> {
public async run(): Promise<FullFaceDescription | undefined> {
const detectionWithLandmarks = await this.detectFaceLandmarksTask
if (!detectionWithLandmarks) {
return
}
const { detection, landmarks, alignedRect } = detectionWithLandmarks
const alignedFaceCanvas = (await extractFaces(this.input, [alignedRect]))[0]
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaceCanvas) as Float32Array
return new FullFaceDescription(detection, landmarks, descriptor)
}
}
\ No newline at end of file
import { TNetInput } from 'tfjs-image-recognition-base';
import { FaceDetection } from '../classes/FaceDetection';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
import { extractFaces } from '../dom';
import { FaceLandmark68Net } from '../faceLandmarkNet/FaceLandmark68Net';
import { FaceLandmark68TinyNet } from '../faceLandmarkNet/FaceLandmark68TinyNet';
import { ComposableTask } from './ComposableTask';
import { ComputeAllFaceDescriptorsTask, ComputeSingleFaceDescriptorTask } from './ComputeFaceDescriptorsTasks';
import { nets } from './nets';
export class DetectFaceLandmarksTaskBase<ReturnType, DetectFacesReturnType> extends ComposableTask<ReturnType> {
constructor(
protected detectFacesTask: ComposableTask<DetectFacesReturnType> | Promise<DetectFacesReturnType>,
protected input: TNetInput,
protected useTinyLandmarkNet: boolean
) {
super()
}
protected get landmarkNet(): FaceLandmark68Net | FaceLandmark68TinyNet {
return this.useTinyLandmarkNet
? nets.faceLandmark68TinyNet
: nets.faceLandmark68Net
}
}
export class DetectAllFaceLandmarksTask extends DetectFaceLandmarksTaskBase<FaceDetectionWithLandmarks[], FaceDetection[]> {
public async run(): Promise<FaceDetectionWithLandmarks[]> {
const detections = await this.detectFacesTask
const faceCanvases = await extractFaces(this.input, detections)
const faceLandmarksByFace = await Promise.all(faceCanvases.map(
canvas => this.landmarkNet.detectLandmarks(canvas)
)) as FaceLandmarks68[]
return detections.map((detection, i) =>
new FaceDetectionWithLandmarks(detection, faceLandmarksByFace[i])
)
}
withFaceDescriptors(): ComputeAllFaceDescriptorsTask {
return new ComputeAllFaceDescriptorsTask(this, this.input)
}
}
export class DetectSingleFaceLandmarksTask extends DetectFaceLandmarksTaskBase<FaceDetectionWithLandmarks | undefined, FaceDetection | undefined> {
public async run(): Promise<FaceDetectionWithLandmarks | undefined> {
const detection = await this.detectFacesTask
if (!detection) {
return
}
const faceCanvas = (await extractFaces(this.input, [detection]))[0]
return new FaceDetectionWithLandmarks(
detection,
await this.landmarkNet.detectLandmarks(faceCanvas) as FaceLandmarks68
)
}
withFaceDescriptor(): ComputeSingleFaceDescriptorTask {
return new ComputeSingleFaceDescriptorTask(this, this.input)
}
}
\ No newline at end of file
import { TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2Options } from 'tfjs-tiny-yolov2';
import { FaceDetection } from '../classes/FaceDetection';
import { MtcnnOptions } from '../mtcnn/MtcnnOptions';
import { SsdMobilenetv1Options } from '../ssdMobilenetv1/SsdMobilenetv1Options';
import { TinyFaceDetectorOptions } from '../tinyFaceDetector/TinyFaceDetectorOptions';
import { ComposableTask } from './ComposableTask';
import { DetectAllFaceLandmarksTask, DetectSingleFaceLandmarksTask } from './DetectFaceLandmarksTasks';
import { nets } from './nets';
import { FaceDetectionOptions } from './types';
export function detectSingleFace(
input: TNetInput,
options: FaceDetectionOptions = new SsdMobilenetv1Options()
): DetectSingleFaceTask {
return new DetectSingleFaceTask(input, options)
}
export function detectAllFaces(
input: TNetInput,
options: FaceDetectionOptions = new SsdMobilenetv1Options()
): DetectAllFacesTask {
return new DetectAllFacesTask(input, options)
}
export class DetectFacesTaskBase<TReturn> extends ComposableTask<TReturn> {
constructor(
protected input: TNetInput,
protected options: FaceDetectionOptions = new SsdMobilenetv1Options()
) {
super()
}
}
export class DetectAllFacesTask extends DetectFacesTaskBase<FaceDetection[]> {
public async run(): Promise<FaceDetection[]> {
const { input, options } = this
if (options instanceof MtcnnOptions) {
return (await nets.mtcnn.forward(input, options))
.map(result => result.faceDetection)
}
const faceDetectionFunction = options instanceof TinyFaceDetectorOptions
? (input: TNetInput) => nets.tinyFaceDetector.locateFaces(input, options)
: (
options instanceof SsdMobilenetv1Options
? (input: TNetInput) => nets.ssdMobilenetv1.locateFaces(input, options)
: (
options instanceof TinyYolov2Options
? (input: TNetInput) => nets.tinyYolov2.locateFaces(input, options)
: null
)
)
if (!faceDetectionFunction) {
throw new Error('detectFaces - expected options to be instance of TinyFaceDetectorOptions | SsdMobilenetv1Options | MtcnnOptions | TinyYolov2Options')
}
return faceDetectionFunction(input)
}
withFaceLandmarks(useTinyLandmarkNet: boolean = false): DetectAllFaceLandmarksTask {
return new DetectAllFaceLandmarksTask(this, this.input, useTinyLandmarkNet)
}
}
export class DetectSingleFaceTask extends DetectFacesTaskBase<FaceDetection | undefined> {
public async run(): Promise<FaceDetection | undefined> {
// pick the detection with the highest score: sort descending
return (await new DetectAllFacesTask(this.input, this.options))
.sort((f1, f2) => f2.score - f1.score)[0]
}
withFaceLandmarks(useTinyLandmarkNet: boolean = false): DetectSingleFaceLandmarksTask {
return new DetectSingleFaceLandmarksTask(this, this.input, useTinyLandmarkNet)
}
}
\ No newline at end of file
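These task classes compose into the fluent detection API. A sketch, assuming the required models are already loaded and `input` is a media element:

```ts
import * as faceapi from 'face-api.js'

declare const input: HTMLImageElement // assumed: models already loaded

async function example() {
  // all faces: detection -> 68 point landmarks -> descriptors
  const fullDescriptions = await faceapi
    .detectAllFaces(input, new faceapi.SsdMobilenetv1Options({ minConfidence: 0.8 }))
    .withFaceLandmarks()
    .withFaceDescriptors()

  // single face, using the tiny landmark model (withFaceLandmarks(true))
  const single = await faceapi
    .detectSingleFace(input, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks(true)
    .withFaceDescriptor()

  console.log(fullDescriptions.length, single && single.descriptor.length)
}
```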
import { FaceMatch } from '../classes/FaceMatch';
import { FullFaceDescription } from '../classes/FullFaceDescription';
import { LabeledFaceDescriptors } from '../classes/LabeledFaceDescriptors';
import { euclideanDistance } from '../euclideanDistance';
export class FaceMatcher {
private _labeledDescriptors: LabeledFaceDescriptors[]
private _distanceThreshold: number
constructor(
inputs: LabeledFaceDescriptors | FullFaceDescription | Float32Array | Array<LabeledFaceDescriptors | FullFaceDescription | Float32Array>,
distanceThreshold: number = 0.6
) {
this._distanceThreshold = distanceThreshold
const inputArray = Array.isArray(inputs) ? inputs : [inputs]
if (!inputArray.length) {
throw new Error(`FaceMatcher.constructor - expected at least one input`)
}
let count = 1
const createUniqueLabel = () => `person ${count++}`
this._labeledDescriptors = inputArray.map((desc) => {
if (desc instanceof LabeledFaceDescriptors) {
return desc
}
if (desc instanceof FullFaceDescription) {
return new LabeledFaceDescriptors(createUniqueLabel(), [desc.descriptor])
}
if (desc instanceof Float32Array) {
return new LabeledFaceDescriptors(createUniqueLabel(), [desc])
}
throw new Error(`FaceMatcher.constructor - expected inputs to be of type LabeledFaceDescriptors | FullFaceDescription | Float32Array | Array<LabeledFaceDescriptors | FullFaceDescription | Float32Array>`)
})
}
public get labeledDescriptors(): LabeledFaceDescriptors[] { return this._labeledDescriptors }
public get distanceThreshold(): number { return this._distanceThreshold }
public computeMeanDistance(queryDescriptor: Float32Array, descriptors: Float32Array[]): number {
return descriptors
.map(d => euclideanDistance(d, queryDescriptor))
.reduce((d1, d2) => d1 + d2, 0)
/ (descriptors.length || 1)
}
public matchDescriptor(queryDescriptor: Float32Array): FaceMatch {
return this.labeledDescriptors
.map(({ descriptors, label }) => new FaceMatch(
label,
this.computeMeanDistance(queryDescriptor, descriptors)
))
.reduce((best, curr) => best.distance < curr.distance ? best : curr)
}
public findBestMatch(queryDescriptor: Float32Array): FaceMatch {
const bestMatch = this.matchDescriptor(queryDescriptor)
return bestMatch.distance < this.distanceThreshold
? bestMatch
: new FaceMatch('unknown', bestMatch.distance)
}
}
\ No newline at end of file
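A sketch of the intended recognition flow, assuming reference and query image elements and previously loaded models. A `FullFaceDescription` passed directly to the constructor is auto-labeled `person 1`:

```ts
import * as faceapi from 'face-api.js'

declare const referenceImage: HTMLImageElement // assumed inputs
declare const queryImage: HTMLImageElement

async function example() {
  const reference = await faceapi.detectSingleFace(referenceImage)
    .withFaceLandmarks()
    .withFaceDescriptor()
  if (!reference) return

  // descriptors from the reference image, matched at the default 0.6 threshold
  const matcher = new faceapi.FaceMatcher(reference, 0.6)

  const query = await faceapi.detectSingleFace(queryImage)
    .withFaceLandmarks()
    .withFaceDescriptor()
  if (query) {
    // 'person 1 (<distance>)', or 'unknown (<distance>)' above the threshold
    console.log(matcher.findBestMatch(query.descriptor).toString())
  }
}
```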
import { TNetInput } from 'tfjs-image-recognition-base';
import { ITinyYolov2Options, TinyYolov2Options } from 'tfjs-tiny-yolov2';
import { FullFaceDescription } from '../classes';
import { IMtcnnOptions, MtcnnOptions } from '../mtcnn/MtcnnOptions';
import { SsdMobilenetv1Options } from '../ssdMobilenetv1';
import { detectAllFaces } from './DetectFacesTasks';
// export allFaces API for backward compatibility
export async function allFacesSsdMobilenetv1(
input: TNetInput,
minConfidence?: number
): Promise<FullFaceDescription[]> {
return await detectAllFaces(input, new SsdMobilenetv1Options(minConfidence ? { minConfidence } : {}))
.withFaceLandmarks()
.withFaceDescriptors()
}
export async function allFacesTinyYolov2(
input: TNetInput,
forwardParams: ITinyYolov2Options = {}
): Promise<FullFaceDescription[]> {
return await detectAllFaces(input, new TinyYolov2Options(forwardParams))
.withFaceLandmarks()
.withFaceDescriptors()
}
export async function allFacesMtcnn(
input: TNetInput,
forwardParams: IMtcnnOptions = {}
): Promise<FullFaceDescription[]> {
return await detectAllFaces(input, new MtcnnOptions(forwardParams))
.withFaceLandmarks()
.withFaceDescriptors()
}
export const allFaces = allFacesSsdMobilenetv1
export * from './allFaces'
export * from './ComposableTask'
export * from './ComputeFaceDescriptorsTasks'
export * from './DetectFacesTasks'
export * from './DetectFaceLandmarksTasks'
export * from './FaceMatcher'
export * from './nets'
export * from './types'
import { TNetInput } from 'tfjs-image-recognition-base';
import { ITinyYolov2Options } from 'tfjs-tiny-yolov2';
import { FaceDetection } from '../classes/FaceDetection';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FaceLandmarks5 } from '../classes/FaceLandmarks5';
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
import { FaceLandmark68Net } from '../faceLandmarkNet/FaceLandmark68Net';
import { FaceLandmark68TinyNet } from '../faceLandmarkNet/FaceLandmark68TinyNet';
import { FaceRecognitionNet } from '../faceRecognitionNet/FaceRecognitionNet';
import { Mtcnn } from '../mtcnn/Mtcnn';
import { MtcnnOptions } from '../mtcnn/MtcnnOptions';
import { SsdMobilenetv1 } from '../ssdMobilenetv1/SsdMobilenetv1';
import { SsdMobilenetv1Options } from '../ssdMobilenetv1/SsdMobilenetv1Options';
import { TinyFaceDetector } from '../tinyFaceDetector/TinyFaceDetector';
import { TinyFaceDetectorOptions } from '../tinyFaceDetector/TinyFaceDetectorOptions';
import { TinyYolov2 } from '../tinyYolov2/TinyYolov2';
export const nets = {
ssdMobilenetv1: new SsdMobilenetv1(),
tinyFaceDetector: new TinyFaceDetector(),
tinyYolov2: new TinyYolov2(),
mtcnn: new Mtcnn(),
faceLandmark68Net: new FaceLandmark68Net(),
faceLandmark68TinyNet: new FaceLandmark68TinyNet(),
faceRecognitionNet: new FaceRecognitionNet()
}
/**
* Attempts to detect all faces in an image using SSD Mobilenetv1 Network.
*
* @param input The input image.
* @param options (optional, default: see SsdMobilenetv1Options constructor for default parameters).
* @returns Bounding box of each face with score.
*/
export const ssdMobilenetv1 = (input: TNetInput, options: SsdMobilenetv1Options): Promise<FaceDetection[]> =>
nets.ssdMobilenetv1.locateFaces(input, options)
/**
* Attempts to detect all faces in an image using the Tiny Face Detector.
*
* @param input The input image.
* @param options (optional, default: see TinyFaceDetectorOptions constructor for default parameters).
* @returns Bounding box of each face with score.
*/
export const tinyFaceDetector = (input: TNetInput, options: TinyFaceDetectorOptions): Promise<FaceDetection[]> =>
nets.tinyFaceDetector.locateFaces(input, options)
/**
* Attempts to detect all faces in an image using the Tiny Yolov2 Network.
*
* @param input The input image.
* @param options (optional, default: see TinyYolov2Options constructor for default parameters).
* @returns Bounding box of each face with score.
*/
export const tinyYolov2 = (input: TNetInput, options: ITinyYolov2Options): Promise<FaceDetection[]> =>
nets.tinyYolov2.locateFaces(input, options)
/**
* Attempts to detect all faces in an image and the 5 point face landmarks
* of each detected face using the MTCNN Network.
*
* @param input The input image.
* @param options (optional, default: see MtcnnOptions constructor for default parameters).
* @returns Bounding box of each face with score and 5 point face landmarks.
*/
export const mtcnn = (input: TNetInput, options: MtcnnOptions): Promise<FaceDetectionWithLandmarks<FaceLandmarks5>[]> =>
nets.mtcnn.forward(input, options)
/**
* Detects the 68 point face landmark positions of the face shown in an image.
*
* @param input The face image extracted from the bounding box of a face. Can
* also be an array of input images, which will be batch processed.
* @returns 68 point face landmarks or array thereof in case of batch input.
*/
export const detectFaceLandmarks = (input: TNetInput): Promise<FaceLandmarks68 | FaceLandmarks68[]> =>
nets.faceLandmark68Net.detectLandmarks(input)
/**
* Detects the 68 point face landmark positions of the face shown in an image
* using a tinier version of the 68 point face landmark model, which is slightly
* faster at inference, but also slightly less accurate.
*
* @param input The face image extracted from the bounding box of a face. Can
* also be an array of input images, which will be batch processed.
* @returns 68 point face landmarks or array thereof in case of batch input.
*/
export const detectFaceLandmarksTiny = (input: TNetInput): Promise<FaceLandmarks68 | FaceLandmarks68[]> =>
nets.faceLandmark68TinyNet.detectLandmarks(input)
/**
* Computes a 128 entry vector (face descriptor / face embeddings) from the face shown in an image,
* which uniquely represents the features of that person's face. The computed face descriptor can
* be used to measure the similarity between faces by computing the euclidean distance of two
* face descriptors.
*
* @param input The face image extracted from the aligned bounding box of a face. Can
* also be an array of input images, which will be batch processed.
* @returns Face descriptor with 128 entries or array thereof in case of batch input.
*/
export const computeFaceDescriptor = (input: TNetInput): Promise<Float32Array | Float32Array[]> =>
nets.faceRecognitionNet.computeFaceDescriptor(input)
export const loadSsdMobilenetv1Model = (url: string) => nets.ssdMobilenetv1.load(url)
export const loadTinyFaceDetectorModel = (url: string) => nets.tinyFaceDetector.load(url)
export const loadMtcnnModel = (url: string) => nets.mtcnn.load(url)
export const loadTinyYolov2Model = (url: string) => nets.tinyYolov2.load(url)
export const loadFaceLandmarkModel = (url: string) => nets.faceLandmark68Net.load(url)
export const loadFaceLandmarkTinyModel = (url: string) => nets.faceLandmark68TinyNet.load(url)
export const loadFaceRecognitionModel = (url: string) => nets.faceRecognitionNet.load(url)
// backward compatibility
export const loadFaceDetectionModel = loadSsdMobilenetv1Model
export const locateFaces = ssdMobilenetv1
export const detectLandmarks = detectFaceLandmarks
\ No newline at end of file
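Putting the loaders and the shorthands together; a sketch, assuming the model manifest and weight files are served under a `/models` route:

```ts
import * as faceapi from 'face-api.js'

declare const input: HTMLImageElement

async function example() {
  // assumed route; any URL hosting the manifest + weight shard files works
  await faceapi.loadSsdMobilenetv1Model('/models')
  await faceapi.loadFaceLandmarkModel('/models')
  await faceapi.loadFaceRecognitionModel('/models')

  const detections = await faceapi.ssdMobilenetv1(input, new faceapi.SsdMobilenetv1Options())
  console.log(detections.map(det => det.score))
}
```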
import { TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2Options } from 'tfjs-tiny-yolov2';
import { FaceDetection } from '../classes/FaceDetection';
import { MtcnnOptions } from '../mtcnn/MtcnnOptions';
import { SsdMobilenetv1Options } from '../ssdMobilenetv1/SsdMobilenetv1Options';
import { TinyFaceDetectorOptions } from '../tinyFaceDetector/TinyFaceDetectorOptions';
export type FaceDetectionOptions = TinyFaceDetectorOptions | SsdMobilenetv1Options | MtcnnOptions | TinyYolov2Options
export type FaceDetectionFunction = (input: TNetInput) => Promise<FaceDetection[]>
\ No newline at end of file
......@@ -10,9 +10,10 @@ export * from './classes';
export * from './dom'
export * from './euclideanDistance';
export * from './faceDetectionNet';
export * from './faceLandmarkNet';
export * from './faceRecognitionNet';
export * from './globalApi';
export * from './mtcnn';
export * from './ssdMobilenetv1';
export * from './tinyFaceDetector';
export * from './tinyYolov2';
\ No newline at end of file
......@@ -2,18 +2,19 @@ import * as tf from '@tensorflow/tfjs-core';
import { NetInput, NeuralNetwork, Point, Rect, TNetInput, toNetInput } from 'tfjs-image-recognition-base';
import { FaceDetection } from '../classes/FaceDetection';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FaceLandmarks5 } from '../classes/FaceLandmarks5';
import { bgrToRgbTensor } from './bgrToRgbTensor';
import { CELL_SIZE } from './config';
import { extractParams } from './extractParams';
import { getDefaultMtcnnForwardParams } from './getDefaultMtcnnForwardParams';
import { getSizesForScale } from './getSizesForScale';
import { loadQuantizedParams } from './loadQuantizedParams';
import { IMtcnnOptions, MtcnnOptions } from './MtcnnOptions';
import { pyramidDown } from './pyramidDown';
import { stage1 } from './stage1';
import { stage2 } from './stage2';
import { stage3 } from './stage3';
import { MtcnnForwardParams, MtcnnResult, NetParams } from './types';
import { NetParams } from './types';
export class Mtcnn extends NeuralNetwork<NetParams> {
......@@ -23,8 +24,8 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
public async forwardInput(
input: NetInput,
forwardParams: MtcnnForwardParams = {}
): Promise<{ results: MtcnnResult[], stats: any }> {
forwardParams: IMtcnnOptions = {}
): Promise<{ results: FaceDetectionWithLandmarks<FaceLandmarks5>[], stats: any }> {
const { params } = this
......@@ -63,7 +64,7 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
maxNumScales,
scoreThresholds,
scaleSteps
} = Object.assign({}, getDefaultMtcnnForwardParams(), forwardParams)
} = new MtcnnOptions(forwardParams)
const scales = (scaleSteps || pyramidDown(minFaceSize, scaleFactor, [height, width]))
.filter(scale => {
......@@ -100,8 +101,8 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
const out3 = await stage3(inputCanvas, out2.boxes, scoreThresholds[2], params.onet, stats)
stats.total_stage3 = Date.now() - ts
const results = out3.boxes.map((box, idx) => ({
faceDetection: new FaceDetection(
const results = out3.boxes.map((box, idx) => new FaceDetectionWithLandmarks<FaceLandmarks5>(
new FaceDetection(
out3.scores[idx],
new Rect(
box.left / width,
......@@ -114,19 +115,19 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
width
}
),
faceLandmarks: new FaceLandmarks5(
out3.points[idx].map(pt => pt.div(new Point(width, height))),
{ width, height }
new FaceLandmarks5(
out3.points[idx].map(pt => pt.sub(new Point(box.left, box.top)).div(new Point(box.width, box.height))),
{ width: box.width, height: box.height }
)
}))
))
return onReturn({ results, stats })
}
public async forward(
input: TNetInput,
forwardParams: MtcnnForwardParams = {}
): Promise<MtcnnResult[]> {
forwardParams: IMtcnnOptions = {}
): Promise<FaceDetectionWithLandmarks<FaceLandmarks5>[]> {
return (
await this.forwardInput(
await toNetInput(input),
......@@ -137,8 +138,8 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
public async forwardWithStats(
input: TNetInput,
forwardParams: MtcnnForwardParams = {}
): Promise<{ results: MtcnnResult[], stats: any }> {
forwardParams: IMtcnnOptions = {}
): Promise<{ results: FaceDetectionWithLandmarks<FaceLandmarks5>[], stats: any }> {
return this.forwardInput(
await toNetInput(input),
forwardParams
......
export interface IMtcnnOptions {
minFaceSize?: number
scaleFactor?: number
maxNumScales?: number
scoreThresholds?: number[]
scaleSteps?: number[]
}
export class MtcnnOptions {
protected _name: string = 'MtcnnOptions'
private _minFaceSize: number
private _scaleFactor: number
private _maxNumScales: number
private _scoreThresholds: number[]
private _scaleSteps: number[] | undefined
constructor({ minFaceSize, scaleFactor, maxNumScales, scoreThresholds, scaleSteps }: IMtcnnOptions = {}) {
this._minFaceSize = minFaceSize || 20
this._scaleFactor = scaleFactor || 0.709
this._maxNumScales = maxNumScales || 10
this._scoreThresholds = scoreThresholds || [0.6, 0.7, 0.7]
this._scaleSteps = scaleSteps
if (typeof this._minFaceSize !== 'number' || this._minFaceSize <= 0) {
throw new Error(`${this._name} - expected minFaceSize to be a number > 0`)
}
if (typeof this._scaleFactor !== 'number' || this._scaleFactor <= 0 || this._scaleFactor >= 1) {
throw new Error(`${this._name} - expected scaleFactor to be a number between 0 and 1`)
}
if (typeof this._maxNumScales !== 'number' || this._maxNumScales <= 0) {
throw new Error(`${this._name} - expected maxNumScales to be a number > 0`)
}
if (
!Array.isArray(this._scoreThresholds)
|| this._scoreThresholds.length !== 3
|| this._scoreThresholds.some(th => typeof th !== 'number')
) {
throw new Error(`${this._name} - expected scoreThresholds to be an array of numbers of length 3`)
}
if (
this._scaleSteps
&& (!Array.isArray(this._scaleSteps) || this._scaleSteps.some(th => typeof th !== 'number'))
) {
throw new Error(`${this._name} - expected scaleSteps to be an array of numbers`)
}
}
get minFaceSize(): number { return this._minFaceSize }
get scaleFactor(): number { return this._scaleFactor }
get maxNumScales(): number { return this._maxNumScales }
get scoreThresholds(): number[] { return this._scoreThresholds }
get scaleSteps(): number[] | undefined { return this._scaleSteps }
}
\ No newline at end of file
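All parameters fall back to the defaults listed above, so a partial options object suffices. A usage sketch (MTCNN model assumed to be loaded):

```ts
import * as faceapi from 'face-api.js'

declare const input: HTMLImageElement // assumed: MTCNN model already loaded

async function example() {
  // only consider faces of at least 40px, keep the default score thresholds
  const options = new faceapi.MtcnnOptions({ minFaceSize: 40 })
  const results = await faceapi.mtcnn(input, options)
  results.forEach(({ detection, landmarks }) => {
    console.log(detection.score, landmarks.positions.length) // 5 point landmarks
  })
}
```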
import * as tf from '@tensorflow/tfjs-core';
import { Box, createCanvas, Dimensions, getContext2dOrThrow } from 'tfjs-image-recognition-base';
import { Box, createCanvas, getContext2dOrThrow, IDimensions } from 'tfjs-image-recognition-base';
import { normalize } from './normalize';
export async function extractImagePatches(
img: HTMLCanvasElement,
boxes: Box[],
{ width, height }: Dimensions
{ width, height }: IDimensions
): Promise<tf.Tensor4D[]> {
......
export function getDefaultMtcnnForwardParams() {
return {
minFaceSize: 20,
scaleFactor: 0.709,
maxNumScales: 10,
scoreThresholds: [0.6, 0.7, 0.7]
}
}
\ No newline at end of file
import { Mtcnn } from './Mtcnn';
export * from './Mtcnn';
export * from './MtcnnOptions';
export function createMtcnn(weights: Float32Array) {
const net = new Mtcnn()
......
......@@ -40,16 +40,3 @@ export type NetParams = {
rnet: RNetParams
onet: ONetParams
}
export type MtcnnResult = {
faceDetection: FaceDetection,
faceLandmarks: FaceLandmarks5
}
export type MtcnnForwardParams = {
minFaceSize?: number
scaleFactor?: number
maxNumScales?: number
scoreThresholds?: number[]
scaleSteps?: number[]
}
......@@ -8,12 +8,14 @@ import { mobileNetV1 } from './mobileNetV1';
import { nonMaxSuppression } from './nonMaxSuppression';
import { outputLayer } from './outputLayer';
import { predictionLayer } from './predictionLayer';
import { ISsdMobilenetv1Options, SsdMobilenetv1Options } from './SsdMobilenetv1Options';
import { NetParams } from './types';
export class FaceDetectionNet extends NeuralNetwork<NetParams> {
export class SsdMobilenetv1 extends NeuralNetwork<NetParams> {
constructor() {
super('FaceDetectionNet')
super('SsdMobilenetv1')
}
public forwardInput(input: NetInput) {
......@@ -21,7 +23,7 @@ export class FaceDetectionNet extends NeuralNetwork<NetParams> {
const { params } = this
if (!params) {
throw new Error('FaceDetectionNet - load model before inference')
throw new Error('SsdMobilenetv1 - load model before inference')
}
return tf.tidy(() => {
......@@ -45,10 +47,11 @@ export class FaceDetectionNet extends NeuralNetwork<NetParams> {
public async locateFaces(
input: TNetInput,
minConfidence: number = 0.8,
maxResults: number = 100
options: ISsdMobilenetv1Options = {}
): Promise<FaceDetection[]> {
const { maxResults, minConfidence } = new SsdMobilenetv1Options(options)
const netInput = await toNetInput(input)
const {
......
export interface ISsdMobilenetv1Options {
minConfidence?: number
maxResults?: number
}
export class SsdMobilenetv1Options {
protected _name: string = 'SsdMobilenetv1Options'
private _minConfidence: number
private _maxResults: number
constructor({ minConfidence, maxResults }: ISsdMobilenetv1Options = {}) {
this._minConfidence = minConfidence || 0.5
this._maxResults = maxResults || 100
if (typeof this._minConfidence !== 'number' || this._minConfidence <= 0 || this._minConfidence >= 1) {
throw new Error(`${this._name} - expected minConfidence to be a number between 0 and 1`)
}
if (typeof this._maxResults !== 'number') {
throw new Error(`${this._name} - expected maxResults to be a number`)
}
}
get minConfidence(): number { return this._minConfidence }
get maxResults(): number { return this._maxResults }
}
\ No newline at end of file
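Note that the bare constructor falls back to `minConfidence` 0.5, whereas the old `locateFaces` signature defaulted to 0.8. A sketch (SSD Mobilenet v1 model assumed to be loaded):

```ts
import * as faceapi from 'face-api.js'

declare const input: HTMLImageElement // assumed: model already loaded

async function example() {
  // stricter confidence and fewer results than the 0.5 / 100 defaults
  const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.8, maxResults: 10 })
  const detections = await faceapi.ssdMobilenetv1(input, options)
  console.log(detections.length)
}
```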
import { SsdMobilenetv1 } from './SsdMobilenetv1';
export * from './SsdMobilenetv1';
export * from './SsdMobilenetv1Options';
export function createSsdMobilenetv1(weights: Float32Array) {
const net = new SsdMobilenetv1()
net.extractWeights(weights)
return net
}
export function createFaceDetectionNet(weights: Float32Array) {
return createSsdMobilenetv1(weights)
}
// alias for backward compatibility
export class FaceDetectionNet extends SsdMobilenetv1 {}
\ No newline at end of file
import { Point, TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2 as TinyYolov2Base, ITinyYolov2Options } from 'tfjs-tiny-yolov2';
import { FaceDetection } from '../classes';
import { BOX_ANCHORS, DEFAULT_MODEL_NAME, IOU_THRESHOLD, MEAN_RGB } from './const';
export class TinyFaceDetector extends TinyYolov2Base {
constructor() {
const config = {
withSeparableConvs: true,
iouThreshold: IOU_THRESHOLD,
classes: ['face'],
anchors: BOX_ANCHORS,
meanRgb: MEAN_RGB,
isFirstLayerConv2d: true,
filterSizes: [3, 16, 32, 64, 128, 256, 512]
}
super(config)
}
public get anchors(): Point[] {
return this.config.anchors
}
public async locateFaces(input: TNetInput, forwardParams: ITinyYolov2Options): Promise<FaceDetection[]> {
const objectDetections = await this.detect(input, forwardParams)
return objectDetections.map(det => new FaceDetection(det.score, det.relativeBox, { width: det.imageWidth, height: det.imageHeight }))
}
protected loadQuantizedParams(modelUri: string | undefined) {
const defaultModelName = DEFAULT_MODEL_NAME
return super.loadQuantizedParams(modelUri, defaultModelName) as any
}
}
\ No newline at end of file
import { ITinyYolov2Options, TinyYolov2Options } from 'tfjs-tiny-yolov2';
export interface ITinyFaceDetectorOptions extends ITinyYolov2Options {}
export class TinyFaceDetectorOptions extends TinyYolov2Options {
protected _name: string = 'TinyFaceDetectorOptions'
}
\ No newline at end of file
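Since `TinyFaceDetectorOptions` simply extends `TinyYolov2Options`, the usual `ITinyYolov2Options` parameters apply; `inputSize: 416` matches the value exercised in the tests further down, while `scoreThreshold` is assumed from the base options interface:

```ts
import * as faceapi from 'face-api.js'

declare const input: HTMLImageElement // assumed: tiny face detector model already loaded

async function example() {
  const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 416, scoreThreshold: 0.5 })
  const detections = await faceapi.tinyFaceDetector(input, options)
  console.log(detections.length)
}
```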
import { Point } from 'tfjs-image-recognition-base';
export const IOU_THRESHOLD = 0.4
export const BOX_ANCHORS = [
new Point(1.603231, 2.094468),
new Point(6.041143, 7.080126),
new Point(2.882459, 3.518061),
new Point(4.266906, 5.178857),
new Point(9.041765, 10.66308)
]
export const MEAN_RGB: [number, number, number] = [117.001, 114.697, 97.404]
export const DEFAULT_MODEL_NAME = 'tiny_face_detector_model'
\ No newline at end of file
import { TinyFaceDetector } from './TinyFaceDetector';
export * from './TinyFaceDetector';
export * from './TinyFaceDetectorOptions';
export function createTinyFaceDetector(weights: Float32Array) {
const net = new TinyFaceDetector()
net.extractWeights(weights)
return net
}
\ No newline at end of file
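The factory mirrors the other `create*` helpers: it expects the raw uncompressed weights as a `Float32Array`. A sketch following the fetch pattern used in the tests (the weights path is an assumption):

```ts
import { createTinyFaceDetector } from 'face-api.js'

async function example() {
  // assumed path to the uncompressed weights file
  const res = await fetch('/weights_uncompressed/tiny_face_detector_model.weights')
  const weights = new Float32Array(await res.arrayBuffer())
  const net = createTinyFaceDetector(weights)
  // net is now ready for inference; net.dispose() releases the param tensors when done
  net.dispose()
}
```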
import { Point, TNetInput } from 'tfjs-image-recognition-base';
import { TinyYolov2 as TinyYolov2Base, TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { ITinyYolov2Options, TinyYolov2 as TinyYolov2Base } from 'tfjs-tiny-yolov2';
import { FaceDetection } from '../classes';
import {
......@@ -41,7 +41,7 @@ export class TinyYolov2 extends TinyYolov2Base {
return this.config.anchors
}
public async locateFaces(input: TNetInput, forwardParams: TinyYolov2Types.TinyYolov2ForwardParams): Promise<FaceDetection[]> {
public async locateFaces(input: TNetInput, forwardParams: ITinyYolov2Options): Promise<FaceDetection[]> {
const objectDetections = await this.detect(input, forwardParams)
return objectDetections.map(det => new FaceDetection(det.score, det.relativeBox, { width: det.imageWidth, height: det.imageHeight }))
}
......
[[{"x":117.85171800851822,"y":58.91067498922348},{"x":157.70139408111572,"y":64.48519098758698},{"x":142.3133249282837,"y":88.54253697395325},{"x":110.16610914468765,"y":99.86233913898468},{"x":149.25052666664124,"y":106.37608766555786}], [{"x":260.46802616119385,"y":82.86598587036133},{"x":305.55760955810547,"y":83.54110813140869},{"x":281.4357223510742,"y":113.98349380493164},{"x":257.06039476394653,"y":125.50608730316164},{"x":306.0191822052002,"y":127.20984458923341}], [{"x":82.91613873839378,"y":292.6100924015045},{"x":133.91112035512924,"y":304.814593821764},{"x":104.43486452102661,"y":330.3951778411865},{"x":72.6984107196331,"y":342.633121073246},{"x":120.51901644468307,"y":354.2677878141403}], [{"x":278.20400857925415,"y":273.83238887786865},{"x":318.7582621574402,"y":273.39686036109924},{"x":295.54277753829956,"y":300.43398427963257},{"x":279.5109224319458,"y":311.497838973999},{"x":317.0187101364136,"y":313.05305886268616}], [{"x":489.58824399113655,"y":224.56882098317146},{"x":534.514480471611,"y":223.28146517276764},{"x":507.2082565128803,"y":250.17186474800113},{"x":493.0139665305615,"y":271.0716395378113},{"x":530.7517347931862,"y":270.4143014550209}], [{"x":606.397784024477,"y":105.43332603573799},{"x":645.2468676567078,"y":111.50095802545547},{"x":625.1735819578171,"y":133.40740483999252},{"x":598.8033188581467,"y":141.26283955574036},{"x":637.2144679427147,"y":147.32198816537857}]]
\ No newline at end of file
[[{"x":117.85171800851822,"y":58.91067159175873},{"x":157.70139408111572,"y":64.48519098758698},{"x":142.3133249282837,"y":88.54254376888275},{"x":110.1661057472229,"y":99.86233913898468},{"x":149.25052666664124,"y":106.37608766555786}],[{"x":82.91613873839378,"y":292.6100924015045},{"x":133.91112035512924,"y":304.814593821764},{"x":104.43486452102661,"y":330.3951778411865},{"x":72.6984107196331,"y":342.63312900066376},{"x":120.51901644468307,"y":354.2677878141403}],[{"x":278.20400857925415,"y":273.8323953151703},{"x":318.7582621574402,"y":273.39686357975006},{"x":295.5427807569504,"y":300.43398427963257},{"x":279.5109224319458,"y":311.497838973999},{"x":317.0187101364136,"y":313.05305886268616}],[{"x":260.46802616119385,"y":82.86598253250122},{"x":305.55760955810547,"y":83.54110813140869},{"x":281.43571567535395,"y":113.98349380493164},{"x":257.0603914260864,"y":125.50608730316162},{"x":306.01917552948,"y":127.2098445892334}],[{"x":489.5882513225079,"y":224.56882098317146},{"x":534.514480471611,"y":223.28146517276764},{"x":507.20826017856604,"y":250.1718647480011},{"x":493.0139665305615,"y":271.0716395378113},{"x":530.7517347931862,"y":270.4143014550209}],[{"x":606.397784024477,"y":105.43332290649414},{"x":645.2468676567078,"y":111.50095802545547},{"x":625.1735819578171,"y":133.40740483999252},{"x":598.8033188581467,"y":141.26284581422806},{"x":637.2144679427147,"y":147.32198816537857}]]
\ No newline at end of file
import { IRect } from '../src';
import { FaceDetection } from '../src/classes/FaceDetection';
import { expectRectClose, sortFaceDetections } from './utils';
export function expectFaceDetections(
results: FaceDetection[],
allExpectedFaceDetections: IRect[],
expectedScores: number[],
maxBoxDelta: number
) {
const expectedDetections = expectedScores
.map((score, i) => ({
score,
...allExpectedFaceDetections[i]
}))
.filter(expected => expected.score !== -1)
const sortedResults = sortFaceDetections(results)
expectedDetections.forEach((expectedDetection, i) => {
const det = sortedResults[i]
expect(det.score).toBeCloseTo(expectedDetection.score, 2)
expectRectClose(det.box, expectedDetection, maxBoxDelta)
})
}
\ No newline at end of file
import { FaceDetectionWithLandmarks } from '../src/classes/FaceDetectionWithLandmarks';
import { FaceLandmarks } from '../src/classes/FaceLandmarks';
import { FaceLandmarks68 } from '../src/classes/FaceLandmarks68';
import { ExpectedFaceDetectionWithLandmarks, expectPointClose, expectRectClose, sortByFaceDetection } from './utils';
export type BoxAndLandmarksDeltas = {
maxBoxDelta: number
maxLandmarksDelta: number
}
export function expectFaceDetectionsWithLandmarks<TFaceLandmarks extends FaceLandmarks = FaceLandmarks68>(
results: FaceDetectionWithLandmarks<TFaceLandmarks>[],
allExpectedFullFaceDescriptions: ExpectedFaceDetectionWithLandmarks[],
expectedScores: number[],
deltas: BoxAndLandmarksDeltas
) {
const expectedFullFaceDescriptions = expectedScores
.map((score, i) => ({
score,
...allExpectedFullFaceDescriptions[i]
}))
.filter(expected => expected.score !== -1)
const sortedResults = sortByFaceDetection(results)
expectedFullFaceDescriptions.forEach((expected, i) => {
const { detection, landmarks } = sortedResults[i]
expect(detection.score).toBeCloseTo(expected.score, 2)
expectRectClose(detection.box, expected.detection, deltas.maxBoxDelta)
landmarks.positions.forEach((pt, j) => expectPointClose(pt, expected.landmarks[j], deltas.maxLandmarksDelta))
})
}
\ No newline at end of file
import { FullFaceDescription } from '../src/classes/FullFaceDescription';
import { euclideanDistance } from '../src/euclideanDistance';
import { BoxAndLandmarksDeltas } from './expectFaceDetectionsWithLandmarks';
import { ExpectedFullFaceDescription, expectPointClose, expectRectClose, sortByFaceDetection } from './utils';
export type FullFaceDescriptionDeltas = BoxAndLandmarksDeltas & {
maxDescriptorDelta: number
}
export function expectFullFaceDescriptions(
results: FullFaceDescription[],
allExpectedFullFaceDescriptions: ExpectedFullFaceDescription[],
expectedScores: number[],
deltas: FullFaceDescriptionDeltas
) {
const expectedFullFaceDescriptions = expectedScores
.map((score, i) => ({
score,
...allExpectedFullFaceDescriptions[i]
}))
.filter(expected => expected.score !== -1)
const sortedResults = sortByFaceDetection(results)
expectedFullFaceDescriptions.forEach((expected, i) => {
const { detection, landmarks, descriptor } = sortedResults[i]
expect(detection.score).toBeCloseTo(expected.score, 2)
expectRectClose(detection.box, expected.detection, deltas.maxBoxDelta)
landmarks.positions.forEach((pt, j) => expectPointClose(pt, expected.landmarks[j], deltas.maxLandmarksDelta))
expect(euclideanDistance(descriptor, expected.descriptor)).toBeLessThan(deltas.maxDescriptorDelta)
})
}
\ No newline at end of file
import { bufferToImage } from 'tfjs-image-recognition-base';
import {
assembleExpectedFullFaceDescriptions,
describeWithNets,
expectAllTensorsReleased,
ExpectedFullFaceDescription,
} from '../../utils';
import { expectAllFacesResults, expectedMtcnnBoxes } from './expectedResults';
describe('allFacesMtcnn', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await bufferToImage(img)
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedMtcnnBoxes, 'mtcnnFaceLandmarkPositions.json')
})
describeWithNets('computes full face descriptions', { withAllFacesMtcnn: true }, ({ allFacesMtcnn }) => {
it('minFaceSize = 20', async () => {
const forwardParams = {
minFaceSize: 20
}
const results = await allFacesMtcnn(imgEl, forwardParams)
expect(results.length).toEqual(6)
const expectedScores = [1, 1, 1, 1, 0.99, 0.99]
const deltas = {
maxBoxDelta: 2,
maxLandmarksDelta: 1,
maxDescriptorDelta: 0.4
}
expectAllFacesResults(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
})
describeWithNets('no memory leaks', { withAllFacesMtcnn: true }, ({ allFacesMtcnn }) => {
it('single image element', async () => {
await expectAllTensorsReleased(async () => {
await allFacesMtcnn(imgEl)
})
})
})
})
\ No newline at end of file
import * as tf from '@tensorflow/tfjs-core';
import { bufferToImage } from '../../../src';
import {
assembleExpectedFullFaceDescriptions,
describeWithNets,
expectAllTensorsReleased,
ExpectedFullFaceDescription,
} from '../../utils';
import { expectAllFacesResults, expectedSsdBoxes } from './expectedResults';
describe('allFacesSsdMobilenetv1', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await bufferToImage(img)
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedSsdBoxes)
})
describeWithNets('computes full face descriptions', { withAllFacesSsdMobilenetv1: true }, ({ allFacesSsdMobilenetv1 }) => {
it('scores > 0.8', async () => {
const results = await allFacesSsdMobilenetv1(imgEl, 0.8)
expect(results.length).toEqual(4)
const expectedScores = [-1, 0.81, 0.97, 0.88, 0.84, -1]
const deltas = {
maxBoxDelta: 5,
maxLandmarksDelta: 4,
maxDescriptorDelta: 0.01
}
expectAllFacesResults(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('scores > 0.5', async () => {
const results = await allFacesSsdMobilenetv1(imgEl, 0.5)
expect(results.length).toEqual(6)
const expectedScores = [0.54, 0.81, 0.97, 0.88, 0.84, 0.61]
const deltas = {
maxBoxDelta: 5,
maxLandmarksDelta: 4,
maxDescriptorDelta: 0.01
}
expectAllFacesResults(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
})
describeWithNets('no memory leaks', { withAllFacesSsdMobilenetv1: true }, ({ allFacesSsdMobilenetv1 }) => {
it('single image element', async () => {
await expectAllTensorsReleased(async () => {
await allFacesSsdMobilenetv1(imgEl)
})
})
it('single tf.Tensor3D', async () => {
const tensor = tf.fromPixels(imgEl)
await expectAllTensorsReleased(async () => {
await allFacesSsdMobilenetv1(tensor)
})
tensor.dispose()
})
it('single batch size 1 tf.Tensor4Ds', async () => {
const tensor = tf.tidy(() => tf.fromPixels(imgEl).expandDims()) as tf.Tensor4D
await expectAllTensorsReleased(async () => {
await allFacesSsdMobilenetv1(tensor)
})
tensor.dispose()
})
})
})
\ No newline at end of file
import * as tf from '@tensorflow/tfjs-core';
import { TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { bufferToImage } from '../../../src';
import {
assembleExpectedFullFaceDescriptions,
describeWithNets,
expectAllTensorsReleased,
ExpectedFullFaceDescription,
} from '../../utils';
import { expectAllFacesResults, expectedTinyYolov2Boxes } from './expectedResults';
describe('allFacesTinyYolov2', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await bufferToImage(img)
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedTinyYolov2Boxes)
})
describeWithNets('computes full face descriptions', { withAllFacesTinyYolov2: true }, ({ allFacesTinyYolov2 }) => {
it('TinyYolov2Types.SizeType.LG', async () => {
const results = await allFacesTinyYolov2(imgEl, { inputSize: TinyYolov2Types.SizeType.LG })
expect(results.length).toEqual(6)
const expectedScores = [0.85, 0.88, 0.9, 0.86, 0.9, 0.85]
const deltas = {
maxBoxDelta: 25,
maxLandmarksDelta: 10,
maxDescriptorDelta: 0.24
}
expectAllFacesResults(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('TinyYolov2Types.SizeType.MD', async () => {
const results = await allFacesTinyYolov2(imgEl, { inputSize: TinyYolov2Types.SizeType.MD })
expect(results.length).toEqual(6)
const expectedScores = [0.85, 0.8, 0.8, 0.85, 0.85, 0.82]
const deltas = {
maxBoxDelta: 34,
maxLandmarksDelta: 18,
maxDescriptorDelta: 0.2
}
expectAllFacesResults(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
})
describeWithNets('no memory leaks', { withAllFacesTinyYolov2: true }, ({ allFacesTinyYolov2 }) => {
it('single image element', async () => {
await expectAllTensorsReleased(async () => {
await allFacesTinyYolov2(imgEl)
})
})
it('single tf.Tensor3D', async () => {
const tensor = tf.fromPixels(imgEl)
await expectAllTensorsReleased(async () => {
await allFacesTinyYolov2(tensor)
})
tensor.dispose()
})
it('single batch size 1 tf.Tensor4Ds', async () => {
const tensor = tf.tidy(() => tf.fromPixels(imgEl).expandDims()) as tf.Tensor4D
await expectAllTensorsReleased(async () => {
await allFacesTinyYolov2(tensor)
})
tensor.dispose()
})
})
})
\ No newline at end of file
import * as faceapi from '../../../src';
import { FullFaceDescription } from '../../../src/classes/FullFaceDescription';
import { euclideanDistance } from '../../../src/euclideanDistance';
import {
ExpectedFullFaceDescription,
expectMaxDelta,
expectPointClose,
expectRectClose,
sortBoxes,
sortByDistanceToOrigin,
sortFullFaceDescriptions,
} from '../../utils';
import { IPoint, IRect } from '../../../src';
import { FaceDetection } from '../../../src/classes/FaceDetection';
import { sortFaceDetections } from '../../utils';
export type BoxAndLandmarksDeltas = {
maxBoxDelta: number
maxLandmarksDelta: number
}
export type AllFacesDeltas = BoxAndLandmarksDeltas & {
maxDescriptorDelta: number
}
export const expectedSsdBoxes = sortBoxes([
{ x: 48, y: 253, width: 104, height: 129 },
{ x: 260, y: 227, width: 76, height: 117 },
{ x: 466, y: 165, width: 88, height: 130 },
{ x: 234, y: 36, width: 84, height: 119 },
{ x: 577, y: 65, width: 84, height: 105 },
{ x: 84, y: 14, width: 79, height: 132 }
])
export const expectedTinyYolov2Boxes = sortBoxes([
{ x: 52, y: 263, width: 106, height: 102 },
{ x: 455, y: 191, width: 103, height: 97 },
{ x: 236, y: 57, width: 90, height: 85 },
{ x: 257, y: 243, width: 86, height: 95 },
{ x: 578, y: 76, width: 86, height: 91 },
{ x: 87, y: 30, width: 92, height: 93 }
])
export const expectedTinyYolov2SeparableConvBoxes = sortBoxes([
{ x: 42, y: 257, width: 111, height: 121 },
{ x: 454, y: 175, width: 104, height: 121 },
{ x: 230, y: 45, width: 94, height: 104 },
{ x: 574, y: 62, width: 88, height: 113 },
{ x: 260, y: 233, width: 82, height: 104 },
{ x: 83, y: 24, width: 85, height: 111 }
])
export const expectedMtcnnBoxes = sortBoxes([
{ x: 70, y: 21, width: 112, height: 112 },
{ x: 36, y: 250, width: 133, height: 132 },
{ x: 221, y: 43, width: 112, height: 111 },
{ x: 247, y: 231, width: 106, height: 107 },
{ x: 566, y: 67, width: 104, height: 104 },
{ x: 451, y: 176, width: 122, height: 122 }
])
export function expectMtcnnResults(
results: { faceDetection: faceapi.FaceDetection, faceLandmarks: faceapi.FaceLandmarks5 }[],
expectedMtcnnFaceLandmarks: IPoint[][],
deltas: BoxAndLandmarksDeltas
) {
sortByDistanceToOrigin(results, res => res.faceDetection.box).forEach((result, i) => {
const { faceDetection, faceLandmarks } = result
expect(faceDetection instanceof faceapi.FaceDetection).toBe(true)
expect(faceLandmarks instanceof faceapi.FaceLandmarks5).toBe(true)
expectRectClose(faceDetection.getBox(), expectedMtcnnBoxes[i], deltas.maxBoxDelta)
faceLandmarks.getPositions().forEach((pt, j) => expectPointClose(pt, expectedMtcnnFaceLandmarks[i][j], deltas.maxLandmarksDelta))
expectMaxDelta(faceDetection.getScore(), 0.99, 0.01)
})
}
export function expectDetectionResults(results: FaceDetection[], allExpectedFaceDetections: IRect[], expectedScores: number[], maxBoxDelta: number) {
const expectedDetections = expectedScores
.map((score, i) => ({
score,
...allExpectedFaceDetections[i]
}))
.filter(expected => expected.score !== -1)
const sortedResults = sortFaceDetections(results)
expectedDetections.forEach((expectedDetection, i) => {
const det = sortedResults[i]
expect(det.score).toBeCloseTo(expectedDetection.score, 2)
expectRectClose(det.box, expectedDetection, maxBoxDelta)
})
}
export function expectAllFacesResults(results: FullFaceDescription[], allExpectedFullFaceDescriptions: ExpectedFullFaceDescription[], expectedScores: number[], deltas: AllFacesDeltas) {
const expectedFullFaceDescriptions = expectedScores
.map((score, i) => ({
score,
...allExpectedFullFaceDescriptions[i]
}))
.filter(expected => expected.score !== -1)
const sortedResults = sortFullFaceDescriptions(results)
expectedFullFaceDescriptions.forEach((expected, i) => {
const { detection, landmarks, descriptor } = sortedResults[i]
expect(detection.score).toBeCloseTo(expected.score, 2)
expectRectClose(detection.box, expected.detection, deltas.maxBoxDelta)
landmarks.getPositions().forEach((pt, j) => expectPointClose(pt, expected.landmarks[j], deltas.maxLandmarksDelta))
expect(euclideanDistance(descriptor, expected.descriptor)).toBeLessThan(deltas.maxDescriptorDelta)
})
}
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased, expectRectClose } from '../../utils';
import { expectedSsdBoxes, expectDetectionResults } from './expectedResults';
describe('faceDetectionNet', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await faceapi.bufferToImage(img)
})
describeWithNets('uncompressed weights', { withFaceDetectionNet: { quantized: false } }, ({ faceDetectionNet }) => {
it('scores > 0.8', async () => {
const detections = await faceDetectionNet.locateFaces(imgEl) as faceapi.FaceDetection[]
expect(detections.length).toEqual(3)
const expectedScores = [-1, -1, 0.98, 0.88, 0.81, -1]
const maxBoxDelta = 3
expectDetectionResults(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('scores > 0.5', async () => {
const detections = await faceDetectionNet.locateFaces(imgEl, 0.5) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.57, 0.74, 0.98, 0.88, 0.81, 0.58]
const maxBoxDelta = 3
expectDetectionResults(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
})
describeWithNets('quantized weights', { withFaceDetectionNet: { quantized: true } }, ({ faceDetectionNet }) => {
it('scores > 0.8', async () => {
const detections = await faceDetectionNet.locateFaces(imgEl) as faceapi.FaceDetection[]
expect(detections.length).toEqual(4)
const expectedScores = [-1, 0.81, 0.97, 0.88, 0.84, -1]
const maxBoxDelta = 4
expectDetectionResults(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('scores > 0.5', async () => {
const detections = await faceDetectionNet.locateFaces(imgEl, 0.5) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.54, 0.81, 0.97, 0.88, 0.84, 0.61]
const maxBoxDelta = 5
expectDetectionResults(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
})
describe('no memory leaks', () => {
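// expectAllTensorsReleased runs the given callback and asserts that creating
// and disposing the net leaks no tensors (presumably by checking that the
// tf.js tensor count is unchanged afterwards)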
describe('NeuralNetwork, uncompressed model', () => {
it('disposes all param tensors', async () => {
await expectAllTensorsReleased(async () => {
const res = await fetch('base/weights_uncompressed/ssd_mobilenetv1_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
const net = faceapi.createFaceDetectionNet(weights)
net.dispose()
})
})
})
describe('NeuralNetwork, quantized model', () => {
it('disposes all param tensors', async () => {
await expectAllTensorsReleased(async () => {
const net = new faceapi.FaceDetectionNet()
await net.load('base/weights')
net.dispose()
})
})
})
})
})
\ No newline at end of file
import { TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { bufferToImage, createTinyYolov2, TinyYolov2 } from '../../../src';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectDetectionResults, expectedTinyYolov2Boxes } from './expectedResults';
describe('tinyYolov2', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await bufferToImage(img)
})
describeWithNets('quantized weights', { withTinyYolov2: { quantized: true, withSeparableConv: false } }, ({ tinyYolov2 }) => {
it('inputSize lg, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.LG })
const expectedScores = [0.8, 0.85, 0.86, 0.83, 0.86, 0.81]
const maxBoxDelta = 4
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize md, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.MD })
const expectedScores = [0.89, 0.81, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 27
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize custom, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: 416 })
const expectedScores = [0.89, 0.81, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 27
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
})
describeWithNets('uncompressed weights', { withTinyYolov2: { quantized: false, withSeparableConv: false } }, ({ tinyYolov2 }) => {
it('inputSize lg, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.LG })
const expectedScores = [0.81, 0.85, 0.86, 0.83, 0.86, 0.81]
const maxBoxDelta = 1
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize md, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.MD })
const expectedScores = [0.89, 0.82, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 24
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize custom, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: 416 })
const expectedScores = [0.89, 0.82, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 24
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
})
describe('no memory leaks', () => {
describe('NeuralNetwork, uncompressed model', () => {
it('disposes all param tensors', async () => {
await expectAllTensorsReleased(async () => {
const res = await fetch('base/weights_uncompressed/tiny_yolov2_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
const net = createTinyYolov2(weights, false)
net.dispose()
})
})
})
describe('NeuralNetwork, quantized model', () => {
it('disposes all param tensors', async () => {
await expectAllTensorsReleased(async () => {
const net = new TinyYolov2(false)
await net.load('base/weights_unused')
net.dispose()
})
})
})
})
})
\ No newline at end of file
import * as tf from '@tensorflow/tfjs-core';
import { bufferToImage, Dimensions, isTensor3D, NetInput, Point, TMediaElement, toNetInput } from '../../../src';
import { fetchImage, fetchJson, IDimensions, isTensor3D, NetInput, Point, TMediaElement, toNetInput } from '../../../src';
import { FaceLandmarks68 } from '../../../src/classes/FaceLandmarks68';
import { createFaceLandmarkNet } from '../../../src/faceLandmarkNet';
import { FaceLandmark68Net } from '../../../src/faceLandmarkNet/FaceLandmark68Net';
import { describeWithNets, expectAllTensorsReleased, expectMaxDelta, expectPointClose } from '../../utils';
function getInputDims (input: tf.Tensor | TMediaElement): Dimensions {
function getInputDims (input: tf.Tensor | TMediaElement): IDimensions {
if (input instanceof tf.Tensor) {
const [height, width] = input.shape.slice(isTensor3D(input) ? 0 : 1)
return { width, height }
......@@ -24,47 +24,12 @@ describe('faceLandmark68Net', () => {
let faceLandmarkPositionsRect: Point[]
beforeAll(async () => {
const img1 = await (await fetch('base/test/images/face1.png')).blob()
imgEl1 = await bufferToImage(img1)
const img2 = await (await fetch('base/test/images/face2.png')).blob()
imgEl2 = await bufferToImage(img2)
const imgRect = await (await fetch('base/test/images/face_rectangular.png')).blob()
imgElRect = await bufferToImage(imgRect)
faceLandmarkPositions1 = await (await fetch('base/test/data/faceLandmarkPositions1.json')).json()
faceLandmarkPositions2 = await (await fetch('base/test/data/faceLandmarkPositions2.json')).json()
faceLandmarkPositionsRect = await (await fetch('base/test/data/faceLandmarkPositionsRect.json')).json()
})
describeWithNets('uncompressed weights', { withFaceLandmark68Net: { quantized: false } }, ({ faceLandmark68Net }) => {
it('computes face landmarks for squared input', async () => {
const { width, height } = imgEl1
const result = await faceLandmark68Net.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 1)
})
})
it('computes face landmarks for rectangular input', async () => {
const { width, height } = imgElRect
const result = await faceLandmark68Net.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 2)
})
})
imgEl1 = await fetchImage('base/test/images/face1.png')
imgEl2 = await fetchImage('base/test/images/face2.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceLandmarkPositions1 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions1.json')
faceLandmarkPositions2 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions2.json')
faceLandmarkPositionsRect = await fetchJson<Point[]>('base/test/data/faceLandmarkPositionsRect.json')
})
describeWithNets('quantized weights', { withFaceLandmark68Net: { quantized: true } }, ({ faceLandmark68Net }) => {
......@@ -73,11 +38,11 @@ describe('faceLandmark68Net', () => {
const { width, height } = imgEl1
const result = await faceLandmark68Net.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 2)
})
......@@ -87,11 +52,11 @@ describe('faceLandmark68Net', () => {
const { width, height } = imgElRect
const result = await faceLandmark68Net.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 6)
})
......@@ -115,11 +80,11 @@ describe('faceLandmark68Net', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach(({ x, y }, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach(({ x, y }, i) => {
expectMaxDelta(x, faceLandmarkPositions[batchIdx][i].x, 2)
expectMaxDelta(y, faceLandmarkPositions[batchIdx][i].y, 2)
})
......@@ -140,11 +105,11 @@ describe('faceLandmark68Net', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach(({ x, y }, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach(({ x, y }, i) => {
expectMaxDelta(x, faceLandmarkPositions[batchIdx][i].x, 3)
expectMaxDelta(y, faceLandmarkPositions[batchIdx][i].y, 3)
})
......@@ -165,11 +130,11 @@ describe('faceLandmark68Net', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach(({ x, y }, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach(({ x, y }, i) => {
expectMaxDelta(x, faceLandmarkPositions[batchIdx][i].x, 3)
expectMaxDelta(y, faceLandmarkPositions[batchIdx][i].y, 3)
})
......
import { fetchImage, fetchJson, Point } from '../../../src';
import { FaceLandmarks68 } from '../../../src/classes/FaceLandmarks68';
import { describeWithNets, expectPointClose } from '../../utils';
describe('faceLandmark68Net, uncompressed', () => {
let imgEl1: HTMLImageElement
let imgElRect: HTMLImageElement
let faceLandmarkPositions1: Point[]
let faceLandmarkPositionsRect: Point[]
beforeAll(async () => {
imgEl1 = await fetchImage('base/test/images/face1.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceLandmarkPositions1 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions1.json')
faceLandmarkPositionsRect = await fetchJson<Point[]>('base/test/data/faceLandmarkPositionsRect.json')
})
describeWithNets('uncompressed weights', { withFaceLandmark68Net: { quantized: false } }, ({ faceLandmark68Net }) => {
it('computes face landmarks for squared input', async () => {
const { width, height } = imgEl1
const result = await faceLandmark68Net.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 1)
})
})
it('computes face landmarks for rectangular input', async () => {
const { width, height } = imgElRect
const result = await faceLandmark68Net.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 2)
})
})
})
})
import * as tf from '@tensorflow/tfjs-core';
import { bufferToImage, Dimensions, isTensor3D, NetInput, Point, TMediaElement, toNetInput } from '../../../src';
import { fetchImage, fetchJson, IDimensions, isTensor3D, NetInput, Point, TMediaElement, toNetInput } from '../../../src';
import { FaceLandmarks68 } from '../../../src/classes/FaceLandmarks68';
import { createFaceLandmarkNet } from '../../../src/faceLandmarkNet';
import { FaceLandmark68TinyNet } from '../../../src/faceLandmarkNet/FaceLandmark68TinyNet';
import { describeWithNets, expectAllTensorsReleased, expectMaxDelta, expectPointClose } from '../../utils';
import { describeWithNets, expectAllTensorsReleased, expectPointClose } from '../../utils';
function getInputDims (input: tf.Tensor | TMediaElement): Dimensions {
function getInputDims (input: tf.Tensor | TMediaElement): IDimensions {
if (input instanceof tf.Tensor) {
const [height, width] = input.shape.slice(isTensor3D(input) ? 0 : 1)
return { width, height }
......@@ -24,47 +24,12 @@ describe('faceLandmark68TinyNet', () => {
let faceLandmarkPositionsRect: Point[]
beforeAll(async () => {
const img1 = await (await fetch('base/test/images/face1.png')).blob()
imgEl1 = await bufferToImage(img1)
const img2 = await (await fetch('base/test/images/face2.png')).blob()
imgEl2 = await bufferToImage(img2)
const imgRect = await (await fetch('base/test/images/face_rectangular.png')).blob()
imgElRect = await bufferToImage(imgRect)
faceLandmarkPositions1 = await (await fetch('base/test/data/faceLandmarkPositions1Tiny.json')).json()
faceLandmarkPositions2 = await (await fetch('base/test/data/faceLandmarkPositions2Tiny.json')).json()
faceLandmarkPositionsRect = await (await fetch('base/test/data/faceLandmarkPositionsRectTiny.json')).json()
})
describeWithNets('uncompressed weights', { withFaceLandmark68TinyNet: { quantized: false } }, ({ faceLandmark68TinyNet }) => {
it('computes face landmarks for squared input', async () => {
const { width, height } = imgEl1
const result = await faceLandmark68TinyNet.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 5)
})
})
it('computes face landmarks for rectangular input', async () => {
const { width, height } = imgElRect
const result = await faceLandmark68TinyNet.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 5)
})
})
imgEl1 = await fetchImage('base/test/images/face1.png')
imgEl2 = await fetchImage('base/test/images/face2.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceLandmarkPositions1 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions1Tiny.json')
faceLandmarkPositions2 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions2Tiny.json')
faceLandmarkPositionsRect = await fetchJson<Point[]>('base/test/data/faceLandmarkPositionsRectTiny.json')
})
describeWithNets('quantized weights', { withFaceLandmark68TinyNet: { quantized: true } }, ({ faceLandmark68TinyNet }) => {
......@@ -73,11 +38,11 @@ describe('faceLandmark68TinyNet', () => {
const { width, height } = imgEl1
const result = await faceLandmark68TinyNet.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 5)
})
......@@ -87,11 +52,11 @@ describe('faceLandmark68TinyNet', () => {
const { width, height } = imgElRect
const result = await faceLandmark68TinyNet.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 5)
})
......@@ -99,7 +64,7 @@ describe('faceLandmark68TinyNet', () => {
})
describeWithNets('batch inputs', { withFaceLandmark68TinyNet: { quantized: false } }, ({ faceLandmark68TinyNet }) => {
describeWithNets('batch inputs', { withFaceLandmark68TinyNet: { quantized: true } }, ({ faceLandmark68TinyNet }) => {
it('computes face landmarks for batch of image elements', async () => {
const inputs = [imgEl1, imgEl2, imgElRect]
......@@ -115,11 +80,11 @@ describe('faceLandmark68TinyNet', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions[batchIdx][i]
expectPointClose(pt, { x, y }, 5)
})
......@@ -140,11 +105,11 @@ describe('faceLandmark68TinyNet', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions[batchIdx][i]
expectPointClose(pt, { x, y }, 3)
})
......@@ -165,11 +130,11 @@ describe('faceLandmark68TinyNet', () => {
expect(results.length).toEqual(3)
results.forEach((result, batchIdx) => {
const { width, height } = getInputDims(inputs[batchIdx])
expect(result.getImageWidth()).toEqual(width)
expect(result.getImageHeight()).toEqual(height)
expect(result.getShift().x).toEqual(0)
expect(result.getShift().y).toEqual(0)
result.getPositions().forEach((pt, i) => {
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions[batchIdx][i]
expectPointClose(pt, { x, y }, 3)
})
......
import { fetchImage, fetchJson, Point } from '../../../src';
import { FaceLandmarks68 } from '../../../src/classes/FaceLandmarks68';
import { describeWithNets, expectPointClose } from '../../utils';
describe('faceLandmark68TinyNet, uncompressed', () => {
let imgEl1: HTMLImageElement
let imgElRect: HTMLImageElement
let faceLandmarkPositions1: Point[]
let faceLandmarkPositionsRect: Point[]
beforeAll(async () => {
imgEl1 = await fetchImage('base/test/images/face1.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceLandmarkPositions1 = await fetchJson<Point[]>('base/test/data/faceLandmarkPositions1Tiny.json')
faceLandmarkPositionsRect = await fetchJson<Point[]>('base/test/data/faceLandmarkPositionsRectTiny.json')
})
describeWithNets('uncompressed weights', { withFaceLandmark68TinyNet: { quantized: false } }, ({ faceLandmark68TinyNet }) => {
it('computes face landmarks for squared input', async () => {
const { width, height } = imgEl1
const result = await faceLandmark68TinyNet.detectLandmarks(imgEl1) as FaceLandmarks68
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositions1[i]
expectPointClose(pt, { x, y }, 5)
})
})
it('computes face landmarks for rectangular input', async () => {
const { width, height } = imgElRect
const result = await faceLandmark68TinyNet.detectLandmarks(imgElRect) as FaceLandmarks68
expect(result.imageWidth).toEqual(width)
expect(result.imageHeight).toEqual(height)
expect(result.shift.x).toEqual(0)
expect(result.shift.y).toEqual(0)
result.positions.forEach((pt, i) => {
const { x, y } = faceLandmarkPositionsRect[i]
expectPointClose(pt, { x, y }, 5)
})
})
})
})
import * as tf from '@tensorflow/tfjs-core';
import { bufferToImage, FaceRecognitionNet, NetInput, toNetInput } from '../../../src';
import { FaceRecognitionNet, fetchImage, fetchJson, NetInput, toNetInput } from '../../../src';
import { euclideanDistance } from '../../../src/euclideanDistance';
import { createFaceRecognitionNet } from '../../../src/faceRecognitionNet';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
......@@ -15,18 +15,14 @@ describe('faceRecognitionNet', () => {
let faceDescriptorRect: number[]
beforeAll(async () => {
const img1 = await (await fetch('base/test/images/face1.png')).blob()
imgEl1 = await bufferToImage(img1)
const img2 = await (await fetch('base/test/images/face2.png')).blob()
imgEl2 = await bufferToImage(img2)
const imgRect = await (await fetch('base/test/images/face_rectangular.png')).blob()
imgElRect = await bufferToImage(imgRect)
faceDescriptor1 = await (await fetch('base/test/data/faceDescriptor1.json')).json()
faceDescriptor2 = await (await fetch('base/test/data/faceDescriptor2.json')).json()
faceDescriptorRect = await (await fetch('base/test/data/faceDescriptorRect.json')).json()
imgEl1 = await fetchImage('base/test/images/face1.png')
imgEl2 = await fetchImage('base/test/images/face2.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceDescriptor1 = await fetchJson<number[]>('base/test/data/faceDescriptor1.json')
faceDescriptor2 = await fetchJson<number[]>('base/test/data/faceDescriptor2.json')
faceDescriptorRect = await fetchJson<number[]>('base/test/data/faceDescriptorRect.json')
})
describeWithNets('uncompressed weights', { withFaceRecognitionNet: { quantized: false } }, ({ faceRecognitionNet }) => {
describeWithNets('quantized weights', { withFaceRecognitionNet: { quantized: true } }, ({ faceRecognitionNet }) => {
it('computes face descriptor for squared input', async () => {
const result = await faceRecognitionNet.computeFaceDescriptor(imgEl1) as Float32Array
......@@ -42,26 +38,8 @@ describe('faceRecognitionNet', () => {
})
// TODO: figure out why descriptors return NaN in the test cases
/*
describeWithNets('quantized weights', { withFaceRecognitionNet: { quantized: true } }, ({ faceRecognitionNet }) => {
it('computes face descriptor for squared input', async () => {
const result = await faceRecognitionNet.computeFaceDescriptor(imgEl1) as Float32Array
expect(result.length).toEqual(128)
expect(result).toEqual(new Float32Array(faceDescriptor1))
})
it('computes face descriptor for rectangular input', async () => {
const result = await faceRecognitionNet.computeFaceDescriptor(imgElRect) as Float32Array
expect(result.length).toEqual(128)
expect(result).toEqual(new Float32Array(faceDescriptorRect))
})
})
*/
describeWithNets('batch inputs', { withFaceRecognitionNet: { quantized: false } }, ({ faceRecognitionNet }) => {
describeWithNets('batch inputs', { withFaceRecognitionNet: { quantized: true } }, ({ faceRecognitionNet }) => {
it('computes face descriptors for batch of image elements', async () => {
const inputs = [imgEl1, imgEl2, imgElRect]
......@@ -116,7 +94,7 @@ describe('faceRecognitionNet', () => {
})
describeWithNets('no memory leaks', { withFaceRecognitionNet: { quantized: false } }, ({ faceRecognitionNet }) => {
describeWithNets('no memory leaks', { withFaceRecognitionNet: { quantized: true } }, ({ faceRecognitionNet }) => {
describe('NeuralNetwork, uncompressed model', () => {
......
import { fetchImage, fetchJson } from '../../../src';
import { euclideanDistance } from '../../../src/euclideanDistance';
import { describeWithNets } from '../../utils';
// TODO: figure out why quantized weights result in NaNs in test cases
// (apparently the net weight values differ when loaded via karma)
xdescribe('faceRecognitionNet, uncompressed', () => {
let imgEl1: HTMLImageElement
let imgElRect: HTMLImageElement
let faceDescriptor1: number[]
let faceDescriptorRect: number[]
beforeAll(async () => {
imgEl1 = await fetchImage('base/test/images/face1.png')
imgElRect = await fetchImage('base/test/images/face_rectangular.png')
faceDescriptor1 = await fetchJson<number[]>('base/test/data/faceDescriptor1.json')
faceDescriptorRect = await fetchJson<number[]>('base/test/data/faceDescriptorRect.json')
})
describeWithNets('uncompressed weights', { withFaceRecognitionNet: { quantized: false } }, ({ faceRecognitionNet }) => {
it('computes face descriptor for squared input', async () => {
const result = await faceRecognitionNet.computeFaceDescriptor(imgEl1) as Float32Array
expect(result.length).toEqual(128)
expect(euclideanDistance(result, faceDescriptor1)).toBeLessThan(0.1)
})
it('computes face descriptor for rectangular input', async () => {
const result = await faceRecognitionNet.computeFaceDescriptor(imgElRect) as Float32Array
expect(result.length).toEqual(128)
expect(euclideanDistance(result, faceDescriptorRect)).toBeLessThan(0.1)
})
})
})
\ No newline at end of file
import { IPoint, IRect } from '../../../src';
import { FaceDetectionWithLandmarks } from '../../../src/classes/FaceDetectionWithLandmarks';
import { FaceLandmarks5 } from '../../../src/classes/FaceLandmarks5';
import { BoxAndLandmarksDeltas, expectFaceDetectionsWithLandmarks } from '../../expectFaceDetectionsWithLandmarks';
import { sortBoxes, sortByDistanceToOrigin } from '../../utils';
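// expected boxes are pre-sorted so they can be matched index-wise against the
// sorted detection results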
export const expectedMtcnnBoxes: IRect[] = sortBoxes([
{ x: 70, y: 21, width: 112, height: 112 },
{ x: 36, y: 250, width: 133, height: 132 },
{ x: 221, y: 43, width: 112, height: 111 },
{ x: 247, y: 231, width: 106, height: 107 },
{ x: 566, y: 67, width: 104, height: 104 },
{ x: 451, y: 176, width: 122, height: 122 }
])
export function expectMtcnnResults(
results: FaceDetectionWithLandmarks<FaceLandmarks5>[],
expectedMtcnnFaceLandmarks: IPoint[][],
expectedScores: number[],
deltas: BoxAndLandmarksDeltas
) {
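// sort the expected landmark sets by their first point, so their order lines
// up with the sorted expected boxes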
const expectedMtcnnFaceLandmarksSorted = sortByDistanceToOrigin(expectedMtcnnFaceLandmarks, obj => obj[0])
const expectedResults = expectedMtcnnBoxes
.map((detection, i) => ({ detection, landmarks: expectedMtcnnFaceLandmarksSorted[i] }))
return expectFaceDetectionsWithLandmarks<FaceLandmarks5>(results, expectedResults, expectedScores, deltas)
}
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased, sortByDistanceToOrigin } from '../../utils';
import { expectMtcnnResults } from './expectedResults';
import { IPoint } from '../../../src';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectMtcnnResults } from './expectMtcnnResults';
import { IPoint, fetchImage, fetchJson } from '../../../src';
describe('mtcnn', () => {
describe('mtcnn.forward', () => {
let imgEl: HTMLImageElement
let expectedMtcnnLandmarks: IPoint[][]
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await faceapi.bufferToImage(img)
expectedMtcnnLandmarks = await (await fetch('base/test/data/mtcnnFaceLandmarkPositions.json')).json()
imgEl = await fetchImage('base/test/images/faces.jpg')
expectedMtcnnLandmarks = await fetchJson<IPoint[][]>('base/test/data/mtcnnFaceLandmarkPositions.json')
})
describeWithNets('uncompressed weights', { withMtcnn: { quantized: false } }, ({ mtcnn }) => {
......@@ -30,7 +28,7 @@ describe('mtcnn', () => {
maxBoxDelta: 2,
maxLandmarksDelta: 5
}
expectMtcnnResults(results, expectedMtcnnLandmarks, deltas)
expectMtcnnResults(results, expectedMtcnnLandmarks, [1.0, 1.0, 1.0, 1.0, 0.99, 0.99], deltas)
})
it('minFaceSize = 80, finds all faces', async () => {
......@@ -45,7 +43,7 @@ describe('mtcnn', () => {
maxBoxDelta: 15,
maxLandmarksDelta: 13
}
expectMtcnnResults(results, expectedMtcnnLandmarks, deltas)
expectMtcnnResults(results, expectedMtcnnLandmarks, [1.0, 1.0, 1.0, 1.0, 1.0, 0.99], deltas)
})
it('all optional params passed, finds all faces', async () => {
......@@ -63,7 +61,7 @@ describe('mtcnn', () => {
maxBoxDelta: 8,
maxLandmarksDelta: 7
}
expectMtcnnResults(results, expectedMtcnnLandmarks, deltas)
expectMtcnnResults(results, expectedMtcnnLandmarks, [1.0, 1.0, 1.0, 0.99, 1.0, 1.0], deltas)
})
it('scale steps passed, finds all faces', async () => {
......@@ -78,7 +76,7 @@ describe('mtcnn', () => {
maxBoxDelta: 8,
maxLandmarksDelta: 10
}
expectMtcnnResults(results, expectedMtcnnLandmarks, deltas)
expectMtcnnResults(results, expectedMtcnnLandmarks, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], deltas)
})
})
......
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased, assembleExpectedFullFaceDescriptions, ExpectedFullFaceDescription } from '../../utils';
import { expectedMtcnnBoxes } from './expectMtcnnResults';
import { fetchImage } from '../../../src';
import { MtcnnOptions } from '../../../src/mtcnn/MtcnnOptions';
import { expectFaceDetections } from '../../expectFaceDetections';
import { expectFullFaceDescriptions } from '../../expectFullFaceDescriptions';
import { expectFaceDetectionsWithLandmarks } from '../../expectFaceDetectionsWithLandmarks';
describe('mtcnn', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
const expectedScores = [1.0, 1.0, 1.0, 1.0, 0.99, 0.99]
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedMtcnnBoxes)
})
describeWithNets('detectAllFaces', { withAllFacesMtcnn: true }, () => {
it('detectAllFaces', async () => {
const options = new MtcnnOptions({
minFaceSize: 20
})
const results = await faceapi.detectAllFaces(imgEl, options)
const maxBoxDelta = 2
expect(results.length).toEqual(6)
expectFaceDetections(results, expectedMtcnnBoxes, expectedScores, maxBoxDelta)
})
it('detectAllFaces.withFaceLandmarks()', async () => {
const options = new MtcnnOptions({
minFaceSize: 20
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
const deltas = {
maxBoxDelta: 2,
maxLandmarksDelta: 6
}
expect(results.length).toEqual(6)
expectFaceDetectionsWithLandmarks(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('detectAllFaces.withFaceLandmarks().withFaceDescriptors()', async () => {
const options = new MtcnnOptions({
minFaceSize: 20
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
.withFaceDescriptors()
const deltas = {
maxBoxDelta: 2,
maxLandmarksDelta: 6,
maxDescriptorDelta: 0.4
}
expect(results.length).toEqual(6)
expectFullFaceDescriptions(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
await faceapi
.detectAllFaces(imgEl, new MtcnnOptions({ minFaceSize: 200 }))
.withFaceLandmarks()
.withFaceDescriptors()
})
})
})
})
\ No newline at end of file
import { IRect } from '../../../src';
import { sortBoxes } from '../../utils';
export const expectedSsdBoxes: IRect[] = sortBoxes([
{ x: 48, y: 253, width: 104, height: 129 },
{ x: 260, y: 227, width: 76, height: 117 },
{ x: 466, y: 165, width: 88, height: 130 },
{ x: 234, y: 36, width: 84, height: 119 },
{ x: 577, y: 65, width: 84, height: 105 },
{ x: 84, y: 14, width: 79, height: 132 }
])
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectFaceDetections } from '../../expectFaceDetections';
import { fetchImage } from '../../../src';
import { expectedSsdBoxes } from './expectedBoxes';
describe('ssdMobilenetv1.locateFaces', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('quantized weights', { withSsdMobilenetv1: { quantized: true } }, ({ ssdMobilenetv1 }) => {
it('scores > 0.8', async () => {
const detections = await ssdMobilenetv1.locateFaces(imgEl, { minConfidence: 0.8 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(4)
const expectedScores = [-1, 0.81, 0.97, 0.88, 0.84, -1]
const maxBoxDelta = 4
expectFaceDetections(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('scores > 0.5', async () => {
const detections = await ssdMobilenetv1.locateFaces(imgEl, { minConfidence: 0.5 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.54, 0.81, 0.97, 0.88, 0.84, 0.61]
const maxBoxDelta = 5
expectFaceDetections(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
const net = new faceapi.SsdMobilenetv1()
await net.load('base/weights')
net.dispose()
})
})
})
})
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectFaceDetections } from '../../expectFaceDetections';
import { fetchImage } from '../../../src';
import { expectedSsdBoxes } from './expectedBoxes';
describe('ssdMobilenetv1.locateFaces, uncompressed', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('uncompressed weights', { withSsdMobilenetv1: { quantized: false } }, ({ ssdMobilenetv1 }) => {
it('scores > 0.8', async () => {
const detections = await ssdMobilenetv1.locateFaces(imgEl, { minConfidence: 0.8 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(3)
const expectedScores = [-1, -1, 0.98, 0.88, 0.81, -1]
const maxBoxDelta = 3
expectFaceDetections(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('scores > 0.5', async () => {
const detections = await ssdMobilenetv1.locateFaces(imgEl, { minConfidence: 0.5 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.57, 0.74, 0.98, 0.88, 0.81, 0.58]
const maxBoxDelta = 3
expectFaceDetections(detections, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
const res = await fetch('base/weights_uncompressed/ssd_mobilenetv1_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
const net = faceapi.createSsdMobilenetv1(weights)
net.dispose()
})
})
})
})
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased, assembleExpectedFullFaceDescriptions, ExpectedFullFaceDescription } from '../../utils';
import { fetchImage, SsdMobilenetv1Options } from '../../../src';
import { expectFaceDetections } from '../../expectFaceDetections';
import { expectFullFaceDescriptions } from '../../expectFullFaceDescriptions';
import { expectFaceDetectionsWithLandmarks } from '../../expectFaceDetectionsWithLandmarks';
import { expectedSsdBoxes } from './expectedBoxes';
describe('ssdMobilenetv1', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
const expectedScores = [0.54, 0.81, 0.97, 0.88, 0.84, 0.61]
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedSsdBoxes)
})
describeWithNets('globalApi', { withAllFacesSsdMobilenetv1: true }, () => {
it('detectAllFaces', async () => {
const options = new SsdMobilenetv1Options({
minConfidence: 0.5
})
const results = await faceapi.detectAllFaces(imgEl, options)
const maxBoxDelta = 5
expect(results.length).toEqual(6)
expectFaceDetections(results, expectedSsdBoxes, expectedScores, maxBoxDelta)
})
it('detectAllFaces.withFaceLandmarks()', async () => {
const options = new SsdMobilenetv1Options({
minConfidence: 0.5
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
const deltas = {
maxBoxDelta: 5,
maxLandmarksDelta: 1
}
expect(results.length).toEqual(6)
expectFaceDetectionsWithLandmarks(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('detectAllFaces.withFaceLandmarks().withFaceDescriptors()', async () => {
const options = new SsdMobilenetv1Options({
minConfidence: 0.5
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
.withFaceDescriptors()
const deltas = {
maxBoxDelta: 5,
maxLandmarksDelta: 1,
maxDescriptorDelta: 0.01
}
expect(results.length).toEqual(6)
expectFullFaceDescriptions(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
await faceapi
.detectAllFaces(imgEl, new SsdMobilenetv1Options())
.withFaceLandmarks()
.withFaceDescriptors()
})
})
})
})
\ No newline at end of file
import { IRect } from '../../../src';
import { sortBoxes } from '../../utils';
export const expectedTinyFaceDetectorBoxes: IRect[] = sortBoxes([
{ x: 29, y: 264, width: 139, height: 137 },
{ x: 224, y: 240, width: 147, height: 128 },
{ x: 547, y: 81, width: 136, height: 114 },
{ x: 214, y: 53, width: 124, height: 119 },
{ x: 430, y: 183, width: 162, height: 143 },
{ x: 54, y: 33, width: 134, height: 114 }
])
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectFaceDetections } from '../../expectFaceDetections';
import { fetchImage } from '../../../src';
import { expectedTinyFaceDetectorBoxes } from './expectedBoxes';
describe('tinyFaceDetector.locateFaces', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('quantized weights', { withTinyFaceDetector: { quantized: true } }, ({ tinyFaceDetector }) => {
it('inputSize 320, finds all faces', async () => {
const detections = await tinyFaceDetector.locateFaces(imgEl, { inputSize: 320 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.77, 0.75, 0.88, 0.77, 0.83, 0.85]
const maxBoxDelta = 36
expectFaceDetections(detections, expectedTinyFaceDetectorBoxes, expectedScores, maxBoxDelta)
})
it('inputSize 416, finds all faces', async () => {
const detections = await tinyFaceDetector.locateFaces(imgEl, { inputSize: 416 }) as faceapi.FaceDetection[]
expect(detections.length).toEqual(6)
const expectedScores = [0.7, 0.82, 0.93, 0.86, 0.79, 0.84]
const maxBoxDelta = 1
expectFaceDetections(detections, expectedTinyFaceDetectorBoxes, expectedScores, maxBoxDelta)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
const net = new faceapi.TinyFaceDetector()
await net.load('base/weights')
net.dispose()
})
})
})
})
\ No newline at end of file
import * as faceapi from '../../../src';
import { describeWithNets, expectAllTensorsReleased, assembleExpectedFullFaceDescriptions, ExpectedFullFaceDescription } from '../../utils';
import { fetchImage, TinyFaceDetectorOptions } from '../../../src';
import { expectFaceDetections } from '../../expectFaceDetections';
import { expectFullFaceDescriptions } from '../../expectFullFaceDescriptions';
import { expectFaceDetectionsWithLandmarks } from '../../expectFaceDetectionsWithLandmarks';
import { expectedTinyFaceDetectorBoxes } from './expectedBoxes';
describe('tinyFaceDetector', () => {
let imgEl: HTMLImageElement
let expectedFullFaceDescriptions: ExpectedFullFaceDescription[]
const expectedScores = [0.7, 0.82, 0.93, 0.86, 0.79, 0.84]
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
expectedFullFaceDescriptions = await assembleExpectedFullFaceDescriptions(expectedTinyFaceDetectorBoxes)
})
describeWithNets('globalApi', { withAllFacesTinyFaceDetector: true }, () => {
it('detectAllFaces', async () => {
const options = new TinyFaceDetectorOptions({
inputSize: 416
})
const results = await faceapi.detectAllFaces(imgEl, options)
const maxBoxDelta = 1
expect(results.length).toEqual(6)
expectFaceDetections(results, expectedTinyFaceDetectorBoxes, expectedScores, maxBoxDelta)
})
it('detectAllFaces.withFaceLandmarks()', async () => {
const options = new TinyFaceDetectorOptions({
inputSize: 416
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
const deltas = {
maxBoxDelta: 1,
maxLandmarksDelta: 10
}
expect(results.length).toEqual(6)
expectFaceDetectionsWithLandmarks(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('detectAllFaces.withFaceLandmarks().withFaceDescriptors()', async () => {
const options = new TinyFaceDetectorOptions({
inputSize: 416
})
const results = await faceapi
.detectAllFaces(imgEl, options)
.withFaceLandmarks()
.withFaceDescriptors()
const deltas = {
maxBoxDelta: 1,
maxLandmarksDelta: 10,
maxDescriptorDelta: 0.2
}
expect(results.length).toEqual(6)
expectFullFaceDescriptions(results, expectedFullFaceDescriptions, expectedScores, deltas)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
await faceapi
.detectAllFaces(imgEl, new TinyFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptors()
})
})
})
})
\ No newline at end of file
import { IRect } from '../../../src';
import { sortBoxes } from '../../utils';
export const expectedTinyYolov2Boxes: IRect[] = sortBoxes([
{ x: 52, y: 263, width: 106, height: 102 },
{ x: 455, y: 191, width: 103, height: 97 },
{ x: 236, y: 57, width: 90, height: 85 },
{ x: 257, y: 243, width: 86, height: 95 },
{ x: 578, y: 76, width: 86, height: 91 },
{ x: 87, y: 30, width: 92, height: 93 }
])
export const expectedTinyYolov2SeparableConvBoxes: IRect[] = sortBoxes([
{ x: 42, y: 257, width: 111, height: 121 },
{ x: 454, y: 175, width: 104, height: 121 },
{ x: 230, y: 45, width: 94, height: 104 },
{ x: 574, y: 62, width: 88, height: 113 },
{ x: 260, y: 233, width: 82, height: 104 },
{ x: 83, y: 24, width: 85, height: 111 }
])
\ No newline at end of file
import { TinyYolov2SizeType } from 'tfjs-tiny-yolov2';
import { fetchImage, TinyYolov2 } from '../../../src';
import { expectFaceDetections } from '../../expectFaceDetections';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectedTinyYolov2Boxes } from './expectedBoxes';
xdescribe('tinyYolov2.locateFaces', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('quantized weights', { withTinyYolov2: { quantized: true, withSeparableConv: false } }, ({ tinyYolov2 }) => {
it('inputSize lg, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.LG })
const expectedScores = [0.8, 0.85, 0.86, 0.83, 0.86, 0.81]
const maxBoxDelta = 4
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize md, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.MD })
const expectedScores = [0.89, 0.81, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 27
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize custom, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: 416 })
const expectedScores = [0.89, 0.81, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 27
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
const net = new TinyYolov2(false)
await net.load('base/weights_unused')
net.dispose()
})
})
})
})
\ No newline at end of file
import { TinyYolov2SizeType } from 'tfjs-tiny-yolov2';
import { createTinyYolov2, fetchImage } from '../../../src';
import { expectFaceDetections } from '../../expectFaceDetections';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectedTinyYolov2Boxes } from './expectedBoxes';
xdescribe('tinyYolov2.locateFaces, uncompressed', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('uncompressed weights', { withTinyYolov2: { quantized: false, withSeparableConv: false } }, ({ tinyYolov2 }) => {
it('inputSize lg, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.LG })
const expectedScores = [0.81, 0.85, 0.86, 0.83, 0.86, 0.81]
const maxBoxDelta = 1
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize md, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.MD })
const expectedScores = [0.89, 0.82, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 24
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize custom, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: 416 })
const expectedScores = [0.89, 0.82, 0.82, 0.72, 0.81, 0.86]
const maxBoxDelta = 24
expect(detections.length).toEqual(6)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('no memory leaks', async () => {
await expectAllTensorsReleased(async () => {
const res = await fetch('base/weights_uncompressed/tiny_yolov2_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
const net = createTinyYolov2(weights, false)
net.dispose()
})
})
})
})
\ No newline at end of file
import { TinyYolov2Types } from 'tfjs-tiny-yolov2';
import { TinyYolov2SizeType } from 'tfjs-tiny-yolov2';
import { bufferToImage, createTinyYolov2, TinyYolov2 } from '../../../src';
import { describeWithNets, expectAllTensorsReleased, expectRectClose } from '../../utils';
import { expectedTinyYolov2SeparableConvBoxes, expectDetectionResults, expectedTinyYolov2Boxes } from './expectedResults';
import { createTinyYolov2, fetchImage, TinyYolov2 } from '../../../src';
import { expectFaceDetections } from '../../expectFaceDetections';
import { describeWithNets, expectAllTensorsReleased } from '../../utils';
import { expectedTinyYolov2Boxes } from './expectedBoxes';
describe('tinyYolov2, with separable convolutions', () => {
xdescribe('tinyYolov2.locateFaces, with separable convolutions', () => {
let imgEl: HTMLImageElement
beforeAll(async () => {
const img = await (await fetch('base/test/images/faces.jpg')).blob()
imgEl = await bufferToImage(img)
imgEl = await fetchImage('base/test/images/faces.jpg')
})
describeWithNets('quantized weights', { withTinyYolov2: { quantized: true } }, ({ tinyYolov2 }) => {
it('inputSize lg, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.LG })
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.LG })
const expectedScores = [0.85, 0.88, 0.9, 0.85, 0.9, 0.85]
const maxBoxDelta = 25
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize md, finds all faces', async () => {
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2Types.SizeType.MD })
const detections = await tinyYolov2.locateFaces(imgEl, { inputSize: TinyYolov2SizeType.MD })
const expectedScores = [0.85, 0.8, 0.8, 0.85, 0.85, 0.83]
const maxBoxDelta = 34
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
it('inputSize custom, finds all faces', async () => {
......@@ -42,7 +42,7 @@ describe('tinyYolov2, with separable convolutions', () => {
const maxBoxDelta = 34
expect(detections.length).toEqual(6)
expectDetectionResults(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
expectFaceDetections(detections, expectedTinyYolov2Boxes, expectedScores, maxBoxDelta)
})
})
......
import * as tf from '@tensorflow/tfjs-core';
import { FaceDetectionNet, FaceRecognitionNet, IPoint, IRect, Mtcnn, NeuralNetwork, TinyYolov2 } from '../src/';
import { allFacesMtcnnFactory, allFacesSsdMobilenetv1Factory, allFacesTinyYolov2Factory } from '../src/allFacesFactory';
import * as faceapi from '../src';
import { FaceRecognitionNet, IPoint, IRect, Mtcnn, NeuralNetwork, TinyYolov2 } from '../src/';
import { FaceDetection } from '../src/classes/FaceDetection';
import { FaceLandmarks } from '../src/classes/FaceLandmarks';
import { FullFaceDescription } from '../src/classes/FullFaceDescription';
import { FaceLandmark68Net } from '../src/faceLandmarkNet/FaceLandmark68Net';
import { FaceLandmark68TinyNet } from '../src/faceLandmarkNet/FaceLandmark68TinyNet';
import { allFacesMtcnnFunction, allFacesSsdMobilenetv1Function, allFacesTinyYolov2Function } from '../src/globalApi';
import { SsdMobilenetv1 } from '../src/ssdMobilenetv1/SsdMobilenetv1';
import { TinyFaceDetector } from '../src/tinyFaceDetector/TinyFaceDetector';
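// loading model weights and running inference in the browser can take a
// while, so raise jasmine's default spec timeout to 60 seconds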
jasmine.DEFAULT_TIMEOUT_INTERVAL = 60000
......@@ -59,16 +59,19 @@ export function sortFaceDetections(boxes: FaceDetection[]) {
}
export function sortLandmarks(landmarks: FaceLandmarks[]) {
return sortByDistanceToOrigin(landmarks, l => l.getPositions()[0])
return sortByDistanceToOrigin(landmarks, l => l.positions[0])
}
export function sortFullFaceDescriptions(descs: FullFaceDescription[]) {
export function sortByFaceDetection<T extends { detection: FaceDetection }>(descs: T[]) {
return sortByDistanceToOrigin(descs, d => d.detection.box)
}
export type ExpectedFullFaceDescription = {
export type ExpectedFaceDetectionWithLandmarks = {
detection: IRect
landmarks: IPoint[]
}
export type ExpectedFullFaceDescription = ExpectedFaceDetectionWithLandmarks & {
descriptor: Float32Array
}
......@@ -95,10 +98,8 @@ export type WithTinyYolov2Options = WithNetOptions & {
}
export type InjectNetArgs = {
allFacesSsdMobilenetv1: allFacesSsdMobilenetv1Function
allFacesTinyYolov2: allFacesTinyYolov2Function
allFacesMtcnn: allFacesMtcnnFunction
faceDetectionNet: FaceDetectionNet
ssdMobilenetv1: SsdMobilenetv1
tinyFaceDetector: TinyFaceDetector
faceLandmark68Net: FaceLandmark68Net
faceLandmark68TinyNet: FaceLandmark68TinyNet
faceRecognitionNet: FaceRecognitionNet
......@@ -109,9 +110,11 @@ export type InjectNetArgs = {
export type DescribeWithNetsOptions = {
withAllFacesSsdMobilenetv1?: boolean
withAllFacesTinyFaceDetector?: boolean
withAllFacesTinyYolov2?: boolean
withAllFacesMtcnn?: boolean
withFaceDetectionNet?: WithNetOptions
withSsdMobilenetv1?: WithNetOptions
withTinyFaceDetector?: WithNetOptions
withFaceLandmark68Net?: WithNetOptions
withFaceLandmark68TinyNet?: WithNetOptions
withFaceRecognitionNet?: WithNetOptions
......@@ -128,11 +131,10 @@ async function initNet<TNet extends NeuralNetwork<any>>(
uncompressedFilename: string | boolean,
isUnusedModel: boolean = false
) {
await net.load(
uncompressedFilename
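// uncompressed models are loaded from a raw Float32Array fetched from
// weights_uncompressed, quantized models from a weight manifest directory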
const url = uncompressedFilename
? await loadNetWeights(`base/weights_uncompressed/${uncompressedFilename}`)
: (isUnusedModel ? 'base/weights_unused' : 'base/weights')
)
await net.load(url)
}
export function describeWithNets(
......@@ -142,22 +144,24 @@ export function describeWithNets(
) {
describe(description, () => {
let faceDetectionNet: FaceDetectionNet = new FaceDetectionNet()
let faceLandmark68Net: FaceLandmark68Net = new FaceLandmark68Net()
let faceLandmark68TinyNet: FaceLandmark68TinyNet = new FaceLandmark68TinyNet()
let faceRecognitionNet: FaceRecognitionNet = new FaceRecognitionNet()
let mtcnn: Mtcnn = new Mtcnn()
let tinyYolov2: TinyYolov2 = new TinyYolov2(options.withTinyYolov2 && options.withTinyYolov2.withSeparableConv)
let allFacesSsdMobilenetv1 = allFacesSsdMobilenetv1Factory(faceDetectionNet, faceLandmark68Net, faceRecognitionNet)
let allFacesTinyYolov2 = allFacesTinyYolov2Factory(tinyYolov2, faceLandmark68Net, faceRecognitionNet)
let allFacesMtcnn = allFacesMtcnnFactory(mtcnn, faceRecognitionNet)
const {
ssdMobilenetv1,
tinyFaceDetector,
faceLandmark68Net,
faceLandmark68TinyNet,
faceRecognitionNet,
mtcnn,
tinyYolov2
} = faceapi.nets
beforeAll(async () => {
const {
withAllFacesSsdMobilenetv1,
withAllFacesTinyFaceDetector,
withAllFacesTinyYolov2,
withAllFacesMtcnn,
withFaceDetectionNet,
withSsdMobilenetv1,
withTinyFaceDetector,
withFaceLandmark68Net,
withFaceLandmark68TinyNet,
withFaceRecognitionNet,
......@@ -165,14 +169,21 @@ export function describeWithNets(
withTinyYolov2
} = options
if (withFaceDetectionNet || withAllFacesSsdMobilenetv1) {
await initNet<FaceDetectionNet>(
faceDetectionNet,
!!withFaceDetectionNet && !withFaceDetectionNet.quantized && 'ssd_mobilenetv1_model.weights'
if (withSsdMobilenetv1 || withAllFacesSsdMobilenetv1) {
await initNet<SsdMobilenetv1>(
ssdMobilenetv1,
!!withSsdMobilenetv1 && !withSsdMobilenetv1.quantized && 'ssd_mobilenetv1_model.weights'
)
}
if (withTinyFaceDetector || withAllFacesTinyFaceDetector) {
await initNet<TinyFaceDetector>(
tinyFaceDetector,
!!withTinyFaceDetector && !withTinyFaceDetector.quantized && 'tiny_face_detector_model.weights'
)
}
if (withFaceLandmark68Net || withAllFacesSsdMobilenetv1 || withAllFacesTinyYolov2) {
if (withFaceLandmark68Net || withAllFacesSsdMobilenetv1 || withAllFacesTinyFaceDetector || withAllFacesMtcnn || withAllFacesTinyYolov2) {
await initNet<FaceLandmark68Net>(
faceLandmark68Net,
!!withFaceLandmark68Net && !withFaceLandmark68Net.quantized && 'face_landmark_68_model.weights'
......@@ -186,10 +197,11 @@ export function describeWithNets(
)
}
if (withFaceRecognitionNet || withAllFacesSsdMobilenetv1 || withAllFacesMtcnn || withAllFacesTinyYolov2) {
if (withFaceRecognitionNet || withAllFacesSsdMobilenetv1 || withAllFacesTinyFaceDetector || withAllFacesMtcnn || withAllFacesTinyYolov2) {
await initNet<FaceRecognitionNet>(
faceRecognitionNet,
// TODO: figure out why quantized weights result in NaNs in test cases
// (apparently the net weight values differ when loaded via karma)
'face_recognition_model.weights'
)
}
......@@ -205,24 +217,23 @@ export function describeWithNets(
await initNet<TinyYolov2>(
tinyYolov2,
!!withTinyYolov2 && !withTinyYolov2.quantized && 'tiny_yolov2_model.weights',
withTinyYolov2 && withTinyYolov2.withSeparableConv === false
true
)
}
})
afterAll(() => {
faceDetectionNet && faceDetectionNet.dispose()
faceLandmark68Net && faceLandmark68Net.dispose()
faceRecognitionNet && faceRecognitionNet.dispose()
mtcnn && mtcnn.dispose(),
tinyYolov2 && tinyYolov2.dispose()
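// dispose only the nets that were actually loaded, so that their param
// tensors are released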
ssdMobilenetv1.isLoaded && ssdMobilenetv1.dispose()
faceLandmark68Net.isLoaded && faceLandmark68Net.dispose()
faceRecognitionNet.isLoaded && faceRecognitionNet.dispose()
mtcnn.isLoaded && mtcnn.dispose()
tinyFaceDetector.isLoaded && tinyFaceDetector.dispose()
tinyYolov2.isLoaded && tinyYolov2.dispose()
})
specDefinitions({
allFacesSsdMobilenetv1,
allFacesTinyYolov2,
allFacesMtcnn,
faceDetectionNet,
ssdMobilenetv1,
tinyFaceDetector,
faceLandmark68Net,
faceLandmark68TinyNet,
faceRecognitionNet,
......
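Note: the spec definitions above receive the shared net instances directly. A minimal usage sketch of this helper, assuming Jasmine globals and a hypothetical `imgEl` fixture (the option shape mirrors the destructuring above; `locateFaces` is the detector's query method):

```ts
// Hypothetical spec built on describeWithNets (sketch only; imgEl and the
// expected detection count are placeholder fixtures, not part of this diff):
describeWithNets('ssdMobilenetv1', { withSsdMobilenetv1: { quantized: true } }, ({ ssdMobilenetv1 }) => {
  it('detects a single face', async () => {
    const detections = await ssdMobilenetv1.locateFaces(imgEl)
    expect(detections.length).toEqual(1)
  })
})
```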
......@@ -9,8 +9,8 @@
<script>
tf = faceapi.tf
const uncompressedWeightsUri = `face_landmark_68_model.weights`
const net = new faceapi.FaceLandmark68LargeNet()
const uncompressedWeightsUri = `tiny_face_detector_model.weights`
const net = new faceapi.TinyFaceDetector()
async function load() {
await net.load(new Float32Array(await (await fetch(uncompressedWeightsUri)).arrayBuffer()))
......@@ -21,7 +21,7 @@
return net.getParamList().map(({ path, tensor }) => ({ name: path, tensor }))
}
const modelName = 'face_landmark_68'
const modelName = 'tiny_face_detector'
function makeShards(weightArray) {
const maxLength = 4096 * 1024
......
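Only the 4 MB shard cap of `makeShards` is visible in the collapsed hunk above. A plausible completion, assuming the function slices the serialized weight buffer into byte chunks of at most `maxLength`, one per `...-shardN` file referenced by the manifests further below:

```ts
// Sketch (assumption, not the diffed source): chunk the raw bytes of the
// weight array into shards of at most 4 MB each.
function makeShards(weightArray: Float32Array): Uint8Array[] {
  const maxLength = 4096 * 1024
  const bytes = new Uint8Array(weightArray.buffer)
  const shards: Uint8Array[] = []
  for (let offset = 0; offset < bytes.length; offset += maxLength) {
    shards.push(bytes.slice(offset, offset + maxLength))
  }
  return shards
}
```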
const path = require('path')
const fs = require('fs')
const excludes = [
{ dir: 'faceLandmarkNet', exceptions: ['index.ts', 'FaceLandmark68Net.ts', 'FaceLandmark68TinyNet.ts'] },
{ dir: 'faceRecognitionNet', exceptions: ['index.ts', 'FaceRecognitionNet.ts'] },
{ dir: 'mtcnn', exceptions: ['index.ts', 'Mtcnn.ts', 'MtcnnOptions.ts'] },
{ dir: 'ssdMobilenetv1', exceptions: ['index.ts', 'SsdMobilenetv1.ts', 'SsdMobilenetv1Options.ts'] },
{ dir: 'tinyFaceDetector', exceptions: ['index.ts', 'TinyFaceDetector.ts', 'TinyFaceDetectorOptions.ts'] },
{ dir: 'tinyYolov2', exceptions: ['index.ts', 'TinyYolov2.ts'] }
]
const exclude = excludes.map(({ dir, exceptions }) => {
const files = fs.readdirSync(path.resolve('src', dir))
.filter(file => !exceptions.some(ex => ex === file))
return files.map(file => `**/${dir}/${file}`)
}).reduce((flat, arr) => flat.concat(arr), [])
module.exports = {
mode: 'file',
out: 'docs',
module: 'commonjs',
target: 'es5',
theme: 'default',
excludeExternals: true,
includeDeclarations: true,
excludePrivate: true,
excludeNotExported: true,
stripInternal: true,
externalPattern: 'node_modules/@tensorflow',
exclude
}
\ No newline at end of file
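Aside: the `map(...).reduce(concat)` pair in this config is the classic array-flatten idiom. On runtimes with `Array.prototype.flatMap` the same `exclude` list could be computed as in this equivalent sketch (same `excludes` shape as above):

```ts
// Equivalent flatten via flatMap (Node >= 11); behavior matches the
// map/reduce version in the config above.
const exclude = excludes.flatMap(({ dir, exceptions }) =>
  fs.readdirSync(path.resolve('src', dir))
    .filter(file => !exceptions.includes(file))
    .map(file => `**/${dir}/${file}`)
)
```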
[{"weights":[{"name":"conv0/filters","shape":[3,3,3,16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009007044399485869,"min":-1.2069439495311063}},{"name":"conv0/bias","shape":[16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.005263455241334205,"min":-0.9211046672334858}},{"name":"conv1/depthwise_filter","shape":[3,3,16,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004001977630690033,"min":-0.5042491814669441}},{"name":"conv1/pointwise_filter","shape":[1,1,16,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.013836609615999109,"min":-1.411334180831909}},{"name":"conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0015159862590771096,"min":-0.30926119685173037}},{"name":"conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002666276225856706,"min":-0.317286870876948}},{"name":"conv2/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.015265831292844286,"min":-1.6792414422128714}},{"name":"conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0020280554598453,"min":-0.37113414915168985}},{"name":"conv3/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006100742489683862,"min":-0.8907084034938438}},{"name":"conv3/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.016276211832083907,"min":-2.0508026908425725}},{"name":"conv3/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003394414279975143,"min":-0.7637432129944072}},{"name":"conv4/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.006716050119961009,"min":-0.8059260143953211}},{"name":"conv4/pointwise_filter","shape":[1,1,128,256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.021875603993733724,"min":-2.8875797271728514}},{"name":"conv4/bias","shape":[256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0041141652009066415,"min":-0.8187188749804216}},{"name":"conv5/depthwise_filter","shape":[3,3,256,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008423839597141042,"min":-0.9013508368940915}},{"name":"conv5/pointwise_filter","shape":[1,1,256,512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.030007277283014035,"min":-3.8709387695088107}},{"name":"conv5/bias","shape":[512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008402082966823203,"min":-1.4871686851277068}},{"name":"conv8/filters","shape":[1,1,512,25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.028336129469030042,"min":-4.675461362389957}},{"name":"conv8/bias","shape":[25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002268134028303857,"min":-0.41053225912299807}}],"paths":["tiny_face_detector_model-shard1"]}]
\ No newline at end of file
[{"weights":[{"name":"conv0/depthwise_filter","shape":[3,3,3,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004699238725737029,"min":-0.7471789573921876}},{"name":"conv0/pointwise_filter","shape":[1,1,3,16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008118405529097015,"min":-1.071629529840806}},{"name":"conv0/bias","shape":[16],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0024678509609372006,"min":-0.28873856242965246}},{"name":"conv1/depthwise_filter","shape":[3,3,16,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004553892331964829,"min":-0.5737904338275684}},{"name":"conv1/pointwise_filter","shape":[1,1,16,32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.00980057996862075,"min":-1.3230782957638012}},{"name":"conv1/bias","shape":[32],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0011220066278588537,"min":-0.20644921952602907}},{"name":"conv2/depthwise_filter","shape":[3,3,32,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0032098570290733787,"min":-0.38839270051787883}},{"name":"conv2/pointwise_filter","shape":[1,1,32,64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.008682825051101984,"min":-1.154815731796564}},{"name":"conv2/bias","shape":[64],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0015120926440930834,"min":-0.21471715546121783}},{"name":"conv3/depthwise_filter","shape":[3,3,64,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003597520496331009,"min":-0.4317024595597211}},{"name":"conv3/pointwise_filter","shape":[1,1,64,128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.010341314240997913,"min":-1.3650534798117246}},{"name":"conv3/bias","shape":[128],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.002109630785736383,"min":-0.4113780032185947}},{"name":"conv4/depthwise_filter","shape":[3,3,128,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004783747476689955,"min":-0.6171034244930043}},{"name":"conv4/pointwise_filter","shape":[1,1,128,256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009566552498761345,"min":-1.2627849298364977}},{"name":"conv4/bias","shape":[256],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0020002245903015135,"min":-0.3860433459281921}},{"name":"conv5/depthwise_filter","shape":[3,3,256,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004355777244941861,"min":-0.4791354969436047}},{"name":"conv5/pointwise_filter","shape":[1,1,256,512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.010036561068366555,"min":-1.2545701335458193}},{"name":"conv5/bias","shape":[512],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0023248311935686597,"min":-0.42776893961663337}},{"name":"conv6/depthwise_filter","shape":[3,3,512,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004659063442080629,"min":-0.5963601205863205}},{"name":"conv6/pointwise_filter","shape":[1,1,512,1024],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.010061494509379069,"min":-1.2576868136723836}},{"name":"conv6/bias","shape":[1024],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.0029680932269376867,"min":-0.3947563991827123}},{"name":"conv7/depthwise_filter","shape":[3,3,1024,1],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.003887363508635876,"min":-0.48980780208812036}},{"name":"conv7/pointwise_filter","shape":[1,1,1024,1024],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009973861189449535,"
min":-1.2766542322495404}},{"name":"conv7/bias","shape":[1024],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.004667898486642276,"min":-0.6955168745096991}},{"name":"conv8/filters","shape":[1,1,1024,25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.01573958116419175,"min":-2.5340725674348716}},{"name":"conv8/bias","shape":[25],"dtype":"float32","quantization":{"dtype":"uint8","scale":0.009396760662396749,"min":-2.2552225589752197}}],"paths":["tiny_yolov2_separable_conv_model-shard1"]}]
\ No newline at end of file
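The `quantization` entries in these manifests follow the tfjs-core uint8 scheme: each stored byte `q` decodes to `q * scale + min`. A minimal decoding sketch (the function name is ours, not part of the library):

```ts
// Dequantize a uint8-encoded tensor back to float32.
// tfjs-core convention: value = q * scale + min.
function dequantize(q: Uint8Array, scale: number, min: number): Float32Array {
  const out = new Float32Array(q.length)
  for (let i = 0; i < q.length; i++) {
    out[i] = q[i] * scale + min
  }
  return out
}
```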