Unverified Commit 19e27a14 by justadudewhohacks Committed by GitHub

Merge pull request #132 from justadudewhohacks/nodejs

nodejs support
parents 50576405 12981b61
......@@ -6,3 +6,4 @@ proto
weights_uncompressed
weights_unused
docs
out
\ No newline at end of file
......@@ -4,16 +4,21 @@ node_js:
- "node"
- "10"
- "8"
- "6"
# node 6 is not compatible with tfjs-node
# - "6"
env:
global:
- BACKEND_CPU=true EXCLUDE_UNCOMPRESSED=true
matrix:
- ENV=browser
- ENV=node
addons:
chrome: stable
install: npm install
before_install:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- sleep 3 # give xvfb some time to start
script:
- npm run test-travis
- if [ $ENV == 'browser' ]; then npm run test-browser; fi
- if [ $ENV == 'node' ]; then npm run test-node; fi
- npm run build
\ No newline at end of file
......@@ -16,6 +16,9 @@ Table of Contents:
* **[Face Detection Models](#models-face-detection)**
* **[68 Point Face Landmark Detection Models](#models-face-landmark-detection)**
* **[Face Recognition Model](#models-face-recognition)**
* **[Getting Started](#getting-started)**
* **[face-api.js for the Browser](#getting-started-browser)**
* **[face-api.js for Nodejs](#getting-started-nodejs)**
* **[Usage](#usage)**
* **[Loading the Models](#usage-loading-models)**
* **[High Level API](#usage-high-level-api)**
......@@ -75,15 +78,42 @@ Check out my face-api.js tutorials:
## Running the Examples
Clone the repository:
``` bash
git clone https://github.com/justadudewhohacks/face-api.js.git
cd face-api.js/examples
```
### Running the Browser Examples
``` bash
cd face-api.js/examples/examples-browser
npm i
npm start
```
Browse to http://localhost:3000/.
### Running the Nodejs Examples
``` bash
cd face-api.js/examples/examples-nodejs
npm i
```
Now run one of the examples using ts-node:
``` bash
ts-node faceDetection.ts
```
Or simply compile and run them with node:
``` bash
tsc faceDetection.ts
node faceDetection.js
```
<a name="models"></a>
# Available Models
......@@ -130,6 +160,55 @@ The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognit
The size of the quantized model is roughly 6.2 MB (**face_recognition_model**).
<a name="getting-started"></a>
# Getting Started
<a name="getting-started-browser"></a>
## face-api.js for the Browser
Simply include the latest script from [dist/face-api.js](https://github.com/justadudewhohacks/face-api.js/tree/master/dist).
Or install it via npm:
``` bash
npm i face-api.js
```
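When installed via npm, a typical setup looks like the following (a minimal sketch; the model path and element id are assumptions, adjust them to your project):
``` javascript
import * as faceapi from 'face-api.js';

async function run() {
  // serve the model files yourself and load them before first use
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
  const input = document.getElementById('inputImg')
  const detections = await faceapi.detectAllFaces(input)
  console.log(detections)
}

run()
```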
<a name="getting-started-nodejs"></a>
## face-api.js for Nodejs
We can use the equivalent API in a nodejs environment by polyfilling some browser specifics, such as HTMLImageElement, HTMLCanvasElement and ImageData. The easiest way to do so is by installing the node-canvas package.
Alternatively you can simply construct your own tensors from image data and pass tensors as inputs to the API.
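If you go the tensor route, a conversion along these lines is enough (a sketch only, assuming raw RGBA pixel data such as a node-canvas ImageData; imageDataToTensor is a hypothetical helper, not part of face-api.js):
``` javascript
import * as tf from '@tensorflow/tfjs-core';

// sketch: convert raw RGBA pixel data into an int32 RGB tensor,
// which can then be passed as input to e.g. faceapi.detectAllFaces
function imageDataToTensor({ data, width, height }) {
  const rgb = new Int32Array(width * height * 3)
  for (let i = 0; i < width * height; i++) {
    rgb[i * 3] = data[i * 4]          // R
    rgb[i * 3 + 1] = data[i * 4 + 1]  // G
    rgb[i * 3 + 2] = data[i * 4 + 2]  // B (alpha is dropped)
  }
  return tf.tensor3d(rgb, [height, width, 3], 'int32')
}
```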
Furthermore, you may want to install @tensorflow/tfjs-node (not required, but highly recommended), which speeds things up drastically by compiling and binding to the native TensorFlow C++ library:
``` bash
npm i face-api.js canvas @tensorflow/tfjs-node
```
Now we simply monkey patch the environment to use the polyfills:
``` javascript
// import nodejs bindings to native tensorflow,
// not required, but will speed things up drastically (python required)
import '@tensorflow/tfjs-node';
// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageData
import * as canvas from 'canvas';
import * as faceapi from 'face-api.js';
// patch the nodejs environment: we need to provide implementations of
// HTMLCanvasElement and HTMLImageElement; an implementation of ImageData
// is additionally required in case you want to use the MTCNN
const { Canvas, Image, ImageData } = canvas
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })
```
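With the environment patched, the rest of the API can be used just as in the browser, for example (a minimal sketch mirroring the nodejs examples in this PR; the weight and image paths are assumptions):
``` javascript
async function run() {
  // load the face detector weights from disk and detect faces in an image file
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('./weights')
  const img = await canvas.loadImage('./images/bbt1.jpg')
  const detections = await faceapi.detectAllFaces(img)
  console.log(`detected ${detections.length} faces`)
}

run()
```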
# Usage
<a name="usage-loading-models"></a>
......@@ -150,14 +229,38 @@ await faceapi.loadSsdMobilenetv1Model('/models')
// await faceapi.loadFaceRecognitionModel('/models')
```
Alternatively, you can also create instance of the neural nets:
All global neural network instances are exported via faceapi.nets:
``` javascript
console.log(faceapi.nets)
```
The following is equivalent to `await faceapi.loadSsdMobilenetv1Model('/models')`:
``` javascript
await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
```
In a nodejs environment you can furthermore load the models directly from disk:
``` javascript
await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
```
You can also load the model from a tf.NamedTensorMap:
``` javascript
await faceapi.nets.ssdMobilenetv1.loadFromWeightMap(weightMap)
```
Alternatively, you can also create your own instances of the neural nets:
``` javascript
const net = new faceapi.SsdMobilenetv1()
await net.load('/models')
```
Using instances, you can also load the weights as a Float32Array (in case you want to use the uncompressed models):
You can also load the weights as a Float32Array (in case you want to use the uncompressed models):
``` javascript
// using fetch
......@@ -205,7 +308,7 @@ By default **detectAllFaces** and **detectSingleFace** utilize the SSD Mobilenet
``` javascript
const detections1 = await faceapi.detectAllFaces(input, new faceapi.SsdMobilenetv1Options())
const detections2 = await faceapi.detectAllFaces(input, new faceapi.inyFaceDetectorOptions())
const detections2 = await faceapi.detectAllFaces(input, new faceapi.TinyFaceDetectorOptions())
const detections3 = await faceapi.detectAllFaces(input, new faceapi.MtcnnOptions())
```
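Each of the options constructors also accepts tuning parameters. The values below mirror the ones used in the nodejs examples of this PR; they are illustrative, not recommendations:
``` javascript
const ssdOptions = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 })
const tinyOptions = new faceapi.TinyFaceDetectorOptions({ inputSize: 408, scoreThreshold: 0.5 })
const mtcnnOptions = new faceapi.MtcnnOptions({ minFaceSize: 50, scaleFactor: 0.8 })
```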
......@@ -513,12 +616,6 @@ const landmarks2 = await faceapi.detectFaceLandmarksTiny(faceImage)
const descriptor = await faceapi.computeFaceDescriptor(alignedFaceImage)
```
All global neural network instances are exported via faceapi.nets:
``` javascript
console.log(faceapi.nets)
```
### Extracting a Canvas for an Image Region
``` javascript
......
const classes = ['amy', 'bernadette', 'howard', 'leonard', 'penny', 'raj', 'sheldon', 'stuart']
function getFaceImageUri(className, idx) {
return `images/${className}/${className}${idx}.png`
return `${className}/${className}${idx}.png`
}
function renderFaceImageSelectList(selectListId, onChange, initialValue) {
......
function getImageUri(imageName) {
return `images/${imageName}`
}
async function requestExternalImage(imageUrl) {
const res = await fetch('fetch_external_image', {
method: 'post',
......
......@@ -17,7 +17,7 @@ function renderImageSelectList(selectListId, onChange, initialValue) {
renderOption(
select,
imageName,
getImageUri(imageName)
imageName
)
)
}
......@@ -25,7 +25,7 @@ function renderImageSelectList(selectListId, onChange, initialValue) {
renderSelectList(
selectListId,
onChange,
getImageUri(initialValue),
initialValue,
renderChildren
)
}
......
......@@ -10,10 +10,10 @@ app.use(express.urlencoded({ extended: true }))
const viewsDir = path.join(__dirname, 'views')
app.use(express.static(viewsDir))
app.use(express.static(path.join(__dirname, './public')))
app.use(express.static(path.join(__dirname, '../weights')))
app.use(express.static(path.join(__dirname, '../weights_uncompressed')))
app.use(express.static(path.join(__dirname, '../dist')))
app.use(express.static(path.join(__dirname, './node_modules/axios/dist')))
app.use(express.static(path.join(__dirname, '../images')))
app.use(express.static(path.join(__dirname, '../media')))
app.use(express.static(path.join(__dirname, '../../weights')))
app.use(express.static(path.join(__dirname, '../../dist')))
app.get('/', (req, res) => res.redirect('/face_and_landmark_detection'))
app.get('/face_and_landmark_detection', (req, res) => res.sendFile(path.join(viewsDir, 'faceAndLandmarkDetection.html')))
......
......@@ -18,7 +18,7 @@
<div class="indeterminate"></div>
</div>
<div style="position: relative" class="margin">
<video src="media/bbt.mp4" id="inputVideo" autoplay muted loop></video>
<video src="bbt.mp4" id="inputVideo" autoplay muted loop></video>
<canvas id="overlay" />
</div>
......
// import nodejs bindings to native tensorflow,
// not required, but will speed things up drastically (python required)
import '@tensorflow/tfjs-node';
// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageData
const canvas = require('canvas')
import * as faceapi from '../../../src';
// patch the nodejs environment: we need to provide implementations of
// HTMLCanvasElement and HTMLImageElement; an implementation of ImageData
// is additionally required in case you want to use the MTCNN
const { Canvas, Image, ImageData } = canvas
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })
export { canvas, faceapi }
\ No newline at end of file
import { NeuralNetwork } from 'tfjs-image-recognition-base';
import { faceapi } from './env';
export const faceDetectionNet = faceapi.nets.ssdMobilenetv1
// export const faceDetectionNet = tinyFaceDetector
// export const faceDetectionNet = mtcnn
// SsdMobilenetv1Options
const minConfidence = 0.5
// TinyFaceDetectorOptions
const inputSize = 408
const scoreThreshold = 0.5
// MtcnnOptions
const minFaceSize = 50
const scaleFactor = 0.8
function getFaceDetectorOptions(net: NeuralNetwork<any>) {
return net === faceapi.nets.ssdMobilenetv1
? new faceapi.SsdMobilenetv1Options({ minConfidence })
: (net === faceapi.nets.tinyFaceDetector
? new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThreshold })
: new faceapi.MtcnnOptions({ minFaceSize, scaleFactor })
)
}
export const faceDetectionOptions = getFaceDetectorOptions(faceDetectionNet)
\ No newline at end of file
export { canvas, faceapi } from './env';
export { faceDetectionNet, faceDetectionOptions } from './faceDetection';
export { saveFile } from './saveFile';
\ No newline at end of file
import * as fs from 'fs';
import * as path from 'path';
const baseDir = path.resolve(__dirname, '../out')
export function saveFile(fileName: string, buf: Buffer) {
if (!fs.existsSync(baseDir)) {
fs.mkdirSync(baseDir)
}
fs.writeFileSync(path.resolve(baseDir, fileName), buf)
}
\ No newline at end of file
import { canvas, faceapi, faceDetectionNet, faceDetectionOptions, saveFile } from './commons';
async function run() {
await faceDetectionNet.loadFromDisk('../../weights')
const img = await canvas.loadImage('../images/bbt1.jpg')
const detections = await faceapi.detectAllFaces(img, faceDetectionOptions)
const out = faceapi.createCanvasFromMedia(img) as any
faceapi.drawDetection(out, detections)
saveFile('faceDetection.jpg', out.toBuffer('image/jpeg'))
}
run()
\ No newline at end of file
import { canvas, faceapi, faceDetectionNet, faceDetectionOptions, saveFile } from './commons';
async function run() {
await faceDetectionNet.loadFromDisk('../../weights')
await faceapi.nets.faceLandmark68Net.loadFromDisk('../../weights')
const img = await canvas.loadImage('../images/bbt1.jpg')
const results = await faceapi.detectAllFaces(img, faceDetectionOptions)
.withFaceLandmarks()
const out = faceapi.createCanvasFromMedia(img) as any
faceapi.drawDetection(out, results.map(res => res.detection))
faceapi.drawLandmarks(out, results.map(res => res.faceLandmarks), { drawLines: true, color: 'red' })
saveFile('faceLandmarkDetection.jpg', out.toBuffer('image/jpeg'))
}
run()
\ No newline at end of file
import { canvas, faceapi, faceDetectionNet, faceDetectionOptions, saveFile } from './commons';
const REFERENCE_IMAGE = '../images/bbt1.jpg'
const QUERY_IMAGE = '../images/bbt4.jpg'
async function run() {
await faceDetectionNet.loadFromDisk('../../weights')
await faceapi.nets.faceLandmark68Net.loadFromDisk('../../weights')
await faceapi.nets.faceRecognitionNet.loadFromDisk('../../weights')
const referenceImage = await canvas.loadImage(REFERENCE_IMAGE)
const queryImage = await canvas.loadImage(QUERY_IMAGE)
const resultsRef = await faceapi.detectAllFaces(referenceImage, faceDetectionOptions)
.withFaceLandmarks()
.withFaceDescriptors()
const resultsQuery = await faceapi.detectAllFaces(queryImage, faceDetectionOptions)
.withFaceLandmarks()
.withFaceDescriptors()
const faceMatcher = new faceapi.FaceMatcher(resultsRef)
const labels = faceMatcher.labeledDescriptors
.map(ld => ld.label)
const refBoxesWithText = resultsRef
.map(res => res.detection.box)
.map((box, i) => new faceapi.BoxWithText(box, labels[i]))
const outRef = faceapi.createCanvasFromMedia(referenceImage) as any
faceapi.drawDetection(outRef, refBoxesWithText)
saveFile('referenceImage.jpg', outRef.toBuffer('image/jpeg'))
const queryBoxesWithText = resultsQuery.map(res => {
const bestMatch = faceMatcher.findBestMatch(res.descriptor)
return new faceapi.BoxWithText(res.detection.box, bestMatch.toString())
})
const outQuery = faceapi.createCanvasFromMedia(queryImage) as any
faceapi.drawDetection(outQuery, queryBoxesWithText)
saveFile('queryImage.jpg', outQuery.toBuffer('image/jpeg'))
}
run()
\ No newline at end of file
{
"author": "justadudewhohacks",
"license": "MIT",
"dependencies": {
"@tensorflow/tfjs-node": "^0.1.19",
"canvas": "^2.0.1"
}
}
let spec_files = ['**/*.test.ts'].concat(
process.env.EXCLUDE_UNCOMPRESSED
? ['!**/*.uncompressed.test.ts']
: []
)
// exclude browser tests
spec_files = spec_files.concat(['!**/*.browser.test.ts'])
module.exports = {
spec_dir: 'test',
spec_files,
random: false
}
\ No newline at end of file
......@@ -22,10 +22,9 @@ let exclude = (
'faceRecognitionNet',
'ssdMobilenetv1',
'tinyFaceDetector',
'mtcnn',
'tinyYolov2'
'mtcnn'
]
: ['tinyYolov2']
: []
)
.filter(ex => ex !== process.env.UUT)
.map(ex => `test/tests/${ex}/*.ts`)
......@@ -37,6 +36,10 @@ exclude = exclude.concat(
: []
)
// exclude nodejs tests
exclude = exclude.concat(['**/*.node.test.ts'])
module.exports = function(config) {
const args = []
if (process.env.BACKEND_CPU) {
......
......@@ -12,15 +12,17 @@
"tsc-es6": "tsc --p tsconfig.es6.json",
"build": "rm -rf ./build && rm -rf ./dist && npm run rollup && npm run rollup-min && npm run tsc && npm run tsc-es6",
"test": "karma start",
"test-browser": "karma start --single-run",
"test-node": "ts-node node_modules/jasmine/bin/jasmine --config=jasmine-node.js",
"test-all": "npm run test-browser && npm run test-node",
"test-facelandmarknets": "set UUT=faceLandmarkNet&& karma start",
"test-facerecognitionnet": "set UUT=faceRecognitionNet&& karma start",
"test-ssdmobilenetv1": "set UUT=ssdMobilenetv1&& karma start",
"test-tinyfacedetector": "set UUT=tinyFaceDetector&& karma start",
"test-mtcnn": "set UUT=mtcnn&& karma start",
"test-tinyyolov2": "set UUT=tinyYolov2&& karma start",
"test-cpu": "set BACKEND_CPU=true&& karma start",
"test-exclude-uncompressed": "set EXCLUDE_UNCOMPRESSED=true&& karma start",
"test-travis": "karma start --single-run",
"test-node-exclude-uncompressed": "set EXCLUDE_UNCOMPRESSED=true&& ts-node node_modules/jasmine/bin/jasmine --config=jasmine-node.js",
"docs": "typedoc --options ./typedoc.config.js ./src"
},
"keywords": [
......@@ -33,14 +35,17 @@
"author": "justadudewhohacks",
"license": "MIT",
"dependencies": {
"@tensorflow/tfjs-core": "^0.13.2",
"tfjs-image-recognition-base": "^0.1.3",
"tfjs-tiny-yolov2": "^0.2.1",
"@tensorflow/tfjs-core": "0.13.8",
"tfjs-image-recognition-base": "0.2.0",
"tfjs-tiny-yolov2": "0.3.0",
"tslib": "^1.9.3"
},
"devDependencies": {
"@tensorflow/tfjs-node": "^0.1.19",
"@types/jasmine": "^2.8.8",
"@types/node": "^10.9.2",
"canvas": "^2.0.1",
"jasmine": "^3.3.0",
"jasmine-core": "^3.2.1",
"karma": "^3.0.0",
"karma-chrome-launcher": "^2.2.0",
......@@ -51,6 +56,7 @@
"rollup-plugin-node-resolve": "^3.3.0",
"rollup-plugin-typescript2": "^0.16.1",
"rollup-plugin-uglify": "^4.0.0",
"ts-node": "^7.0.1",
"typescript": "2.8.4"
}
}
import { getContext2dOrThrow, getDefaultDrawOptions, resolveInput } from 'tfjs-image-recognition-base';
import { env, getContext2dOrThrow, getDefaultDrawOptions, resolveInput } from 'tfjs-image-recognition-base';
import { FaceLandmarks } from '../classes/FaceLandmarks';
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
......@@ -11,7 +11,7 @@ export function drawLandmarks(
options?: DrawLandmarksOptions
) {
const canvas = resolveInput(canvasArg)
if (!(canvas instanceof HTMLCanvasElement)) {
if (!(canvas instanceof env.getEnv().Canvas)) {
throw new Error('drawLandmarks - expected canvas to be of type: HTMLCanvasElement')
}
......
import * as tf from '@tensorflow/tfjs-core';
import { isTensor4D, Rect } from 'tfjs-image-recognition-base';
import { isTensor4D, Rect, isTensor3D } from 'tfjs-image-recognition-base';
import { FaceDetection } from '../classes/FaceDetection';
......@@ -18,6 +18,10 @@ export async function extractFaceTensors(
detections: Array<FaceDetection | Rect>
): Promise<tf.Tensor3D[]> {
if (!isTensor3D(imageTensor) && !isTensor4D(imageTensor)) {
throw new Error('extractFaceTensors - expected image tensor to be 3D or 4D')
}
if (isTensor4D(imageTensor) && imageTensor.shape[0] > 1) {
throw new Error('extractFaceTensors - batchSize > 1 not supported')
}
......
import {
createCanvas,
env,
getContext2dOrThrow,
imageTensorToCanvas,
Rect,
......@@ -21,9 +22,11 @@ export async function extractFaces(
detections: Array<FaceDetection | Rect>
): Promise<HTMLCanvasElement[]> {
const { Canvas } = env.getEnv()
let canvas = input as HTMLCanvasElement
if (!(input instanceof HTMLCanvasElement)) {
if (!(input instanceof Canvas)) {
const netInput = await toNetInput(input)
if (netInput.batchSize > 1) {
......@@ -31,7 +34,7 @@ export async function extractFaces(
}
const tensorOrCanvas = netInput.getInput(0)
canvas = tensorOrCanvas instanceof HTMLCanvasElement
canvas = tensorOrCanvas instanceof Canvas
? tensorOrCanvas
: await imageTensorToCanvas(tensorOrCanvas)
}
......
......@@ -4,9 +4,9 @@ import { ConvParams, SeparableConvParams } from 'tfjs-tiny-yolov2';
import { depthwiseSeparableConv } from './depthwiseSeparableConv';
import { extractParams } from './extractParams';
import { extractParamsFromWeigthMap } from './extractParamsFromWeigthMap';
import { FaceLandmark68NetBase } from './FaceLandmark68NetBase';
import { fullyConnectedLayer } from './fullyConnectedLayer';
import { loadQuantizedParams } from './loadQuantizedParams';
import { DenseBlock4Params, NetParams } from './types';
function denseBlock(
......@@ -64,10 +64,13 @@ export class FaceLandmark68Net extends FaceLandmark68NetBase<NetParams> {
})
}
protected loadQuantizedParams(uri: string | undefined) {
return loadQuantizedParams(uri)
protected getDefaultModelName(): string {
return 'face_landmark_68_model'
}
protected extractParamsFromWeigthMap(weightMap: tf.NamedTensorMap) {
return extractParamsFromWeigthMap(weightMap)
}
protected extractParams(weights: Float32Array) {
return extractParams(weights)
......
......@@ -3,7 +3,7 @@ import { IDimensions, isEven, NetInput, NeuralNetwork, Point, TNetInput, toNetIn
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
export class FaceLandmark68NetBase<NetParams> extends NeuralNetwork<NetParams> {
export abstract class FaceLandmark68NetBase<NetParams> extends NeuralNetwork<NetParams> {
// TODO: make super.name protected
private __name: string
......@@ -13,9 +13,7 @@ export class FaceLandmark68NetBase<NetParams> extends NeuralNetwork<NetParams> {
this.__name = _name
}
public runNet(_: NetInput): tf.Tensor2D {
throw new Error(`${this.__name} - runNet not implemented`)
}
public abstract runNet(netInput: NetInput): tf.Tensor2D
public postProcess(output: tf.Tensor2D, inputSize: number, originalDimensions: IDimensions[]): tf.Tensor2D {
......
......@@ -6,7 +6,7 @@ import { depthwiseSeparableConv } from './depthwiseSeparableConv';
import { extractParamsTiny } from './extractParamsTiny';
import { FaceLandmark68NetBase } from './FaceLandmark68NetBase';
import { fullyConnectedLayer } from './fullyConnectedLayer';
import { loadQuantizedParamsTiny } from './loadQuantizedParamsTiny';
import { extractParamsFromWeigthMapTiny } from './extractParamsFromWeigthMapTiny';
import { DenseBlock3Params, TinyNetParams } from './types';
function denseBlock(
......@@ -60,8 +60,12 @@ export class FaceLandmark68TinyNet extends FaceLandmark68NetBase<TinyNetParams>
})
}
protected loadQuantizedParams(uri: string | undefined) {
return loadQuantizedParamsTiny(uri)
protected getDefaultModelName(): string {
return 'face_landmark_68_tiny_model'
}
protected extractParamsFromWeigthMap(weightMap: tf.NamedTensorMap) {
return extractParamsFromWeigthMapTiny(weightMap)
}
protected extractParams(weights: Float32Array) {
......
import { disposeUnusedWeightTensors, loadWeightMap, ParamMapping } from 'tfjs-image-recognition-base';
import * as tf from '@tensorflow/tfjs-core';
import { disposeUnusedWeightTensors, ParamMapping } from 'tfjs-image-recognition-base';
import { loadParamsFactory } from './loadParamsFactory';
import { NetParams } from './types';
const DEFAULT_MODEL_NAME = 'face_landmark_68_model'
export function extractParamsFromWeigthMap(
weightMap: tf.NamedTensorMap
): { params: NetParams, paramMappings: ParamMapping[] } {
export async function loadQuantizedParams(
uri: string | undefined
): Promise<{ params: NetParams, paramMappings: ParamMapping[] }> {
const weightMap = await loadWeightMap(uri, DEFAULT_MODEL_NAME)
const paramMappings: ParamMapping[] = []
const {
......
import { disposeUnusedWeightTensors, loadWeightMap, ParamMapping } from 'tfjs-image-recognition-base';
import * as tf from '@tensorflow/tfjs-core';
import { disposeUnusedWeightTensors, ParamMapping } from 'tfjs-image-recognition-base';
import { loadParamsFactory } from './loadParamsFactory';
import { TinyNetParams } from './types';
const DEFAULT_MODEL_NAME = 'face_landmark_68_tiny_model'
export function extractParamsFromWeigthMapTiny(
weightMap: tf.NamedTensorMap
): { params: TinyNetParams, paramMappings: ParamMapping[] } {
export async function loadQuantizedParamsTiny(
uri: string | undefined
): Promise<{ params: TinyNetParams, paramMappings: ParamMapping[] }> {
const weightMap = await loadWeightMap(uri, DEFAULT_MODEL_NAME)
const paramMappings: ParamMapping[] = []
const {
......
......@@ -3,7 +3,7 @@ import { NetInput, NeuralNetwork, normalize, TNetInput, toNetInput } from 'tfjs-
import { convDown } from './convLayer';
import { extractParams } from './extractParams';
import { loadQuantizedParams } from './loadQuantizedParams';
import { extractParamsFromWeigthMap } from './extractParamsFromWeigthMap';
import { residual, residualDown } from './residualLayer';
import { NetParams } from './types';
......@@ -78,8 +78,12 @@ export class FaceRecognitionNet extends NeuralNetwork<NetParams> {
: faceDescriptorsForBatch[0]
}
protected loadQuantizedParams(uri: string | undefined) {
return loadQuantizedParams(uri)
protected getDefaultModelName(): string {
return 'face_recognition_model'
}
protected extractParamsFromWeigthMap(weightMap: tf.NamedTensorMap) {
return extractParamsFromWeigthMap(weightMap)
}
protected extractParams(weights: Float32Array) {
......
......@@ -9,8 +9,6 @@ import {
import { ConvLayerParams, NetParams, ResidualLayerParams, ScaleLayerParams } from './types';
const DEFAULT_MODEL_NAME = 'face_recognition_model'
function extractorsFactory(weightMap: any, paramMappings: ParamMapping[]) {
const extractWeightEntry = extractWeightEntryFactory(weightMap, paramMappings)
......@@ -46,11 +44,10 @@ function extractorsFactory(weightMap: any, paramMappings: ParamMapping[]) {
}
export async function loadQuantizedParams(
uri: string | undefined
): Promise<{ params: NetParams, paramMappings: ParamMapping[] }> {
export function extractParamsFromWeigthMap(
weightMap: tf.NamedTensorMap
): { params: NetParams, paramMappings: ParamMapping[] } {
const weightMap = await loadWeightMap(uri, DEFAULT_MODEL_NAME)
const paramMappings: ParamMapping[] = []
const {
......
import * as tf from '@tensorflow/tfjs-core';
import { TNetInput } from 'tfjs-image-recognition-base';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FullFaceDescription } from '../classes/FullFaceDescription';
import { extractFaces } from '../dom';
import { extractFaces, extractFaceTensors } from '../dom';
import { ComposableTask } from './ComposableTask';
import { nets } from './nets';
......@@ -20,15 +21,20 @@ export class ComputeAllFaceDescriptorsTask extends ComputeFaceDescriptorsTaskBas
public async run(): Promise<FullFaceDescription[]> {
const facesWithLandmarks = await this.detectFaceLandmarksTask
const alignedFaceCanvases = await extractFaces(
this.input,
facesWithLandmarks.map(({ landmarks }) => landmarks.align())
)
return await Promise.all(facesWithLandmarks.map(async ({ detection, landmarks }, i) => {
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaceCanvases[i]) as Float32Array
const alignedRects = facesWithLandmarks.map(({ alignedRect }) => alignedRect)
const alignedFaces: Array<HTMLCanvasElement | tf.Tensor3D> = this.input instanceof tf.Tensor
? await extractFaceTensors(this.input, alignedRects)
: await extractFaces(this.input, alignedRects)
const fullFaceDescriptions = await Promise.all(facesWithLandmarks.map(async ({ detection, landmarks }, i) => {
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaces[i]) as Float32Array
return new FullFaceDescription(detection, landmarks, descriptor)
}))
alignedFaces.forEach(f => f instanceof tf.Tensor && f.dispose())
return fullFaceDescriptions
}
}
......@@ -42,8 +48,12 @@ export class ComputeSingleFaceDescriptorTask extends ComputeFaceDescriptorsTaskB
}
const { detection, landmarks, alignedRect } = detectionWithLandmarks
const alignedFaceCanvas = (await extractFaces(this.input, [alignedRect]))[0]
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaceCanvas) as Float32Array
const alignedFaces: Array<HTMLCanvasElement | tf.Tensor3D> = this.input instanceof tf.Tensor
? await extractFaceTensors(this.input, [alignedRect])
: await extractFaces(this.input, [alignedRect])
const descriptor = await nets.faceRecognitionNet.computeFaceDescriptor(alignedFaces[0]) as Float32Array
alignedFaces.forEach(f => f instanceof tf.Tensor && f.dispose())
return new FullFaceDescription(detection, landmarks, descriptor)
}
......
import * as tf from '@tensorflow/tfjs-core';
import { TNetInput } from 'tfjs-image-recognition-base';
import { FaceDetection } from '../classes/FaceDetection';
import { FaceDetectionWithLandmarks } from '../classes/FaceDetectionWithLandmarks';
import { FaceLandmarks68 } from '../classes/FaceLandmarks68';
import { extractFaces } from '../dom';
import { extractFaces, extractFaceTensors } from '../dom';
import { FaceLandmark68Net } from '../faceLandmarkNet/FaceLandmark68Net';
import { FaceLandmark68TinyNet } from '../faceLandmarkNet/FaceLandmark68TinyNet';
import { ComposableTask } from './ComposableTask';
......@@ -31,12 +32,17 @@ export class DetectAllFaceLandmarksTask extends DetectFaceLandmarksTaskBase<Face
public async run(): Promise<FaceDetectionWithLandmarks[]> {
const detections = await this.detectFacesTask
const faceCanvases = await extractFaces(this.input, detections)
const faceLandmarksByFace = await Promise.all(faceCanvases.map(
canvas => this.landmarkNet.detectLandmarks(canvas)
const faces: Array<HTMLCanvasElement | tf.Tensor3D> = this.input instanceof tf.Tensor
? await extractFaceTensors(this.input, detections)
: await extractFaces(this.input, detections)
const faceLandmarksByFace = await Promise.all(faces.map(
face => this.landmarkNet.detectLandmarks(face)
)) as FaceLandmarks68[]
faces.forEach(f => f instanceof tf.Tensor && f.dispose())
return detections.map((detection, i) =>
new FaceDetectionWithLandmarks(detection, faceLandmarksByFace[i])
)
......@@ -56,10 +62,18 @@ export class DetectSingleFaceLandmarksTask extends DetectFaceLandmarksTaskBase<F
return
}
const faceCanvas = (await extractFaces(this.input, [detection]))[0]
const faces: Array<HTMLCanvasElement | tf.Tensor3D> = this.input instanceof tf.Tensor
? await extractFaceTensors(this.input, [detection])
: await extractFaces(this.input, [detection])
const landmarks = await this.landmarkNet.detectLandmarks(faces[0]) as FaceLandmarks68
faces.forEach(f => f instanceof tf.Tensor && f.dispose())
return new FaceDetectionWithLandmarks(
detection,
await this.landmarkNet.detectLandmarks(faceCanvas) as FaceLandmarks68
landmarks
)
}
......
......@@ -7,8 +7,8 @@ import { FaceLandmarks5 } from '../classes/FaceLandmarks5';
import { bgrToRgbTensor } from './bgrToRgbTensor';
import { CELL_SIZE } from './config';
import { extractParams } from './extractParams';
import { extractParamsFromWeigthMap } from './extractParamsFromWeigthMap';
import { getSizesForScale } from './getSizesForScale';
import { loadQuantizedParams } from './loadQuantizedParams';
import { IMtcnnOptions, MtcnnOptions } from './MtcnnOptions';
import { pyramidDown } from './pyramidDown';
import { stage1 } from './stage1';
......@@ -146,9 +146,12 @@ export class Mtcnn extends NeuralNetwork<NetParams> {
)
}
// none of the param tensors are quantized yet
protected loadQuantizedParams(uri: string | undefined) {
return loadQuantizedParams(uri)
protected getDefaultModelName(): string {
return 'mtcnn_model'
}
protected extractParamsFromWeigthMap(weightMap: tf.NamedTensorMap) {
return extractParamsFromWeigthMap(weightMap)
}
protected extractParams(weights: Float32Array) {
......
import * as tf from '@tensorflow/tfjs-core';
import { Box, createCanvas, getContext2dOrThrow, IDimensions } from 'tfjs-image-recognition-base';
import {
Box,
createCanvas,
createCanvasFromMedia,
env,
getContext2dOrThrow,
IDimensions,
} from 'tfjs-image-recognition-base';
import { normalize } from './normalize';
......@@ -20,7 +27,7 @@ export async function extractImagePatches(
const fromY = y - 1
const imgData = imgCtx.getImageData(fromX, fromY, (ex - fromX), (ey - fromY))
return createImageBitmap(imgData)
return env.isNodejs() ? createCanvasFromMedia(imgData) : createImageBitmap(imgData)
}))
const imagePatchesDatas: number[][] = []
......
import * as tf from '@tensorflow/tfjs-core';
import {
disposeUnusedWeightTensors,
extractWeightEntryFactory,
loadWeightMap,
ParamMapping,
} from 'tfjs-image-recognition-base';
import { disposeUnusedWeightTensors, extractWeightEntryFactory, ParamMapping } from 'tfjs-image-recognition-base';
import { ConvParams, FCParams } from 'tfjs-tiny-yolov2';
import { NetParams, ONetParams, PNetParams, RNetParams, SharedParams } from './types';
const DEFAULT_MODEL_NAME = 'mtcnn_model'
function extractorsFactory(weightMap: any, paramMappings: ParamMapping[]) {
const extractWeightEntry = extractWeightEntryFactory(weightMap, paramMappings)
......@@ -87,11 +80,10 @@ function extractorsFactory(weightMap: any, paramMappings: ParamMapping[]) {
}
export async function loadQuantizedParams(
uri: string | undefined
): Promise<{ params: NetParams, paramMappings: ParamMapping[] }> {
export function extractParamsFromWeigthMap(
weightMap: tf.NamedTensorMap
): { params: NetParams, paramMappings: ParamMapping[] } {
const weightMap = await loadWeightMap(uri, DEFAULT_MODEL_NAME)
const paramMappings: ParamMapping[] = []
const {
......
......@@ -3,7 +3,7 @@ import { NetInput, NeuralNetwork, Rect, TNetInput, toNetInput } from 'tfjs-image
import { FaceDetection } from '../classes/FaceDetection';
import { extractParams } from './extractParams';
import { loadQuantizedParams } from './loadQuantizedParams';
import { extractParamsFromWeigthMap } from './extractParamsFromWeigthMap';
import { mobileNetV1 } from './mobileNetV1';
import { nonMaxSuppression } from './nonMaxSuppression';
import { outputLayer } from './outputLayer';
......@@ -116,8 +116,12 @@ export class SsdMobilenetv1 extends NeuralNetwork<NetParams> {
return results
}
protected loadQuantizedParams(uri: string | undefined) {
return loadQuantizedParams(uri)
protected getDefaultModelName(): string {
return 'ssd_mobilenetv1_model'
}
protected extractParamsFromWeigthMap(weightMap: tf.NamedTensorMap) {
return extractParamsFromWeigthMap(weightMap)
}
protected extractParams(weights: Float32Array) {
......