Commit 7e766eb0 by vincent

use resizeResults in examples + adjust readme

parent 17b72bd4
...@@ -319,13 +319,13 @@ You can tune the options of each face detector as shown [here](#usage-face-detec
**After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**

Detect all faces in an image and compute 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes)>**:
``` javascript
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
```
Detect the face with the highest confidence score in an image and compute 68 Point Face Landmarks for that face. Returns **[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes) | undefined**:
``` javascript
const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
```
...@@ -342,16 +342,16 @@ const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLand
**After face detection and facial landmark prediction, the face descriptors for each face can be computed as follows:**

Detect all faces in an image and compute 68 Point Face Landmarks and the face descriptor for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes)>**:
``` javascript
const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
```
Detect the face with the highest confidence score in an image and compute 68 Point Face Landmarks and the face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes) | undefined**:
``` javascript
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
```
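The returned descriptor is a Float32Array with 128 values, which can also be compared directly without a FaceMatcher. A minimal sketch (assuming `img1` and `img2` are two input images that each contain exactly one face) using `faceapi.euclideanDistance`:

``` javascript
// a minimal sketch: compare the face descriptors of two single-face images,
// assumes both detections succeed (detectSingleFace may return undefined)
const result1 = await faceapi.detectSingleFace(img1).withFaceLandmarks().withFaceDescriptor()
const result2 = await faceapi.detectSingleFace(img2).withFaceLandmarks().withFaceDescriptor()

// euclidean distance between the two 128-value descriptors
const distance = faceapi.euclideanDistance(result1.descriptor, result2.descriptor)

// 0.6 is a commonly used threshold, tune it for your use case
console.log(distance < 0.6 ? 'likely the same person' : 'likely different people')
```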
### Face Recognition by Matching Descriptors
...@@ -361,30 +361,30 @@ To perform face recognition, one can use faceapi.FaceMatcher to compare referenc
First, we initialize the FaceMatcher with the reference data. For example, we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to the faces of subsequent images:
``` javascript
const results = await faceapi
  .detectAllFaces(referenceImage)
  .withFaceLandmarks()
  .withFaceDescriptors()

if (!results.length) {
  return
}

// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
const faceMatcher = new faceapi.FaceMatcher(results)
```
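Labels can also be assigned explicitly instead of automatically. A hedged sketch (the labels `'alice'` and `'bob'` and their descriptor arrays are made-up placeholders), wrapping reference descriptors in `faceapi.LabeledFaceDescriptors` and passing an explicit distance threshold to the FaceMatcher:

``` javascript
// made-up placeholder labels and descriptor arrays
const labeledDescriptors = [
  new faceapi.LabeledFaceDescriptors('alice', [aliceDescriptor1, aliceDescriptor2]),
  new faceapi.LabeledFaceDescriptors('bob', [bobDescriptor1])
]

// the second argument is the distance threshold (0.6 by default); best
// matches with a larger distance are returned with the label 'unknown'
const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors, 0.6)
```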
Now we can recognize a person's face shown in **queryImage1**:
``` javascript
const singleResult = await faceapi
  .detectSingleFace(queryImage1)
  .withFaceLandmarks()
  .withFaceDescriptor()

if (singleResult) {
  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
  console.log(bestMatch.toString())
}
```
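The best match also exposes its parts separately, in case the label and distance are needed programmatically rather than as the formatted string from `toString()`:

``` javascript
if (singleResult) {
  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
  // label of the closest reference face, or 'unknown' if the distance
  // exceeds the matcher's distance threshold
  console.log(bestMatch.label)
  // euclidean distance to the closest reference descriptor
  console.log(bestMatch.distance)
}
```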
...@@ -392,12 +392,12 @@ if (singleFullFaceDescription) {
Or we can recognize all faces shown in **queryImage2**:
``` javascript
const results = await faceapi
  .detectAllFaces(queryImage2)
  .withFaceLandmarks()
  .withFaceDescriptors()

results.forEach(fd => {
  const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
  console.log(bestMatch.toString())
})
```
...@@ -430,7 +430,7 @@ Drawing the detected faces into a canvas:
const detections = await faceapi.detectAllFaces(input)

// resize the detected boxes in case your displayed image has a different size than the original
const detectionsForSize = faceapi.resizeResults(detections, { width: input.width, height: input.height })

// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
...@@ -446,7 +446,7 @@ const detectionsWithLandmarks = await faceapi
  .withFaceLandmarks()

// resize the detected boxes and landmarks in case your displayed image has a different size than the original
const detectionsWithLandmarksForSize = faceapi.resizeResults(detectionsWithLandmarks, { width: input.width, height: input.height })

// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
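Putting the drawing steps together, a minimal end-to-end sketch (assuming an `<img id="inputImg">` element and a `<canvas id="overlay">` element in the page; `drawDetection` and `drawLandmarks` are used as in the example further below):

``` javascript
const input = document.getElementById('inputImg')
const canvas = document.getElementById('overlay')
canvas.width = input.width
canvas.height = input.height

// detect, then resize the results to the displayed dimensions
const results = await faceapi.detectAllFaces(input).withFaceLandmarks()
const resized = faceapi.resizeResults(results, { width: input.width, height: input.height })

// draw boxes and landmarks into the overlay canvas
faceapi.drawDetection(canvas, resized.map(res => res.detection))
faceapi.drawLandmarks(canvas, resized.map(res => res.landmarks), { drawLines: true })
```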
...@@ -579,23 +579,34 @@ export interface IFaceLandmarks {
}
```
<a name="interface-face-detection-with-landmarks"></a> <a name="with-face-detection"></a>
### IFaceDetectionWithLandmarks ### WithFaceDetection
``` javascript ``` javascript
export interface IFaceDetectionWithLandmarks { export type WithFaceDetection<TSource> TSource & {
detection: FaceDetection detection: FaceDetection
}
```
<a name="with-face-landmarks"></a>
### WithFaceLandmarks
``` javascript
export type WithFaceLandmarks<TSource> TSource & {
unshiftedLandmarks: FaceLandmarks
landmarks: FaceLandmarks landmarks: FaceLandmarks
alignedRect: FaceDetection
} }
``` ```
<a name="interface-full-face-description"></a> <a name="with-face-descriptor"></a>
### IFullFaceDescription ### WithFaceDescriptor
``` javascript ``` javascript
export interface IFullFaceDescription extends IFaceDetectionWithLandmarks { export type WithFaceDescriptor<TSource> TSource & {
descriptor: Float32Array descriptor: Float32Array
} }
``` ```
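These helpers compose, so the result of the full pipeline is of type `WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>` and carries all three results on a single object. A minimal sketch of accessing the individual parts:

``` javascript
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()

if (result) {
  console.log(result.detection.box)       // bounding box of the detected face
  console.log(result.landmarks.positions) // the 68 landmark points
  console.log(result.descriptor)          // Float32Array with 128 values
}
```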
......
...@@ -7,7 +7,7 @@ function resizeCanvasAndResults(dimensions, canvas, results) {
  // resize detections (and landmarks) in case displayed image is smaller than
  // original size
  return faceapi.resizeResults(results, { width, height })
}
function drawDetections(dimensions, canvas, detections) {
......
...@@ -11,7 +11,7 @@ async function run() {
  const out = faceapi.createCanvasFromMedia(img) as any
  faceapi.drawDetection(out, results.map(res => res.detection))
  faceapi.drawLandmarks(out, results.map(res => res.landmarks), { drawLines: true, color: 'red' })
  saveFile('faceLandmarkDetection.jpg', out.toBuffer('image/jpeg'))
}
......
...@@ -17,4 +17,5 @@ export * from './ssdMobilenetv1/index';
export * from './tinyFaceDetector/index';
export * from './tinyYolov2/index';
export * from './euclideanDistance';
export * from './resizeResults';
\ No newline at end of file
...@@ -5,25 +5,29 @@ import { FaceLandmarks } from './classes/FaceLandmarks';
import { extendWithFaceDetection } from './factories/WithFaceDetection';
import { extendWithFaceLandmarks } from './factories/WithFaceLandmarks';
export function resizeResults<T>(results: T, { width, height }: IDimensions): T {
  // an array of results is resized element-wise
  if (Array.isArray(results)) {
    return results.map(obj => resizeResults(obj, { width, height })) as any as T
  }

  const hasLandmarks = results['unshiftedLandmarks'] && results['unshiftedLandmarks'] instanceof FaceLandmarks
  const hasDetection = results['detection'] && results['detection'] instanceof FaceDetection

  if (hasLandmarks) {
    // resize the detection box first, then scale the unshifted landmarks to the resized box
    const resizedDetection = results['detection'].forSize(width, height)
    const resizedLandmarks = results['unshiftedLandmarks'].forSize(resizedDetection.box.width, resizedDetection.box.height)
    return extendWithFaceLandmarks(extendWithFaceDetection(results as any, resizedDetection), resizedLandmarks)
  }

  if (hasDetection) {
    return extendWithFaceDetection(results as any, results['detection'].forSize(width, height))
  }

  // bare FaceLandmarks or FaceDetection instances can be resized directly
  if (results instanceof FaceLandmarks || results instanceof FaceDetection) {
    return (results as any).forSize(width, height)
  }

  return results
}
\ No newline at end of file
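With this change, `resizeResults` accepts an array of results, a single result object, or a bare `FaceDetection`/`FaceLandmarks` instance. A minimal usage sketch (the target dimensions are made-up values):

``` javascript
// works on an array of results...
const detections = await faceapi.detectAllFaces(input)
const resizedAll = faceapi.resizeResults(detections, { width: 640, height: 480 })

// ...and equally on a single result
const single = await faceapi.detectSingleFace(input).withFaceLandmarks()
if (single) {
  const resizedSingle = faceapi.resizeResults(single, { width: 640, height: 480 })
}
```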