Commit 7e766eb0 by vincent

use resizeResults in examples + adjust readme

parent 17b72bd4
@@ -319,13 +319,13 @@ You can tune the options of each face detector as shown [here](#usage-face-detec

**After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**

-Detect all faces in an image + compute 68 Point Face Landmarks for each detected face. Returns **Array<[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks)>**:
+Detect all faces in an image + compute 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes)>**:

``` javascript
const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
```

-Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks for that face. Returns **[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks) | undefined**:
+Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks for that face. Returns **[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes) | undefined**:

``` javascript
const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
```
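The landmarks returned by either call live on the result's `landmarks` property. A minimal sketch of consuming a single result (assuming `input` is defined and a face was found):

``` javascript
const result = await faceapi.detectSingleFace(input).withFaceLandmarks()
if (result) {
  // the 68 landmark points as an array of point objects with x/y coordinates
  const positions = result.landmarks.positions
  console.log(positions.length) // 68
}
```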
@@ -342,16 +342,16 @@ const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLand

**After face detection and facial landmark prediction, the face descriptors for each face can be computed as follows:**

-Detect all faces in an image + compute 68 Point Face Landmarks and face descriptors for each detected face. Returns **Array<[FullFaceDescription](#interface-full-face-description)>**:
+Detect all faces in an image + compute 68 Point Face Landmarks and face descriptors for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes)>**:

``` javascript
-const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
+const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
```

-Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks and the face descriptor for that face. Returns **[FullFaceDescription](#interface-full-face-description) | undefined**:
+Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks and the face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes) | undefined**:

``` javascript
-const fullFaceDescription = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
+const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
```
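Descriptors are plain `Float32Array`s and can be compared directly, for example with `faceapi.euclideanDistance` (exported by the library, see the exports diff further below); a distance below roughly 0.6 is commonly treated as a match. A sketch, assuming two single-face results `result1` and `result2` were found:

``` javascript
// compare two 128-dimensional face descriptors by euclidean distance
const distance = faceapi.euclideanDistance(result1.descriptor, result2.descriptor)
if (distance < 0.6) {
  console.log('likely the same person, distance:', distance)
}
```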
### Face Recognition by Matching Descriptors
@@ -361,30 +361,30 @@ To perform face recognition, one can use faceapi.FaceMatcher to compare referenc

First, we initialize the FaceMatcher with the reference data. For example, we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to faces of subsequent images:

``` javascript
-const fullFaceDescriptions = await faceapi
+const results = await faceapi
  .detectAllFaces(referenceImage)
  .withFaceLandmarks()
  .withFaceDescriptors()

-if (!fullFaceDescriptions.length) {
+if (!results.length) {
  return
}

// create FaceMatcher with automatically assigned labels
// from the detection results for the reference image
-const faceMatcher = new faceapi.FaceMatcher(fullFaceDescriptions)
+const faceMatcher = new faceapi.FaceMatcher(results)
```
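The matcher can also be initialized from explicitly labeled reference descriptors. A sketch, assuming the `faceapi.LabeledFaceDescriptors` helper and descriptors `aliceDescriptor`/`bobDescriptor` obtained as above (names are illustrative):

``` javascript
const labeledDescriptors = [
  new faceapi.LabeledFaceDescriptors('alice', [aliceDescriptor]),
  new faceapi.LabeledFaceDescriptors('bob', [bobDescriptor])
]
// the optional second argument is the maximum descriptor distance for a match
const faceMatcherWithLabels = new faceapi.FaceMatcher(labeledDescriptors, 0.6)
```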
Now we can recognize a person's face shown in **queryImage1**:

``` javascript
-const singleFullFaceDescription = await faceapi
+const singleResult = await faceapi
  .detectSingleFace(queryImage1)
  .withFaceLandmarks()
  .withFaceDescriptor()

-if (singleFullFaceDescription) {
-  const bestMatch = faceMatcher.findBestMatch(singleFullFaceDescription.descriptor)
+if (singleResult) {
+  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
  console.log(bestMatch.toString())
}
```
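Besides `toString()`, the returned best match exposes its parts, and the matcher falls back to the label `'unknown'` when no reference descriptor is close enough. A sketch of branching on them (assuming `singleResult` from above):

``` javascript
if (singleResult) {
  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
  // label is the matched reference label, distance the descriptor distance
  if (bestMatch.label !== 'unknown') {
    console.log(`matched ${bestMatch.label} at distance ${bestMatch.distance}`)
  }
}
```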
@@ -392,12 +392,12 @@ if (singleFullFaceDescription) {

Or we can recognize all faces shown in **queryImage2**:

``` javascript
-const fullFaceDescriptions = await faceapi
+const results = await faceapi
  .detectAllFaces(queryImage2)
  .withFaceLandmarks()
  .withFaceDescriptors()

-fullFaceDescriptions.forEach(fd => {
+results.forEach(fd => {
  const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
  console.log(bestMatch.toString())
})
```
@@ -430,7 +430,7 @@ Drawing the detected faces into a canvas:
const detections = await faceapi.detectAllFaces(input)

// resize the detected boxes in case your displayed image has a different size than the original
-const detectionsForSize = detections.map(det => det.forSize(input.width, input.height))
+const detectionsForSize = faceapi.resizeResults(detections, { width: input.width, height: input.height })

// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
@@ -446,7 +446,7 @@ const detectionsWithLandmarks = await faceapi
  .withFaceLandmarks()

// resize the detected boxes and landmarks in case your displayed image has a different size than the original
-const detectionsWithLandmarksForSize = detectionsWithLandmarks.map(det => det.forSize(input.width, input.height))
+const detectionsWithLandmarksForSize = faceapi.resizeResults(detectionsWithLandmarks, { width: input.width, height: input.height })

// draw them into a canvas
const canvas = document.getElementById('overlay')
canvas.width = input.width
@@ -579,23 +579,34 @@ export interface IFaceLandmarks {
}
```

-<a name="interface-face-detection-with-landmarks"></a>
+<a name="with-face-detection"></a>

-### IFaceDetectionWithLandmarks
+### WithFaceDetection

``` javascript
-export interface IFaceDetectionWithLandmarks {
+export type WithFaceDetection<TSource> = TSource & {
  detection: FaceDetection
}
```

+<a name="with-face-landmarks"></a>
+
+### WithFaceLandmarks
+
+``` javascript
+export type WithFaceLandmarks<TSource> = TSource & {
+  unshiftedLandmarks: FaceLandmarks
+  landmarks: FaceLandmarks
+  alignedRect: FaceDetection
+}
+```

-<a name="interface-full-face-description"></a>
+<a name="with-face-descriptor"></a>

-### IFullFaceDescription
+### WithFaceDescriptor

``` javascript
-export interface IFullFaceDescription extends IFaceDetectionWithLandmarks {
+export type WithFaceDescriptor<TSource> = TSource & {
  descriptor: Float32Array
}
```
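Put together, a result of `detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()` carries all three extensions. A minimal sketch of reading them (assuming a face was found):

``` javascript
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
if (result) {
  console.log(result.detection.box)       // bounding box from WithFaceDetection
  console.log(result.landmarks.positions) // 68 shifted points from WithFaceLandmarks
  console.log(result.alignedRect.box)     // face box after landmark alignment
  console.log(result.descriptor.length)   // 128-dimensional descriptor
}
```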
@@ -7,7 +7,7 @@ function resizeCanvasAndResults(dimensions, canvas, results) {
  // resize detections (and landmarks) in case displayed image is smaller than
  // original size
-  return results.map(res => res.forSize(width, height))
+  return faceapi.resizeResults(results, { width, height })
}
function drawDetections(dimensions, canvas, detections) {
@@ -11,7 +11,7 @@ async function run() {
  const out = faceapi.createCanvasFromMedia(img) as any
  faceapi.drawDetection(out, results.map(res => res.detection))
-  faceapi.drawLandmarks(out, results.map(res => res.faceLandmarks), { drawLines: true, color: 'red' })
+  faceapi.drawLandmarks(out, results.map(res => res.landmarks), { drawLines: true, color: 'red' })
  saveFile('faceLandmarkDetection.jpg', out.toBuffer('image/jpeg'))
}
@@ -18,3 +18,4 @@ export * from './tinyFaceDetector/index';
export * from './tinyYolov2/index';
export * from './euclideanDistance';
+export * from './resizeResults';
\ No newline at end of file
@@ -5,25 +5,29 @@ import { FaceLandmarks } from './classes/FaceLandmarks';
import { extendWithFaceDetection } from './factories/WithFaceDetection';
import { extendWithFaceLandmarks } from './factories/WithFaceLandmarks';

-export function resizeResults<T>(obj: T, { width, height }: IDimensions): T {
+export function resizeResults<T>(results: T, { width, height }: IDimensions): T {
-  const hasLandmarks = obj['unshiftedLandmarks'] && obj['unshiftedLandmarks'] instanceof FaceLandmarks
-  const hasDetection = obj['detection'] && obj['detection'] instanceof FaceDetection
+  if (Array.isArray(results)) {
+    return results.map(obj => resizeResults(obj, { width, height })) as any as T
+  }
+
+  const hasLandmarks = results['unshiftedLandmarks'] && results['unshiftedLandmarks'] instanceof FaceLandmarks
+  const hasDetection = results['detection'] && results['detection'] instanceof FaceDetection

  if (hasLandmarks) {
-    const resizedDetection = obj['detection'].forSize(width, height)
-    const resizedLandmarks = obj['unshiftedLandmarks'].forSize(resizedDetection.box.width, resizedDetection.box.height)
+    const resizedDetection = results['detection'].forSize(width, height)
+    const resizedLandmarks = results['unshiftedLandmarks'].forSize(resizedDetection.box.width, resizedDetection.box.height)
-    return extendWithFaceLandmarks(extendWithFaceDetection(obj as any, resizedDetection), resizedLandmarks)
+    return extendWithFaceLandmarks(extendWithFaceDetection(results as any, resizedDetection), resizedLandmarks)
  }

  if (hasDetection) {
-    return extendWithFaceDetection(obj as any, obj['detection'].forSize(width, height))
+    return extendWithFaceDetection(results as any, results['detection'].forSize(width, height))
  }

-  if (obj instanceof FaceLandmarks || obj instanceof FaceDetection) {
-    return (obj as any).forSize(width, height)
+  if (results instanceof FaceLandmarks || results instanceof FaceDetection) {
+    return (results as any).forSize(width, height)
  }

-  return obj
+  return results
}
\ No newline at end of file
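Since the new signature is generic over the input type (`T` in, `T` out) and now unwraps arrays, a single `faceapi.resizeResults` call covers both shapes. A usage sketch with a hypothetical display size:

``` javascript
const results = await faceapi.detectAllFaces(input).withFaceLandmarks()
// rescale boxes and landmarks from the input dimensions to the display dimensions
const resized = faceapi.resizeResults(results, { width: 600, height: 450 })
```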