VeriLook SDK

Face identification for stand-alone or Web applications

VeriLook facial identification technology is designed for biometric systems developers and integrators. The technology assures system performance and reliability with live face detection, simultaneous multiple face recognition and fast face matching in 1-to-1 and 1-to-many modes.

VeriLook is available as a software development kit that supports development of stand-alone and Web-based solutions on Microsoft Windows, Linux, Mac OS X, iOS and Android platforms.

Basic Recommendations for Facial Recognition

The face recognition accuracy of VeriLook depends heavily on the quality of the face image. Image quality during enrollment is especially important, as it determines the quality of the face template.

General recommendations

  • 32 pixels is the recommended minimal distance between the eyes for a face in an image or video stream to perform face template extraction reliably. 64 pixels or more is recommended for better face recognition results. Note that this distance should be native, not achieved by resizing the image.
  • Enrolling several images is recommended: it improves facial template quality, which in turn improves recognition accuracy and reliability.
  • Additional enrollments may be needed when the facial hair style changes, especially when a beard or mustache is grown or shaved off.
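The inter-eye distance rule above can be expressed as a simple pre-check before attempting template extraction. The following sketch is illustrative only and does not use the VeriLook API; the function names and thresholds are taken from the recommendations above.

```python
import math

# Recommended minimal native inter-eye distances from the guidelines above.
MIN_IOD = 32          # pixels; minimum for reliable template extraction
RECOMMENDED_IOD = 64  # pixels; recommended for better recognition results

def eye_distance(left_eye, right_eye):
    """Euclidean distance in pixels between two (x, y) eye centers."""
    return math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])

def assess_iod(left_eye, right_eye):
    """Classify a detected face by its native inter-eye distance (IOD).

    The distance must be measured at the original image resolution;
    upscaling the image does not add detail and does not count.
    """
    iod = eye_distance(left_eye, right_eye)
    if iod < MIN_IOD:
        return "reject"       # too small for reliable extraction
    if iod < RECOMMENDED_IOD:
        return "acceptable"   # extraction possible, accuracy may suffer
    return "good"
```

In practice the eye coordinates would come from the face detector's landmark output; here they are plain `(x, y)` tuples.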

Face Posture

The face recognition engine has a certain tolerance to face posture:

  • head roll (tilt) – ±180 degrees (configurable);
    • The default value of ±15 degrees is the fastest setting and is usually sufficient for most near-frontal face images.
  • head pitch (nod) – ±15 degrees from the frontal position.
    • The head pitch tolerance can be increased up to ±25 degrees if several views of the same face, covering different pitch angles, are used during enrollment.
  • head yaw (bobble) – ±45 degrees from the frontal position (configurable).
    • The default value of ±15 degrees is the fastest setting and is usually sufficient for most near-frontal face images.
    • A yaw difference of up to 30 degrees between an enrolled face template and a face image from the camera is acceptable.
    • Several views of the same face can be enrolled to the database to cover the whole ±45 degrees yaw range from the frontal position.
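The posture tolerances above amount to a range check on the three head-pose angles. This is a minimal sketch, not VeriLook code; the defaults mirror the fastest settings listed above, and the wider configurable limits can be passed in explicitly.

```python
def pose_within_tolerance(roll, pitch, yaw,
                          roll_tol=15.0, pitch_tol=15.0, yaw_tol=15.0):
    """Return True if the head pose angles (in degrees from the frontal
    position) fall within the given tolerances.

    Defaults correspond to the fastest settings described above; e.g.
    pass yaw_tol=45.0 to use the full configurable yaw range.
    """
    return (abs(roll) <= roll_tol
            and abs(pitch) <= pitch_tol
            and abs(yaw) <= yaw_tol)
```

A frame failing this check would typically be skipped rather than fed to template extraction.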

Live Face Detection

A stream of consecutive images (usually a video stream from a camera) is required for the face liveness check:

  • When the liveness check is enabled, it is performed by the face engine before feature extraction. If the face in the stream fails to qualify as "live", the features are not extracted.
  • Only one face should be visible in these frames.
  • The following liveness check modes can be enabled:
    • Active – the engine asks the user to perform certain actions, such as blinking or moving their head.
      • A frame rate of 5 frames per second or higher is required.
      • This mode works with both color and grayscale images.
      • The user must perform all requested actions to pass the liveness check.
    • Passive – the engine analyzes certain facial features while the user stays still in front of the camera for a short period of time.
      • Color images are required for this mode.
      • A frame rate of 10 frames per second or higher is required.
      • A better liveness score is achieved when the user does not move at all.
    • Passive then active – the engine tries the passive liveness check first and, if it fails, falls back to the active check. This mode requires color images.
    • Simple – the engine requires the user to turn their head from side to side while looking at the camera.
      • A frame rate of 5 frames per second or higher is recommended.
      • This mode works with both color and grayscale images.
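The per-mode requirements above can be summarized in a small lookup table and checked against a candidate video stream. The mode names and the table below are illustrative labels, not actual SDK identifiers; the frame rate assumed for the combined mode is that of its passive stage, since the passive check runs first.

```python
# Minimum frame rate (fps) and grayscale support per liveness mode,
# as described above. These names are illustrative, not SDK constants.
LIVENESS_MODES = {
    "active":              {"min_fps": 5,  "grayscale_ok": True},
    "passive":             {"min_fps": 10, "grayscale_ok": False},
    "passive_then_active": {"min_fps": 10, "grayscale_ok": False},
    "simple":              {"min_fps": 5,  "grayscale_ok": True},
}

def stream_supports_mode(mode, fps, is_color):
    """Check whether a video stream meets a liveness mode's requirements."""
    req = LIVENESS_MODES[mode]
    return fps >= req["min_fps"] and (is_color or req["grayscale_ok"])
```

For example, a 15 fps color stream satisfies every mode, while a 15 fps grayscale stream rules out the passive and passive-then-active checks.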