
Technical information and specifications

The Neurotechnology MegaMatcher ID system provides advanced capabilities for biometric recognition applications, including a modality-specific high-level API for all operations.

The MegaMatcher ID system architecture requires that operations performed with the SDK are accounted on the integrator's or end-user's server:

  • Integrators should ensure that an encrypted connection is used for communication with the server.
  • No biometric information is sent to the server during any SDK API operation that requires communication with the server. Biometric data is kept on the client side; only transaction accounting information is sent to and received from the server.

Face Modality

The face verification operation can be performed both on the client side, through the SDK component, and on the server side, through the Web Service component.

SDK component specifications

The following operations are available via the high-level API of the SDK component:

  • Face template creation – a face is captured from the camera and the face template is extracted for further usage in the face verification operation.
    • The server returns proprietary encrypted data as a result of an enrollment transaction that has been completed successfully.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
    • Age estimation, glasses and hat detection can be optionally enabled for certain usage scenarios. In such cases face verification may generate a warning, which can be used to exclude from the onboarding process pictures that do not conform to the face quality requirements set in the application.
    • The facial template can be converted to a QR code image, which can be used for further verification.
    • A token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • The template may be saved to any storage (database, file, etc.) together with custom metainformation (like the person's name). Note that the storage functionality is not part of the MegaMatcher ID system, although the programming samples include an example of such an implementation.
  • Face verification – a face captured from the camera, or alternatively a face template captured with VeriLook SDK, is verified against the face template which was created during the face template creation operation or obtained from a QR code.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
  • Template import – a face template created with the VeriLook algorithm, or a face image, can be imported into an application based on the Neurotechnology MegaMatcher ID system. Later this template can be used for the face verification operation in the same way as the native templates from the face template creation operation.
  • Liveness check – this operation performs only a liveness check of the provided face and returns only the result of the check. See the recommendations for the liveness check below on this page.
    • If the liveness check succeeds, a token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • ICAO compliance check can be optionally used to strengthen the liveness check.

Web Service Component specifications

The following operations are available via the high-level API of the component:

  • Face template creation – a face is captured through a web stream and the face template is extracted for further usage in the face verification operation.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
    • Age estimation, glasses and hat detection can be optionally enabled for certain usage scenarios. In such cases face verification may generate a warning, which can be used to exclude from the onboarding process pictures that do not conform to the face quality requirements set in the application.
    • The facial template can be converted to a QR code image, which can be used for further verification.
    • A token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated.
    • The template is saved to the server together with custom metainformation (like the person's name).
  • Face verification – a face captured through a web stream, or alternatively a face template captured with VeriLook SDK, is verified against the face template which was created during the face template creation operation or obtained from a QR code.
    • Face liveness can be optionally checked during this operation. ICAO compliance check can be optionally used to strengthen the liveness check.
  • Template import – a face template created with the VeriLook algorithm, or a face image, can be imported into an application based on the Neurotechnology MegaMatcher ID system. Later this template can be used for the face verification operation in the same way as the native templates from the face template creation operation.
  • Liveness check – this operation performs only a liveness check of the provided face and returns only the result of the check. See the recommendations for the liveness check below on this page.
    • If the liveness check succeeds, a token image of the enrolled face in accordance with ISO 19794-5 criteria can be optionally generated and stored on the server.
    • ICAO compliance check can be optionally used to strengthen the liveness check.

Basic Recommendations for facial image and posture

Face recognition accuracy depends heavily on the quality of the face image. Image quality during enrollment is particularly important, as it influences the quality of the face template.

  • 32 pixels is the recommended minimal distance between eyes (IOD) for a face in a video stream to perform face template extraction reliably; 64 pixels or more is recommended for better face recognition results. Note that this distance should be native, not achieved by resizing the video frames.
  • Several face enrollments are recommended for better facial template quality which results in improvement of recognition quality and reliability.
  • Additional enrollments may be needed when facial hair style changes, especially when beard or mustache is grown or shaved off.
  • The face recognition engine is intended for usage with near-frontal face images and has certain tolerance to face posture:
    • head roll (tilt) – ±15 degrees;
    • head pitch (nod) – ±15 degrees from frontal position;
    • head yaw (turn) – ±15 degrees from frontal position.
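The IOD and posture limits above can be checked before attempting template extraction. A minimal sketch, assuming eye coordinates and head angles come from whatever face detector the application uses (the function and parameter names are illustrative, not SDK API):

```python
import math

def frame_ok(left_eye: tuple[float, float],
             right_eye: tuple[float, float],
             roll: float, pitch: float, yaw: float,
             min_iod: float = 32.0,
             max_angle: float = 15.0) -> bool:
    """Accept a frame only if native IOD and head posture are within limits."""
    # Interocular distance in pixels, measured on the unscaled frame.
    iod = math.dist(left_eye, right_eye)
    # Roll, pitch and yaw must all stay within +/- 15 degrees of frontal.
    posture_ok = all(abs(a) <= max_angle for a in (roll, pitch, yaw))
    return iod >= min_iod and posture_ok
```

Raising `min_iod` to 64 applies the stricter recommendation for better recognition results.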

Face Liveness Detection

[iBeta badge and conformance letter: certified algorithm for face liveness check]

The face liveness check algorithm was tested by iBeta and proven to be compliant with ISO 30107-3 Biometric Presentation Attack Detection Standards.

A live video stream from a camera is required for face liveness check:

  • When the liveness check is enabled, it is performed by the face engine before feature extraction. If the face in the stream fails to qualify as "live", the features are not extracted.
  • Only one face should be visible in these frames.
  • At least 1280 x 720 pixels video stream resolution is required for performing the face liveness check in compliance with the ISO 30107-3 Biometric Presentation Attack Detection Standards. Lower resolution video streams can be used if such compliance is not required.
  • 80 pixels is the recommended minimal distance between eyes (IOD) for a face to perform the liveness check reliably; 100 pixels or more is recommended for smoother performance.
  • During the passive liveness check the face should be still, and the user should look directly at the camera (within ±15 degrees tolerance for roll, pitch and yaw) for the best performance.
  • Optionally, ICAO compliance check can be used to strengthen the liveness check.
  • Users can enable these liveness check modes:
    • Active – the engine requests the user to perform certain actions like blinking or moving one's head.
      • 5 frames per second or better frame rate required.
      • This mode can work with both colored and grayscale images.
      • This mode requires the user to perform all requested actions to pass the liveness check.
    • Passive – the engine analyzes certain facial features while the user stays still in front of the camera for a short period of time.
      • Colored images are required for this mode.
      • 10 frames per second or better frame rate required.
      • A better score is achieved when the user does not move at all.
    • Passive + Blink – the engine analyzes certain facial features while the user stays still in front of the camera for a short period of time, when the engine requests the user to blink.
      • Colored images are required for this mode.
      • 10 frames per second or higher frame rate required.
    • Passive then active – the engine first tries the passive liveness check, and if it fails, tries the active check. This mode requires colored images.
    • Simple – the engine requires the user to turn their head from side to side while looking at the camera.
      • 5 frames per second or better frame rate recommended.
      • This mode can work with both colored and grayscale images.
    • Custom – the engine requires the user to turn their head in four directions (up, down, left, right) in a random order.
      • 5 frames per second or better frame rate required.
      • This mode can work with both colored and grayscale images.
      • This mode requires the user to perform all requested actions to pass the liveness check.
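The per-mode frame-rate and color requirements listed above can be encoded as a precheck in the capture pipeline, so an unsuitable video stream is rejected before a liveness attempt. A sketch under these assumptions (the mode identifiers are made up for this example; the SDK exposes its own enumeration):

```python
# Per-mode minimum frame rate (fps) and whether color frames are mandatory,
# as listed in the liveness mode descriptions above.
MODE_REQUIREMENTS = {
    "active":         (5,  False),
    "passive":        (10, True),
    "passive_blink":  (10, True),
    "passive_active": (10, True),   # the passive stage needs color at 10 fps
    "simple":         (5,  False),
    "custom":         (5,  False),
}

def stream_supports_mode(mode: str, fps: float, is_color: bool,
                         width: int, height: int,
                         iso_compliant: bool = False) -> bool:
    """Check a video stream against the documented liveness requirements."""
    min_fps, needs_color = MODE_REQUIREMENTS[mode]
    if fps < min_fps or (needs_color and not is_color):
        return False
    # ISO 30107-3 compliant operation requires at least 1280 x 720 frames.
    if iso_compliant and (width < 1280 or height < 720):
        return False
    return True
```

Such a gate lets the application fall back to a less demanding mode (e.g. active with a grayscale camera) instead of failing mid-check.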

Slap Modality

The following operations are available via the high-level API of the SDK component:

  • Slap template creation – an upper palm (slap) is captured from the camera and the fingerprint template is extracted for further usage in the slap verification operation.
    • The server returns proprietary encrypted data as a result of an enrollment transaction that has been completed successfully.
    • The quality of the captured slap fingerprints' image can be optionally checked during this operation.
    • Single fingerprints can be optionally obtained from the captured slap fingerprints' image as original and binarized images using our proprietary segmentation algorithm. Also, the positions of these fingerprints are generated (i.e. left/right index, middle, etc.).
    • The template may be saved to any storage (database, file, etc.) together with custom metainformation (like the person's name). Note that the storage functionality is not part of MegaMatcher ID, although the programming samples include an example of such an implementation.
  • Slap verification – an upper palm (slap) is captured from the camera and is verified against the slap template which was created during the slap template creation operation.
  • Template import – an upper palm image can be imported into an application based on MegaMatcher ID. Later this template can be used for the slap verification operation in the same way as the native templates from the slap template creation operation.
  • Quality check – the quality of the captured slap fingerprints' image is evaluated. The obtained quality value can be used for rejecting the provided image and asking the user to repeat the capture.
  • Fingerprint segmentation – single fingerprints are obtained from the captured slap fingerprints' image using our proprietary segmentation algorithm. The algorithm provides original and binarized fingerprint images. Also, the positions of these fingerprints are generated (i.e. left/right index, middle, etc.).
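The quality check operation returns a value the application can use to accept a capture or ask the user to retry. A minimal sketch of that decision logic (the score scale, threshold, and retry limit here are illustrative; the SDK defines the actual quality range):

```python
def accept_or_retry(quality: int, threshold: int = 40,
                    attempt: int = 1, max_attempts: int = 3) -> str:
    """Map a slap image quality score to an application-level decision."""
    if quality >= threshold:
        return "accept"    # proceed to template extraction / verification
    if attempt < max_attempts:
        return "retry"     # ask the user to recapture the slap
    return "reject"        # give up after repeated low-quality captures
```

Bounding the number of retries keeps onboarding from looping forever on a sensor or lighting problem.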

Voice Modality

The voice verification operation can be performed both on the client side, through the SDK component, and on the server side, through the Web Service component.

SDK component specifications

The following operations are available via the high-level API of the SDK:

  • Voiceprint template creation – a voice sample is captured from the microphone and the voiceprint template is extracted for further usage in the voiceprint verification operation.
    • The server returns proprietary encrypted data as a result of an enrollment transaction that has been completed successfully.
    • The template may be saved to any storage (database, file, etc.) together with custom metainformation (like the person's name). Note that the storage functionality is not part of MegaMatcher ID, although the programming samples include an example of such an implementation.
  • Voice verification – a voice sample is captured from the microphone and is verified against the voiceprint template which was created during the voiceprint template creation operation.
  • Template import – a voice sample can be imported into an application based on Neurotechnology MegaMatcher ID. Later this template can be used for the voiceprint verification operation in the same way as the native templates from the voiceprint template creation operation.

Web Service Component specifications

The following operations are available via the high-level API of the component:

  • Voiceprint template creation – a voice sample is captured through a web stream and the voiceprint template is extracted for further usage in the voiceprint verification operation.
    • The template is saved to the server together with custom metainformation (like the person's name).
  • Voice verification – a voice sample is captured through a web stream and is verified against the voiceprint template which was created during the voiceprint template creation operation.
  • Template import – a voice sample can be imported into an application based on Neurotechnology MegaMatcher ID. Later this template can be used for the voiceprint verification operation in the same way as the native templates from the voiceprint template creation operation.

Basic Recommendations for voice capture

The speaker recognition accuracy depends on the audio quality during enrollment and identification.

  • Voice samples of at least 2 seconds in length are recommended to ensure speaker recognition quality.
  • If the speaker recognition system is used in a scenario with unique phrases for each user, the passphrase should be kept secret and not spoken in an environment where others may hear it.
  • Text-independent speaker recognition may be vulnerable to attacks using a covertly recorded phrase from a person. Passphrase verification or two-factor authentication (i.e. a requirement to type a password) will increase the overall system security.
  • Microphones – there are no particular constraints on models or manufacturers when using regular PC microphones, headsets or the built-in microphones in laptops, smartphones and tablets. However, these factors should be noted:
    • The same microphone model is recommended (if possible) for use during both enrollment and recognition, as different models may produce different sound quality. Some models may also introduce specific noise or distortion into the audio, or may include certain hardware sound processing, which will not be present when using a different model. This is also the recommended procedure when using smartphones or tablets, as different device models may alter the recording of the voice in different ways.
    • The same microphone position and distance is recommended during enrollment and recognition. Headsets provide optimal distance between user and microphone; this distance is recommended when non-headset microphones are used.
    • Web cam built-in microphones should be used with care, as they are usually positioned at a rather long distance from the user and may provide lower sound quality. The sound quality may be affected if users subsequently change their position relative to the web cam.
  • Sound settings:
    • Settings for clear sound must be ensured; some audio software, hardware or drivers may have sound modification enabled by default. For example, Microsoft Windows usually has sound boost enabled by default.
    • A minimum 8000 Hz sampling rate, with at least 16-bit depth, should be used during voice recording.
  • Environment constraints – the speaker recognition engine is sensitive to noise or loud voices in the background; they may interfere with the user's voice and affect the recognition results. Usually, a quiet environment for enrollment and recognition is enough.
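The duration, sampling-rate, and bit-depth recommendations above can be validated before a recording is submitted for enrollment or verification. A sketch using Python's standard wave module (the function name and thresholds mirror the recommendations; this is not part of the SDK):

```python
import io
import wave

def voice_sample_ok(wav_bytes: bytes,
                    min_rate: int = 8000,
                    min_bits: int = 16,
                    min_seconds: float = 2.0) -> bool:
    """Validate a WAV recording against the voice capture recommendations."""
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        rate = wf.getframerate()          # sampling rate in Hz
        bits = wf.getsampwidth() * 8      # bit depth per sample
        seconds = wf.getnframes() / rate  # recording duration
    return rate >= min_rate and bits >= min_bits and seconds >= min_seconds
```

Rejecting too-short or under-sampled recordings up front lets the application prompt the user to speak again instead of enrolling a weak voiceprint.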