AS ISO/IEC 19795.2:2010 — Information technology — Biometric performance testing and reporting — Part 2: Testing methodologies for technology and scenario evaluation
4.3.7
guidance
direction provided by an administrator to a Test Subject in the course of enrolment or recognition
NOTE Guidance is separate from feedback provided by a biometric system or device in the course of enrolment or recognition, such as audible or visual presentation cues.
4.3.8
habituation
degree of familiarity a Test Subject has with a device
NOTE A Test Subject having substantial familiarity with a biometric device, such as that gained in the course of employment, is referred to as a habituated Test Subject.
4.3.9
comparison attempt
submission of one or more biometric samples for a Test Subject for the purpose of comparison in a biometric system
4.3.10
comparison attempt limit
maximum number of attempts, or the maximum duration, a Test Subject is permitted before a comparison transaction is terminated
4.3.11
comparison presentation
submission of an instance of a single biometric characteristic for a Test Subject for the purpose of comparison
NOTE One or more comparison presentations may be permitted or required to constitute a comparison attempt. A comparison presentation may or may not result in a comparison attempt.
4.3.12
comparison presentation limit
maximum number of presentations, or the maximum duration, a Test Subject is permitted before a comparison attempt is terminated
4.4 Performance measures
4.4.1
failure at source rate
proportion of samples discarded from the corpus either manually or by use of an automated biometric system prior to use in a technology evaluation
NOTE A proportion of images collected in a face data collection effort may be discarded due to lack of a face in the captured image.
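The failure at source rate defined above is a simple proportion over the collected corpus. The following is an illustrative sketch (not part of the standard); the usability predicate is a hypothetical stand-in for manual review or an automated quality-control module:

```python
# Sketch of the failure at source rate: the proportion of collected
# samples discarded, manually or automatically, before a technology
# evaluation. The is_usable predicate is a hypothetical placeholder.

def failure_at_source_rate(samples, is_usable):
    """Proportion of samples discarded from the corpus prior to use."""
    discarded = sum(1 for s in samples if not is_usable(s))
    return discarded / len(samples)

# Example: suppose 3 of 50 collected face images contain no detectable face.
samples = [{"has_face": i >= 3} for i in range(50)]
rate = failure_at_source_rate(samples, lambda s: s["has_face"])
print(rate)  # 3/50 = 0.06
```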
5 Overview of technology evaluations and scenario evaluations
This standard addresses two types of evaluation methodologies: technology evaluations and scenario evaluations. A test report shall state whether it presents results from a technology evaluation, a scenario evaluation, or an evaluation that combines aspects of both technology and scenario evaluations.
Technology evaluation is the offline evaluation of one or more algorithms for the same biometric modality using a pre-existing or specially collected corpus of samples. The utility of technology testing stems from its separation of the human-sensor acquisition interaction and the recognition process, whose benefits include the following:
— Ability to conduct full cross-comparison tests. Technology evaluation affords the possibility to use the entire testing population as claimants to the identities of all other members (i.e. impostors), and this allows estimates of false match rates to be made on the order of 1 in N², rather than 1 in N.
— Ability to conduct exploratory testing. Technology evaluation can be run with no real-time output demands, and is thus well suited to research and development. For example, the effects of algorithmic improvements, changes in run-time parameters such as effort levels and configurations, or different image databases, can be measured in, essentially, a closed-loop improvement cycle.
— Ability to conduct multi-instance and multi-algorithmic testing. By using common test procedures, interfaces, and metrics, technology evaluation affords the possibility to conduct repeatable evaluations of multi-instance systems (e.g. three views of a face) and multi-algorithmic (e.g. supplier A and supplier B) performance, or any combination thereof.
— Provided the corpus contains appropriate sample data, technology testing is potentially capable of testing all modules subsequent to the human-sensor interface, including quality control and feedback module(s), signal processing module(s), image fusion module(s) (for multi-modal or multi-instance biometrics), feature extraction and normalization module(s), feature-level fusion module(s), comparison score computation and fusion module(s), and score normalization module(s).
— The nondeterministic aspects of the human-sensor interaction preclude true repeatability, and this complicates comparative product testing. Elimination of this interaction as a factor in performance measurement allows for repeatable testing. This offline process can be repeated ad infinitum with little marginal cost.
— If sample data is available, performance can be measured over very large target populations, utilizing samples acquired over a period of years.
NOTE 1 Collecting a database of samples for offline enrolment and calculation of comparison scores allows greater control over which samples and attempts are to be used in any transaction.
NOTE 2 Technology evaluation will always involve data storage for later, offline processing. However, with scenario evaluations, online transactions might be simpler for the tester — the system is operating in its usual manner and storage of samples, although recommended, is not absolutely necessary.
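The full cross-comparison described in the first benefit above can be sketched in code. This is an illustrative example, not part of the standard: with N enrolled subjects, comparing every sample against every template yields N genuine comparisons and N × (N − 1) impostor comparisons, which is what permits false match rate estimates on the order of 1 in N². The scalar similarity function and threshold here are hypothetical placeholders for a real comparison algorithm.

```python
# Sketch of an offline full cross-comparison: each subject's sample is
# compared against every subject's template. Pairs with matching indices
# are genuine comparisons; all other pairs are impostor comparisons.

def full_cross_comparison(templates, samples, compare, threshold):
    """Return (false match rate, false non-match rate) over all pairs."""
    false_matches = impostors = false_non_matches = genuines = 0
    for i, sample in enumerate(samples):
        for j, template in enumerate(templates):
            score = compare(sample, template)
            if i == j:  # genuine comparison
                genuines += 1
                if score < threshold:
                    false_non_matches += 1
            else:       # impostor comparison
                impostors += 1
                if score >= threshold:
                    false_matches += 1
    return false_matches / impostors, false_non_matches / genuines

# Toy scalar "biometric": 4 subjects give 4 genuine and 4*3 = 12 impostor
# comparisons. Higher score means more similar.
templates = [0.0, 1.0, 2.0, 3.0]
samples = [0.1, 1.1, 2.1, 3.1]
compare = lambda s, t: -abs(s - t)
fmr, fnmr = full_cross_comparison(templates, samples, compare, -0.5)
print(fmr, fnmr)
```

With N = 4 subjects the impostor count is already 12; in a scenario evaluation each impostor comparison would instead require a separate live presentation, which is why the offline approach scales so much better.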