ICIP 2016 COMPETITION ON MOBILE OCULAR BIOMETRIC RECOGNITION

Ajita Rattani, Reza Derakhshani (University of Missouri-Kansas City, USA)
Sashi K. Saripalle, Vikas Gottemukkula (EyeVerify Inc., USA)

ABSTRACT

With the unprecedented mobile technology revolution, a number of ocular-biometric-based personal recognition schemes have been proposed for mobile use cases. The aim of this competition is to evaluate and compare the performance of mobile ocular biometric recognition schemes in visible light on a large-scale database (VISOB Dataset ICIP2016 Challenge Version) using standard evaluation methods. Four different teams from universities across the world participated in this competition, submitting five algorithms altogether. The submitted algorithms applied different texture-analysis methods in either a learning-based or a non-learning-based framework for ocular recognition. The best results were obtained by a team from the Norwegian Biometrics Laboratory (NTNU, Norway), achieving an Equal Error Rate of 0.06% over a quarantined test set.

Index Terms— Mobile Biometrics, Ocular Biometrics, VISOB Dataset ICIP2016 Challenge Version, Visible Spectrum, Eye Image Classification

1. INTRODUCTION

With increasing functionality and services accessible via mobile phones, the industry has turned its focus to the integration of biometric technologies in mobile phones as a convenient method of verifying the identity of a person accessing mobile services. The use of biometric techniques on mobile devices has been referred to as mobile biometrics [1, 2, 3], which encompasses both the sensors that acquire biometric signals and the software algorithms for their verification¹.

According to an Acuity Market Intelligence forecast², mobile biometric revenue is expected to surpass 33 billion dollars by 2020, not just for unlocking the device but also for approving payments and as part of multi-factor authentication services. Consequently, recent research has focused on developing biometric recognition schemes tailored for the mobile environment.

In this context, mobile ocular biometrics has gained increased attention from the research community [4]. It comprises scanning regions in and around the eye, i.e., the iris, the conjunctival and episcleral vasculature³ [5], and the periocular region [6], for personal recognition. Textural descriptors (such as LBP, LQP and BSIF) and image keypoint and patch descriptors (such as SIFT and SURF) have mostly been used, in either a learning-based or a non-learning-based framework, for identity verification in mobile ocular biometrics [6, 7, 3]. However, the state of the art in mobile ocular biometric recognition is nascent. As such, many of the earlier mobile ocular biometric recognition algorithms did not achieve acceptable error rates, especially when tested under challenging mobile use cases. Further, very few mobile ocular databases, such as MICHE [1] and VSSIRIS [2], have been publicly available for research and development. Moreover, the relatively low number of subjects in the aforesaid datasets limits the statistical power of the ensuing calculations.

Thus, to facilitate the advancement of research in the field of mobile ocular biometrics in the visible wavelength:

• We collected a large-scale, publicly available Visible Light Mobile Ocular Biometric dataset (VISOB Dataset ICIP2016 Challenge Version) [8] comprising eye images captured from 550 subjects using the front-facing (selfie) cameras of three different mobile devices, namely the Oppo N1 (13 MP, autofocus), Samsung Galaxy Note 4 (3.7 MP, fixed focus) and iPhone 5s (1.2 MP, fixed focus). This dataset presents possible intra-class variations due to the nature of mobile front-facing cameras and everyday mobile biometric use cases, such as out-of-focus images, occlusions due to prescription glasses, different illumination conditions, gaze deviations, eye makeup (i.e., eye liner and mascara), specular reflections, and motion blur.

• Further, we conducted an international competition on the VISOB Dataset ICIP2016 Challenge Version for large-scale evaluation of mobile ocular recognition algorithms by different research groups from around the world. The competition evaluated the performance of the submitted algorithms over a quarantined portion of the dataset that was not available to the participants.

This competition, besides benchmarking the performance of the submissions over the VISOB Dataset ICIP2016 Challenge Version, fosters independent validation of the algorithms and future research and development by the academic community.

Fig. 1. Sample eye images from VISOB Dataset ICIP2016 Challenge Version [8] containing variations such as (a) light and (b) dark irides, (c) reflection, and (d) imaging artifact.

Four universities and an industry participant submitted five algorithms to this competition. The participants include the Norwegian Biometrics Laboratory, Norwegian University of Science and Technology (NTNU), Norway; Australian National University (ANU), Australia; Indian Institute of Information Technology Guwahati (IIITG), India and IBM Research India; and an anonymous team (anonymized per the participant's request).

¹ The terms recognition and verification are used interchangeably.
² http://www.acuity-mi.com/GBMR Report.php
³ These conjunctival and episcleral vascular patterns, seen on the white of the eye, have sometimes been mistakenly ascribed to the sclera itself, which is avascular.

Table 1. Characteristics of the enrollment and validation sets of Visit 1 and Visit 2 of VISOB Dataset ICIP2016 Challenge Version, used by the participants and the organizers, respectively.
Visit     Mobile Device   Enrollment Set (# of images)   Validation Set (# of images)
Visit 1   iPhone          14077                          13208
Visit 1   Oppo            21976                          21349
Visit 1   Samsung         12197                          12240
Visit 2   iPhone          12222                          11740
Visit 2   Oppo            10438                           9857
Visit 2   Samsung          9284                           9548

This paper is organized as follows: In Section 2, we briefly review the database and the evaluation protocol used for the competition. Section 3 briefly describes all the participating algorithms. We discuss the consolidated results in Section 4. Conclusions are drawn in Section 5.

2. DATABASE AND PROTOCOL

2.1. VISOB Dataset ICIP2016 Challenge Version

The Visible Light Mobile Ocular Biometric (VISOB) Dataset ICIP2016 Challenge Version [8] is a publicly available database consisting of eye images from 550 healthy adult volunteers, acquired using three different smartphones, i.e., iPhone 5s, Samsung Note 4 and Oppo N1. The iPhone was set to capture bursts of still images at 720p resolution, while the Samsung and Oppo devices captured bursts of still images at 1080p resolution using pixel binning. Volunteers' data were collected during two visits (Visit 1 and Visit 2), 2 to 4 weeks apart.

At each visit, volunteers were asked to take selfie-like captures using the front-facing cameras of the aforementioned three mobile devices in two different sessions (Session 1 and Session 2) that were about 10 to 15 minutes apart. The volunteers used the mobile phones naturally, holding the devices 8 to 12 inches from their faces. For each session, a number of images were captured under three lighting conditions: regular office light, dim light (office lights off but dim ambient lighting still present), and natural daylight (next to large sunlit windows). The collected database was preprocessed to crop and retain only the eye regions, of size 240 × 160 pixels, using a Viola-Jones based eye detector. Figure 1 shows sample eye images from VISOB Dataset ICIP2016 Challenge Version [8] exhibiting variations such as light and dark irides, reflection, make-up and imaging artifacts.

2.2. Protocol

The Visit 1 subset of the dataset, containing the corresponding Session 1 and Session 2 (550 subjects with about 12 samples per subject), was made available to the participants. Participants were instructed to use Session 1 for training (enrollment) and Session 2 for validation of their algorithms. We used the data belonging to Visit 2 (captured at least 2 weeks after the Visit 1 collection), from about 290 subjects with 12 samples per subject, for the evaluation of the executables submitted by the participants. In our evaluation, Session 1 of Visit 2 was used for enrollment and its Session 2 was used for performance evaluation. Table 1 shows the total number of images in the VISOB Dataset ICIP2016 Challenge Version subsets used by the participants (Visit 1) and by the organizers (Visit 2) for enrollment and evaluation of the submitted algorithms.

The performance evaluation was done using a standard biometric evaluation metric, the Equal Error Rate (EER), which is the operating point at which the false acceptance rate (FAR) is equal to the false rejection rate (FRR).

Next, we briefly discuss the algorithms submitted by the participants.

3. SUMMARY OF PARTICIPANTS' ALGORITHMS

3.1. Norwegian Biometrics Laboratory, NTNU, Norway

The Norwegian Biometrics Laboratory submitted two different algorithms, henceforth referred to as NTNU-1 and NTNU-2, as follows:

1. NTNU-1 [9]: a scheme for periocular recognition based on deep neural networks trained using regularized stacked autoencoders [9]. Feature extraction was done using Maximum Response (MR) based texture features [10], extracted by computing the response to the MR filter bank comprising 38 filters. These 38 filters include Gaussian, Laplacian of Gaussian, and edge and oriented filters at six different orientations. A deep network was formed by coupling all four encoders along with the softmax layer. Similarity scores

Table 2. Textural features utilized by the participants' algorithms
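To make the evaluation metric concrete: the EER described in Section 2.2 can be estimated from two sets of comparison scores by sweeping a decision threshold until FAR and FRR cross. The sketch below is only an illustration of the metric (function name and toy score distributions are our own, not the competition's actual evaluation code), assuming similarity scores where higher means a better match.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the EER: the operating point where FAR == FRR.

    genuine  -- similarity scores for mated (same-subject) comparisons
    impostor -- similarity scores for non-mated comparisons
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # FRR: fraction of genuine scores rejected (below threshold);
    # FAR: fraction of impostor scores accepted (at/above threshold).
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # threshold nearest the crossing
    return (far[i] + frr[i]) / 2.0

# Toy example: well-separated score distributions yield a near-zero EER.
rng = np.random.default_rng(0)
gen = rng.normal(0.8, 0.05, 1000)
imp = rng.normal(0.3, 0.05, 1000)
print(equal_error_rate(gen, imp))
```

On a dense, finite score set, the exact FAR = FRR point may fall between thresholds, so the midpoint of the two rates at the nearest crossing is a common approximation.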
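As a minimal illustration of the textural descriptors mentioned above (LBP and its relatives), the sketch below computes basic 8-neighbour, radius-1 Local Binary Pattern codes and their histogram. This is a generic textbook formulation, not any participant's implementation; the function names are our own.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour, radius-1 LBP codes for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    >= the centre contributes a 1-bit, giving a code in 0..255.
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # centre pixels (borders skipped)
    # Neighbour offsets, clockwise from top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized LBP histogram: the texture feature vector."""
    codes = lbp_8_1(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a verification pipeline, such histograms (often computed per image block and concatenated) would be compared with a distance measure or fed to a classifier to produce the similarity scores that the EER evaluation consumes.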