Ponder, Christopher John (2015) A generic computer platform for efficient iris recognition. EngD thesis. http://theses.gla.ac.uk/6780/
Copyright and moral rights for this thesis are retained by the author.
A copy can be downloaded for personal non-commercial research or study, without prior permission or charge.
This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author.
The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author.
When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given.
Glasgow Theses Service http://theses.gla.ac.uk/ [email protected]
A Generic Computer Platform For Efficient Iris Recognition
Christopher John Ponder BEng (Hons)
A thesis submitted to the Universities of
Edinburgh, Glasgow, Heriot-Watt, and Strathclyde
For the Degree of Doctor of Engineering in System Level Integration
© Christopher John Ponder 2014
To Rachel, Alex and Ben
Abstract
This document presents the work carried out for the purposes of completing the Engineering Doctorate (EngD) program at the Institute for System Level Integration (iSLI), which was a partnership between the universities of Edinburgh, Glasgow, Heriot-Watt and Strathclyde. The EngD is normally undertaken with an industrial sponsor, but due to a set of unforeseen circumstances this was not the case for this work. However, the work was still undertaken to the same standards as would be expected by an industrial sponsor.
An individual’s biometrics include fingerprints, palm-prints, retinal, iris and speech patterns. Even the way people move and sign their names has been shown to be unique to the individual. This work focuses on the recognition of an individual’s iris patterns.
The results reported in the literature are often presented in such a manner that direct comparison between methods is difficult. There are also few code resources and no tools available to simplify the process of developing iris recognition algorithms, so individual developers must write the necessary software themselves almost every time. Finally, segmentation performance is currently only measurable by manual evaluation, which is time consuming and prone to human error.
This thesis presents a novel generic platform for developing, testing and evaluating iris recognition algorithms, designed to simplify that whole process. Existing open-source algorithms are integrated into the generic platform and are evaluated using the results it produces.
Three iris recognition segmentation algorithms and one normalisation algorithm are proposed. Three of the algorithms increased true-match recognition performance by between 2 and 45 percentage points when compared to the available open-source algorithms and methods found in the literature. A matching algorithm was developed that significantly speeds up the analysis of encoding results. Lastly, this work also proposes a method of automatically evaluating the performance of segmentation algorithms, minimising the need for manual evaluation.
Declaration of Originality
I, Christopher John Ponder, declare that this thesis is my own work and has not been submitted in any form for another degree or diploma at any university or other institute of tertiary education. Information derived from the published and unpublished work of others has been acknowledged in the text and a list of references is given in the bibliography.
The material contained in this thesis is my own original work produced under the supervision of Khaled Benkrid and Martin Reekie.
Dedication and Acknowledgements
This thesis is dedicated to my family, Rachel, Alex & Ben, whose love, encouragement and understanding made this possible.
I would like to thank Khaled Benkrid, my academic supervisor for the first three years of my research, for believing in me and for his support and encouragement. I would like to express my sincere gratitude and deepest appreciation to Martin Reekie, who has been there for me throughout my journey and has been my academic supervisor for the last two years. Their support and encouragement over the course of my studies have been invaluable.
I would like to thank Theresa, for her understanding, kind words and sympathetic ear, without which this would have been a far more difficult journey.
I would also like to thank the people at Tritech International Limited for their patience, understanding and timely assistance during the writing of this thesis.
Finally, I would like to thank all the staff at the iSLI and Glasgow University for their help and support throughout my studies.
Page 6 of 380
Portions of the research in this thesis use the CASIA-Iris V1, V2, V3 and V4 image datasets collected by the Centre for Biometrics and Security Research at the Chinese Academy of Sciences Institute of Automation, http://biometrics.idealtest.org/ and http://www.cbsr.ia.ac.cn/IrisDataset.htm
Portions of the research in this thesis use the MMU1 and MMU2 image datasets, http://pesona.mmu.edu.my/~ccteo/
Portions of the research in this thesis use the UBIRIS V1 and V2 image datasets collected by Hugo Pedro Proença and Luís A. Alexandre of the Department of Computer Science, University of Beira Interior, http://iris.di.ubi.pt/ubiris1.html and http://iris.di.ubi.pt/ubiris2.html
Portions of the research in this thesis were tested using version 1.0 of the IITD Iris Database http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm
Table of Contents
Abstract ...... 3
Dedication and Acknowledgements ...... 5
List of Figures ...... 12
List of Tables ...... 19
List of Image Datasets ...... 23
List of Symbols ...... 24
Glossary ...... 27
1 Introduction ...... 34
1.1 Motivation ...... 35
1.2 Thesis Outline ...... 39
1.3 Thesis Outcomes ...... 43
2 Literature Review ...... 45
2.1 Image Capture ...... 48
2.2 Pre-Processing ...... 54
2.3 Segmentation ...... 56
2.4 Normalisation...... 61
2.5 Encoding ...... 64
2.6 Matching ...... 68
2.7 Generic Platform ...... 75
2.8 Conclusions ...... 77
3 Designing the Generic Platform ...... 78
3.1 Development Environment and Coding Language ...... 81
3.1.1 Background and Thinning the Field ...... 81
3.1.2 C/C++ ...... 82
3.1.3 Java ...... 83
3.1.4 MATLAB ...... 84
3.1.5 FreeMAT ...... 85
3.1.6 Octave ...... 86
3.1.7 SciLab ...... 86
3.1.8 Conclusion ...... 87
3.2 Measurable Algorithm Performance via Results ...... 90
3.2.1 Genuine versus Imposter Hamming Distance Graph ...... 90
3.2.2 Decidability ...... 94
3.2.3 FRR, FAR, EER and ROC ...... 95
3.2.4 Conclusion ...... 98
3.3 Changing Iris Operations without Recompilation ...... 101
3.4 In-built Iris Recognition Operations ...... 103
3.5 Fast Iris Recognition Algorithm Swapping ...... 104
3.6 Image Datasets ...... 105
3.7 Image Dataset Selection ...... 108
3.7.1 CASIA V1 [4] ...... 108
3.7.2 CASIA V2 [5] ...... 109
3.7.3 CASIA V3 [6] ...... 109
3.7.4 CASIA V4 [7] ...... 111
3.7.5 IIT Delhi [13, 14] ...... 111
3.7.6 MMU [8] ...... 112
3.7.7 WVU [11, 12] ...... 112
3.7.8 UBIRIS [9] [10] ...... 113
3.7.9 UPOL [15, 16] ...... 114
3.7.10 ICE 2005 [17] ...... 114
3.7.11 Summary of Datasets ...... 115
3.7.12 Selection of Datasets for Testing ...... 119
3.8 The Generic GUI ...... 123
3.9 Conclusions ...... 137
4 Evaluating the Generic Platform ...... 139
4.1 Developing the Generic Platform with Masek’s Iris Recognition Algorithm ...... 143
4.1.1 Results ...... 145
4.1.2 Conclusions ...... 153
4.2 Further Dataset Evaluation with Masek’s Algorithm ...... 154
4.2.1 Conclusions ...... 170
4.3 Adding Other Open-source Iris Recognition Algorithms ...... 172
4.3.1 Antonino’s Encoding and Matching Code ...... 173
4.3.2 Li’s Iris Recognition Code ...... 178
4.3.3 The JIRRM Segmentation and Normalisation Code ...... 187
4.3.4 The OSIRIS Iris Recognition Code ...... 194
4.3.5 Conclusions ...... 199
4.4 Downloadable Iris Recognition Algorithms that were not Used ...... 201
4.4.1 Iris Recognition System ...... 202
4.4.2 High Performance Iris Recognition ...... 204
4.4.3 GIRIST ...... 207
4.5 Testing Segmentation Algorithms from the Literature Using the Generic Platform ...... 209
4.5.1 Lin et al.’s Segmentation Algorithm ...... 210
4.5.2 Lu’s Segmentation Algorithm ...... 211
4.5.3 The Shamsi-Lin Segmentation Algorithm ...... 212
4.5.4 Results ...... 213
4.5.5 Conclusions ...... 217
4.6 Adding Support Functions ...... 219
4.6.1 Edge Detection Routines ...... 221
4.6.2 Hough Circle Detection Routine ...... 224
4.6.3 Conclusions ...... 226
4.7 Evaluating the Performance of MATLAB ...... 227
4.7.1 Performance of the ‘Fixed Flow’ Compared Against the ‘Custom Flow’ ...... 228
4.7.2 Determining the Performance Increase of Converting MATLAB Code to OS Native Code ...... 229
4.7.3 The Performance Effects of the OS and PC Hardware ...... 230
4.7.4 Boosting Performance of the Generic Platform Using Pre-calculation ...... 233
4.7.5 Conclusions ...... 235
4.8 Chapter Conclusions ...... 236
5 Algorithms Developed as Part of this Work ...... 238
5.1 Segment One ...... 241
5.1.1 Segment One Results ...... 243
5.2 Segment Two ...... 246
5.2.1 Initial Segment Two Results ...... 253
5.2.2 Sum-of-Columns Method ...... 256
5.2.3 Final Segment Two Results ...... 260
5.2.4 Conclusions ...... 261
5.3 Segment Three ...... 262
5.3.1 Segment Three Results ...... 266
5.3.2 Conclusions ...... 269
5.4 Automatic Segmentation Evaluation ...... 270
5.4.1 Automatic Segmentation Results ...... 273
5.4.2 Conclusions ...... 275
5.5 Normalise One ...... 276
5.6 Improving Recognition Performance During Normalisation ...... 280
5.6.1 Intelligent Infilling of Noise Within the Unwrapped Iris ...... 282
5.6.2 Noise Removal Using Standard Deviation Thresholding ...... 285
5.6.3 Oversampling to Improve Noise Immunity ...... 289
5.6.4 Normalise Two ...... 291
5.6.5 Conclusions ...... 294
5.7 Segmentation Mask One (SM1) ...... 295
5.8 Match One ...... 296
5.8.1 Results ...... 303
5.8.2 Conclusions ...... 305
5.9 Compare Result Databases ...... 306
5.10 Comparison of the Methods Developed with the Literature ...... 309
5.11 Chapter Conclusions ...... 314
6 Conclusions and Future Work ...... 316
6.1 Summary and Conclusions ...... 316
6.2 Future work ...... 322
Appendix A Histogram Thresholding to Find the Pupil Boundary ...... 325
Appendix B Erroneously Good Results from the Similarity Between Images ...... 328
Appendix C The Hough Transform ...... 330
Appendix D The Circle Bisector Method ...... 335
Appendix E Published Paper, ICB 2012 ...... 337
Appendix F Filenames of Images Removed from CASIA V1 ...... 345
Appendix G Example Analysis Output ...... 347
Appendix H Quick-Start User Manual ...... 350
H.1 Installation and Setup of the Generic Platform ...... 350
H.2 Getting Started With Using the Generic Platform ...... 358
7 Bibliography ...... 362
List of Figures
Figure 2.1 – Image 6_3 from the WVU Free image dataset, with each of the features of interest for the purposes of iris recognition ...... 45
Figure 2.2 – A block diagram of Daugman’s iris image-capture setup [51] ...... 49
Figure 2.3 – A block diagram of the iris image-capture setup used by Wildes et al. [51] ...... 50
Figure 2.4 – Example of a close-up iris image capture device being used [52] ...... 51
Figure 2.5 – CASIA V3i image S1004R05 with segmentation circles determined by the Segment Three iris segmentation algorithm discussed in Section 5.3 ...... 56
Figure 2.6 – Edge image resulting from the integro-differential operator being applied to CASIA V3i image S1004R05 ...... 58
Figure 2.7 – Edge image resulting from the Canny edge detector being applied to CASIA V3i image S1004R05 ...... 59
Figure 2.8 – CASIA V3i image S1004R05 with segmentation circles ...... 61
Figure 2.9 – Daugman’s rubber sheet model [77, 31], translating from Cartesian to polar coordinates ...... 61
Figure 2.10 – Daugman’s rubber sheet model [77, 31] providing pupil displacement correction, showing exaggerated pupil displacement for illustration ...... 62
Figure 2.11 – Unwrapped iris pattern of Figure 2.8 ...... 62
Figure 2.12 – Mask for Figure 2.11 unwrapped iris pattern ...... 63
Figure 2.13 – Daugman’s phase quantisation code sequence [76] ...... 65
Figure 2.14 – Daugman iris code of Figure 2.11 ...... 65
Figure 2.15 – Wildes multi-scale iris code [51] showing the results of successive sub-sampling of the lower frequency bands of the iris pattern ...... 66
Figure 2.16 – Example iris codes with XOR calculation and Hamming distance calculation ...... 69
Figure 2.17 – Example iris codes with XOR calculation and Hamming distance calculation. Code 2 has been shifted once to the left ...... 70
Figure 2.18 – Example iris codes with XOR calculation and Hamming distance calculation. Code 2 has been shifted left a second time ...... 70
Figure 2.19 – Example iris codes with XOR calculation, the AND operation using masks and the Hamming distance calculation ...... 70
Figure 2.20 – Example iris codes with XOR calculation, the AND operation using masks and the Hamming distance calculation. Code 2 has been shifted once to the left ...... 71
Figure 2.21 – Example iris codes with XOR calculation, the AND operation using masks and the Hamming distance calculation. Code 2 has been shifted left a second time ...... 72
Figure 3.1 – A generic Daugman iris code generation process ...... 79
Figure 3.2 – “Statistical Decision Theory: general formulation for decisions under uncertainty” [1] ...... 91
Figure 3.3 – Example Hamming distance distribution for authentics and imposters produced using Segmentation Three on the CASIA V3-interval image dataset. Graph generated using the generic platform ...... 92
Figure 3.4 – Example Hamming distance distribution for authentics and imposters produced using Antonino’s algorithm on the MMU V1 image dataset. Graph generated using the generic platform ...... 93
Figure 3.5 – Example FAR against FRR graph for Segment Three with Normalise Three, Masek’s encode and Match Three using the CASIA V1 dataset within the generic platform ...... 95
Figure 3.6 – The ROC curve from Lin et al.’s method [30] ...... 96
Figure 3.7 – An almost ideal 1-FRR ROC graph produced using the generic platform with Segmentation Three using the IITD dataset ...... 97
Figure 3.8 – Input image example from the CASIA V1 database ...... 103
Figure 3.9 – Binary biometric template, code (top) with mask (bottom) ...... 103
Figure 3.10 – Iris GUI requesting the image database root directory ...... 106
Figure 3.11 – Iris GUI image dataset list ...... 106
Figure 3.12 – Generic Platform GUI running on PC1 with ‘Manual Process’ selected and GUI regions numbered in blue ...... 125
Figure 3.13 – Generic Platform GUI running on PC1 with ‘Auto Process’ selected and GUI regions numbered in blue ...... 126
Figure 3.14 – Flow chart depicting the selection options available for iris recognition algorithms added as a fixed flow ...... 127
Figure 3.15 – Generic Platform User Experience Flowchart ...... 130
Figure 3.16 – Generic platform normalisation pane showing a normalised iris with its mask and two debug images ...... 131
Figure 3.17 – Image (16) from Figure 3.16 when opened in a separate figure window ...... 133
Figure 3.18 – Generic platform encoding pane showing an iris code with its mask ...... 133
Figure 3.19 – Generic platform matching pane showing the results from matching the currently selected image (10_3.bmp from the WVU Free dataset) ...... 134
Figure 3.20 – Generic platform results pane showing the reference segmentation for image 010204 from the MMU V2 image dataset ...... 135
Figure 3.21 – Area (24) in Figure 3.20 after selecting the ‘I’ button to zoom in to the iris boundary ...... 136
Figure 3.22 – Area (24) in Figure 3.20 after selecting the ‘P’ button to zoom in to the pupil boundary ...... 136
Figure 4.1 – FAR and FRR curves for Masek’s thesis results using the CASIA-a image dataset (left) and Masek’s algorithm (right) when executed within the generic platform using the CASIA V1 image dataset ...... 148
Figure 4.2 – The normalised Hamming distribution for Masek’s flow using CASIA V1 produced using the generic platform ...... 152
Figure 4.3 – Images from the CASIA V2D1 dataset that have been segmented by Masek’s algorithm ...... 156
Figure 4.4 – Image 001_1_1 from CASIA V1 showing segmentation masking by Masek’s algorithm (a) with threshold applied and (b) with no threshold applied ...... 157
Figure 4.5 – Image 0000_10 from CASIA V2D1 showing segmentation masking by Masek’s algorithm (a) with threshold applied and (b) with no threshold applied ...... 158
Figure 4.6 – Unwrapped iris (a) and mask (b) of Figure 4.4a and unwrapped iris (c) and mask (d) of Figure 4.4b ...... 159
Figure 4.7 – Unwrapped iris (a) and mask (b) of Figure 4.5a and unwrapped iris (c) and mask (d) of Figure 4.5b ...... 160
Figure 4.8 – True match scores for each image dataset when used with Masek’s algorithm using a range of threshold values ...... 165
Figure 4.9 – Graph plotting the number of images in a dataset against the time taken to analyse the recognition performance in minutes using Masek’s matching algorithm within the generic platform ...... 167
Figure 4.10 – Graph plotting the number of pixels in images in each dataset against the average time in seconds taken to create the iris code for images in the dataset using Masek’s matching algorithm within the generic platform ...... 168
Figure 4.11 – Comparison of true match results from Masek’s algorithm with no thresholding (NT) against the true match results attained by Antonino’s algorithm ...... 175
Figure 4.12 – Comparison of analysis times from Masek’s algorithm with no thresholding (NT) against the analysis times attained by Antonino’s algorithm ...... 177
Figure 4.13 – Images 1_0 (left) and 1_2 (right) from the WVU Free image dataset; the yellow ellipses indicate the detected iris boundaries ...... 179
Figure 4.14 – Comparison of true match results from Masek’s algorithm with no thresholding (NT) against the true match results attained by Antonino’s and Li’s algorithms ...... 182
Figure 4.15 – Hamming distance distribution plots for Li’s segmentation algorithm using Normalise One and Masek’s encoding and matching routines ...... 185
Figure 4.16 – Comparison of true match results from Masek’s algorithm with no thresholding (NT) against the true match results attained by Antonino’s and Li’s (Flow, S&N and S) algorithms ...... 186
Figure 4.17 – Comparison of true match results from JIRRM against the true match results attained by Antonino’s, Masek’s and Li’s algorithms ...... 189
Figure 4.18 – Comparison of code creation times with Masek’s algorithm against the times attained by the JIRRM and Li algorithms ...... 190
Figure 4.19 – Comparison of true match results from Masek’s algorithm with no thresholding (NT) against the true match results attained by Antonino, JIRRM, JIRRM with more circles (JIRRM MC) and Li’s algorithms ...... 192
Figure 4.20 – Comparison of true match results from Masek’s algorithm with no thresholding (NT) against the true match results attained by Antonino’s encoding and matching algorithm, and the JIRRM, OSIRIS and Li segmentation algorithms ...... 198
Figure 4.21 – Comparison of true match results from the Masek, Antonino, JIRRM, OSIRIS and Li algorithms ...... 199
Figure 4.22 – Iris Recognition with CASIA V1 image 001_1_1.bmp loaded ...... 202
Figure 4.23 – High Performance Iris Recognition GUI ...... 205
Figure 4.24 – GIRIST GUI with a library of CASIA V1 loaded ...... 207
Figure 4.25 – GIRIST 1:N search result of the CASIA V3 Interval database for an eye from CASIA V1 showing images and segmentation boundaries ...... 208
Figure 4.26 – Sectors used by Lin et al. for determining the iris location marked in black on image 001_1_1 from the CASIA V1 dataset ...... 210
Figure 4.27 – Iris 019_1_1 from CASIA V1 (a) segmented by the Shamsi-Lin method and (b) average shrinking squares applied to image 019_1_1 to find the pupil centre ...... 212
Figure 4.28 – An example of an incorrectly segmented CASIA V1 image (a) and an example of a segmented image with one incorrect boundary (b) ...... 213
Figure 4.29 – Image 0000_012 from the CASIA V2D1 image dataset with an elliptical border added to mask darker areas from interfering with the pupil detection ...... 215
Figure 4.30 – CASIA V1 image segmented by Lin et al.’s method ...... 217
Figure 4.31 – Generic platform segmentation panel ...... 219
Figure 4.32 – Generic platform normalisation panel ...... 220
Figure 4.33 – Example edge detection images ...... 221
Figure 5.1 – Sample images with their respective Canny edge images for a low threshold of 0.08, a high threshold of 0.15 and sigma 2.5. Each image has its edge image overlaid: (a) CASIA V2D1 image 0002_008 and (b) CASIA V1 image 001_1_1 ...... 242
Figure 5.2 – Images from CASIA V1 segmented by Segment One ...... 245
Figure 5.3 – (a) Image 001/02 from the IITD image dataset and (b) its Otsu threshold image ...... 247
Figure 5.4 – MATLAB Canny edge detection of image 001/02 from the IITD image dataset ...... 248
Figure 5.5 – Figure 5.4 logically AND’ed with Figure 5.3b ...... 248
Figure 5.6 – The left image is fionar1 from the MMU V1 image dataset segmented with the first version of Segment Two; the right image is the monochrome threshold image ...... 249
Figure 5.7 – The left image is the Canny edge image of fionar1; the right image is the edge image logically AND’ed with the threshold image in Figure 5.6 ...... 249
Figure 5.8 – Images from the MMU V1 image dataset segmented with Segment Two using Lin et al.’s iris detection and Masek’s eyelid detection ...... 255
Figure 5.9 – Image 0002_002 from CASIA V2D1 gamma corrected with boundaries and sectors marked on it ...... 257
Figure 5.10 – Original iris sectors from Figure 5.9 ...... 257
Figure 5.11 – Intensity-stretched Figure 5.10 images ...... 257
Figure 5.12 – Gaussian-smoothed Figure 5.11 images ...... 257
Figure 5.13 – Sums of columns for both the left (a) and right (b) sectors respectively. Y-axis indicates sum, x-axis indicates x position within the original sector ...... 258
Figure 5.14 – The gradients of the column sums for both the left (a) and right (b) sectors. Y-axis indicates gradient, x-axis indicates x position within the original sector ...... 259
Figure 5.15 – CASIA V2D1 image 0002_002, segmented using Segment Two ...... 259
Figure 5.16 – Example edge image showing the connected and unconnected edge locations ...... 263
Figure 5.17 – Comparison of true match results from Segment Three against the true match results attained by Segment One, Segment Two, and the Masek, OSIRIS and Li segmentation algorithms ...... 267
Figure 5.18 – Generic platform results pane showing the reference segmentation for image 010204 from the MMU V2 image dataset ...... 272
Figure 5.19 – Automatic segmentation analysis for Segment Three with the CASIA V2D1 image dataset ...... 273
Figure 5.20 – Automatic segmentation analysis for Lin et al.’s segmentation method with the CASIA V2D1 image dataset ...... 273
Figure 5.21 – Daugman’s rubber sheet model [77], translating an elliptic iris from Cartesian to polar coordinates ...... 276
Figure 5.22 – Daugman’s rubber sheet model [77, 31] providing pupil displacement correction, showing exaggerated pupil displacement for illustration ...... 277
Figure 5.23 – The results from Flow One compared with those obtained using Segment Two and Three, and the OSIRIS, Masek and Li segmentation algorithms ...... 292
Figure 5.24 – A vector rotated using Masek’s method ...... 299
Figure 5.25 – An extended vector using Match One, with sliding window indicated ...... 299
Figure 5.26 – Average time to find the Hamming distance between two iris codes ...... 304
Figure 5.27 – Compare Result Databases GUI ...... 306
Figure 6.1 – Image 010204 from the MMU V2 image dataset, segmented by Segment Three (a) with a fixed gamma correction of 2.2 applied before edge detection and (b) without any gamma correction ...... 322
Figure 7.1 – Histogram for bitmap image 001_1_1 from the CASIA V1 dataset showing the large peak at intensity 41 ...... 325
Figure 7.2 – Histogram for image 001_1_1 from CASIA V1 against S1143R01 from CASIA V3i ...... 326
Figure 7.3 – CASIA V2D1 images segmented by Lin et al.’s method within the generic platform ...... 328
Figure 7.4 – Four accumulators from the three-dimensional Hough matrix created by segmenting the CASIA V1 image 001_1_1 ...... 332
Figure 7.5 – Two accumulators, (a) showing widely disparate maxima and (b) showing a focussed maximum at the centre ...... 333
Figure 7.6 – Perpendicular Bisectors ...... 335
Figure 7.7 – Extracting the Iris GUI zip file using the mouse in Windows 7 ...... 350
Figure 7.8 – The zip file extraction dialog window in Windows 7 ...... 351
Figure 7.9 – The ‘Set Path...’ menu option in MATLAB ...... 352
Figure 7.10 – The ‘Add Folder’ button in the ‘Set Path’ dialog box ...... 352
Figure 7.11 – The ‘Move to Top’ and ‘Save’ buttons in the ‘Set Path’ dialog box ...... 353
Figure 7.12 – An example image database ...... 354
Figure 7.13 – The dialog box for selecting the image database location ...... 354
Figure 7.14 – The dialog box for selecting the result database location ...... 355
Figure 7.15 – The generic platform GUI in Automatic Mode ...... 356
Figure 7.16 – The generic platform GUI in Manual Mode showing the results of a segmentation operation ...... 358
Figure 7.17 – The generic platform GUI in Manual Mode showing the results of a normalisation operation ...... 359
Figure 7.18 – The generic platform GUI in Manual Mode showing the results of an encoding operation ...... 360
Figure 7.19 – The generic platform GUI in Automatic Mode processing the selected image dataset with the selected iris recognition algorithms ...... 361
List of Tables
Table 3.1 – Summary of the image datasets available to this research, the features of each of the datasets and their technical details ...... 116
Table 4.1 – Iris recognition algorithms available for download ...... 140
Table 4.2 – Test machines used to develop and evaluate the generic platform and iris recognition algorithms ...... 141
Table 4.3 – CASIA V1 results, Masek’s claimed results versus generic platform (GP) results in both ‘fixed flow’ and ‘custom flow’ form ...... 147
Table 4.4 – Actual image datasets used by Masek ...... 149
Table 4.5 – Masek’s claimed CASIA V1 results with generic platform (GP) results for the reduced 633 and 624 image CASIA V1 datasets and the full CASIA V1 results ...... 150
Table 4.6 – Results from Masek’s algorithm with the remaining test datasets ...... 155
Table 4.7 – Results of testing Masek’s flow with the segmentation mask thresholding removed ...... 161
Table 4.8 – Results of Masek’s flow with a fixed segmentation mask threshold of 75 ...... 162
Table 4.9 – Results of Masek’s flow with a fixed segmentation mask threshold of 50 ...... 163
Table 4.10 – Results of Masek’s flow with a fixed segmentation mask threshold of 25 ...... 164
Table 4.11 – Processing times from using Masek’s flow without thresholding. (s) seconds, (h:mm) hours and minutes ...... 166
Table 4.12 – Results of Antonino’s flow without a fixed segmentation mask threshold ...... 174
Table 4.13 – Processing times from using Antonino’s flow without thresholding. (s) seconds, (h:mm) hours and minutes ...... 176
Table 4.14 – The results obtained over ten runs when using Li’s flow with the CASIA V1 image dataset ...... 180
Table 4.15 – The results obtained over ten runs when using Li’s flow with the WVU Free dataset ...... 181
Table 4.16 – Average results of ten runs of Li’s flow with the test datasets within the generic platform from Table 4.14 and Table 4.15 ...... 182
Table 4.17 – Processing times from using Li’s flow for the two datasets that were completed. (s) seconds, (h:mm) hours and minutes ...... 183
Table 4.18 – Results of Li’s segmentation and normalisation algorithms with Masek’s encoding and matching routine ...... 184
Table 4.19 – Results of Li’s segmentation algorithm with Normalise One and Masek’s encoding and matching routines ...... 185
Table 4.20 – Results from the JIRRM segmentation algorithm with Masek’s normalise, encode and matching routines ...... 188
Table 4.21 – Processing times for the JIRRM algorithm with the remaining test datasets. (s) seconds, (h:mm) hours and minutes ...... 190
Table 4.22 – Recognition results from the JIRRM segmentation algorithm using Masek’s upper and lower pupil and iris circle sizes with Masek’s normalise, encode and matching routines ...... 191
Table 4.23 – Processing times from the JIRRM algorithm looking for a wider range of circle sizes for both iris and pupil with Masek’s normalise, encode and matching routines. (s) seconds, (h:mm) hours and minutes ...... 193
Table 4.24 – Processing times for the C++ OSIRIS executable, total time in hours, minutes and seconds, average time in seconds ...... 195
Table 4.25 – Recognition results from the OSIRIS segmentation algorithm with Masek’s normalise, encode and matching routines ...... 197
Table 4.26 – Results of Masek’s algorithm using various edge detection methods for the iris boundary detection on the CASIA V1 dataset ...... 222
Table 4.27 – Results of Masek’s algorithm using various edge detection methods for the pupil boundary detection on the CASIA V1 dataset ...... 223
Table 4.28 – Results of Masek’s algorithm using the JIRRM Hough for the iris and pupil boundaries in turn ...... 225
Table 4.29 – Total times from running Masek’s algorithm as a ‘fixed flow’ and as a ‘custom flow’ in MATLAB 2009a x64 on Windows 7 (PC1) against the CASIA V1 and V3i datasets ...... 228
Table 4.30 – Test machines used to develop and evaluate the generic platform and iris recognition algorithms ...... 230
Table 4.31 – Average processing times over five runs of using Segmentation Three and Normalise One with Masek’s encoding and matching code on the CASIA V1 dataset ...... 231
Table 4.32 – Average processing times over five runs of using Segmentation Three and Normalise One with Masek’s encoding and matching code on PC1, PC2 and PC3 using the CASIA V2D1 dataset ...... 231
Table 4.33 – Average processing times over five runs of using Segmentation Three and Normalise One with Masek’s encoding and matching code on PC1, PC2 and PC3 using the CASIA V3i dataset ...... 232
Table 4.34 – Time for code creation using Masek’s algorithm on the CASIA V1 dataset, without and with pre-calculation on PC1 ...... 234
Table 5.1 – Recognition results from Segment One using Normalise One, Segmentation Mask One, Masek’s encode and Match One ...... 243
Table 5.2 – Code creation times of Segment One using Normalise One, Segmentation Mask One and Masek’s encode ...... 244
Table 5.3 – Pixel count and averages for the final edge image, without gamma correction, with fixed gamma correction and using gamma correction with adaptive intensity thresholding ...... 250
Table 5.4 – Recognition results from Segment Two using Normalise One, Segmentation Mask One, Masek’s encode, Lin et al.’s iris boundary detection and Match One ...... 253
Table 5.5 – Code creation times of Segment Two using Normalise One, Segmentation Mask One and Masek’s encode ...... 254
Table 5.6 – Recognition results from Segment Two using the described iris boundary detection, Normalise One, Segmentation Mask One, Masek’s encode and Match One ...... 260
Table 5.7 – Recognition results achieved with Segment Three using Normalise One, Segmentation Mask One and Match One with Masek’s encoding algorithm ...... 266
Table 5.8 – Code creation times attained using Segment Three with Normalise One and Masek’s encoding algorithm ...... 268
Table 5.9 – Recognition results from Segment Three, Normalise One with the intelligent infilling algorithm, Match One and Masek’s encoding routine ...... 283
Table 5.10 – Recognition results from Segment Three, Normalise One with the OSIRIS infilling algorithm, Match One and Masek’s encoding routine ...... 284
Table 5.11 – True match results from applying the standard deviation thresholds to the unwrapped irises attained from using Normalise One with Segment Three, Segmentation Mask One, Masek’s encode and Match One ...... 286
Table 5.12 – Separability results from applying the standard deviation thresholds to the unwrapped irises attained from using Normalise One with Segment Three, Segmentation Mask One, Masek’s encode and Match One ...... 286
Table 5.13 – Decidability results from applying the standard deviation thresholds to the unwrapped irises attained from using Normalise One with Segment Three, Segmentation Mask One, Masek’s encode and Match One ...... 287
Table 5.14 – EER results from applying the standard deviation thresholds to the unwrapped irises attained from using Normalise One with Segment Three, Segmentation Mask One, Masek’s encode and Match One ...... 287
Table 5.15 – Recognition results from Normalise One using oversampling with the developed iris boundary detection, Segmentation Mask One, Masek’s encode and Match One ...... 290
Table 5.16 – Recognition results from Normalise Two using the combined normalisation improvements with Segment Three, Segmentation Mask One, Masek’s encode and Match One ...... 293
Table 5.17 – Time to analyse each dataset using Masek’s Hamming match (hours:minutes) ...... 297
Table 5.18 – Comparison of the difference calculations versus the HD. A and B are the input data, |A − B| is the magnitude of the difference calculation, A ⊕ B is the logical XOR of A and B in binary, and the last column shows the HD bit count result from the A ⊕ B column ...... 302
Table 5.19 – Analysis times using Masek’s Hamming match versus Match One ...... 303
Table 6.1 – Summary of average recognition results and average code creation time per image for all the algorithms tested ...... 317
Masek’s results in Table 6.2 are highlighted as they were used as the benchmark during testing. Li’s results are shaded because the averages shown only include results from the two image datasets that Li’s algorithm completed processing on, thus they do not represent a fair comparison to the other results. The results from Lin, Lu and Shamsi-Lin are shaded because these algorithms did not successfully segment images, so the results do not present a useful comparison of capability ...... 317
Table 6.3 – Comparison of the total analysis time when using Match One as compared to using Masek’s or Antonino’s algorithms ...... 318
Table 7.1 – Description of the generic platform GUI regions from Figure 7.15 ...... 357
List of Image Datasets
Table 1 – The iris image datasets available to this research
Dataset Resolution (X × Y) Number of Images Number of Classes
CASIA V1 320×280 756 108
CASIA V2D1 640×480 1,200 60
CASIA V2D2 640×480 1,200 60
CASIA V3i 320×280 2,639 395
CASIA V3L 640×480 16,625 819
CASIA V3t 640×480 3,284 400
CASIA V4d 2352×1728 2,567 284
CASIA V4t 640×480 19,980 2,000
CASIA V4s 640×480 9,990 1,000
IITD 320×240 2,240 224
MMU V1 320×240 450 90
MMU V2 320×238 995 199
UBIRIS V1 800×600 1,249 241
UBIRIS V2 400×300 11,101 522
UPOL 786×576 384 128
ICE 2005 640×480 3,953 244
WVU 640×480 865 200
WVU Free 640×480 80 20
Full descriptions of the image datasets listed in Table 1 are provided in Section 3.7.
List of Symbols
∗ Convolution operation
∩ / AND Logical, bitwise AND operation
Wavelet size
Square of the vector length between two circle centres
Angle of the vector between two circle centres
Wavelet size
Iris code
Iris code
Discrete Cosine Transform (DCT) result coefficients