
A Novel Eyebrow Segmentation and Eyebrow Shape-based Identification

T. Hoang Ngan Le, Utsav Prabhu, Marios Savvides
Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{thihoanl,uprabhu,marioss}@andrew.cmu.edu

Abstract

Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and is particularly effective in certain scenarios such as extreme occlusions and illumination variations where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases, and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4%, and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database.

Figure 1. Some examples of occluded faces (a), inter-subject structural dissimilarities (b), intra-subject asymmetry dissimilarities (c), and the robustness of eyebrow shape to imaging conditions (d).

1. Introduction

The ongoing widespread deployment of biometric systems for person identification, verification and access control clearly indicates the impact of these techniques on human society. These systems primarily use automated methods for robust measurement and analysis of physiological and/or behavioral traits to obtain unique and stable features towards recognition. The most popular and well-studied methods utilize stable characteristics such as fingerprints, iris patterns, face images, and voice features. However, most of these characteristics are prone to certain impediments, resulting in poor performance under non-ideal conditions; for example, it is well known that motion blur, occlusions, pose variations, facial expressions and illumination all significantly degrade the performance of face recognition engines. Consequently, researchers have been exploring many alternative biometric attributes and modalities which are able to endure such conditions without significant performance degradation. Recently, periocular biometric features (i.e. the eyes, the eyebrows, and the neighboring area) have been suggested as a potential source for such robust features, either individually or in conjunction with face recognition techniques.

Eyebrows, in particular, exhibit a wide variety of shapes, colors, and textures, and their appearance is influenced by both genetic and behavioural characteristics, indicating that they can potentially be used as a biometric feature. We make four observations. First, the periocular region of the face, including the eyebrows, is unoccluded in many real-world scenarios where face recognition systems fail, such as with headgear, mask occlusions, surveillance videos in crowds, etc.; some examples are shown in Figure 1(a). Second, the eyebrows of different people are observably dissimilar, particularly in width, length, and boundary shape; Figure 1(b) shows examples of eyebrow shapes obtained from different subjects. Third, the eyebrows of an individual are typically intrinsically asymmetric (as the old adage says: "eyebrows should be sisters, not twins"), and the type and degree of this asymmetry varies between people, hence providing a biometric feature by itself; examples of this asymmetry are shown in Figure 1(c). Finally, while the color and texture appearance of eyebrows can be severely influenced by imaging conditions, we believe that eyebrow shapes are typically resilient to these conditions because they are characterized by salient contour edges which are observable under most such conditions. An example of a person imaged under different lighting conditions is given in Figure 1(d), where the eyebrow texture changes between panels while the eyebrow shape remains invariant to the illumination changes. It should be noted that, due to its lack of permanence, the eyebrow shape can be considered to be more of a soft-biometric feature, particularly since common cosmetic procedures can sometimes alter the shape. In this paper, we focus on two main objectives:

1. Construction of a robust, fast and fully automatic segmentation approach to extract the eyebrow from a given face image.

2. An examination of the contributions of the shape structure of the eyebrows, through both inter-subject structural dissimilarities and intra-subject asymmetry dissimilarities, towards face recognition (an eyebrow shape-based identification system).

To the best of our knowledge, this is the first instance of an eyebrow-based recognition system which does not require any manual intervention. An additional contribution is the introduction of the previously unexplored influence of eyebrow asymmetry as a weak biometric feature.

The rest of this paper is organized as follows. In Section 2, we review previous efforts on identification and recognition using eyebrow-based matching and on eyebrow segmentation. Section 3 presents our proposed eyebrow segmentation approach together with a Local Eyebrow Active Shape Model (LE-ASM) with 64 landmark points on the eyebrow region. Section 4 describes the proposed eyebrow shape-based identification system with two approaches: inter-subject structural dissimilarities and intra-subject eyebrow asymmetry dissimilarities. Section 5 describes the datasets used to conduct the experiments along with the experimental results. Finally, we present our conclusions on this work.

2. Prior Work

A principal challenge in face recognition has been the search for robust and unique facial characteristics which are able to discriminate between individuals. In addition to analyzing stable features such as the fingerprint, face, and iris, eyebrows have recently attracted the attention of many researchers in biometrics. Our literature review shows that eyebrows have been used by many forensic analysts for years to aid in facial recognition tasks, especially when the face is partially occluded, or when only the periocular region is clearly visible, such as in crowd images. Sadr et al. [18] showed that eyebrows are among the most salient and stable features in a face and play an important role in human identification. They found that the absence of eyebrows in familiar faces leads to a significant level of confusion in identification/recognition. Furthermore, they also showed that the eyebrows can be considered to be as robust a feature as the eyes for face recognition. In [1], Bruce et al. concluded that the eyebrows and skin texture cues play an important role in gender discrimination. By combining fingerprints with face recognition to form a multimodal biometric system, Rahal et al. [17] showed that the geometry and shape of the eyebrows (such as the length, area, and angle of each eyebrow, and the distance between the two eyebrows) are useful features for face recognition. Instead of using the whole eyebrow region, Li et al. [10], [11] selected and cropped the pure eyebrows. They extracted textural features from the eyebrow and constructed a recognition system which was tested on small databases of 32 subjects in [10] and 27 subjects in [11], using k-means and HMM classifiers, respectively, in the Fourier space. Making use of the eyebrow shape concept, Paleari et al. [15] used the internal and external positions of both the right and left eyebrows, together with other features such as the mouth, nose, eyes and forehead, in order to identify subjects. Recently, Dong et al. [4] investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. In their work, various shape-based features are extracted from eyebrow images and compared by three different classifiers: a Minimum Distance classifier (MD), a Linear Discriminant Analysis classifier (LDA) and a Support Vector Machine classifier (SVM).

In order to perform eyebrow shape analysis, the eyebrows must first be accurately segmented from a given face image. In [8], Kapoor and Picard used the pupil positions, tracked by an infrared-sensitive camera equipped with infrared LEDs, to extract the images of the eyes and eyebrows. In their system, template parameters are recovered by Principal Component Analysis (PCA) on these extracted images, using a PCA basis constructed during the training phase from example images. In [2], Chen et al. first segmented rough eyebrow regions using a spatially constrained sub-area K-means clustering technique and then extracted the eyebrows by the Snake method. Their work relies on assumptions about the eye corners, the upper eye boundary and the position points of the eyebrows. Recently, Ding et al. [9], [3] used subclass divisions in order to represent distinct constructions of the same facial component and its context. To divide the training samples into subclasses, their first algorithm is based on a discriminant analysis formulation, whereas the second one is an extension of the AdaBoost approach. However, their approaches require a highly accurate detection of facial components such as the nose and eye mid-points, which may not be possible under non-ideal situations.

These previous works have demonstrated the significance of the eyebrow region and eyebrow shape, particularly the latter, in subject identification. However, an analysis of these techniques reveals that they share the following common disadvantages:

• All eyebrow shape analysis techniques are required to first isolate the eyebrow region on the face image and then segment the eyebrow shape. However, shape segmentation is a challenging problem in the above techniques: they all either perform this segmentation manually or provide a manual initialization to an automatic segmentation method [10], [11], [4].

• The experimental results in these works have been obtained on very small datasets, making it difficult to evaluate their efficacy in real-world scenarios; the largest dataset among them contains 493 images of 100 subjects [4].

• The features extracted by most of these techniques are simple heuristic shape or texture features [17], [4]. While compact, these features are un-optimized and often discard important discriminative information.

• In some of these works, many assumptions are made about reference points on the eyebrows (which are ill-defined) and about other components of the face (which may not be visible) [4], [6].

3. Eyebrow Segmentation

Segmenting the eyebrow from a given image is a significant task in eyebrow shape analysis. Before segmenting the eyebrow in a given face image, the eyebrow region must be detected. In order to automatically localize landmarks on the eyebrow region, we propose a Local Eyebrow Active Shape Model (LE-ASM) using 2D profiles and 2D search regions. LE-ASM is based on a Modified Active Shape Model (MASM) framework [7] where, at the test stage, a 2D profile around a potential landmark is projected onto a subspace to obtain a vector of projection coefficients and is then reconstructed. The reconstruction error between this reconstructed profile and the original profile is calculated, and the candidate location with the lowest reconstruction error is used as the new location for the landmark. Figure 2 demonstrates the dense landmarking scheme designed for this work, which uses 64 predefined points on the eyebrow region. It should be noted that LE-ASM can locate facial landmarks in a fairly accurate fashion even on images that exhibit varying illumination and slight pose variation, and that LE-ASM is also tolerant to scale and translation errors during face detection. Our LE-ASM system is trained on 500 images from the MBGC database [16]. Figure 3 shows some example results of eyebrow regions that are accurately fitted by our LE-ASM.

Figure 2. LE-ASM with 64 landmark points on the eyebrow region.

Figure 3. LE-ASM fitting results on eyebrow regions of images from the MBGC database.
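The landmark search step above selects, among the candidate locations, the one whose 2D profile is best explained by the learned subspace. The following is a minimal sketch of that selection, assuming a per-landmark PCA model (mean profile and basis) trained beforehand; the function and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

def best_candidate(candidate_profiles, mean_profile, basis):
    """Pick the candidate location whose flattened 2D profile has the lowest
    PCA reconstruction error, as in the LE-ASM search step described above.

    candidate_profiles : (num_candidates, d) array of flattened profiles
    mean_profile       : (d,) mean training profile for this landmark
    basis              : (d, k) top-k PCA eigenvectors for this landmark
    """
    centered = candidate_profiles - mean_profile       # remove the mean
    coefficients = centered @ basis                     # project onto the subspace
    reconstructed = coefficients @ basis.T              # reconstruct each profile
    errors = np.linalg.norm(centered - reconstructed, axis=1)
    return int(np.argmin(errors))                       # index of the best candidate
```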

In our proposed segmentation approach, given an image I, each pixel is represented by four components: a label l, a strength s, and a location x and y. At the initial stage, each pixel I(x, y) at position (x, y) can be labeled as either -1, 1, or 0, corresponding to the background cluster, the foreground cluster, or the unclassified cluster, respectively. The strength of a pixel is defined as its intensity. The purpose of segmentation is to label all the pixels in the unclassified cluster as either -1 or 1. The label of each pixel P(l, s, x, y) at location (x, y) of the image I is initialized as follows:

$$P_l = \begin{cases} 1 & \text{if } I(x,y) \in \text{foreground} \\ -1 & \text{if } I(x,y) \in \text{background} \\ 0 & \text{if } I(x,y) \in \text{unclassified} \end{cases}, \qquad P_s = I(x,y) \qquad (1)$$

For each pixel $P_0(l,s,x,y)$ in the unclassified cluster, we first search for two neighbors $P_1(l,s,x,y)$ and $P_2(l,s,x,y)$ which satisfy the following conditions:

• $P_1(l,s,x,y)$ belongs to the foreground cluster and the edge from $P_0$ to $P_1$ is the shortest path from $P_0$ to the foreground cluster.

• $P_2(l,s,x,y)$ belongs to the background cluster and the edge from $P_0$ to $P_2$ is the shortest path from $P_0$ to the background cluster.

The edge weight is defined as the pixel intensity difference between pixels P and Q and is given in (2):

$$E(P,Q) = \left| P_s - Q_s \right| \qquad (2)$$

The distance is defined as the position difference between pixels P and Q and is given in (3):

$$D(P,Q) = \sqrt{(P_x - Q_x)^2 + (P_y - Q_y)^2} \qquad (3)$$
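As a concrete reading of Eqs. (1)-(3), the snippet below initializes the label map from the foreground and background seed masks and defines the edge weight and distance between two pixels. It is a sketch under the assumption that pixel strength is simply grayscale intensity; the names are illustrative.

```python
import numpy as np

FOREGROUND, BACKGROUND, UNCLASSIFIED = 1, -1, 0

def initial_labels(fg_seed, bg_seed):
    """Eq. (1): +1 for foreground seeds, -1 for background seeds, 0 otherwise."""
    labels = np.full(fg_seed.shape, UNCLASSIFIED, dtype=np.int8)
    labels[fg_seed] = FOREGROUND
    labels[bg_seed] = BACKGROUND
    return labels

def edge_weight(image, p, q):
    """Eq. (2): intensity (strength) difference between pixels P and Q."""
    return abs(float(image[p]) - float(image[q]))

def distance(p, q):
    """Eq. (3): positional (Euclidean) distance between pixels P and Q."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))
```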

In addition to the edge weight and the distance, we also consider the smoothness of each cluster. The smoothness of the foreground ($S_1$) and of the background ($S_2$) with respect to the pixel under consideration are determined as in (4) and (5), respectively:

$$S_1 = \int_{\text{foreground}} \left(P_s - I(x,y)\right)^2 \, dx \, dy \qquad (4)$$

$$S_2 = \int_{\text{background}} \left(P_s - I(x,y)\right)^2 \, dx \, dy \qquad (5)$$

In our proposed segmentation algorithm, we also define a decreasing function $g(P,Q)$ between pixels P and Q. It is bounded in the range (0, 1] and is given in (6):

$$g(P,Q) = \frac{1}{1 + D(P,Q)} \qquad (6)$$

The attraction between pixels P and Q is defined as in (7):

$$A(P,Q) = E(P,Q)\,g(P,Q) = \frac{E(P,Q)}{1 + D(P,Q)} \qquad (7)$$

The label of the pixel $P_0$ is assigned as either foreground ("1") or background ("-1") based on the following rule (8):

$$P_0^l = \begin{cases} +1 & \text{if } A(P_0,P_1) - S_1 \ge A(P_0,P_2) - S_2 \\ -1 & \text{if } A(P_0,P_2) - S_2 > A(P_0,P_1) - S_1 \end{cases} \qquad (8)$$

The proposed graph-based segmentation algorithm is applied to partition an eyebrow region into either facial hair or non-hair. From the viewpoint of binary segmentation, the eyebrow (facial hair) is considered foreground and the skin (non-hair) is considered background. At the initial stage, we first obtain the seeds for the foreground and background from our proposed LE-ASM with 64 landmarks, denoted as $p^1, p^2, \ldots, p^{64}$. Taking the right eyebrow region as an example, the upper limit of the foreground is defined mid-way between landmarks $p^{27:31}$ and landmarks $p^{32:36}$, whereas the lower limit of the foreground is defined mid-way between landmarks $p^{27:31}$ and landmarks $p^{21:25}$. The seeds for the background are defined by two contours: an inner contour and an outer contour. The upper limit of the inner contour is given by landmarks $p^{37:42}$, and the lower limit of the inner contour is defined mid-way between landmarks $p^{1:6}$ and landmarks $p^{21:26}$. The outer contour is based on the interocular distance between the two corners of the eye: its upper limit is outlined by expanding landmarks $p^{37:42}$ upward, its lower limit is defined by landmarks $p^{1:6}$, and its right and left sides are outlined by horizontally expanding $p^1$ and $p^6$. Figure 4 gives a visual illustration of the initialization seeds for the foreground and background of the right eyebrow region, in which the foreground is marked with red diagonal lines and the background is marked with black diagonal lines. Our final goal is to assign the pixels in the unclassified region; we accomplish this with the rule in (8).

Figure 4. A visual illustration of how the seeds for the foreground and background of the right eyebrow region are initialized using the proposed LE-ASM. The region with red diagonal lines is foreground, the region with black dashed diagonal lines is background, and the gray region is unclassified.

To evaluate our segmentation algorithm, we use 200 images of 50 subjects from the MBGC database [16] and treat the segmentation problem as a binary classification problem, using the F-Measure metric to evaluate performance. The correctness of a segmentation algorithm is evaluated by (a) computing the number of correctly recognized class samples (true positives, TP), (b) calculating the number of correctly recognized samples that do not belong to the class (true negatives, TN), and (c) finding the samples that are either incorrectly assigned to the class (false positives, FP) or not recognized as class samples (false negatives, FN). The F-Measure evaluates how well an algorithm retrieves the desired pixels and is defined in (9):

$$\text{F-measure} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \qquad (9)$$

where the precision is calculated as $\text{precision} = \frac{TP}{TP + FP}$ and the recall, the proportion of actual positives that are predicted to be positive, is determined by $\text{recall} = \frac{TP}{TP + FN}$. The F-measure of our proposed eyebrow segmentation reaches 99.4% on our test dataset. Some examples of eyebrow segmentation results are shown in Figures 5 and 6, under different illuminations and facial expressions.

Figure 5. Some examples of eyebrow segmentation on face images under various illumination conditions from the MBGC database [16].

Figure 6. Some examples of eyebrow segmentation on face images with different facial expressions and various illuminations from the AR database [13].
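Putting Eqs. (4)-(8) together, the label decision for one unclassified pixel can be sketched as below. It reuses the edge_weight and distance helpers from the earlier snippet and assumes that the nearest foreground neighbor p1 and nearest background neighbor p2 have already been found by the shortest-path search described above; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def decide_label(image, labels, p0, p1, p2):
    """Eq. (8): assign pixel p0 to foreground (+1) or background (-1)."""

    def attraction(p, q):
        # A(P,Q) = E(P,Q) * g(P,Q) = E(P,Q) / (1 + D(P,Q))   (Eqs. (6)-(7))
        return edge_weight(image, p, q) / (1.0 + distance(p, q))

    # Discrete versions of the cluster smoothness terms (Eqs. (4)-(5)),
    # measured against the strength of the pixel being labelled.
    strength = float(image[p0])
    s1 = np.sum((strength - image[labels == FOREGROUND].astype(float)) ** 2)
    s2 = np.sum((strength - image[labels == BACKGROUND].astype(float)) ** 2)

    if attraction(p0, p1) - s1 >= attraction(p0, p2) - s2:
        return FOREGROUND
    return BACKGROUND
```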
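For completeness, the F-Measure of Eq. (9) can be computed directly from the TP/FP/FN counts of a predicted binary eyebrow mask against a ground-truth mask; a short illustrative sketch follows.

```python
import numpy as np

def f_measure(predicted, ground_truth):
    """Eq. (9): F-measure of a binary segmentation against its ground truth."""
    predicted = predicted.astype(bool)
    ground_truth = ground_truth.astype(bool)
    tp = np.sum(predicted & ground_truth)       # correctly labelled eyebrow pixels
    fp = np.sum(predicted & ~ground_truth)      # skin pixels labelled as eyebrow
    fn = np.sum(~predicted & ground_truth)      # eyebrow pixels labelled as skin
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```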

4. Eyebrow Shape-based Identification

In this section, we present an analysis of the shape structure of the eyebrow towards the identification problem. The contribution of eyebrow shape is considered under both inter-subject structural dissimilarities and intra-subject asymmetry dissimilarities.

4.1. Inter-subject structural dissimilarities

Shape is commonly defined in terms of the set of contours that describe the boundary of an object, and is considered a key feature for many computer vision applications. In the shape-based recognition approach, the measurement of similarity is preceded by assigning a descriptor (or shape context) to each point, which captures the distribution of all points relative to, and excluding, that point. Corresponding points on two similar shapes will be described by similar shape contexts. In this paper, we make use of the shape matching method proposed by Ling and Jacobs [12].

Let us consider the dissimilarity between two eyebrow shapes P and Q. We sample n points $p_1, p_2, \ldots, p_n$ on the eyebrow shape P, and m points $q_1, q_2, \ldots, q_m$ on the eyebrow shape Q. The cost of matching a point $p_i$ on the eyebrow shape P to a point $q_j$ on the eyebrow shape Q is defined by the chi-square distance as follows:

$$C(p_i, q_j) = \frac{1}{2} \sum_{k=1}^{K} \frac{\left[h_{P,i}(k) - h_{Q,j}(k)\right]^2}{h_{P,i}(k) + h_{Q,j}(k)} \qquad (10)$$

where $h_{P,i}$ and $h_{Q,j}$ are the shape context histograms of $p_i$ and $q_j$ respectively, and K is the number of histogram bins. The shape context histograms of $p_i$ and $q_j$ are defined as:

$$h_{P,i}(k) = \#\{p_h : h \neq i \text{ and } (p_h - p_i) \in \text{bin}(k)\}, \qquad h_{Q,j}(k) = \#\{q_h : h \neq j \text{ and } (q_h - q_j) \in \text{bin}(k)\} \qquad (11)$$

The objective of matching from P to Q is to minimize the total matching cost $\sum_i C(p_i, q_{\pi(i)})$, in which $\pi(i)$ is a permutation.
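The two quantities above translate directly into code: a per-point histogram of relative offsets (Eq. (11)) and the chi-square matching cost between two such histograms (Eq. (10)). The sketch below bins raw (dx, dy) offsets for simplicity; the actual bin layout, and the inner-distance extension of Ling and Jacobs [12] used in the paper, is left abstract, so treat the binning choice here as an assumption.

```python
import numpy as np

def shape_context(points, i, x_edges, y_edges):
    """Eq. (11): histogram of the offsets (p_h - p_i), over all h != i."""
    offsets = np.delete(points, i, axis=0) - points[i]
    hist, _, _ = np.histogram2d(offsets[:, 0], offsets[:, 1],
                                bins=[x_edges, y_edges])
    return hist.ravel()

def matching_cost(h_pi, h_qj):
    """Eq. (10): chi-square cost between two K-bin shape-context histograms."""
    num = (h_pi - h_qj) ** 2
    den = h_pi + h_qj
    nonzero = den > 0                    # skip empty bins to avoid 0/0
    return 0.5 * np.sum(num[nonzero] / den[nonzero])
```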

4.1.1 Intra-subject asymmetry dissimilarities

Intrinsic and extrinsic facial asymmetry is common in humans and has been used in many biometric applications. Intrinsic facial asymmetry is caused by changes that occur to the structure of the face as a result of multiple factors, including growth, injury and age-related change. Extrinsic facial asymmetry is caused by external factors such as viewing orientation, illumination variation, etc. In our work, we focus only on frontal faces, and the main emphasis of our approach is on harnessing intrinsic eyebrow shape asymmetry to identify the subject by making use of Procrustes analysis [5].

We apply Procrustes analysis to two images: the left eyebrow $X_L \in \mathbb{R}^{m \times n}$ and the mirrored right eyebrow $X_R \in \mathbb{R}^{m \times n}$ (obtained by reflecting the right eyebrow). This gives us the translation, reflection, orthogonal rotation and scaling [c, T and b], in which c is a translation component, T is an orthogonal rotation and reflection component, and b is a scaling component. The algorithm can be summarized by the following steps:

• Step 1: Center the images $X_L$, $X_R$ at the origin by subtracting their means.

• Step 2: Apply Singular Value Decomposition (SVD) to the matrix $A = X_L^T \times X_R$: $[U, D, V] = \text{SVD}(A)$.

• Step 3: The orthogonal rotation and reflection component is computed as $T = V \times U^T$.

• Step 4: The scaling component is calculated as $b = \frac{\sum \text{diag}(D)}{\text{trace}(X_R \times X_R^T)}$.

• Step 5: The translation component is $c = \bar{X}_L - (b \times \bar{X}_R \times T)$,

where $\bar{X}_L$ and $\bar{X}_R$ are the mean vectors of $X_L$ and $X_R$, respectively. Using the Procrustes algorithm, the eyebrow shape asymmetry of each face image is represented by a vector defined in (12):

$$f = X_L - (b \times X_R \times T + c \times \mathbf{1}) \qquad (12)$$

where $\mathbf{1}$ is a row vector of ones and $(c \times \mathbf{1})$ is a matrix with constant values in each column. The Procrustes vectors of images from the probe and gallery sets are extracted and compared by the cosine distance.
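The following sketch follows Steps 1-5 and Eq. (12) to produce the asymmetry vector, and compares two such vectors by cosine distance. The scale in Step 4 is implemented as the standard least-squares Procrustes scale; the function names are illustrative and this is not the authors' code.

```python
import numpy as np

def procrustes_asymmetry(XL, XR_mirrored):
    """Asymmetry descriptor between the left eyebrow XL and the mirrored
    right eyebrow XR_mirrored (both m x n arrays of corresponding samples)."""
    muL, muR = XL.mean(axis=0), XR_mirrored.mean(axis=0)
    AL, AR = XL - muL, XR_mirrored - muR             # Step 1: centre both shapes
    U, D, Vt = np.linalg.svd(AL.T @ AR)              # Step 2: SVD of A = XL^T XR
    T = Vt.T @ U.T                                   # Step 3: rotation/reflection
    b = D.sum() / np.trace(AR @ AR.T)                # Step 4: least-squares scale
    c = muL - b * muR @ T                            # Step 5: translation
    residual = XL - (b * XR_mirrored @ T + c)        # Eq. (12): asymmetry vector
    return residual.ravel()

def cosine_distance(f1, f2):
    """Probe/gallery Procrustes vectors are compared by cosine distance."""
    return 1.0 - np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))
```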

5. Datasets and Experimental Results

5.1. Datasets

The datasets used in our experiments are the NIST Multiple Biometric Grand Challenge (MBGC) database [16] and the AR Face Database [13]. To ensure a fair experimental setup, we divided this data into two groups containing eight subsets in all. The first group contains six subsets of relatively small size: the first four subsets are from the AR database and the last two subsets are from the MBGC database.

Table 1. The first group of six subsets, of small size, from the AR and MBGC databases.

Dataset   Gallery Appearance (100 images)   Probe Appearance (100 images)    Total # images
AR1       Neutral                           Neutral                          200 images
AR2       Neutral                           Smiling                          200 images
AR3       Neutral                           Scream + Anger + Closed Eyes     200 images
AR4       Neutral                           Occlusion                        200 images
MBGC1     Neutral                           Neutral                          200 images
MBGC2     Neutral                           Smiling                          200 images

Table 2. The second group of two subsets, of large size, from the AR and MBGC databases.

Dataset   Gallery Appearance                  # images       Probe Appearance                    # images       Total # images
AR5       Neutral + Smiling + Scream +        1,000 images   Neutral + Smiling + Scream +        1,000 images   2,000 images
          Anger + Occlusion + Closed Eyes                    Anger + Occlusion + Closed Eyes
MBGC3     Neutral + Smiling                   2,000 images   Neutral + Smiling                   2,000 images   4,000 images

Figure 7. Rank one identification rates from the first two experiments, conducted on the different data subsets: (a) inter-eyebrow shape structural dissimilarity (left eye, right eye, and left+right); (b) intra-eyebrow shape asymmetry.

Figure 8. CMC curves of the identification rate in the third experiment, using the combination of inter-eyebrow shape structural dissimilarity and intra-eyebrow shape asymmetry, conducted on the first group of datasets (Table 1): (a) AR1, (b) AR2, (c) AR3, (d) AR4, (e) MBGC1, (f) MBGC2; and on the second group of datasets (Table 2): (g) AR5, (h) MBGC3. Each plot compares Feature 1 (eyebrow shape matching), Feature 2 (eyebrow shape asymmetry) and Feature 3 (the combination of both).

Table 3. Identification rate at rank one on the eight subsets for the three experiments: using the eyebrow shape matching feature only, using the eyebrow shape asymmetry feature only, and using the combination of the two features, respectively.

                                                AR1     AR2     AR3     AR4     AR5     MBGC1   MBGC2   MBGC3
Inter-eyebrow shape structural dissimilarity    52.0%   73.0%   25.0%   56.6%   52.2%   76.0%   72.0%   56.7%
Intra-eyebrow shape asymmetry dissimilarity     43.0%   62.0%   13.0%   43.4%   40.3%   71.0%   59.0%   63.2%
The combination of the two features             58.0%   76.0%   27.0%   61.6%   54.4%   85.0%   78.0%   71.3%

Each subset in the first group consists of 200 images from 100 subjects and is divided into gallery and probe groups. The neutral face images are in the gallery group, whereas the probe group contains different appearances such as neutral, smiling, angry, screaming, closed eyes, open mouthed, and occluded faces. The datasets of the first group are described in Table 1. The second group contains large datasets, as given in Table 2, in which the first subset is from the AR database with 2,000 images of 100 subjects, whereas the second subset is from the MBGC database with 4,000 images of 200 subjects. The gallery and probe sets in each subset of the second group include various appearances as noted. The images used in these experiments from the AR database contain several expressions (smiling, angry, screaming, occluded, illuminated, etc.), whereas the images from the MBGC database contain two facial expressions: neutral and smiling.

5.2. Experimental Results

Our eyebrow shape-based recognition performance is depicted using Cumulative Match Characteristic (CMC) curves [14] and is verified by three experiments. The first experiment is built upon the first approach, eyebrow shape matching using inter-subject structural dissimilarities. The second experiment is built upon the second approach, intra-subject asymmetry dissimilarities. The third experiment is conducted using the combination of these two approaches. Notably, the CMC is based on the ranking of each gallery image with respect to the probe, and thus reflects the expectation of the correct match being found at rank R.

5.2.1 Experiment 1

In this experiment, each subset is examined using three separate cases: the left eyebrow, the right eyebrow, and both the left and right eyebrows. The rank one identification rates of these three cases for all the datasets are shown in Figure 7(a). Interestingly, we find that the significance of the left eyebrow and the right eyebrow are not equal: in many individual cases the left eyebrow plays a more important role than the right one, and vice versa. The results clearly indicate that our proposed eyebrow shape-based identification approach is robust to certain appearances such as smiling, occlusions, eye conditions (opened or closed), and mouth conditions (opened, smiling, closed). However, it is quite sensitive to certain appearances such as anger and screaming. Impressively, the identification rate is boosted by combining the matching scores from both the left and right eyebrows. Furthermore, the rank one identification scores when using eyebrow shape matching of both the left and right eyebrows are given in the first row of Table 3, in which the best inter-subject structural dissimilarity result reaches 76.0% on a small dataset and 56.7% on a large dataset.

5.2.2 Experiment 2

In this experiment, the intra-eyebrow shape asymmetry is used for subject identification. The rank one identification rates for the various data subsets are shown in Figure 7(b). As shown in this figure, the eyebrow asymmetry-based identification is robust to smiling and occlusion appearances. Similar to the first experiment, the eyebrow shape asymmetry-based identification is quite sensitive to anger and screaming facial expressions. As shown in Figure 7(b), the eyebrow shape asymmetry-based identification rate is quite promising, even though it is a little lower than that of the inter-eyebrow shape structural dissimilarity. The second row of Table 3 shows the rank one identification rates when using the eyebrow shape asymmetry between the left eyebrow and the mirrored image of the right eyebrow.

5.2.3 Experiment 3

In this experiment, the performance of our proposed eyebrow shape-based identification approach is boosted by a combination of the two features, the inter-eyebrow shape structural dissimilarity and the intra-eyebrow shape asymmetry. As shown in Figure 8(a-f) and Figure 8(g)(h), corresponding to the two groups of datasets, the identification accuracy when using the first feature is generally higher than when using the second feature. However, the combination of the two features is able to significantly boost the identification scores.
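To make the evaluation protocol concrete, a CMC curve can be computed from a probe-by-gallery dissimilarity matrix as sketched below; the rank-one identification rate is simply the first point of the curve. This assumes closed-set identification (every probe identity appears in the gallery) and is illustrative, not the authors' evaluation code.

```python
import numpy as np

def cmc_curve(dissimilarity, probe_ids, gallery_ids, max_rank=10):
    """Cumulative Match Characteristic: fraction of probes whose correct
    identity appears within the top-R ranked gallery matches.

    dissimilarity : (num_probes, num_gallery) matrix of matching costs
    """
    order = np.argsort(dissimilarity, axis=1)             # best match first
    ranked_ids = np.asarray(gallery_ids)[order]
    hits = ranked_ids == np.asarray(probe_ids)[:, None]
    first_hit_rank = hits.argmax(axis=1) + 1               # 1-based rank of first hit
    return np.array([(first_hit_rank <= r).mean()
                     for r in range(1, max_rank + 1)])

# Example: rank_one = cmc_curve(dissimilarity, probe_ids, gallery_ids)[0]
```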
The rank one identification rates of our proposed system for the three experiments are shown in Table 3, in which the empirical results of this third experiment appear in the third row. The best identification rate we achieved for the small subset of the AR database is 76.0%, while that for the small subset of the MBGC database is 85.0%, using the combination of both features. On the very large subset of the MBGC database, which contains both neutral and smiling faces in the gallery and probe sets, the recognition accuracy reaches up to 71.3%. After excluding images with dark (black) frame sunglasses and with long forehead hair occlusions, we conducted the experiment on most images in the AR database, and the recognition accuracy achieved is up to 54.4%.

Conclusion

In this paper, we have proposed a robust and fast approach to eyebrow segmentation which is able to precisely extract the eyebrow shape from a given face image. We furthermore developed a fully automatic biometric identification system which is able to recognize subjects with high accuracy using only the eyebrow shape. The proposed eyebrow segmentation algorithm has been tested on 200 images of 50 subjects from the MBGC database, and the system achieved an F-measure of 99.4%. The proposed eyebrow shape-based identification system has been tested on two different databases (the AR and MBGC databases) with subsets of various sizes (200, 2,000 and 4,000 images), dissimilar subjects (100 and 200 subjects), facial expressions (neutral, smiling, anger, scream, closed eyes), occlusions and illumination variations. We have experimentally verified the contributions of each eyebrow towards recognition, and have demonstrated that the recognition accuracy is boosted by combining the matching scores from both the left and the right eyebrows. We introduced a novel eyebrow shape asymmetry feature, and our empirical results have shown that this feature is useful in subject recognition. The proposed fully automatic eyebrow shape-based recognition system is improved by merging these two features together. On the AR database, we achieved an identification rate of up to 76.0% on a small size subset and 54.4% on a large size subset. On the MBGC database, we achieved an identification rate of up to 85.0% on a small size subset and 71.3% on a large size subset.

References

[1] V. Bruce, A. M. Burton, E. Hanna, P. Healey, O. Mason, A. Coombes, R. Fright, and A. Linney. Sex Discrimination: How do We Tell the Difference Between Male and Female Faces? Perception, 22:131-152, 1993.

[2] Q. Chen, W. Cham, and K. Lee. Extracting Eyebrow Contour and Chin Contour for Face Recognition. Pattern Recognition, 40:2292-2300, 2007.

[3] L. Ding and A. M. Martinez. Features Versus Context: An Approach for Precise and Detailed Detection and Delineation of Faces and Facial Features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11):2022-2038, 2010.

[4] Y. Dong and D. L. Woodard. Eyebrow Shape-based Features for Biometric Recognition and Gender Classification: A Feasibility Study. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), pages 1-8, Aug. 2011.

[5] J. C. Gower. Generalized Procrustes Analysis. Psychometrika, 40(1):33-51, Mar. 1975.

[6] L. W. J. Song and W. Wang. Eyebrow Segmentation Based on Binary Edge Image. In Lecture Notes in Computer Science, volume 7389, pages 350-356, 2012.

[7] K. Seshadri and M. Savvides. An Analysis of the Sensitivity of Active Shape Models to Initialization when Applied to Automatic Facial Landmarking. IEEE Transactions on Information Forensics and Security (TIFS), 7(4):1255-1269, Aug. 2012.

[8] A. Kapoor and R. W. Picard. Real-time, Fully Automatic Upper Facial Feature Tracking. In Proceedings of the 5th International Conference on Automatic Face and Gesture Recognition, pages 8-13, 2002.

[9] A. Kapoor and R. W. Picard. Precise Detailed Detection of Faces and Facial Features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 23-28, 2008.

[10] Y. Li and C. Fu. Eyebrow Recognition: A New Biometric Technique. In Proceedings of the 9th IASTED International Conference on Signal and Image Processing, pages 506-510, 2007.

[11] Y. Li and X. Li. HMM Based Eyebrow Recognition. In Proceedings of the 3rd International Conference on Intelligent Information Hiding and Multimedia Signal Processing, volume 1, pages 135-138, 2007.

[12] H. Ling and D. W. Jacobs. Shape Classification Using the Inner-Distance. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(2):286-299, 2007.

[13] A. M. Martinez and R. Benavente. The AR Face Database. CVC Technical Report, June 1998.

[14] H. Moon and P. J. Phillips. Computational and Performance Aspects of PCA-based Face-recognition Algorithms. Perception, 30(3):303-321, 2001.

[15] M. Paleari, C. Velardo, B. Huet, and J. Dugelay. Face Dynamics for Biometric People Recognition. In IEEE International Workshop on Multimedia Signal Processing, pages 1-5, 2009.

[16] P. J. Phillips, P. J. Flynn, J. R. Beveridge, W. T. Scruggs, A. J. O'Toole, D. Bolme, K. W. Bowyer, B. A. Draper, G. H. Givens, Y. M. Lui, H. Sahibzada, J. A. Scallan III, and S. Weimer. Overview of the Multiple Biometrics Grand Challenge. In Proceedings of the 3rd IAPR/IEEE International Conference on Biometrics, pages 705-714, June 2009.

[17] S. Rahal, H. Aboalsamah, and K. Muteb. Multimodal Biometric Authentication System - MBAS. In Proceedings of the Information and Communication Technologies, volume 1, pages 1026-1030, 2006.

[18] J. Sadr, I. Jarudi, and P. Sinha. The Role of Eyebrows in Face Recognition. Perception, 32(3):285-293, 2003.