A Variability Modeling Method for Facial Authentication
Obaidul Malek, Center for Biometrics and Biomedical Research, VA
Rabita Alamgir, Center for Biometrics and Biomedical Research, VA
Mohammad Matin, University of Denver, CO
Laila Alamgir, Howard University, DC

Abstract — Most biometric authentication methods have been developed under the assumption that the extracted features that participate in the authentication process are fixed. But the quality and accessibility of biometric features face challenges due to position orientation, illumination, and facial expression effects. This paper addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the variable features' domain. In this method, the extracted features are considered to be variable and are selected based on their quality and accessibility. Furthermore, Euclidean geometry in 2-D computational vector space is constructed for feature extraction. Afterwards, algebraic shapes of the features are computed and compared. The proposed method is tested on images from two public databases: the "Put Face Database" and the "Indian Face Database". Performance is evaluated based on the Correct Recognition Rate (CRR) and Equal Error Rate (EER). The theoretical foundation of the proposed method is presented along with the experimental results, which demonstrate the effectiveness of the method.

Index Terms — CRR, EER, Euclidean geometry, and facial biometric.

I. Introduction

The rapid evolution of information technology has caused the traditional token-based authentication and security management system to no longer be sophisticated enough to handle the challenges of the 21st century. As a result, biometrics has emerged as the most reasonable, efficient, and ultimate solution for authenticating the legitimacy of an individual [1-3].
Biometrics is an automated method of authenticating an individual based on their measurable physiological and behavioural characteristics. The common biometric traits in this characterization process are fingerprint, face, iris, hand geometry, gait, voice, signature, and keystrokes [1],[2]. Fingerprint, face, and iris traits are widely used in the field of biometric technology. Government and law enforcement organizations, including the military, civil aviation, and secret services, often need to track and authenticate dynamic targets under surveillance. Organizations are also required to ensure that an individual in a room or crowd is the same person who had entered it. As a result, facial biometrics is regarded as a conclusive solution in this area. This technology facilitates the extraction of unique and undeniable physiological and behavioural characteristics without the target's (the subject's) intrusion or knowledge [1-4].

Many different methodologies have been studied for biometric authentication systems, including the shape of the facial features, skin color, and appearance. Among them, the feature-based method is the most efficient due to its measurability, universality, uniqueness, and accuracy. This approach is becoming the foundation of an extensive array of highly secure identification and personal verification solutions. The most commonly used facial features are the nose, eyes, lips, chin, eyebrows, and ears [5]. The system's performance and robustness are largely dependent on the feature localization and extraction process. This process can be defined as the selection of the relevant and useful information that uniquely identifies a subject of interest. The overall processing of the system must also be computationally efficient. However, the human face is a dynamic object with a high degree of variability in its position orientation and expression.
Noncooperative behaviour of the user and environmental factors, including illumination effects, also play an unfavourable role in the facial feature extraction process. These effects contaminate the extracted features. Consequently, accessibility to the same biometric features with the expected quality is obstructed by these unavoidable challenges. Therefore, a vital issue in facial biometrics is the development of an efficient authentication algorithm that overcomes the aforementioned challenges [1-7].

This paper addresses the predominant deficiency of facial biometrics. It systematically investigates facial biometric systems under the assumption that facial geometry is influenced by position orientation, facial expression, and illumination effects, addressing the two challenging issues of facial biometrics: quality and accessibility. In the proposed method, a new facial authentication algorithm is developed to address these issues. Furthermore, feature selection, extraction, and authentication are processed in 2-D geometrical space. Each candidate facial feature is considered to be a collection of geometrical coordinates in the Euclidean domain. The Euclidean distance between the candidate feature coordinates is estimated and stored as a vector to create the biometric template. It is then compared to the stored template to authenticate the legitimacy of the subject of interest.

The motivation of this method is its ability to select biometric features based on their quality and accessibility, then extract them to create the biometric template. Importantly, the variabilities of feature selection and extraction are processed without sacrificing efficiency in terms of computing time and memory usage.

[(IJCSIS) International Journal of Computer Science and Information Security, Vol. 13, No. 6, June 2015, http://sites.google.com/site/ijcsis/, ISSN 1947-5500]
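The template construction and matching described above — pairwise Euclidean distances between candidate feature coordinates, compared against a stored vector — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the landmark coordinates and acceptance threshold are hypothetical.

```python
import numpy as np

def make_template(landmarks):
    """Turn 2-D facial landmark coordinates into a template vector.

    landmarks: (n, 2) points for the selected features (hypothetical,
    e.g. eye corners, nose tip, lip corners). The template is the
    vector of all pairwise Euclidean distances, which is invariant
    to translation of the whole face.
    """
    pts = np.asarray(landmarks, dtype=float)
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])

def verify(stored, live, threshold):
    """Accept when the live template is within `threshold` of the stored one."""
    return np.linalg.norm(np.asarray(stored) - np.asarray(live)) <= threshold

# Hypothetical enrollment and probe landmarks (pixel coordinates).
enrolled = [(120, 95), (180, 95), (150, 140), (150, 175)]
probe = [(121, 96), (179, 94), (151, 141), (149, 176)]

t_enrolled = make_template(enrolled)
t_probe = make_template(probe)
print(verify(t_enrolled, t_probe, threshold=5.0))  # → True
```

Comparing distance vectors rather than raw coordinates is one simple way to make the comparison insensitive to where the face sits in the frame; the threshold would in practice be tuned on validation data.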
For the experimental evaluation of the proposed method, facial images are used from two public databases: the "Put Face Database" and the "Indian Face Database". The performance of the proposed method is evaluated based on the Correct Recognition Rate (CRR), False Acceptance Rate (FAR), and False Rejection Rate (FRR). An Equal Error Rate (EER) of 3.49% and a CRR of 90.68% have been achieved by the proposed method. The experimental results demonstrate the superiority of the proposed method in comparison to its counterparts.

The remainder of the paper is organized as follows: Section II presents the literature review related to the proposed method; the theoretical background is presented in Section III; Section IV presents the detailed analysis and algorithmic formulation of the proposed variability method; the results and analysis are presented in Section V; and discussions and conclusions are included in Section VI.

II. Literature Review

The effects of position orientation, facial expression, and illumination on facial features are the vital issues of biometric authentication. Several studies have been conducted to address these issues. S. Du et al. [8] presented a review of facial authentication methods and their associated challenges based on pose variations. Their methodologies were based on invariant feature extraction in the multi-view and 3D range domain under different pose variations. However, the authors inadequately addressed the issue of variability due to the combined effects of facial orientation, expression, and illumination. One study conducted by the National Science and Technology Council [9] proposed a Linear Discriminant Analysis (LDA) method for facial authentication. The authors used LDA to maximize the inter-class and minimize the intra-class variations, since PCA performance deteriorates if a full frontal face can't be presented.
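The EER used as a performance measure above is the operating point at which the False Acceptance and False Rejection rates coincide. A minimal sketch of estimating it from matcher distance scores (the score values below are invented for illustration):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor comparisons accepted; FRR: fraction of
    genuine comparisons rejected. Scores are distances, so a comparison
    is accepted when its distance is <= threshold."""
    far = np.mean(np.asarray(impostor, dtype=float) <= threshold)
    frr = np.mean(np.asarray(genuine, dtype=float) > threshold)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds over the pooled scores and return the
    average of FAR and FRR where they are closest (a common EER estimate)."""
    candidates = np.sort(np.concatenate([genuine, impostor]))
    best = min(candidates,
               key=lambda t: abs(np.subtract(*far_frr(genuine, impostor, t))))
    far, frr = far_frr(genuine, impostor, best)
    return (far + frr) / 2.0

# Hypothetical distances: genuine pairs score low, impostor pairs high.
genuine = [0.8, 1.1, 1.3, 1.9, 2.4]
impostor = [2.1, 3.0, 3.5, 4.2, 4.8]
print(equal_error_rate(genuine, impostor))  # → 0.2
```

With real systems the two score distributions overlap far less, and finer threshold grids or interpolation are used; the sweep-and-average idea is the same.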
Unfortunately, this model was designed for linear and homogeneous systems and faces challenges with its underlying assumptions if there are an inadequate number of data samples in the received dataset. L. Chan et al. [10] proposed a linear facial biometric authentication system using PCA in conjunction with LDA. In that approach, the authors used PCA for dimension reduction, while LDA was used to improve the discriminant ability of the PCA system. The main challenge with this method is that it is inadequate to deal with the combined effects of position orientation, facial expression, and illumination. E. Vezzetti et al. [11] presented a geometric approach to show the intra-class similarity and extra-class variation between different faces. This was an interesting study; however, its main objective was to formalize some facial geometrical notations, which can be used to analyze the behaviour of faces and hence the authentication system. B. Hwang et al. [12] constructed a facial database with different position orientations, facial expressions, and illuminations. The authors used Principal Component Analysis (PCA), Correlation Matching (CM), and Local Feature Analysis (LFA) algorithms to evaluate the performance and limitations of facial authentication systems. However, they did not consider variability in their feature selection method. F. Sayeed et al. [13] presented facial authentication using the segmental Euclidean distance method. They used a variant of the AdaBoost algorithm for feature selection and trained the classifier to enhance the performance of the facial detection process. Afterwards, each face was segmented into nose, chin, eyes, mouth, and forehead as separate images; then the Eigenface, discrete cosine transform, and fuzzy features of each segmented image were estimated. Finally, segmental Euclidean distance and Support Vector Machine (SVM) classifiers were used in the authentication process.
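Several of the reviewed methods ([9], [10], [14]) rely on PCA to reduce the dimensionality of flattened face images before classification. The projection step can be sketched with a plain SVD; the data below are synthetic stand-ins for face images, and the subsequent LDA stage of [10] is omitted:

```python
import numpy as np

def pca_project(X, k):
    """Project samples onto the top-k principal components ("eigenfaces").

    X: (n_samples, n_features) matrix of flattened images.
    Returns the projected coordinates, the k principal axes, and the mean.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                       # center the data
    # Rows of Vt are the principal axes, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

rng = np.random.default_rng(0)
# Synthetic stand-in for 60 flattened 10x10 "face" images.
X = rng.normal(size=(60, 100))
Z, axes, mean = pca_project(X, k=10)
print(Z.shape)  # → (60, 10)
```

A classifier (LDA, SVM, or a simple distance rule) would then operate on the 10-dimensional projections rather than the raw 100-dimensional pixels.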
Variability due to different facial poses has been considered in this method; however, it is inadequate to address the issues associated with the combined effects of facial expression and illumination. J. Li et al. [14] proposed a facial authentication system using adaptive image Euclidean distance. In this adaptive method, both spatial and gray-level information were used to establish the relationship between pixels. Furthermore, two gray-level measures, namely distance and cosine dissimilarity, were considered between pixels. The authors claimed that their proposed method achieved a promising authentication accuracy using adaptive image Euclidean distance in conjunction with PCA and SVM. But the authors did not adequately discuss the challenges encountered due to position orientation, facial expression, and illumination effects that need to be overcome without sacrificing efficiency and processing time. J. Kalita et al. [15] proposed an eigenvector feature extraction method in conjunction with minimum Euclidean distance estimation to authenticate the facial image. This is a very interesting and straightforward approach, and the authors considered the challenges associated with facial expression. More importantly, this method is able to detect the resultant facial expression of the input image. Unfortunately, the combined effects of expression, orientation, and illumination were not sufficiently addressed in this method. C. Pornpanomchai et al. [16] proposed a human face authentication method using the Euclidean distance estimation process along with a neural network. In this method, a Correct Recognition Rate (CRR) of 96% at a cost of 3.304 sec (per image) processing time was achieved. However, this method also did not address possible contamination from facial expression, orientation, and illumination effects.
H. Lu et al. [17] presented a new PCA algorithm in an uncorrelated multilinear PCA domain using unsupervised subspace learning of tensorial data. This system offered a methodology to maximize the extraction of uncorrelated multilinear biometric characteristics. But it is an iterative process and is not sophisticated enough to deal with the combined effects of position orientation, facial expression, and illumination without compromising computational complexity. The challenges associated with accessing the same biometric features were also not addressed properly in that method. A Bayesian Estimator was developed by M. Nounou et al. [18], addressing the problems associated with the MLE and PCA algorithms. Unfortunately, this method was developed under the assumption that the system is not vulnerable to the combined effects of illumination, expression, and position orientation. J. Suo et al. [19] developed a gender transformation algorithm based on a hierarchical fusion strategy. In that approach the authors used a stochastic graphical model to transform the attributes of a high-resolution facial image into an image of the opposite gender with the same age and race. The main objective is to modify gender attributes while retaining facial identity. This is an interesting model; however, the authors did not consider the challenges of accessing the same biometric features, due to the associated heterogeneous nature. L. Lin et al. [20] proposed a hierarchical regenerative model using an "And-Or Graph" stochastic graph grammar methodology. In that model, a probabilistic bottom-up formulation was used for object detection, and a recursive top-down algorithm was used in the verification and searching process. Here, objects with larger intra-variance were broken into their constituent parts, and linking between the parts was modeled by the stochastic graph grammar technique. The authors also addressed the localization challenges due to the background clutter effect.
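The minimum-Euclidean-distance rule underlying methods such as [15] and [16] amounts to assigning a probe to the enrolled identity whose stored template is closest. A toy sketch, with invented 4-dimensional templates:

```python
import numpy as np

def nearest_identity(probe, gallery):
    """Return the enrolled identity with the template closest to the
    probe in Euclidean distance (minimum-distance rule), plus the distance."""
    best_id, best_d = None, np.inf
    for identity, template in gallery.items():
        d = np.linalg.norm(np.asarray(template, dtype=float)
                           - np.asarray(probe, dtype=float))
        if d < best_d:
            best_id, best_d = identity, d
    return best_id, best_d

# Hypothetical feature templates for three enrolled subjects.
gallery = {"s1": [1.0, 0.2, 0.4, 0.9],
           "s2": [0.1, 0.8, 0.7, 0.3],
           "s3": [0.5, 0.5, 0.1, 0.6]}
probe = [0.12, 0.79, 0.68, 0.31]
print(nearest_identity(probe, gallery)[0])  # → s2
```

In identification mode this rule picks the best match outright; in verification mode the winning distance would additionally be compared against a threshold.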
But the proposed verification process was developed in a homogeneous and controlled environment. In this method, the authors inadequately presented the challenges associated with the accession and extraction of the same features.

Therefore, in most cases, the biometric features used in the authentication process are fixed. Consideration of variability during the feature selection and extraction process is necessary, since accessibility of the same biometric features may be difficult due to facial expression, position orientation, and illumination effects. In this paper, a new biometric authentication method is presented that addresses these effects and their impacts on accessibility and quality. Variability is considered in this process to overcome the accessibility issue. The Sequential Subspace Estimation (SSE) method studied in [21] has been used to ensure the quality of the extracted features. Furthermore, Euclidean geometry in 2-D computational vector space is constructed for biometric feature extraction [22]. Afterwards, the algebraic shape of the facial area, as well as the relative positions and sizes of the eyes, nose, and lips, are estimated in order to encode and create the biometric templates. This encoded template is then stored in the biometrics database in order to be compared with the live input encoded biometrics in Euclidean vector space.

III. Theoretical Background

Unlike other facial authentication methods, the proposed method is developed in the Euclidean domain under the assumption that the quality and accessibility of the extracted biometrics face challenges due to position orientation, facial expression, and illumination effects. Therefore, this section presents a theoretical background before getting into a detailed analysis of the proposed method.

A. Euclidean Vector

The Euclidean vector measurement is a widely used method for representing points in geometrical space.
In this case, both a vector and a point (scalar quantity) in n-D space can be represented by a collection of n values. But the difference between a vector and a point lies in the way the geometrical coordinates are interpreted. A point might be considered as a scalar way of visualizing a vector. The transformation between a vector and a point in the 2-D geometrical coordinate system is shown in Fig. 1(a). A Euclidean vector can be represented by a line segment with a definite magnitude and direction. The algebraic manipulation process of the Euclidean vector in 2-D geometrical space is shown in Fig. 1(b). In fact, all points in the Cartesian coordinate system can be defined in Euclidean vector space, where a geometrical quantity is expressed as a tuple splitting the entire quantity into its orthogonal-axis components. These points are scalar quantities that can also be used to estimate the algebraic relationship among the objects (images).

Now, if n-tuple points in n-space are represented by R^n, then two vectors, u = (u_1, u_2, u_3, ..., u_n) and v = (v_1, v_2, v_3, ..., v_n), shown in Fig. 1(b), are equal if u_1 = v_1, u_2 = v_2, u_3 = v_3, ..., u_n = v_n. Their other properties can be presented as follows [23],[24]:

u + v = (u_1 + v_1, u_2 + v_2, u_3 + v_3, ..., u_n + v_n)
k(u + v) = ku + kv

where k is a scalar quantity. The distance between two points u and v:

v − u = (v_1 − u_1, v_2 − u_2, v_3 − u_3, ..., v_n − u_n)
d(u, v) = ||u − v|| = sqrt((u − v) · (u − v))
        = sqrt(Σ_{i=1}^{n} (v_i − u_i)^2)
        = sqrt((v_1 − u_1)^2 + (v_2 − u_2)^2 + (v_3 − u_3)^2 + ... + (v_n − u_n)^2)

The magnitude:

||u|| = sqrt(u · u) = sqrt(u_1^2 + u_2^2 + u_3^2 + ... + u_n^2)

The geometrical representation of u and v in R^n is shown in Fig. 1.
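The vector identities above can be checked numerically, for instance:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([6.0, 8.0])

# d(u, v) = sqrt(sum over i of (v_i - u_i)^2)
d = np.sqrt(np.sum((v - u) ** 2))
assert np.isclose(d, np.linalg.norm(v - u))   # same as the built-in norm

# ||u|| = sqrt(u . u): for (3, 4) this is the familiar 3-4-5 triangle
assert np.isclose(np.sqrt(u @ u), 5.0)

# Linearity: k(u + v) = k*u + k*v for scalar k
k = 2.5
assert np.allclose(k * (u + v), k * u + k * v)

print(d)  # → 5.0
```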
In the proposed method, using the same analogy, a Euclidean vector in 2-D geometrical space is constructed for the feature extraction, estimation, and authentication process. In particular, each assigned point of the candidate's biometric features is considered to be a 2-D geometrical coordinate in the Euclidean vector space [22]. This feature extraction, estimation, and authentication process is presented in Section IV-B.

B. Facial Anatomy

Facial authentication is an everyday task, as humans can identify faces without extra effort. Typically, the face has inherent characteristics with distinguishable landmarks, different peaks, and approximately 80 nodal points [25]. An automated system to authenticate an individual using facial geometry can be built by extracting facial biometric features, including the size or shape of the eyes, lips, nose, cheekbone, and jaw, as well as their relative distances (or positions) and orientation. Authentication typically uses an algorithm that compares input data with the biometrics stored in the database. The authentication process based on facial features is fast and accurate under favorable constraints, and as a result this technology is evolving rapidly. Unlike biometric authentication using other traits, authentication using facial biometrics can be done easily in public or in noncooperative environments. In this case, the subject's awareness is not required. A typical facial biometric pattern in 2-D geometrical space is shown in Fig. 2 [26],[27].

Face Databases

In this method, facial images from two public databases, the "Put Face Database" and the "Indian Face Database", are used [29],[30]. The sizes of the two databases are presented in Table I. The "Put Face Database" is a highly nonlinear and heterogeneous 3D facial database. It contains approximately 20 images per person for a total of 200 people, and stores 2048 × 1536 pixel images [30].
The main motivation for using the "Put Face Database" is that the diversity of the image subsets allows them to be easily used for training, testing, and cross-validation, because the images in this database cover more than 20 orientations per individual under various lightings, backgrounds, and facial expressions. In addition, the database contains 2193 landmarked images [31]. A sample of the facial images from the "Put Face Database" is shown in Fig. 3.

On the other hand, images in the "Indian Face Database" are less influenced by facial expression, position orientation, and illumination effects. There are 40 subjects, each having 11 images with the same homogeneous background. The size of each image is 640 × 480 with 256 gray levels per pixel. The main reason for using two types of databases is to find out the combined effects of two different environments. As well, it is important to show that the proposed method is the optimal solution not only for images highly influenced by the underlying challenges, but also for images that are less obstructed by them. A sample of the facial images from the "Indian Face Database" is given in Fig. 4.

TABLE I: The Details of the Two Databases

Database       Original Image Size (Pixels)   Modified
Put Face       2048 x 1536 (color)            256 x 256 (gray)
Indian Face    640 x 480 (gray)               256 x 256 (gray)

IV. Variability Modeling Method

Studies of many facial biometric authentication methods have been based on the geometrical feature extraction and selection process. As previously mentioned, most of those algorithms have been developed under the assumption that the extracted candidate features for the authentication process are fixed. However, there are challenges in accessing the same facial geometric features, caused by effects due to facial orientation in the time domain.
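The preprocessing summarized in Table I reduces every image to a 256 x 256 grayscale array. A rough sketch of that step, assuming standard luminance weights and nearest-neighbour resampling (the paper does not state which conversion or interpolation was actually used):

```python
import numpy as np

def to_gray_256(img):
    """Convert an (H, W, 3) RGB array to a 256x256 grayscale array,
    mirroring the Table I preprocessing. Uses ITU-R BT.601 luminance
    weights and nearest-neighbour index sampling as a simplification."""
    gray = (img[..., 0] * 0.299
            + img[..., 1] * 0.587
            + img[..., 2] * 0.114)
    h, w = gray.shape
    rows = np.arange(256) * h // 256   # nearest source row per output row
    cols = np.arange(256) * w // 256   # nearest source column per output column
    return gray[np.ix_(rows, cols)]

# Dummy frame with the "Put Face Database" dimensions (2048 x 1536 RGB).
frame = np.zeros((1536, 2048, 3), dtype=np.uint8)
out = to_gray_256(frame)
print(out.shape)  # → (256, 256)
```

A real pipeline would typically use an image library's resampling (e.g. bilinear or area averaging) rather than bare index sampling, but the shape and gray-level reduction are the same.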
In addition, even if the facial features are accessible, their quality is contaminated by expression and illumination, due to the dynamic properties of the human face and environmental factors, respectively. Some studies have also been conducted based on variabilities in the feature extraction and selection process, but those methods did not consider the combined effects of facial expression, orientation, and illumination. As well, in most cases, these variabilities were introduced at the cost of processing time, storage, and memory. The proposed authentication method is developed under the assumption that the extracted facial biometrics are vulnerable to position orientation, facial expression, and illumination effects.

Fig. 1: Euclidean Vector in 2-D Geometry.
Fig. 2: Features in 2-D Geometrical Space [26],[27].
Fig. 3: Sample Facial Images - Put Face Database.