A ROBUST FEATURE EXTRACTION ALGORITHM BASED ON CLASS-MODULAR IMAGE PRINCIPAL COMPONENT ANALYSIS FOR FACE VERIFICATION

José Francisco Pereira, Rafael M. Barreto, George D. C. Cavalcanti, Tsang Ing Ren
Center of Informatics - Federal University of Pernambuco - Brazil
∼viisar {jfp, rmb3, gdcc, tir}

ABSTRACT

Face verification systems reach good performance under ideal environmental conditions; conversely, they are very sensitive to non-controlled environments. This work proposes the class-Modular Image Principal Component Analysis (cMIMPCA) algorithm for face verification. It extracts local and global information from the user faces, aiming to reduce the effects caused by illumination, facial expression and head pose changes. Experimental results over three well-known face databases show that cMIMPCA obtains promising results for the face verification task.

Index Terms — Face verification, Principal Component Analysis (PCA), class-Modular Image PCA (cMIMPCA)

1. INTRODUCTION

There is a growing demand for automatic personal identity verification systems. Border control, access to controlled and classified environments, and access to private information, such as bank accounts and classified documents, are some applications that require an automatic system to verify the authenticity of the user. Based on this scenario, biometric identification has recently been highlighted as a promising method for automatic people classification.

Face identification is a natural way for people to recognize one another. It has some desirable characteristics, such as friendliness, directness, convenience, non-contact operation and concealment. Based on these facts, the face is potentially applicable in a wide range of situations, and it is one of the most promising modalities in automatic biometric classification.

The first step in an automatic face verification system is face detection, which aims to determine the locations and sizes of the human faces in images.
After that, the feature extraction phase aims to find a good set of features to represent each face. This phase must deal with problems such as illumination, head pose and facial expression. Moreover, we desire an efficient feature extraction algorithm (time complexity) that generates a small set of features (space complexity).

Principal Component Analysis (PCA) [1] is widely used as a face feature extractor. Many other methods based on PCA have been developed to reduce its computational cost and to improve its accuracy. These methods improve feature extraction quality by changing the image representation and by using local information to improve the final image classification. While PCA has a holistic approach that treats the face as a whole, the Modular approach (MPCA) [2] considers the influence of different regions of the face. The Two-Dimensional approach (IMPCA or 2DPCA) [3] introduces an efficient method to reduce the computational cost of feature extraction while, at the same time, increasing the representativeness of the final set of features. Finally, the Modular Image Principal Component Analysis (MIMPCA) [4] approach combines these two techniques (MPCA and IMPCA); thus, it takes advantage of local feature extraction and, in addition, generates a better space representation for the faces.

MPCA, IMPCA and MIMPCA were developed for the face recognition problem, which is a 1:N problem. They use the whole dataset (with patterns from all the classes) to compute the basis vectors of the new face representation space. However, this is not a good practice for face verification. Face verification is a 1:1 problem in which we want to verify whether the person is who he or she claims to be. Thus, we require user-specific information instead of universal information.

The objective of this paper is two-fold: i) to develop a feature extraction procedure for face verification that takes into account only class-specific information.
This algorithm is called class-Modular Image Principal Component Analysis (cMIMPCA); ii) to evaluate the proposed approach against classical, modular and two-dimensional PCA techniques over three well-known face databases.

The paper is organized as follows. Section 2 presents the class-Modular Image Principal Component Analysis (cMIMPCA) algorithm. Section 3 details the experimental setup, the databases and the results. Section 4 exposes some final remarks.

978-1-4577-0539-7/11/$26.00 ©2011 IEEE, ICASSP 2011

2. CLASS-MODULAR IMAGE PRINCIPAL COMPONENT ANALYSIS (cMIMPCA)

The PCA-based face recognition method is not very efficient under conditions of varying pose and illumination, since it considers global information of the faces. Under these conditions, the vector that represents the image varies considerably when compared with an ideal face image (fixed pose and constant illumination). Thus, the accuracy of the technique is significantly affected by these changes.

Fig. 1. Graphical representation of image subdivision in modular approaches.

Variations in facial expression and illumination generally affect only some regions of the faces, while other regions remain unaltered. From this point of view, we expect the modular approach to reach better recognition performance than the standard PCA. Some examples of the image division into equally sized sub-images can be seen in Figure 1.

Another important attribute of the feature extraction process is the quality, or representativeness, of the data extracted from the image. In standard PCA-based methods, the original image must be transformed into a one-dimensional image vector, which usually leads to a very high-dimensional space. Thus, it is difficult to evaluate the covariance matrix accurately due to its large size and the relatively small number of training samples.

In order to alleviate these two problems, the modular (MPCA) and the image (IMPCA) techniques are combined in a class-dependent perspective.
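To make the covariance-size argument concrete, the following NumPy sketch compares the holistic and the two-dimensional formulations; the image dimensions and random stand-in data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative setup: five 112 x 92 training images (random stand-ins).
rng = np.random.default_rng(0)
images = rng.random((5, 112, 92))

# Holistic PCA: each image becomes a 10304-dimensional vector, so the
# covariance matrix is 10304 x 10304 -- hard to estimate from 5 samples.
vectors = images.reshape(5, -1)
cov_1d_size = (vectors.shape[1], vectors.shape[1])

# Image (2D) PCA: the covariance is accumulated over each image matrix,
# giving a 92 x 92 matrix regardless of the total number of pixels.
mean_img = images.mean(axis=0)
cov_2d = sum(d.T @ d for d in images - mean_img) / len(images)

print(cov_1d_size)   # (10304, 10304)
print(cov_2d.shape)  # (92, 92)
```

The small 2D covariance is exactly why the image-based approach remains estimable from the few samples available per class.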
The class-Modular Image Principal Component Analysis (cMIMPCA) technique can be divided into two main phases: enrollment and test.

2.1. Enrollment

A class is represented by a training set with $n$ images, defined as $I = \{I^1, I^2, \ldots, I^n\}$. Each image of size $k \times l$ is divided into $a$ pieces horizontally and $b$ pieces vertically. Therefore, the original image is divided into $m$ sub-images, where $m = a \times b$, and each sub-image has $(k \times l)/m$ pixels. These sub-images can be represented mathematically as

$$I^i_{a'b'}(x, y) = I^i\left(\frac{k}{a}(a'-1) + x,\ \frac{l}{b}(b'-1) + y\right) \qquad (1)$$

where $a'$ varies from 1 to $a$ and $b'$ varies from 1 to $b$; thus $I^i_{a'b'}$ represents the sub-image with coordinates $(a', b')$ of the $i$-th image in the training set.

An average sub-image $\bar{X}$ is computed for each class as

$$\bar{X} = \frac{1}{n \cdot a \cdot b} \sum_{i=1}^{n} \sum_{a'=1}^{a} \sum_{b'=1}^{b} I^i_{a'b'} \qquad (2)$$

The next step is to normalize all sub-images by subtracting the mean from them:

$$Y^i_{a'b'} = I^i_{a'b'} - \bar{X} \qquad (3)$$

where $Y^i_{a'b'}$ represents the normalized region with coordinates $(a', b')$ of the $i$-th image in the training set.

The class covariance matrix $S$ can be computed as

$$S = \frac{1}{n \cdot a \cdot b} \sum_{i=1}^{n} \sum_{a'=1}^{a} \sum_{b'=1}^{b} (I^i_{a'b'} - \bar{X}) \cdot (I^i_{a'b'} - \bar{X})^T \qquad (4)$$

The next step is to calculate the first $v$ eigenvectors associated with the largest eigenvalues of the covariance matrix $S$. These eigenvectors are represented by $P = [e_1^T, e_2^T, \ldots, e_v^T]$. After that, each sub-image is projected as defined in Equation 5:

$$Z^i_{a'b'} = P \cdot (I^i_{a'b'} - \bar{X}) \qquad (5)$$

2.2. Test

This phase is responsible for verifying whether an unlabeled image $I^u$ belongs to the same class as the image set $I^r$. The first step is to calculate the sub-images of $I^u$ using Equation 1. After that, each sub-image $I^u_{a'b'}$ is projected into the new space (Equation 6):

$$Z^u_{a'b'} = P \cdot (I^u_{a'b'} - \bar{X}) \qquad (6)$$

The distance between the two images is defined by Equation 7.
It represents the average distance per sub-image:

$$d(I^u, I^r) = \sum_{i=1}^{l} \sum_{j=1}^{v} M_{ij} \qquad (7)$$

$$M = \sum_{a'=1}^{a} \sum_{b'=1}^{b} \left(Z^r_{a'b'} - Z^u_{a'b'}\right)^2 \qquad (8)$$

where $I^r$ is a reference class image, $I^u$ is the test image, and the square in Equation 8 is taken element-wise. This distance is computed between the test image and all patterns in the training set of a specific class. If the smallest distance is below a predefined threshold, the test image $I^u$ is said to belong to the same class as $I^r$; otherwise, it is an impostor.

2.3. Remarks

PCA [1], MPCA [2], IMPCA [3] and MIMPCA [4] compute a global mean and a global covariance matrix. Thus, the original eigenspace is an "optimal" representation of the whole training set, including all classes. This approach may not accurately capture or describe class-specific information, which is very important to assure that one face belongs to a particular user.

Based on that, class-Modular Image Principal Component Analysis (cMIMPCA) extracts class-dependent information. An improvement in the verification system accuracy is expected due to its class-dependent feature extractor when compared with the traditional approaches. One drawback of this approach is the reduced number of samples used to extract the features. This influences the performance and the accuracy of techniques that require the calculation of relatively large covariance matrices. However, cMIMPCA is not affected by this problem because it does not transform each sub-image into a vector, as MPCA does.

3. EXPERIMENTAL STUDY

The experiments were conducted over three well-known face databases: Yale [5], ORL [6] and UMIST [7]. The ORL database was used to evaluate the performance of the techniques under conditions where the pose and the sample size vary. The Yale database was used to test the performance when facial expressions and illumination change. Finally, the UMIST face database was used to evaluate the performance under large changes in face position.
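Before turning to the experiments, the enrollment and test phases of Section 2 can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names, the integer-slicing block split (which assumes the image dimensions are divisible by $a$ and $b$), and the default value of $v$ are assumptions, and the projection is written as right-multiplication by the eigenvector columns, the transpose of the notation in Equations 5 and 6.

```python
import numpy as np

def split(img, a, b):
    """Divide an image into a x b equally sized sub-images (Eq. 1)."""
    k, l = img.shape
    return [img[(k // a) * i:(k // a) * (i + 1),
                (l // b) * j:(l // b) * (j + 1)]
            for i in range(a) for j in range(b)]

def enroll(train_imgs, a=2, b=2, v=10):
    """Per-class enrollment: mean sub-image, class covariance,
    top-v eigenvectors, and projected training sub-images (Eqs. 2-5)."""
    subs = [s for img in train_imgs for s in split(img, a, b)]
    mean = np.mean(subs, axis=0)                                     # Eq. 2
    cov = sum((s - mean).T @ (s - mean) for s in subs) / len(subs)   # Eq. 4
    w, e = np.linalg.eigh(cov)                 # cov is symmetric
    P = e[:, np.argsort(w)[::-1][:v]]          # columns = top-v eigenvectors
    Z = [[(s - mean) @ P for s in split(img, a, b)]
         for img in train_imgs]                                      # Eq. 5
    return mean, P, Z

def distance(mean, P, Z_ref, probe, a=2, b=2):
    """Distance between one enrolled image (projected sub-images
    Z_ref) and a probe image (Eqs. 6-8)."""
    Z_u = [(s - mean) @ P for s in split(probe, a, b)]               # Eq. 6
    M = sum((zr - zu) ** 2 for zr, zu in zip(Z_ref, Z_u))            # Eq. 8
    return float(M.sum())                                            # Eq. 7
```

A claim would then be accepted when the minimum of `distance` over the claimed class's enrolled images falls below the predefined threshold.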
All images in the Yale face database were cropped and normalized to 92 × 112 pixels.

In order to compare the results under equal conditions of illumination, head pose and facial expression, the experiments were performed using the same training and test sets. Each class training set is formed by five images. The evaluation of each class is based on two test sets: one genuine (formed by five images of the same class) and one impostor (formed by the whole set without the images that belong to the class under test). This large number of impostor images contributes to a more precise estimation of the false positive rate. Each experiment was performed ten times using different training and test sets, and the mean and the standard deviation were collected for each technique and database. After preliminary experiments, the best configuration was the one in which the images were divided into four equally sized regions (two vertical and two horizontal), as shown in the second column of Figure 1. This configuration is adopted for all the modular approaches in the next section.

3.1. Results

The Receiver Operating Characteristic (ROC) curve [8] is a two-dimensional graphic in which the x-axis represents the false positive rate and the y-axis represents the true positive rate. The Area Under the ROC Curve (AUC) is a scalar that can be used to measure the performance of the system under evaluation. Table 1 shows the best AUCs per feature extraction technique and database. For the ORL database, the PCA, IMPCA, MIMPCA and cMIMPCA techniques obtained similar results; this database presents small head pose and illumination variations.

The results over the Yale face database show that the cMIMPCA technique has a clear advantage when compared with the other techniques. This behavior can be explained by the peculiarities of this database.
The Yale database explores characteristics such as illumination changes, facial expressions, objects worn over the faces and other environmental variations. In this case, modular approaches can improve face verification performance by exploring the unchanged regions of the original images. Analyzing the IMPCA performance, it is possible to see a strong relation between its accuracy rate and the number of training samples: reducing the number of training samples strongly affects the accuracy of the 2D approach.

| | ORL | YALE | UMIST |
|---|---|---|---|
| PCA | 0.9821 | 0.9159 | 0.9713 |
| MPCA | 0.4971 | 0.5129 | 0.5050 |
| IMPCA | **0.9898** | 0.9043 | 0.9759 |
| MIMPCA | 0.9894 | 0.9227 | 0.9815 |
| cMIMPCA | 0.9876 | **0.9607** | **0.9850** |

Table 1. Best AUCs per feature extraction technique and database.

Observing the results over the UMIST face database in Table 1, it is possible to see that the best result was again obtained by the proposed technique, with MIMPCA obtaining a very similar result.

Another important piece of information is the relationship between the correct recognition rate and the false recognition rate. This relationship is relevant in order to ascertain the robustness of the system under different conditions. This measure has great significance in the performance analysis because the system is expected to achieve high correct recognition rates even under environmental changes while, at the same time, rejecting face images that come from impostors. Tables 2 and 3 show the true positive rates when the false positive rates are set to 1% and 5%, respectively.

| | ORL | YALE | UMIST |
|---|---|---|---|
| PCA | 57.80 | 49.50 | 65.56 |
| IMPCA | **72.40** | 55.80 | 47.67 |
| MIMPCA | 69.60 | 61.00 | 59.11 |
| cMIMPCA | 72.37 | **69.70** | **73.11** |

Table 2. True Positive rates (%) when the False Positive (FP) rate is equal to 1%.

| | ORL | YALE | UMIST |
|---|---|---|---|
| PCA | 91.05 | 82.60 | 77.33 |
| IMPCA | 95.95 | 83.90 | 68.33 |
| MIMPCA | 96.45 | 88.20 | 76.44 |
| cMIMPCA | **98.15** | **91.60** | **83.33** |

Table 3. True Positive rates (%) when the False Positive (FP) rate is equal to 5%.
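The AUC and fixed-FPR operating points reported in the tables can be computed from any set of verification distances by sweeping a decision threshold; the sketch below illustrates this with synthetic scores (an assumption for illustration only), integrating the ROC with the trapezoidal rule:

```python
import numpy as np

def roc_auc(genuine_d, impostor_d):
    """AUC from distance scores: a pair is accepted when its distance
    is below the threshold; sweep the threshold over all scores."""
    ts = np.sort(np.concatenate([genuine_d, impostor_d]))
    tpr = np.array([(genuine_d <= t).mean() for t in ts])
    fpr = np.array([(impostor_d <= t).mean() for t in ts])
    # trapezoidal area under the (fpr, tpr) curve
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

def tpr_at_fpr(genuine_d, impostor_d, target):
    """Operating point used in Tables 2 and 3: TPR at a fixed FPR."""
    t = np.quantile(impostor_d, target)  # threshold admitting `target` impostors
    return float((genuine_d <= t).mean())

# Synthetic scores: genuine comparisons tend to yield smaller distances.
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 500)
impostor = rng.normal(3.0, 0.5, 5000)
print(roc_auc(genuine, impostor))
print(tpr_at_fpr(genuine, impostor, 0.01), tpr_at_fpr(genuine, impostor, 0.05))
```

The gap between the 1% and 5% operating points mirrors the difference between Tables 2 and 3: loosening the admitted false positive rate raises the true positive rate for every technique.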
Observing these tables, it is possible to see that the best trade-off between True Positive (TP) and False Positive (FP) rates was obtained by the proposed approach, cMIMPCA. The MPCA technique was removed from this analysis due to its low AUC, shown in Table 1.

| | ORL | YALE | UMIST |
|---|---|---|---|
| PCA | 580 | 410 | 470 |
| MPCA | 189 | 51 | 66 |
| IMPCA | **50** | 22 | 31 |
| cMIMPCA | 57 | **18** | **25** |

Table 4. Computational cost for the feature extraction and the face verification steps (in seconds).

The total computational cost was also measured. The time required to extract the face features and to classify all patterns using a ten-fold cross-validation method is shown in Table 4. Notice that, except for the ORL database, cMIMPCA was the least expensive technique.

4. CONCLUSIONS

This paper proposed a PCA approach for face verification, entitled cMIMPCA, which uses only class-dependent information in order to increase the final system accuracy. cMIMPCA obtains very satisfactory rates when compared to the other analyzed techniques. Besides, it is less expensive in terms of computational time; its total cost is at most half that of MPCA and PCA.

The cMIMPCA technique achieves interesting results because it performs a local exploration of the face information (modular approach) in conjunction with an image representation analysis (two-dimensional approach). Thus, the number of elements in the training set is virtually increased while the covariance matrix size is considerably reduced. Even under adverse conditions (illumination, head pose and facial expression), cMIMPCA contributes to improving the final system accuracy.

Acknowledgment

This work was supported in part by FACEPE.

5. REFERENCES

[1] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[2] R. Gottumukkal and V. K. Asari, "An improved face recognition technique based on modular PCA approach," Pattern Recognition Letters, vol. 25, no. 4, pp. 429–436, 2004.
[3] J. Yang et al., "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[4] J. F. Pereira, G. D. C. Cavalcanti, and T. I. Ren, "Modular image principal component analysis for face recognition," in International Joint Conference on Neural Networks, 2009, pp. 1969–1974.
[5] A. S. Georghiades et al., "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
[6] F. S. Samaria and A. C. Harter, "Parameterization of a stochastic model for human face identification," in IEEE Workshop on Applications of Computer Vision, 1994, pp. 138–142.
[7] D. B. Graham and N. M. Allinson, "Characterizing virtual eigensignatures for general purpose face recognition," in Face Recognition: From Theory to Applications, NATO ASI Series F, Computer and Systems Sciences, vol. 163, pp. 446–456, 1998.
[8] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861–874, 2006.