A Novel Algorithm on Facial Color Images for Detection Eyes

Jalal A. Nasiri α*, Sara Khanchi β, H. Reza Pourreza α, A. Vahedian α

α Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
β Department of Computer Engineering, University of Yazd, Yazd, Iran
* Communication and Computer Research Center

J.nasiri@wali.um.ac.ir, s.khanchi@yazduni.ac.ir, hpourreza@ferdowsi.um.ac.ir, vahedian@ferdowsi.um.ac.ir

Abstract

In face detection and recognition applications, eye detection is a major trend. Its wide use in intensive applications makes it a challenging task, one that mainly relies on color characteristics as a useful cue for detecting eyes. We use a special color space, YCbCr, as its components yield valuable information about eyes in the image. We construct two maps from its components and merge them into a final map, on which eye candidates are generated. An extra phase is then applied to the candidates to determine a suitable eye pair. This extra phase consists of flexible thresholding and geometrical tests: flexible thresholding makes candidate generation more precise, while the geometrical tests select suitable candidates as eyes. Simulation results on the CVL and Iranian databases show that this extra phase improves the correct detection rate by about 12% and reaches a 98% success rate on average.

Keywords: Eye detection, Color images, Lighting condition, Facial feature map.

1. Introduction

Eye detection is a crucial step in many applications such as face detection/recognition, facial expression analysis, gaze estimation, criminal investigations, forensics, human interaction and surveillance systems [2, 3, 4]. Existing work on eye detection falls into two major categories: traditional image-based passive approaches and active IR-based approaches.
The former exploit the intensity and shape of eyes for detection, while the latter work on the assumption that eyes reflect near-IR illumination and produce a bright/dark pupil effect.

The traditional methods can be broadly classified into three categories: template-based methods [8, 9], appearance-based methods [10, 11] and feature-based methods [12, 13]. The approach presented in this work belongs to the first category.

Color is one of the most useful features in eye detection. Thilak et al. [14] proposed an algorithm that detects eyes in three stages. They first localize eye candidates by simple thresholding in the HSV and normalized RGB color spaces sequentially. This is followed by connected-component analysis to determine spatially connected regions and reduce the search space for the eye-pair windows. Finally, mean and variance projection functions are applied to validate the presence of an eye in each window. Lin et al. [15] proposed an algorithm that uses the HSI color space to extract skin-color pixels and a region-growing algorithm to group them. By means of the Face Circle Fitting (FCF) method they detect the face region, then apply a Dark-pixel Filter (DPF) to identify eye candidates and finally use geometric relations to find the eye positions. Gargesha et al. [16] combine chrominance, luminance and curvature analysis to compute eye maps; the exact positions of the eyes are then determined using either PCA or the Radon transform.

We construct eye maps and, by combining them, determine eye candidates from the final Eye Map. The Eye Map is obtained from a facial image transformed into the YCbCr color space [1]. The two highest peaks (brightest regions) in the Eye Map are supposed to be the eyes [6]. Our simulation results showed that the two highest peaks do not always correspond to the eyes, e.g. when the input image is noisy or taken under poor lighting conditions. An extra phase has therefore been designed to overcome these situations. In this phase, the bright regions that satisfy certain special features are taken as the eye pair. Experimental results show that this phase improves the detection rate considerably.

International Journal of Factory Automation, Robotics and Soft Computing, Issue 2, April 2008, ISSN 1828-6984

2. Algorithm of Eye Detection

We first build two separate eye maps from the facial image: EyeMapC from the chrominance components and EyeMapL from the luminance component. These two maps are then combined into a single eye map, EyeMap. The facial image should be frontal and not occluded by objects such as glasses or a mask, and both eyes should be visible in the input image. A head rotation of at most 30° around the vertical axis and 10° around the horizontal axis is acceptable. The flowchart of the whole algorithm is shown in Fig. 1.

2.1. EyeMapC

The main idea of EyeMapC is based on the characteristics of eyes in the YCbCr color space: eye regions have high Cb and low Cr values [6]. The map is constructed by

    EyeMapC = (1/3) * [ (Cb)^2 + (~Cr)^2 + (Cb / Cr) ]

where (Cb)^2, (~Cr)^2 and (Cb/Cr) are all normalized to the range [0, 1] and ~Cr is the negative of Cr (i.e., 1 - Cr). This formula is designed to brighten pixels with high Cb and low Cr values: (Cb)^2 emphasizes pixels with higher Cb and weakens pixels with lower Cb, while (~Cr)^2 and (Cb/Cr) make pixels with low Cr brighter. The 1/3 scaling factor ensures that the resulting EyeMapC stays within the range [0, 1]. Finally, we perform histogram equalization to obtain the final EyeMapC. The process of EyeMapC construction is depicted in Fig. 2.

Figure 1: The Whole Algorithm
Figure 2: EyeMapC
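The EyeMapC construction above can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' implementation: the function names, the division guard, the per-term normalization details and the plain histogram-equalization routine are our own assumptions, chosen to match the formula and the [0, 1] ranges stated in Section 2.1.

```python
import numpy as np

def equalize(img, bins=256):
    """Plain histogram equalization on a [0, 1] float image."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                               # normalized cumulative histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    return np.interp(img, centers, cdf)          # monotone: preserves ordering

def eye_map_c(cb, cr):
    """Chrominance eye map of Section 2.1 (sketch; names are ours).

    cb, cr: 8-bit chroma planes of a YCbCr face image.
    Returns EyeMapC in the range [0, 1].
    """
    cb = cb.astype(np.float64) / 255.0
    cr = cr.astype(np.float64) / 255.0

    cb2 = cb ** 2                                # emphasizes high-Cb pixels
    ncr2 = (1.0 - cr) ** 2                       # negative of Cr: brightens low-Cr pixels
    ratio = cb / (cr + 1e-6)                     # Cb/Cr, guarded against division by zero
    ratio = ratio / max(ratio.max(), 1e-6)       # normalize the ratio term to [0, 1]

    emap = (cb2 + ncr2 + ratio) / 3.0            # the 1/3 factor keeps the sum in [0, 1]
    return equalize(emap)
```

A pixel with high Cb and low Cr scores high in all three terms, so it comes out bright in the map, as intended by the formula.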
2.2. EyeMapL

Since eyes usually contain both dark and bright pixels in the luma component, grayscale morphological operators (e.g., dilation and erosion) [5] can be designed to emphasize brighter and darker pixels around the eye regions. We use grayscale dilation and erosion with a hemispheric structuring element to construct the eye map from the luma as follows:

    EyeMapL = ( Y(x, y) ⊕ g(x, y) ) / ( Y(x, y) ⊖ g(x, y) )

where the grayscale dilation ⊕ and erosion ⊖ operations on a function f: F ⊂ R^2 → R using a structuring function g: G ⊂ R^2 → R are defined in [5].

2.3. Eye Map

After constructing EyeMapC and EyeMapL, they are multiplied to obtain the final EyeMap, i.e., EyeMap = (EyeMapC) AND (EyeMapL), as in Fig. 3. The resulting eye map is then dilated, masked and normalized to brighten the eyes and suppress other facial areas. The locations of the eye candidates are estimated and refined using thresholding and binary morphological erosion on the eye map.

Figure 3: EyeMap

3. Improvement Phase

As discussed above, the two brightest regions in the EyeMap are not always the eyes [17]. We therefore propose a novel approach that ensures the two selected regions are eyes, not necessarily the two brightest ones: among the eye candidates we choose the two regions that satisfy our conditions. The approach relies on flexible thresholding and geometrical tests, described in the following.

Flexible Thresholding

The minimum value a pixel must reach to be considered white (255) after thresholding is determined by a parameter called the thresholding ratio, which differs between images taken under various lighting conditions. Setting the thresholding ratio is the bottleneck of the suggested solution. Setting it too high results in eye regions not being considered as eye candidates. In contrast, setting it too low firstly increases the number of eye candidates, making eye detection difficult, and secondly causes regions to merge with each other, making detection of the exact eye positions impossible. We achieve the required flexibility by combining iterative thresholding with geometrical tests, as explained in the improvement algorithm section.

Geometrical Tests

Eyes possess some unique features within faces. We extract these features and design special tests to verify which candidates are eyes:

- Eyes-Centre Distance Test: The distance of each eye from the centre of the face is calculated. The two distances are almost the same and must not differ by more than 30%. Our observations show that the two distances match very closely.
- Eye Pair Distance Test: The distance between the eye pair must be greater than the eyes-centre distance.
- Eye Angle Test: According to the structure of the face, the two eyes cannot be located on the same side of the face. This test checks that the eye-pair candidates lie on two sides of the face.
- Eye Shape Test: As the eye shape is roughly circular, candidates must not be too thin and long. For this purpose we compute the roundness ratio; the two selected eyes must exceed 0.7 in roundness ratio, and thin, elongated candidates are discarded in this test.

Improvement Algorithm

The suggested algorithm is composed of iterative thresholding and geometrical tests. We set the thresholding ratio based on the maximum pixel value in the facial image. Experimental results demonstrate that starting with too high a thresholding ratio is improper and sometimes misleading; we found that 0.7 * MaxValue is the best starting point. The geometrical tests are then applied to the eye candidates obtained after thresholding.
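The four geometrical tests can be sketched as below. This is a hypothetical sketch, not the paper's code: the candidate representation (centre, area, perimeter), the helper names, the use of the smaller distance in the first two tests, and the circularity formula 4*pi*A/P^2 for the roundness ratio are all our assumptions, since the paper does not spell these details out.

```python
import math

ROUNDNESS_MIN = 0.7  # roundness threshold stated in the Eye Shape Test

def roundness(area, perimeter):
    """Circularity 4*pi*A/P^2, equal to 1 for a perfect circle
    (an assumed stand-in for the paper's roundness ratio)."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_eye_pair(c1, c2, face_centre):
    """Apply the four geometrical tests of Section 3 to two candidates.

    Candidates are dicts with 'centre' (x, y), 'area' and 'perimeter'
    (a representation chosen for this sketch).
    """
    d1 = math.dist(c1["centre"], face_centre)
    d2 = math.dist(c2["centre"], face_centre)
    # Eyes-Centre Distance Test: the two distances must not differ by >30%.
    if abs(d1 - d2) > 0.3 * min(d1, d2):
        return False
    # Eye Pair Distance Test: inter-eye distance exceeds the eyes-centre distance.
    if math.dist(c1["centre"], c2["centre"]) <= min(d1, d2):
        return False
    # Eye Angle Test: candidates must lie on opposite sides of the face midline.
    if (c1["centre"][0] - face_centre[0]) * (c2["centre"][0] - face_centre[0]) >= 0:
        return False
    # Eye Shape Test: both candidates must be roughly circular.
    return (roundness(c1["area"], c1["perimeter"]) > ROUNDNESS_MIN
            and roundness(c2["area"], c2["perimeter"]) > ROUNDNESS_MIN)
```

A pair of roughly circular blobs placed symmetrically left and right of the face centre passes all four tests; two blobs on the same side fail the angle test regardless of their shape.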
If two regions are found that satisfy all the tests, they are taken as the eyes and the algorithm finishes; otherwise the algorithm restarts with a lower thresholding ratio, obtained by subtracting 0.1 from the previous one (ratio = previous ratio - 0.1). This iterative procedure continues until both eyes are detected. The flowchart of the improvement algorithm is shown in Fig. 4. As our observations show, when the light distribution over the face is not uniform, the number of iterations increases.

Figure 4: Improvement Algorithm

4. Experimental Results

This section provides simulation results evaluating our algorithm with and without the improvement phase. We apply the algorithm to the CVL [7] and Iranian databases. When a detection is found, a cross marks each eye. Summaries of the detection results (including the number of images, detection rates and average CPU time for processing an image) on the CVL and Iranian databases are presented in Table 3 and Table 4, respectively. The detection rate is computed as the ratio of the number of correct detections to the number of images tested. Sample detections on the CVL and Iranian databases are shown in Fig. 5 and Fig. 6.

CVL Database

The CVL database consists of head-and-shoulder images of 114 people in 7 kinds of expressions. Among the 7 images taken of each person, 3 were suitable for our purpose: frontal views with different expressions, namely serious, smile and grin.

Iranian Database

The Iranian database consists of head-and-shoulder images of 50 people, taken under various lighting conditions.

Table 1: Results on the CVL database without improvement phase

    Expression          Serious  Smile  Grin   Total
    No. of images       110      110    110    330
    Detection rate (%)  90       86.36  81.81  86.06
    Time (sec), avg.    0.69     0.70   0.71   0.70

Table 2: Results on the Iranian database without improvement phase

    Gender              Female  Male   Total
    No. of images       28      22     50
    Detection rate (%)  85.71   86.36  86
    Time (sec), avg.    1.21    1.54   1.375

Table 3: Results on the CVL database with improvement phase

    Expression          Serious  Smile  Grin   Total
    No. of images       110      110    110    330
    Detection rate (%)  100      98.18  97.27  98.48
    Time (sec), avg.    0.72     0.71   0.74   0.724
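The iterative improvement loop of Section 3, whose effect shows when comparing Table 1 with Table 3, can be sketched as follows. This is a minimal sketch with names of our own choosing; the callback stands in for the candidate extraction and geometrical tests, which the paper describes but whose implementation it does not give.

```python
import numpy as np

def detect_eyes(eye_map, passes_tests, start=0.7, step=0.1):
    """Iterative thresholding loop of Section 3 (sketch).

    eye_map: final EyeMap as a 2-D float array.
    passes_tests: callback that receives the binary candidate mask and
    returns the validated eye pair, or None if no pair passes the
    geometrical tests (a stand-in for the paper's candidate analysis).
    """
    ratio = start                                 # 0.7 * MaxValue: the stated start point
    max_val = eye_map.max()
    while ratio > 0.0:
        candidates = eye_map >= ratio * max_val   # flexible thresholding
        pair = passes_tests(candidates)
        if pair is not None:                      # both eyes found: stop
            return pair
        ratio -= step                             # relax the threshold and retry
    return None
```

Under non-uniform lighting the first few thresholds isolate only the brighter eye, so more iterations are needed before a valid pair appears, matching the observation at the end of Section 3.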