Iris detection using intensity and edge information Pattern Recognition 36 (2003) 549-562 Speaker: Jing Ming Chiuan (井民全) Feb. 21 2005
Outline • Introduction • Outline of the proposed algorithm • Extraction of the face regions • Extraction of valleys in the face region • Detection of iris candidates • Selecting the irises of both eyes • Experimental results
Introduction • Face recognition has many applications: personal identification, criminal investigation, security work, and login authentication • Face recognition relies on facial features such as the eyes and the mouth
Introduction • The eyes can be considered salient and relatively stable features • The other facial features can be estimated from the eye positions • Eye detection splits into eye position detection and eye contour detection (e.g. with deformable templates)
Introduction • Eye position detection methods: template matching, eigenspace methods (trained from sample eye images), and the separability filter • Template matching and eigenspace methods detect only eye patterns that are similar to the sample eye images
Outline of the proposed algorithm • Face region detection → valley detection → iris candidate location → candidate radius detection → iris selection
About the input image • The image is an intensity image or a colour image • It is a head-and-shoulder image with a plain background • The irises of both eyes appear in the image, i.e. head rotation about the y-axis is within (-30°, +30°)
Extraction of the face regions • For intensity images: Step 1 take the M × N input image; Step 2 apply the Sobel edge operator; Step 3 calculate the left and right bounds from the vertical edge projection V(x)
Face region extraction (intensity images) • x0 is the column where V(x) is largest • Left bound Xl: the smallest x with V(x) > V(x0)/3 • Right bound Xr: the largest x with V(x) > V(x0)/3 • Top bound Ymin: the smallest y with H(y) >= 0.05 (Xr - Xl) • Bottom bound Ymax = Ymin + 1.2 (Xr - Xl)
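A minimal sketch of these bounds in Python, assuming V(x) is the column-wise sum of Sobel edge magnitudes and H(y) counts strong edge pixels per row inside [Xl, Xr] (the slide names V(x) and H(y) but does not define them):

```python
import numpy as np
from scipy import ndimage

def face_region_intensity(gray):
    """Sketch of the intensity-image steps: Sobel edges, projection V(x),
    and the four face-region bounds listed on the slide."""
    img = gray.astype(float)
    gx = ndimage.sobel(img, axis=1)                  # Step 2: Sobel edge operator
    gy = ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy)

    V = edges.sum(axis=0)                            # vertical projection V(x)
    x0 = int(np.argmax(V))                           # column of the largest V(x)
    strong = np.where(V > V[x0] / 3.0)[0]
    xl, xr = int(strong.min()), int(strong.max())    # left / right bounds

    # H(y): strong-edge pixels per row inside [xl, xr] (assumed definition)
    H = (edges > edges.mean())[:, xl:xr + 1].sum(axis=1)
    width = xr - xl
    y_min = int(np.where(H >= 0.05 * width)[0].min())
    y_max = int(y_min + 1.2 * width)                 # Ymax = Ymin + 1.2 (Xr - Xl)
    return xl, xr, y_min, y_max
```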
Extraction of the face regions • For colour images, a skin-colour model in the r-g colour space is used • Step 1 collect skin regions from face images • Step 2 build the Gaussian distribution model • Reference: J. Yang, A. Waibel, A real-time face tracker, Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision, 1996
The Gaussian distribution • The probability density function: p(v) = exp(−(v − m)ᵀ Σ⁻¹ (v − m)/2) / (2π |Σ|^(1/2)) • The mean vector: m = (1/N) ∑ᵢ vᵢ • The covariance matrix: Σ = (1/N) ∑ᵢ (vᵢ − m)(vᵢ − m)ᵀ • (figure: skin vs. non-skin colour distributions) • Reference: http://mathworld.wolfram.com/NormalDistribution.html
Extraction of the face regions -- for colour images • Step 1 select the pixels (x,y) whose colour values v = (r,g) satisfy g(v) > ε (skin-colour pixels) • Step 2 apply a closing and an opening (5×5 and 9×9 structuring elements) to the region of skin-colour pixels to remove noise • Step 3 find the connected component with the largest area • Step 4 the boundary of that component is defined as the face region
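A rough sketch of this colour pipeline, assuming the Gaussian likelihood g(v) from the previous slide, an illustrative threshold eps, and that the 5×5 and 9×9 sizes belong to the closing and the opening respectively:

```python
import numpy as np
from scipy import ndimage

def fit_skin_model(skin_pixels_rgb):
    """Fit the Gaussian skin-colour model in r-g chromatic space from an
    (N, 3) array of hand-collected skin RGB values (Steps 1-2 above)."""
    s = skin_pixels_rgb.astype(float)
    v = (s / (s.sum(axis=1, keepdims=True) + 1e-6))[:, :2]   # (r, g)
    return v.mean(axis=0), np.cov(v, rowvar=False)            # m, covariance

def face_region_color(img_rgb, m, cov, eps=0.5):
    """Steps 1-4 of the slide; eps and the 5x5/9x9 assignment are guesses."""
    s = img_rgb.reshape(-1, 3).astype(float)
    v = (s / (s.sum(axis=1, keepdims=True) + 1e-6))[:, :2]
    d = v - m
    g = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, np.linalg.inv(cov), d))
    mask = (g > eps).reshape(img_rgb.shape[:2])                # skin-colour pixels
    mask = ndimage.binary_closing(mask, np.ones((5, 5)))       # noise removal
    mask = ndimage.binary_opening(mask, np.ones((9, 9)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    ys, xs = np.where(labels == 1 + int(np.argmax(sizes)))     # largest component
    return xs.min(), xs.max(), ys.min(), ys.max()              # face region bounds
```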
Extraction of valleys in the face region • Extract the valley image V(x,y) = G(x,y) − I(x,y), where G(x,y) is the grey-scale closing of the input image I(x,y) (a neighbourhood maximum operator followed by a neighbourhood minimum) • If V(x,y) > T the pixel is a valley pixel, otherwise it is a non-valley pixel (binary valley image) • Ref: http://www.astro.princeton.edu/~esirko/idl_html_help/D24.html#wp747869
• The threshold was set to the largest value T satisfying h(T) + h(T+1) + ... + h(MAX) >= γ · A, where h(i) is the number of pixels in the face region with V(x,y) = i, MAX is the maximum of V(x,y), A is the number of pixels in the region, and γ = 0.1 (i.e. roughly the top 10% of valley values are collected)
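A small sketch of the valley image and the threshold rule; the structuring-element size of the grey-scale closing is a guess, since the slide does not give it:

```python
import numpy as np
from scipy import ndimage

def valley_image(gray, size=5):
    """V(x,y) = G(x,y) - I(x,y), with G the grey-scale closing of I."""
    g = ndimage.grey_closing(gray.astype(float), size=(size, size))
    return g - gray.astype(float)

def valley_threshold(V, region_mask, gamma=0.1):
    """Largest T such that the pixels with V >= T cover at least a fraction
    gamma (10%) of the face region, following the slide's histogram rule."""
    vals = np.round(V[region_mask]).astype(int)
    vmax = int(vals.max())
    h = np.bincount(vals, minlength=vmax + 1)     # h(i) = # pixels with V = i
    need = gamma * vals.size
    cum = 0
    for T in range(vmax, -1, -1):                 # accumulate from MAX downwards
        cum += h[T]
        if cum >= need:
            return T
    return 0

# valley pixels: (valley_image(gray) > T) & region_mask
```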
Detection of iris candidates • Compute the valley cost C(x,y) for each valley pixel • Select m pixels as candidate locations according to non-increasing order of C(x,y) (m = 20) • Reference: C.H. Lin, J.L. Wu, Automatic facial feature extraction by genetic algorithms, IEEE Trans. Image Processing, 1999
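Selecting the m candidate locations is straightforward once a cost map C(x,y) is available (the Lin-Wu valley cost itself is not reproduced here); a minimal sketch:

```python
import numpy as np

def candidate_locations(C, valley_mask, m=20):
    """Pick the m valley pixels with the largest cost C(x, y)."""
    cost = np.where(valley_mask, C, -np.inf)         # ignore non-valley pixels
    flat = np.argsort(cost, axis=None)[::-1][:m]     # non-increasing order of C
    ys, xs = np.unravel_index(flat, C.shape)
    return list(zip(xs, ys))                         # candidate (x, y) locations
```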
Measure the separability • The separability η between the two regions of the filter is computed at each candidate • In the perfect-match case the contrast between the two regions is largest: as the between-region difference grows, η grows; as it shrinks, η shrinks • Reference: K. Fukui, O. Yamaguchi, Facial feature point extraction method based on combination of shape extraction and pattern matching, Trans. IEICE Japan, 1997
Find the optimal radius • Changing the size r in the range [rl, ru], we find the r maximizing η(r) • For the AR face database rl = 10, ru = 13; for the Bern face database rl = 5, ru = 7
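A hedged sketch of a separability filter evaluated over a range of radii; the disc-plus-surrounding-ring geometry is an assumption for illustration, not spelled out on the slide:

```python
import numpy as np

def separability(gray, cx, cy, r):
    """Separability eta between the disc of radius r and the ring r..2r
    around it: between-region variance divided by total variance."""
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inner = gray[d2 <= r * r].astype(float)
    outer = gray[(d2 > r * r) & (d2 <= 4 * r * r)].astype(float)
    both = np.concatenate([inner, outer])
    total_var = both.size * both.var()
    if total_var == 0:
        return 0.0
    between = (inner.size * (inner.mean() - both.mean()) ** 2 +
               outer.size * (outer.mean() - both.mean()) ** 2)
    return between / total_var                     # eta in [0, 1]

def best_radius(gray, cx, cy, r_lo, r_hi):
    """Choose r in [r_lo, r_hi] maximising eta(r): 10-13 for AR, 5-7 for Bern."""
    return max(range(r_lo, r_hi + 1),
               key=lambda r: separability(gray, cx, cy, r))
```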
The cost of an iris • Let Bi be the iris candidates • Compute the cost for each Bi from: the Hough transform vote, the intensity of Bi (the darker the better, giving a smaller cost term), and the factor of separability balance (the more balanced, the better, giving a smaller cost term)
The Hough transform • Each edge point (xi, yi) votes for the centres (a, b) that lie at distance r from it • The centre (a, b) with the largest vote is selected for the candidate circle of radius r
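A compact sketch of the fixed-radius circle Hough transform described above:

```python
import numpy as np

def hough_circle_center(edge_points, r, shape, n_angles=72):
    """Every edge point (x, y) votes for the centres (a, b) at distance r;
    the accumulator cell with the largest vote is returned."""
    acc = np.zeros(shape, dtype=np.int32)            # accumulator over (b, a)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[1]) & (b >= 0) & (b < shape[0])
        np.add.at(acc, (b[ok], a[ok]), 1)            # cast the votes
    b, a = np.unravel_index(np.argmax(acc), acc.shape)
    return a, b, int(acc[b, a])                      # centre and its vote count
```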
The cost of a pair of iris candidates • For candidates Bi and Bj, the cost combines the cost of each iris, the cross-correlation value of the eye-template match, and the distance dij between Bi and Bj normalised by the width of the face region • The pair with the best cost is selected as the pair of eyes
Cross-correlation value • Step 1 match the eye template after an affine transform (different eye templates are used for the AR and Bern face databases) • Step 2 compute the correlation value R(i,j) • Step 3 if R(i,j) < 0.1 then R(i,j) = 0.1
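A minimal version of the normalised correlation score with the 0.1 floor of Step 3; the affine alignment of Step 1 is assumed to have been applied to the patch already:

```python
import numpy as np

def template_score(patch, template):
    """Normalised cross-correlation R between an eye template and an image
    patch of the same size, clipped from below at 0.1 as in Step 3."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    r = float((p * t).sum() / denom) if denom > 0 else 0.0
    return max(r, 0.1)                 # if R < 0.1 then R = 0.1
```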
Experimental Results • Two databases are used • The face database of University of Bern • The AR face database
The face database of the University of Bern • Size: 512 × 342 • 150 faces without spectacles
Results on the Bern face database • Average success rate: 95.3% • The proposed algorithm is not sensitive to the variation of the template • Results are reported for all 150 faces and for the 120 faces excluding the looking-down faces
The execution time (Pentium III, 700 MHz)
(chart: detection results with all factors vs. without the correlation factor)
Comparison (excluding the training set)
The AR face database • Colour images of size 768 × 576 without spectacles • Natural illumination conditions with different expressions • 63 face images are used • Success rate = 96.8%, with 2 failure images
Removing the light spot in the eye • Step 1 apply histogram equalization • Step 2 replace the light spot by the smallest intensity value
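A rough sketch of the two steps, assuming the light spot is simply the brightest few percent of the equalised eye patch (the slide does not give the detection rule):

```python
import numpy as np

def remove_light_spot(eye_patch, spot_percentile=95):
    """Histogram-equalise the eye region, then replace the bright light
    spot by the smallest intensity value in the patch."""
    patch = eye_patch.astype(np.uint8)
    # Step 1: histogram equalization (plain numpy implementation)
    hist = np.bincount(patch.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    eq = cdf[patch].astype(np.uint8)
    # Step 2: replace the light spot by the smallest intensity value
    spot = eq > np.percentile(eq, spot_percentile)   # assumed detection rule
    eq[spot] = eq.min()
    return eq
```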
Conclusion • We proposed a new algorithm to extract the irises of both eyes • A simple method is used to remove the light spot of the eye