Simple Face Detection System
Ali Arab
Sharif University of Technology, Fall 2012
Outline • What is face detection? • Applications • Basic concepts • Image • RGB color space • Normalized RGB • HSL color space • Algorithm description
What is face detection? • Given an image, determine whether it contains any human face and, if so, where it is (or where they are).
Applications • Automatic face recognition systems • Human-computer interaction systems • Surveillance systems • Face tracking systems • Autofocus cameras • Even energy conservation! • The system can recognize the face direction of the TV user. When the user is not looking at the screen, the TV brightness is lowered. When the face returns to the screen, the brightness is increased.
What is an image? • We can think of an image as a matrix. • Simplest form: binary images
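As a minimal sketch (in Python; the language choice is mine, not the course's), a tiny binary image can be stored directly as such a matrix, i.e. a nested list of 0s and 1s:

```python
# A tiny 5x5 binary image stored as a matrix (list of rows): 1 = white, 0 = black.
binary_image = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

rows, cols = len(binary_image), len(binary_image[0])
print(rows, cols)            # 5 5
print(binary_image[2][2])    # 1: the pixel at row 2, column 2 is foreground
```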
What is an image? (cont.) • Grayscale images:
What is an image? (cont.) • Color images: known as the RGB color space
rg space • Normalized RGB: a color is represented by the proportion of red, green, and blue in the color, rather than by the intensity of each. • This removes the intensity information. • r = R / (R + G + B), g = G / (R + G + B)
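A minimal sketch of this normalization for a single pixel; the zero-division guard and its return value for black pixels are my additions, not from the slide:

```python
def rgb_to_rg(R, G, B):
    """Convert an RGB pixel to normalized rg chromaticity (r, g)."""
    total = R + G + B
    if total == 0:          # pure black: chromaticity is undefined
        return 0.0, 0.0     # convention chosen here, not from the slide
    return R / total, G / total

print(rgb_to_rg(200, 120, 80))   # (0.5, 0.3)
```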
HSL color space • Motivation: the relationship between the constituent amounts of red, green, and blue light and the resulting color is unintuitive.
HSL color space • Each pixel is represented using hue, saturation, and lightness. • You need to know how to convert from RGB to HSL!
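One way to do the conversion without an image-processing library is Python's standard colorsys module; note that its function is named rgb_to_hls and returns hue, lightness, and saturation, each in [0, 1]. The wrapper below is only a sketch and the argument/return conventions are my choices:

```python
import colorsys

def rgb_to_hsl(R, G, B):
    """Convert an 8-bit RGB pixel to (H, S, L), each in the range [0, 1].

    colorsys.rgb_to_hls returns hue, lightness, saturation in that order;
    rescale H afterwards (e.g. by 360 for degrees) to match your skin rule.
    """
    h, l, s = colorsys.rgb_to_hls(R / 255.0, G / 255.0, B / 255.0)
    return h, s, l

print(rgb_to_hsl(220, 170, 140))  # a warm, skin-like tone: small hue value
```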
Algorithm description • We use a simple knowledge-based algorithm to accomplish the task: • This approach represents a face with a set of rules and uses these rules to guide the search process.
Algorithm description • First step: skin pixel classification • Convert RGB to HSL. • In HSL color space: if H < 20 or H >= 239, the pixel can be skin; otherwise reject it. • The goal is to remove the maximum number of non-face pixels from the image, in order to focus on the remaining skin-colored regions.
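A sketch of this skin test applied to a whole image. The slide does not say which scale H uses; the code below assumes a 0-255 hue scale, under which the wrap-around threshold of 239 corresponds to reddish hues:

```python
import colorsys

def hue_byte(R, G, B):
    """Hue of an 8-bit RGB pixel, rescaled to 0-255 (the scale is an assumption)."""
    h, _, _ = colorsys.rgb_to_hls(R / 255.0, G / 255.0, B / 255.0)
    return h * 255.0

def is_skin(R, G, B):
    """Rule from the slide: the pixel can be skin if H < 20 or H >= 239."""
    H = hue_byte(R, G, B)
    return H < 20 or H >= 239

def skin_mask(rgb_image):
    """rgb_image: nested list of (R, G, B) tuples -> binary mask (1 = skin candidate)."""
    return [[1 if is_skin(*pixel) else 0 for pixel in row] for row in rgb_image]
```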
Algorithm description (cont.) • First step: skin pixel classification (cont.) • Convert RGB to rg space. • In rg chromaticity space, pixels are classified as skin candidates by thresholds on r and g (the rule was given as a formula on the slide).
Algorithm description (cont.) Result of skin classification:
Algorithm description (cont.) Consider each connected region as an object.
Algorithm description (cont.) Second step: connected component labelling. Binary image before labelling:
Algorithm description (cont.) Second step: connected component labelling. Binary image after labelling:
Algorithm description (cont.) Second step: connected component labelling. You can find an efficient algorithm for labelling here: http://www.codeproject.com/Articles/336915/Connected-Component-Labeling-Algorithm
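The linked article gives one efficient implementation. As a simpler illustration, here is a breadth-first flood-fill labelling sketch for a binary mask; the choice of 4-connectivity is an assumption:

```python
from collections import deque

def label_components(mask):
    """Label the 4-connected foreground regions of a binary mask.

    mask: nested list of 0/1. Returns (labels, count), where labels contains
    0 for background and 1..count for the pixels of each connected object.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] == 1 and labels[i][j] == 0:
                count += 1
                labels[i][j] = count
                queue = deque([(i, j)])
                while queue:                     # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```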
Algorithm description (cont.) • Third step: connected component analysis • Analysing the labelled image gives us features of each object, such as: • Area • Minimum bounding box
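A sketch of extracting these two features from a labelled image such as the one produced by label_components above; the bbox format (min_row, min_col, max_row, max_col) is my convention:

```python
def analyse_components(labels, count):
    """Compute the area and minimum bounding box of every labelled object.

    Returns {label: {"area": int, "bbox": (min_row, min_col, max_row, max_col)}}.
    """
    stats = {k: {"area": 0, "bbox": None} for k in range(1, count + 1)}
    for i, row in enumerate(labels):
        for j, k in enumerate(row):
            if k == 0:                      # background
                continue
            stats[k]["area"] += 1
            box = stats[k]["bbox"]
            if box is None:
                stats[k]["bbox"] = (i, j, i, j)
            else:
                r0, c0, r1, c1 = box
                stats[k]["bbox"] = (min(r0, i), min(c0, j), max(r1, i), max(c1, j))
    return stats
```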
Algorithm description (cont.) • Fourth step: • Objects smaller than the minimum face area are removed (area smaller than 450). • Objects bigger than the maximum face area are removed (area larger than 4500).
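A sketch of this size filter applied to the per-object statistics from the previous sketch; it assumes the 450 and 4500 limits are areas measured in pixels, which the slide does not state explicitly:

```python
MIN_FACE_AREA = 450     # from the slide
MAX_FACE_AREA = 4500    # from the slide

def filter_by_area(stats):
    """Keep only the objects whose area lies within the allowed face range."""
    return {k: s for k, s in stats.items()
            if MIN_FACE_AREA <= s["area"] <= MAX_FACE_AREA}
```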
Algorithm description (cont.) • The resulting image so far:
Algorithm description (cont.) • Fifth step: percentage of skin in each bounding box • If the percentage > 0.9 or the percentage < 0.4, the region is rejected.
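A sketch of this test, measuring what fraction of the pixels inside an object's bounding box carry that object's label; the data structures are the ones from the earlier sketches:

```python
def skin_fraction(labels, label, bbox):
    """Fraction of the bounding-box pixels that belong to the given object."""
    r0, c0, r1, c1 = bbox
    box_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    skin = sum(1 for i in range(r0, r1 + 1)
                 for j in range(c0, c1 + 1) if labels[i][j] == label)
    return skin / box_area

def passes_skin_percentage(labels, label, bbox):
    """Rule from the slide: reject the region if the fraction is > 0.9 or < 0.4."""
    p = skin_fraction(labels, label, bbox)
    return 0.4 <= p <= 0.9
```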
Algorithm description (cont.) • Sixth step: eliminating based on the golden ratio • The (height / width) ratio of a face bounding box should be ≈ the golden ratio (1.618).
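A sketch of the ratio test; the slide only gives the target ratio, so the tolerance band around 1.618 is my assumption:

```python
GOLDEN_RATIO = 1.618

def passes_golden_ratio(bbox, tolerance=0.5):
    """Keep the object if height / width is roughly the golden ratio."""
    r0, c0, r1, c1 = bbox
    height = r1 - r0 + 1
    width = c1 - c0 + 1
    return abs(height / width - GOLDEN_RATIO) <= tolerance
```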
Algorithm description (cont.) • And the last step: counting the holes (optional) • For the remaining objects we compute the number of holes. • Eyes, mouth, and nose are usually darker, so they appear as holes in the binary image. • If an object has no holes, we simply reject it!
Algorithm description (cont.) • And the last step: counting the holes (optional) • How? • In each bounding box, invert the pixels and count the objects in the new image using the labelling algorithm discussed before.
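A sketch of the hole count: invert the mask inside the bounding box and run the same labelling routine on the inverted image. Treating inverted components that touch the box border as outside background rather than holes is my assumption about how the count should work:

```python
def count_holes(mask, bbox):
    """Count the enclosed background regions (holes) inside a bounding box.

    Reuses label_components() from the labelling sketch above.
    """
    r0, c0, r1, c1 = bbox
    # Invert the pixels inside the box: background becomes foreground.
    inverted = [[1 - mask[i][j] for j in range(c0, c1 + 1)]
                for i in range(r0, r1 + 1)]
    labels, count = label_components(inverted)
    # Inverted components touching the box border are outside the object,
    # not holes, so they are subtracted from the count.
    rows, cols = len(inverted), len(inverted[0])
    border = set()
    for j in range(cols):
        border.update((labels[0][j], labels[rows - 1][j]))
    for i in range(rows):
        border.update((labels[i][0], labels[i][cols - 1]))
    border.discard(0)
    return count - len(border)
```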
Algorithm description (cont.) Remaining objects are facial regions.
Final Result We can draw a bounding box for each face or just report the position.
Remarks • You're not allowed to use any image processing library such as cx_image or OpenCV. • Collaboration is encouraged, but the work must be done individually.
Any Questions? Mail to: aliarab2009@gmail.com