Face Recognition with Haar Transforms and SVMs EE645 Final Project May 11, 2005 J Stautzenberger
Outline • Motivation • Description of Face Recognition System • Overview • Feature Extraction • Haar Transform • SVM • Experiments • Structure • Data Set • Results • Conclusions
Motivation • A very active field in computer science right now. • Applications in Security, Multimedia Retrieval, Human-Computer Interaction, … • Many “good” algorithms exist: • Correlation – nearest neighbor • Eigenfaces – PCA-based • Fisherfaces • I am interested in doing real-time face detection and feature extraction.
Proposed System Overview • Proposed system: a cascade of SVMs using Haar wavelet features, with feature selection done by AdaBoost (sketched below). • Combination of simple-to-complex classifiers. • Using SVMs for the entire problem can lead to a lot of useless computation on easily distinguishable background patterns. • AdaBoost feature selection will cut out almost all background patterns quickly. • “The aim of boosting is to improve the classification of any given simple learning algorithm.” (Schapire) • Training with AdaBoost can be very slow if used for the complete classification algorithm, though – the algorithm converges very slowly once the remaining examples become very hard.
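The cascade logic can be illustrated in a few lines. This is a minimal sketch of the simple-to-complex idea, not the project's implementation; `boost_stage` and `svm_stage` are hypothetical callables standing in for the AdaBoost-based stage and the full SVM:

```python
def classify_window(window, boost_stage, svm_stage):
    """Simple-to-complex cascade: return +1 (face) or -1 (background).

    boost_stage: cheap boosted classifier (hypothetical) that rejects
                 most easily distinguishable background patterns.
    svm_stage:   expensive SVM (hypothetical), run only on survivors.
    """
    if boost_stage(window) < 0:   # fast rejection of easy background
        return -1
    return svm_stage(window)      # hard candidates get the full SVM
```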
Feature Extraction • Two general types of feature selection: • Filter methods – preprocessing steps performed independently of the classification algorithm • Wrapper methods – search through the feature space using a criterion of the classification algorithm to select optimal features • Two popular filter methods: • Haar transform – simple • Gabor transform – not simple
Haar Feature Extraction • Haar wavelet (a minimal implementation follows below) • Breaks the image down into 4 subbands: • HH – high-passed in the vertical and horizontal directions • LH – low-passed in the vertical and high-passed in the horizontal direction • HL – high-passed in the vertical and low-passed in the horizontal direction • LL – low-passed in the vertical and horizontal directions • [Diagram: one- and two-level decompositions into LL, LH, HL, and HH subbands; the LL subband is decomposed again at each level]
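A one-level 2D Haar decomposition is short enough to sketch directly. This is a minimal NumPy version assuming an image with even dimensions and the averaging (unnormalized) form of the Haar filters; the slides do not show the project's exact implementation:

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform; returns LL, LH, HL, HH.

    First letter = vertical filtering, second = horizontal, matching
    the subband naming on the slide.
    """
    img = img.astype(float)
    a, b = img[0::2, :], img[1::2, :]            # adjacent row pairs
    lo_v, hi_v = (a + b) / 2.0, (a - b) / 2.0    # low/high pass vertically
    LL = (lo_v[:, 0::2] + lo_v[:, 1::2]) / 2.0   # low vert, low horiz
    LH = (lo_v[:, 0::2] - lo_v[:, 1::2]) / 2.0   # low vert, high horiz
    HL = (hi_v[:, 0::2] + hi_v[:, 1::2]) / 2.0   # high vert, low horiz
    HH = (hi_v[:, 0::2] - hi_v[:, 1::2]) / 2.0   # high vert, high horiz
    return LL, LH, HL, HH

# Recursing on LL produces the multi-level pyramid in the diagram.
```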
Rectangle Features • Rectangle feature examples: • horizontal • vertical • diagonal • The sum of the pixels that lie in the white rectangles is subtracted from the sum of the pixels in the grey rectangles (an integral-image sketch follows below).
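The standard Viola-Jones way to evaluate such features is with an integral image, so any rectangle sum costs only four array lookups. A minimal sketch (the function names here are mine, not from the slides):

```python
import numpy as np

def integral_image(img):
    """Padded cumulative sum: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Pixel sum of the h-by-w rectangle with top-left corner (y, x)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_horizontal(ii, y, x, h, w):
    """Horizontal two-rectangle feature: grey (left) minus white (right)."""
    return rect_sum(ii, y, x, h, w) - rect_sum(ii, y, x + w, h, w)
```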
SVMs • The SVM determines the optimal hyperplane, i.e. the one that maximizes the margin. • The margin is the distance between the hyperplane and the nearest sample. • Decision function: f(x) = sign(Σ_i α_i y_i K(x_i, x) + b) • The α_i are the solutions of a quadratic programming problem • A sample with non-zero α_i is called a support vector
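A minimal example of training a maximum-margin classifier and reading off its support vectors, using scikit-learn's `SVC` (the slides do not say which SVM package the project used, and the data below is random placeholder data):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: 3072-length feature vectors, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3072))
y = np.where(X[:, 0] > 0, 1, -1)   # arbitrary labels for illustration

clf = SVC(kernel="linear")         # finds the maximum-margin hyperplane
clf.fit(X, y)

# Samples with non-zero alpha_i are the support vectors;
# decision_function(x) evaluates sum_i alpha_i y_i K(x_i, x) + b.
print("support vectors:", clf.support_vectors_.shape[0])
print("labels:", np.sign(clf.decision_function(X[:5])))
```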
Images • Take some initial images for simple testing • Either create or find a large database • 2100 images of 2 people • Yale Face Database B • 5760 single-light-source images of 10 subjects • under 576 viewing conditions (9 poses × 64 illumination conditions)
Experiments • Initial test • No feature selection • 100 64×64 training images • 3072-length feature vector • 2000 test images
Yale Image Experiment 1 • Two subjects • 512 training images • 512 test images • No feature selection • 3072-length feature vector • Linear SVM • Training • 18 support vectors • No misclassified images • Testing • All images classified correctly
Yale Image Experiment 2 • 10 test subjects • 1024 faces • 384 training faces • Only 2 subjects trained • 640 test faces • 10 subjects tested • Training • 38 support vectors • 0 misclassified • Testing • All faces classified as positive, so nothing from the untrained subjects is rejected… very bad
Yale Experiment 3 • Same setup as before, but this time with feature extraction • 1-level Haar transform • 4 filtered images • 3072-length feature vector • No feature selection this time • Training • 20 support vectors • None misclassified • Testing • Classification error rate was 0.020 • All positive labels classified correctly
Yale Experiment 4 • Same setup as 3 but with a 2-level Haar transform • 2-level Haar transform • 7 filtered images • 3072-length feature vector • No feature selection this time • Training • 36 support vectors • None misclassified • Testing • Classification error rate was 0.000 • Very good…
Yale Experiment 5 • Same setup as 3 and 4 but now with feature selection • Feature selection algorithm (sketched below): • 1-level Haar transform • Sum each of the 4 filtered images • 4 features • Training • Nonlinear SVM (RBF kernel) • 372 support vectors • 9 misclassified • Testing • Classification error rate was 0.2391 • Positive examples badly mislabeled
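A plausible reading of this feature reduction, reusing `haar_level` from the earlier Haar sketch: each filtered image is collapsed to a single scalar by summing its coefficients, giving 4 features for one level. The slides do not spell out the exact construction, so this is an assumption:

```python
import numpy as np

def subband_sum_features(img):
    """4 scalar features for a 1-level transform: the sum of each
    subband image (assumed reduction; not confirmed by the slides)."""
    LL, LH, HL, HH = haar_level(img)   # haar_level: see the Haar sketch
    return np.array([LL.sum(), LH.sum(), HL.sum(), HH.sum()])
```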
Yale Experiment 6 • Same setup as 5 but now with 16 selected features • Feature selection algorithm: • 4-level Haar transform • Sum each of the 16 filtered images • 16 features • Training • Back to a linear SVM • 11 support vectors • 0 misclassified • Testing • Classification error rate was 0.1812 • Positive examples labeled correctly
Conclusions • Feature extraction is simple but very powerful • Better feature selection for a better error rate • Better rectangle filters • Use boosting • Eliminate background patterns • Reduce features • Gabor transform • Better testing needed • Test against known results • Crop images better • Implement the 3-layer system • Feature extraction • Boosting (soft classifier) • SVMs (hard classifier)
References • [1] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose,” IEEE Trans. Pattern Anal. Mach. Intelligence, vol. 23, no. 6, pp. 643–660, 2001. • [2] D.D. Le and S. Satoh, “Feature Selection by AdaBoost for SVM-Based Face Detection.” • [3] F. Smeraldi, O. Carmona, and J. Bigün, “Saccadic Search with Gabor Features Applied to Eye Detection and Real-Time Head Tracking,” 1998. • [4] P. Viola and M. Jones, “Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade.”