Face Recognition from Face Motion Manifolds using Robust Kernel RAD
Ognjen Arandjelović, Roberto Cipolla
Funded by Toshiba Corp. and Trinity College, Cambridge
Face Recognition
• Single-shot recognition – a popular area of research since the 1970s
• Many methods have been developed (e.g. Eigenfaces, 3D Morphable Models, wavelet methods)
• Poor performance in the presence of:
  • Illumination variation
  • Pose variation
  • Facial expression
  • Occlusions (glasses, hair etc.)
Face Recognition from Video
(Figure: recognition setup – training stream vs. novel stream)
• Face motion helps resolve the ambiguities of single-shot recognition – implicit 3D
• Video information is often available (surveillance, authentication etc.)
Face Manifolds
(Figure: face region, facial features and the face pattern manifold)
• Face patterns describe manifolds which are:
  • Highly nonlinear, and
  • Noisy, but
  • Smooth
Limitations of Previous Work
• In this work we address three fundamental questions:
  • How to model nonlinear manifolds of face motion
  • How to choose the distance measure
  • How and what noise sources to model
Comparing Nonlinear Manifolds
• Closest-neighbour: majority vote with Eigen/Fisherfaces
• Principal angles: Mutual Subspace Method
• Information-theoretic measures: our method, the KLD method of Shakhnarovich et al.
KLD vs. RAD
• KLD: how well does P(x) explain Q(x)?
• RAD: how well can we distinguish between P(x) and Q(x)?
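The Resistor-Average Distance combines the two directions of the KLD harmonically, RAD(P, Q) = (KLD(P||Q)^-1 + KLD(Q||P)^-1)^-1. Below is a minimal sketch, assuming both distributions are modelled as multivariate Gaussians (as they are in the KPCA space described next); the names kl_gauss and rad_gauss are illustrative, not from the paper.

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0, S0) || N(mu1, S1)) between
    two multivariate Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def rad_gauss(mu_p, S_p, mu_q, S_q):
    """Resistor-Average Distance: harmonic combination of the two KL directions,
    RAD(P, Q) = 1 / (1/KL(P||Q) + 1/KL(Q||P))."""
    kl_pq = kl_gauss(mu_p, S_p, mu_q, S_q)
    kl_qp = kl_gauss(mu_q, S_q, mu_p, S_p)
    return 1.0 / (1.0 / kl_pq + 1.0 / kl_qp)
```

The harmonic combination makes RAD symmetric and bounded above by the smaller of the two KLD directions.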
Nonlinear RAD
• Kernel PCA with an RBF kernel maps the highly nonlinear manifolds of the input space to approximately linear manifolds in KPCA space
• Use the closed-form expression for the KLD between Gaussians in KPCA space
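A minimal Kernel PCA sketch with an RBF kernel, in plain NumPy; the function name kpca_rbf and the parameters sigma and n_components are illustrative choices, not values taken from the paper.

```python
import numpy as np

def kpca_rbf(X, sigma, n_components):
    """Kernel PCA with an RBF kernel: project the rows of X
    (n_samples x n_features) onto the leading nonlinear principal components."""
    n = X.shape[0]
    # RBF (Gaussian) Gram matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    # Centre the kernel matrix in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # Eigendecompose and keep the top components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # KPCA coordinates of the training data
    return vecs * np.sqrt(np.maximum(vals, 1e-12))
```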
Registration
(Figure: translation, rotation and skew manifolds)
• Linear operations on images are highly nonlinear in the pattern space
• Translation/rotation and weak perspective can easily be corrected for directly from point correspondences
• We use the locations of the pupils and nostrils to robustly estimate the optimal affine registration parameters
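As an illustration of the last step, here is a plain least-squares affine fit from a handful of point correspondences (pupils and nostrils would give four points). This is a generic estimator sketch, not the paper's robust procedure; fit_affine is a hypothetical helper name.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts (e.g. detected pupil and
    nostril locations) onto dst_pts (their canonical positions).
    Both inputs are (n_points, 2) arrays; returns a 2x3 matrix [A | t]."""
    n = src_pts.shape[0]
    # Each correspondence contributes two rows: one for x', one for y'
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src_pts
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src_pts
    M[1::2, 5] = 1.0
    b = dst_pts.reshape(-1)          # interleaved [x'0, y'0, x'1, y'1, ...]
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params.reshape(2, 3)
```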
Registration Method Used
(Figure: detect features → crop & affine-register faces)
• Feature localization based on a combination of shape and pattern matching (Fukui et al., 1998)
Feature Tracking Errors
• We recognize two sources of registration noise:
  • Low-energy noise due to imprecision of the feature detector
  • High-energy noise due to incorrectly localized features
(Figure: 20 automatically cropped and registered faces from a video sequence, showing outliers – high-energy noise – and imperfect alignment of facial features – low-energy noise)
Low Energy Noise
(Figure: original data vs. original + synthetic data)
• Estimate the energy of the misregistration noise on the manifold
• Augment the data with synthetically perturbed samples = thickening of the motion manifold
• Synthetic data explicitly models the variation
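A sketch of what such "thickening" might look like: each registered crop is duplicated under small random affine perturbations. The function name, the perturbation ranges and the use of scipy.ndimage.affine_transform are my assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import affine_transform

def thicken_manifold(faces, n_synth=3, max_shift=1.0, max_angle=2.0, rng=None):
    """Augment registered face crops with small random affine perturbations,
    'thickening' the motion manifold to model low-energy misregistration noise.
    faces: iterable of 2-D grayscale crops."""
    rng = np.random.default_rng(rng)
    augmented = []
    for face in faces:
        augmented.append(face)
        c = np.array(face.shape) / 2.0              # perturb about the image centre
        for _ in range(n_synth):
            a = np.deg2rad(rng.uniform(-max_angle, max_angle))
            R = np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
            t = rng.uniform(-max_shift, max_shift, size=2)
            # offset chosen so the rotation is about the centre, plus a small shift
            offset = c - R @ c + t
            augmented.append(affine_transform(face, R, offset=offset, order=1))
    return augmented
```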
Outliers – High Energy Noise
(Figure: outliers lying far from the manifold of correctly registered faces (+ low-energy noise))
• Outliers are due to incorrect feature localization
• High-energy noise – far from the 'correct' data mean in KPCA space
RANSAC for Robust KPCA
(Figure: at each iteration, a minimal random sample is drawn, the Kernel PCA projection is computed, the valid data are counted and outliers rejected)
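One way such an iteration might look, sketched with the kernel trick used to measure each point's distance to a random sample's feature-space mean; the sampling scheme, threshold and function names are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def ransac_inliers(X, sigma, sample_size=10, n_iter=200, thresh=0.5, rng=None):
    """RANSAC-style outlier rejection in the kernel-induced feature space:
    repeatedly draw a minimal random sample, measure every point's distance to
    that sample's feature-space mean, and keep the hypothesis with the largest
    consensus set.  Returns a boolean inlier mask."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    best_mask, best_count = np.ones(n, bool), 0
    for _ in range(n_iter):
        idx = rng.choice(n, size=sample_size, replace=False)
        S = X[idx]
        K_xs = rbf_kernel(X, S, sigma)
        K_ss = rbf_kernel(S, S, sigma)
        # squared distance of phi(x) to the sample mean (k(x, x) = 1 for RBF)
        d2 = 1.0 - 2.0 * K_xs.mean(axis=1) + K_ss.mean()
        mask = d2 < thresh
        if mask.sum() > best_count:
            best_mask, best_count = mask, mask.sum()
    return best_mask
```

The final Kernel PCA model is then fitted on the surviving inliers only.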
Algorithm: The Big Picture
Input frames → original + synthetic data → valid data in KPCA space → distance
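Tying the sketches above together (thicken_manifold, ransac_inliers, kpca_rbf, rad_gauss), one plausible end-to-end arrangement is shown below; in particular, the shared KPCA basis over both streams and the covariance regularisation are my assumptions, not details taken from the paper.

```python
import numpy as np

def stream_distance(frames_a, frames_b, sigma=10.0, n_components=8, rng=0):
    """End-to-end sketch: augment each stream with synthetic samples, reject
    outliers, project both streams into a shared KPCA space, fit one Gaussian
    per stream, and compare the two Gaussians with RAD."""
    def clean(frames):
        X = np.stack([f.ravel() for f in thicken_manifold(frames, rng=rng)])
        return X[ransac_inliers(X, sigma, rng=rng)]

    Xa, Xb = clean(frames_a), clean(frames_b)
    Z = kpca_rbf(np.vstack([Xa, Xb]), sigma, n_components)   # shared basis
    Za, Zb = Z[:len(Xa)], Z[len(Xa):]
    reg = 1e-6 * np.eye(n_components)                        # keep covariances invertible
    return rad_gauss(Za.mean(axis=0), np.cov(Za, rowvar=False) + reg,
                     Zb.mean(axis=0), np.cov(Zb, rowvar=False) + reg)
```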
Face Video Database
• No standard database exists – we collected our own data
• 160 people, 10 different lighting conditions (each condition twice, i.e. 20 video sequences per person)
Evaluation Results
• Robust Kernel RAD outperformed the other methods on all databases
• Average recognition rate of 98%
Method Limitations / Future Work
• Less pose sensitivity (why should the input and reference distributions be similar?)
• Illumination invariance is not addressed – see Arandjelović et al., BMVC 2004
(Figure: same person under different illumination vs. a novel person)
For suggestions, questions etc. please contact me at: oa214@cam.ac.uk