CVPR 2010 • AAM based Face Tracking with Temporal Matching and Face Segmentation • Mingcai Zhou1, Lin Liang2, Jian Sun2, Yangsheng Wang1 • 1Institute of Automation, Chinese Academy of Sciences, Beijing, China • 2Microsoft Research Asia, Beijing, China
Outline • AAM Introduction • Related Work • Method and Theory • Experiment
AAM Introduction • A statistical model of shape and grey-level appearance, consisting of a shape model and an appearance model
Shape Model Building • The shape is modeled as s = s₀ + Σᵢ pᵢ sᵢ, where s₀ is the mean shape, sᵢ are the shape bases, and pᵢ are the shape parameters • The mean shape and shape bases are learned by PCA on the training shapes
Texture Model Building • The appearance is modeled as A(x) = A₀(x) + Σᵢ λᵢ Aᵢ(x), where A₀ is the mean appearance, Aᵢ are the appearance bases, and λᵢ are the appearance parameters • Grey-level values are sampled through the warp W(x) onto the mean shape, giving a shape-free patch
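As a quick illustration of how both linear models are typically built from training data, here is a minimal PCA sketch over aligned shapes or shape-free texture patches; the function name, variable names, and the retained-variance fraction are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def build_pca_model(samples, var_keep=0.95):
    """Build a linear model x ≈ mean + bases @ params by PCA.

    samples: (N, D) array; each row is one training sample, e.g. a flattened
             aligned shape (x1, y1, ..., xn, yn) or a flattened shape-free
             texture patch.
    Returns (mean, bases), where bases has shape (D, k).
    """
    samples = np.asarray(samples, dtype=np.float64)
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data yields the principal components.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    var = S ** 2
    # Keep enough components to explain var_keep of the total variance.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    bases = Vt[:k].T  # (D, k): columns are the shape/appearance bases
    return mean, bases

# Usage sketch:
#   s0, Ss = build_pca_model(training_shapes)    # shape model   s ≈ s0 + Ss @ p
#   A0, Sa = build_pca_model(training_textures)  # texture model A ≈ A0 + Sa @ lam
```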
AAM Model Search • Find the optimal shape parameters p and appearance parameters λ that minimize the difference between the warped-back image appearance and the synthesized appearance • The warp W(x; p) maps every pixel x in the model coordinate frame to its corresponding image point
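For reference, the standard AAM fitting energy (as in the Matthews–Baker formulation) is the pixel-wise difference between the synthesized appearance and the image sampled through the warp; the multi-band version used in this paper sums the same kind of term over the intensity and gradient bands, and its exact weighting may differ from this sketch.

```latex
% Standard AAM fitting energy over pixels x in the mean-shape frame:
% synthesized appearance minus the image warped back by W(x; p).
E(\mathbf{p}, \boldsymbol{\lambda}) =
  \sum_{\mathbf{x} \in s_0}
  \Bigl[ A_0(\mathbf{x}) + \sum_i \lambda_i A_i(\mathbf{x})
         - I\bigl(W(\mathbf{x}; \mathbf{p})\bigr) \Bigr]^2
```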
Problems with the AAM tracker • Difficult to generalize to unseen images • Cluttered backgrounds
How to address this? • Add a temporal matching constraint to AAM fitting, enforcing an inter-frame local appearance constraint between consecutive frames • Introduce color-based face segmentation as a soft constraint (a schematic of the combined energy is sketched below)
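A minimal schematic of how the two soft constraints can enter the fitting energy; the weights k1, k2 and the names E_temporal, E_seg are illustrative placeholders, not the paper's exact formulation.

```latex
% Schematic combined energy: AAM reconstruction error plus the two soft
% constraints, each weighted by an illustrative coefficient.
E_{\text{total}}(\mathbf{p}, \boldsymbol{\lambda}) =
  E_{\text{AAM}}(\mathbf{p}, \boldsymbol{\lambda})
  + k_1\, E_{\text{temporal}}(\mathbf{p})
  + k_2\, E_{\text{seg}}(\mathbf{p})
```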
Related Work • Feature-based (issue: mismatched local features) – W.-K. Liao, D. Fidaleo, and G. G. Medioni, Integrating multiple visual cues for robust real-time 3D face tracking, 2007 • Intensity-based (issue: fast illumination changes) – X. Liu, F. Wheeler, and P. Tu, Improved face model fitting on video sequences, 2007 • This paper: a temporal matching constraint
Method and Theory • Extend the basic AAM to a multi-band AAM • The texture (appearance) is a concatenation of three texture bands (see the sketch below): • the intensity (b) • the x-direction gradient strength (c) • the y-direction gradient strength (d)
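A minimal sketch of computing the three texture bands for a shape-free patch, assuming the patch is a grayscale NumPy array; the gradient-strength definition and normalization here are assumptions, not the paper's specification.

```python
import numpy as np

def texture_bands(patch: np.ndarray) -> np.ndarray:
    """Stack intensity and x/y gradient-strength bands for a shape-free patch.

    patch: 2-D float array of grey-level values warped onto the mean shape.
    Returns an array of shape (3, H, W): intensity, |dI/dx|, |dI/dy|.
    """
    patch = patch.astype(np.float64)
    # Central-difference gradients; "strength" is taken as the absolute value.
    gy, gx = np.gradient(patch)
    bands = np.stack([patch, np.abs(gx), np.abs(gy)])
    # Normalize each band so no single band dominates the fitting error.
    for b in bands:
        b -= b.mean()
        std = b.std()
        if std > 0:
            b /= std
    return bands
```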
Temporal Matching Constraint • Select feature points with salient local appearance in the previous frame • Warp the previous frame I(t−1) to the model coordinate frame to obtain the appearance A(t−1) • Use the warping function W(x; pₜ) to map each patch R(t−1) to a patch R(t) in frame t (a schematic form of the resulting term is given below)
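One plausible form of the inter-frame local appearance term, matching each selected salient patch R_j between consecutive frames through the current and previous warps; this is a sketch of the idea, and the paper's exact term and weighting may differ.

```latex
% Sum over selected salient patches R_j of the squared appearance difference
% between frame t (sampled through the current warp) and frame t-1 (sampled
% through the previous frame's warp).
E_{\text{temporal}}(\mathbf{p}_t) =
  \sum_{j} \sum_{\mathbf{x} \in R_j}
  \Bigl[ I_t\bigl(W(\mathbf{x}; \mathbf{p}_t)\bigr)
       - I_{t-1}\bigl(W(\mathbf{x}; \mathbf{p}_{t-1})\bigr) \Bigr]^2
```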
Shape Parameter Initialization • Face motion direction
Shape Parameter Initialization • The iteration stops when the residual r reaches the noise level expected in the correspondences (as sketched below)
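A minimal sketch of this style of initialization from inter-frame feature correspondences: repeatedly fit a 2-D similarity transform, drop the worst match, and stop once the RMS residual r falls to the noise level expected in the correspondences. The similarity-transform model and the function name are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def init_from_matches(prev_pts, curr_pts, noise_level=1.5, min_pts=4):
    """Fit a 2-D similarity transform to matched points, iteratively
    discarding the worst correspondence until the RMS residual r reaches
    the expected correspondence noise."""
    prev_pts = np.asarray(prev_pts, dtype=np.float64)
    curr_pts = np.asarray(curr_pts, dtype=np.float64)
    keep = np.ones(len(prev_pts), dtype=bool)
    M, t = np.eye(2), np.zeros(2)
    while keep.sum() >= min_pts:
        P, C = prev_pts[keep], curr_pts[keep]
        # Least-squares similarity: x' = a*x - b*y + tx, y' = b*x + a*y + ty.
        A = np.zeros((2 * len(P), 4))
        A[0::2, 0], A[0::2, 1], A[0::2, 2] = P[:, 0], -P[:, 1], 1.0
        A[1::2, 0], A[1::2, 1], A[1::2, 3] = P[:, 1], P[:, 0], 1.0
        a, bb, tx, ty = np.linalg.lstsq(A, C.reshape(-1), rcond=None)[0]
        M, t = np.array([[a, -bb], [bb, a]]), np.array([tx, ty])
        err = np.linalg.norm(prev_pts @ M.T + t - curr_pts, axis=1)
        r = np.sqrt(np.mean(err[keep] ** 2))
        if r <= noise_level:                 # residual at the noise level: stop
            break
        keep[np.argmax(err * keep)] = False  # drop the worst remaining match
    return M, t, keep
```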
Shape Parameter Initialization – Comparison • Motion direction • Previous frame's shape • Feature matching
Face Segmentation Constraint • Defined over the selected outline points, whose locations are given in the model coordinate frame (a schematic term is sketched below)
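A schematic of how such a soft constraint could be written, with x_k denoting the model-coordinate locations of the selected outline points and S(·) a cost that is low when a warped outline point lies on the boundary of the segmented face region; both the symbol x_k and the form of S are illustrative assumptions rather than the paper's exact term.

```latex
% Soft segmentation constraint: push each warped outline point toward the
% color-based face segmentation. S(.) is a placeholder cost, e.g. a distance
% to the segmentation boundary.
E_{\text{seg}}(\mathbf{p}) = \sum_{k} S\bigl(W(\mathbf{x}_k; \mathbf{p})\bigr)
```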
Face Segmentation Constraint – Face Segmentation • Color-based segmentation of the face region (result images in original slides)
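A minimal sketch of one way to obtain a color-based face mask: fit a Gaussian skin-color model in a chrominance space to pixels from the previous frame's tracked face region and threshold the per-pixel Mahalanobis distance. This is a stand-in illustration; the paper only states that its segmentation is color-based, and its actual method may differ.

```python
import numpy as np

def skin_color_mask(frame_rgb, face_pixels_rgb, thresh=3.0):
    """Color-based face segmentation sketch.

    frame_rgb:       (H, W, 3) uint8 image of the current frame.
    face_pixels_rgb: (N, 3) RGB samples from the previously tracked face
                     region, used to fit the color model.
    Returns a boolean (H, W) mask of likely face/skin pixels.
    """
    def to_chroma(rgb):
        # Normalized (r, g) chromaticity reduces sensitivity to illumination.
        rgb = rgb.astype(np.float64) + 1e-6
        return rgb[..., :2] / rgb.sum(axis=-1, keepdims=True)

    samples = to_chroma(face_pixels_rgb.reshape(-1, 3))
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples.T) + 1e-8 * np.eye(2))

    diff = to_chroma(frame_rgb).reshape(-1, 2) - mean
    # Squared Mahalanobis distance to the skin-color model.
    d2 = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)
    return (d2 < thresh ** 2).reshape(frame_rgb.shape[:2])
```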
Experiments • Lost-frame counts for the evaluated trackers (table/plots in original slides)
Conclusion ─ Our tracking algorithm accurately localizes the facial components, such as eyes, brows, nose, and mouth, under illumination changes as well as large expression and pose variations ─ Our tracking algorithm runs in real time: on a Pentium-4 3.0 GHz computer, it runs at about 50 fps on 320 × 240 video
Future Work ─ Our tracker cannot robustly track profile views with large angles ─ The tracker’s ability to handle large occlusion also needs to be improved