Facial Type, Expression, and Viseme Generation
Josh McCoy, James Skorupski, and Jerry Yee
Introduction
• Virtual human faces
  • Hard to generate
  • Easy to criticize
• Motivation
  • Movies
  • Games
• Problems
  • Hand-made models take time
  • Physically-based models often look unnatural
Contribution
• Data-driven face generation
• User-guided categorization
• Real-time pose generation from data
Related Work: Face Retargeting
• V. Blanz, C. Basso, and T. Vetter, Reanimating Faces in Images and Video
• Use a morphable model to synthesize a 3D face from the 2D image
• Capture 35 scans of static face poses (expressions, and visemes in a neutral expression) from a source actor
• Find dense point-to-point correspondences
• Retarget facial movements to the 3D face
• Render the 3D face back into the 2D image
Related Work: Face Retargeting
• Problems
  • Does not generate new expressions that are not in the source data set
  • Does not combine and retarget expressions and visemes together
Related Work: Bilinear Model
• E. Chuang and C. Bregler, Mood Swings: Expressive Speech Animation
• Capture video of an actor reading a script under three different expressions (happy, angry, neutral)
• Create a bilinear model, factoring expressions and visemes into two separate components
• Synthesize new facial movements with any expression and viseme
Related Work: Bilinear Model
• Problems
  • Requires a full Cartesian product of facial expressions and visemes
  • Does not generate new expressions that are not in the source data set
  • Does not change the facial characteristics (identity)
• [Embedded video: Pres Videos\Jerry\moodswings.mov]
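To make the bilinear idea concrete, here is a minimal numpy sketch of how such a model can synthesize a face once a core tensor has been learned; the tensor layout, function name, and weights are illustrative assumptions, not Chuang and Bregler's implementation.

```python
import numpy as np

# Minimal sketch of bilinear synthesis, assuming a learned core tensor W of
# shape (n_expr_basis, n_viseme_basis, n_coords), where n_coords is
# 3 * (number of mesh vertices). Names and shapes are illustrative only.
def synthesize_bilinear(W, expr_w, viseme_w):
    # Contract the core with expression and viseme coefficient vectors to get
    # one flattened vertex array, then reshape to (n_vertices, 3).
    face = np.einsum('evc,e,v->c', W, expr_w, viseme_w)
    return face.reshape(-1, 3)

# Usage (weights are made up): blend a mostly-happy expression with the
# coefficients of some viseme.
# face = synthesize_bilinear(W, expr_w=np.array([0.7, 0.3, 0.0]), viseme_w=viseme_ah)
```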
Related Work: Multilinear Model
• D. Vlasic, M. Brand, H. Pfister, and J. Popovic, Face Transfer with Multilinear Models
• Capture videos of 16 actors, each performing 5 visemes under 5 different expressions
• Create a multilinear model, factoring expressions, visemes, and identity into three separate components
• Synthesize new facial movements with any expression, viseme, and identity
Related Work: Multilinear Model
• Problems
  • Requires a full Cartesian product of facial expressions, visemes, and identities
  • Limitations in the missing-data imputation process
  • Does not generate new expressions that are not in the source data set
• [Embedded video: Pres Videos\Jerry\vlasic-2005-ftm-sing.mp4]
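For comparison, a minimal sketch of multilinear (Tucker-style) synthesis with identity, expression, and viseme modes; the core tensor layout and names are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Minimal sketch of multilinear synthesis, assuming a learned core tensor C of
# shape (n_coords, n_id_basis, n_expr_basis, n_viseme_basis).
def synthesize_multilinear(C, id_w, expr_w, viseme_w):
    # Contract identity, expression, and viseme coefficients against the core;
    # varying any one weight vector changes only that factor of the face.
    face = np.einsum('cief,i,e,f->c', C, id_w, expr_w, viseme_w)
    return face.reshape(-1, 3)
```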
Methods
• Acquire and Categorize
• Learn
• Generate
Acquire and Categorize
• Three data sets are needed to fill the model space
  • Set of many neutral faces
  • Set of one face in many poses
  • Set of visemes with the reference face
• Vertex correspondence across the captured faces
• User "rates" attributes of each face
• [Embedded video]
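One possible way to organize the categorized data, assuming every scan is in vertex correspondence with the reference mesh (same vertex count and order); field names and the rating scale are illustrative, not from the paper.

```python
from dataclasses import dataclass, field
import numpy as np

# Hypothetical layout of the categorized training data.
@dataclass
class RatedFace:
    vertices: np.ndarray          # (n_vertices, 3), corresponded to the reference mesh
    ratings: dict = field(default_factory=dict)   # e.g. {"age": 0.7, "roundness": 0.2}

# The three data sets that fill the model space:
neutral_faces = []     # many individuals in a neutral pose (facial type axis)
expression_poses = []  # one individual performing many expressions
viseme_poses = []      # visemes performed with the reference face
```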
Learn
• Analyze each triangle and each transform type separately
• [Diagram: reference face with separate type, expression, and viseme deformations]
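A minimal sketch of a per-triangle deformation representation in the spirit of deformation-transfer methods; this is an assumed formulation for illustration, not necessarily the exact one used here.

```python
import numpy as np

def triangle_frame(v0, v1, v2):
    # Local frame built from two edges and the unit normal of the triangle.
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    return np.column_stack([e1, e2, n / np.linalg.norm(n)])

def deformation_gradient(ref_tri, posed_tri):
    # 3x3 transform mapping a reference triangle's frame onto the posed one.
    return triangle_frame(*posed_tri) @ np.linalg.inv(triangle_frame(*ref_tri))

# Type, expression, and viseme deformations of the same triangle can then be
# composed, e.g. D_total = D_viseme @ D_expression @ D_type, before solving
# for a consistent set of new vertex positions.
```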
Learn
• Compare each pose to the reference face
• Principal Component Analysis (PCA)
  • Apply to each axis of variation
  • Analyze the transformation of every face in the mesh
  • Infer the variation of a single attribute from a combination of many
• [Diagram: polygons × individuals data mapped to a low-dimensional subspace (PCA)]
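A minimal sketch of learning one axis of variation with PCA over per-face feature vectors; the shapes and the use of scikit-learn are assumptions made for brevity.

```python
from sklearn.decomposition import PCA

# Each row stacks one face's per-triangle transforms (or vertex offsets)
# relative to the reference face, for a single axis of variation (e.g. type).
def learn_axis_subspace(face_features, n_components=10):
    # face_features: (n_faces, n_features) data matrix for this axis.
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(face_features)  # per-face coordinates in the subspace
    return pca, coeffs
```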
Generate
• Same sliders as the categorization UI
• Generate any combination of attributes
• Runs in real time
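A minimal sketch of slider-driven generation on top of the learned subspaces; the attribute-direction mapping and all names are hypothetical, but it illustrates why generation reduces to a few small matrix products and can run at interactive rates.

```python
import numpy as np

# `pca` is a fitted subspace model for one axis of variation and
# `attribute_directions` maps each slider name to a direction in that
# subspace (inferred at learn time from the user ratings).
def generate_face(pca, attribute_directions, slider_values):
    coeffs = np.zeros(pca.n_components_)
    for name, value in slider_values.items():
        coeffs += value * attribute_directions[name]
    # Decode back to the flattened face representation (offsets/transforms).
    return pca.inverse_transform(coeffs)
```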
Conclusion
• Realistic face poses from real-world basis data
• Arbitrary faces from a sparse data set
• Future Work
  • Use high-resolution data to drive low-resolution morphing
  • Incorporate a more biologically accurate face model