SIGGRAPH 2011 PAPER READING EXAMPLE-BASED SIMULATION, GEOMETRY ACQUISITION, FACIAL ANIMATION Tong Jing 2011-7-6
PAPERS: • Example-Based Simulation: • Data-Driven Elastic Models for Cloth: Modeling and Measurement • Geometry Acquisition: • Global Registration of Dynamic Range Scans for Articulated Model Reconstruction • Facial Animation: • Realtime Performance-Based Facial Animation • Leveraging Motion Capture and 3D Scanning for High-Fidelity Facial Performance Acquisition • Interactive Region-based Linear 3D Face Models • High-Quality Passive Facial Performance Capture Using Anchor Frames • Computer-Suggested Facial Makeup. EG 2011 • Learning Skeletons for Shape and Pose. I3D 2010
DATA-DRIVEN SIMULATION • Data capture + simulation: improve the accuracy of reconstructed data (Physically Guided Liquid Surface Modeling from Videos. Siggraph 2009) • Example data + simulation: improve simulation speed (Example-Based Wrinkle Synthesis for Clothing Animation. Siggraph 2010) • Estimate parameters from data: improve simulation accuracy (Data-Driven Elastic Models for Cloth: Modeling and Measurement. Siggraph 2011)
SIMPLE CLOTH SIMULATION • Hooke's Law: stress = k * strain • Problem: most current cloth simulation techniques simply use linear, isotropic elastic models with manually selected stiffness parameters.
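The linear, isotropic model criticized on this slide can be sketched in a few lines (a toy example; the stiffness value is made up):

```python
# Minimal sketch of the simple model: one stiffness constant k, the same
# in every direction and at every stretch level. k is illustrative only.
def spring_stress(k, strain):
    """Hooke's law: stress = k * strain."""
    return k * strain

# A spring stretched by 10% (strain = 0.1) with k = 50:
print(spring_stress(50.0, 0.1))  # -> 5.0
```

Real cloth breaks both assumptions baked into this function: k should change with the direction of stretching (anisotropy) and with how far the cloth is already stretched (nonlinearity).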
REALISTIC CLOTH SIMULATION • Anisotropic: stiffness differs with the direction (angle) of stretching • Nonlinear: stiffness differs with the degree of stretching
PIECEWISE ANISOTROPIC LINEAR ELASTIC MODEL Different stiffness for different angles and stretching degree
PIECEWISE ANISOTROPIC LINEAR ELASTIC MODEL • Constitutive relation: stress = stiffness tensor matrix × strain, where the stiffness tensor is chosen per angle and per stretching range
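One way to realize "different stiffness for different angles and stretching degree" is a lookup table over angle and strain bins; the sketch below is a hypothetical illustration with made-up bin counts and values, not the paper's measured model:

```python
import numpy as np

# Piecewise anisotropic linear elasticity, sketched as a table lookup:
# stiffness is indexed by strain direction (angle bin) and stretch
# magnitude (strain bin). All numbers here are illustrative.
N_ANGLE, N_STRAIN = 4, 3
stiffness_table = np.arange(1.0, 1.0 + N_ANGLE * N_STRAIN).reshape(N_ANGLE, N_STRAIN)

def stiffness(angle, strain_mag, max_strain=0.3):
    a = int(angle / (np.pi / N_ANGLE)) % N_ANGLE               # angle bin in [0, pi)
    s = min(int(strain_mag / max_strain * N_STRAIN), N_STRAIN - 1)
    return stiffness_table[a, s]

def stress(angle, strain_mag):
    # Piecewise linear: within each (angle, strain) cell, stress = k * strain.
    return stiffness(angle, strain_mag) * strain_mag
```

Within each cell the model is still linear (Hooke's law), but the effective k now changes across directions and stretch levels, which is the slide's point.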
LIMITATIONS • A piecewise linear elastic model is not well suited to simulating cloth under large deformation • Does not consider the memory (hysteresis) property of cloth • Only static parameters were measured; dynamic parameters, such as internal damping, should also be measured.
CONCLUSION • Data-driven models are widely accepted in computer graphics, e.g. motion capture for animation and measured BRDFs for reflectance • This work explores the new domain of data-driven elastic models for cloth.
Previous Work • Require small changes in pose / temporal coherence • ICP-based approach of (Pekelny & Gotsman, EG 2008) • User labels rigid parts in the first frame • Each rigid part is registered in subsequent frames (Images from Pekelny and Gotsman 2008: registered 3D scans; reconstructed surface and skeleton)
Previous Work • Require small changes in pose / temporal coherence • Model a space-time surface (Mitra et al., SGP 2007) • Requires dense spatial and temporal sampling (Image from Mitra et al. 2007: example of a 2D time-varying surface)
Previous Work • Require user-placed feature markers • Example-based 3D scan completion (Pauly et al., SGP 2005) • Fill holes by warping similar shapes in a database (Images from Pauly et al. 2005)
Previous Work • Require knowledge of the entire shape (template) • Correlated Correspondence (Anguelov et al., NIPS 2004) • Goal is to match each point to its corresponding point in the template • Cost function matches features and preserves geodesic distances (Images from Anguelov et al. 2004: template model, partial example, registered result, ground truth)
Will Chang's related work (1 of 2) • Automatic Registration for Articulated Shapes. SGP 2008 • Pairwise • Assumes no knowledge of a template, user-placed markers, segmentation, or the skeletal structure of the shape • Motion sampling: idea adapted from Partial Symmetry Detection (Mitra et al. '06)
Motion Sampling Illustration • Find transformations that move parts of the source shape to parts of the target shape • Sample points on the source and the target; each sampled pair (s_i, t_j) with its local frames defines a candidate transformation (a translation, or a rotation and translation) • Plot the candidates in transformation space (rotation and translation axes): pairs on the same rigid part cluster together, e.g. s1→t1 and s2→t2 land near each other, while mismatched pairs s1→t2 and s2→t1 scatter
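The motion-sampling idea can be illustrated with a toy translation-only version in 2D (synthetic data; the real method works with full rigid transformations on range scans):

```python
import numpy as np

# Toy motion sampling: every (source, target) point pair yields a
# candidate transform; pairs on the same rigid part agree, so clusters
# in transformation space reveal the per-part motions. Here transforms
# are pure 2D translations and "clustering" is voting on a coarse grid.
rng = np.random.default_rng(0)
source = rng.random((20, 2))
# Two rigid parts: first half translates by (1, 0), second half by (0, 2).
target = source.copy()
target[:10] += np.array([1.0, 0.0])
target[10:] += np.array([0.0, 2.0])

# Candidate transforms = all pairwise translations s_i -> t_j.
candidates = (target[None, :, :] - source[:, None, :]).reshape(-1, 2)

# Crude clustering: round candidates to a 0.1 grid and count votes.
votes = {}
for t in np.round(candidates, 1):
    votes[tuple(t)] = votes.get(tuple(t), 0) + 1
best = sorted(votes, key=votes.get, reverse=True)[:2]
```

The two dominant vote clusters recover the two part motions, (1, 0) and (0, 2), while mismatched pairs spread thinly over the rest of the grid.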
Will Chang's related work (2 of 2) • Range Scan Registration Using Reduced Deformable Models. EG 2009 • Pairwise • Does not require user-specified markers, a template, or manual segmentation of the surface geometry • Represents the linear skinning weight functions on a regular grid localized to the surface geometry
GOAL • Automatically and globally reconstruct articulated models (mesh, joints, skin weights) • Simultaneously align partial surface data and recover the motion model • Handle large motion and occlusion • Without the use of markers, user-placed correspondences, segmentation, or a template model
• Align all frames to the first frame • Maintain and update a global reduced graph of the data: the dynamic sample graph (DSG)
PROBLEMS • Too many sub-procedures and too many parameters (is it really automatic?) • Too slow (100+ seconds per frame)
METHODS FOR ACQUIRING 3D FACIAL PERFORMANCES (chart: animation temporal resolution vs. geometry spatial resolution) • Marker-based motion capture systems: 2000 Hz, ~200 markers (high temporal, low spatial resolution) • Image-based systems • High-speed structured light systems: 30 Hz, smooth mesh • 3D laser scanning: static, millions of vertices (high spatial, no temporal resolution) • This work: 2000 Hz, million verts? (high in both)
Goal • Acquire high-fidelity 3D facial performances with realistic dynamic wrinkles (temporal) and fine-scale facial details (spatial) • Ideas • Leverage motion capture and 3D scanning technology • Use a blend shape model
OVERVIEW
1. Motion capture data acquisition (T frames, 240 fps, 100 markers)
2. Select a minimum set of key frames (K frames, K << T, 100 markers)
3. Capture corresponding face scans (K frames, 80K mesh)
4. Marker-mesh registration (attach motion capture markers to every face scan)
5. Face scan registration (generate dense, consistent surface correspondences across all the scans)
6. Facial performance reconstruction by blend shapes (T frames, 240 fps, 80K mesh)
SELECT A MINIMUM SET OF KEY FRAMES • Select a minimal set of facial expressions by minimizing the blend shape reconstruction error: each motion capture frame t is approximated by blend shape coefficients applied to the key frames, and the key set is chosen to minimize the total error over all motion capture data
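The reconstruction error behind the key-frame selection can be sketched as a least-squares fit of blend shape coefficients (toy sizes and synthetic data, not the paper's):

```python
import numpy as np

# Sketch of the selection criterion: given a candidate set of key frames
# (columns of B, each a stacked vector of marker positions), every motion
# capture frame is approximated as a linear blend of the key frames. The
# total residual scores how well this key set reconstructs the sequence.
rng = np.random.default_rng(1)
B = rng.random((300, 4))            # 4 key frames, 100 markers * 3 coords
true_coeffs = rng.random((4, 50))
M = B @ true_coeffs                 # 50 mocap frames lying in the span of B

# Blend shape coefficients for every frame t, solved jointly:
C, *_ = np.linalg.lstsq(B, M, rcond=None)
error = np.linalg.norm(B @ C - M)   # near 0: this key set reconstructs M well
```

Selecting key frames then amounts to searching for the smallest candidate set whose residual over all T mocap frames stays below a tolerance.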
MARKER-MESH REGISTRATION • Attach motion capture markers to every face scan • Must handle differences between the "reference" expressions and the "performed" expressions • Solved by extended ICP
FACE SCAN REGISTRATION: LARGE-SCALE • Generate dense, consistent surface correspondences across all the scans • Large-scale mesh registration: preserve the distances to markers and the Laplacian coordinates while matching closest points
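The Laplacian coordinates preserved by the large-scale step are each vertex's offset from the average of its neighbors; a toy version, with a 4-vertex path standing in for a face scan:

```python
import numpy as np

# Laplacian coordinates: vertex position minus the centroid of its
# neighbors. Preserving them during registration keeps local shape
# detail intact while the mesh deforms globally. Toy "mesh" below.
verts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 0.5]])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def laplacian_coords(verts, neighbors):
    return np.array([verts[i] - verts[nbrs].mean(axis=0)
                     for i, nbrs in sorted(neighbors.items())])

delta = laplacian_coords(verts, neighbors)
```

Vertex 1 sits 0.5 above the midpoint of its neighbors, so its Laplacian coordinate is (0, 0.5); a registration that keeps delta fixed keeps that bump.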
FACE SCAN REGISTRATION • A two-step process: large-scale mesh registration followed by fine-scale mesh registration
FACE SCAN REGISTRATION: FINE-SCALE • Region-based mesh registration: register geometric details between each scan and its three closest scans • Minimize the difference between the geometry features of source and target meshes • Use the mean curvature at each vertex as the geometry feature • Apply optical flow after cylindrical image mapping
FACIAL PERFORMANCE RECONSTRUCTION • Reconstruct the full performance (T frames, 240 fps, 80K mesh) by applying the blend shape model to the registered key-frame scans
CONTRIBUTIONS • Automatically determines a minimal set of face scans required for facial performance reconstruction • A two-step registration process that builds dense, consistent surface correspondences across all the face scans
CONCLUSION AND FUTURE WORK • Combines the power of motion capture and 3D scanning technology • Matches both the spatial resolution of static face scans and the acquisition speed of motion capture systems • Future work: modifying the data for new applications; methods for processing the high-fidelity facial data; eye and lip movements