Explore the development of a real-time facial expression modelling system, focusing on virtual camera, 3D face generation, and animation features for enhanced user experience.
LYU0603 A Generic Real-Time Facial Expression Modelling System Supervisor: Prof. Michael R. Lyu Group Member: Cheung Ka Shun (05521661) Wong Chi Kin (05524554)
Outline • Previous Work • Objectives • Work in Semester Two • Review of implementation tool • Implementation • Virtual Camera • 3D Face Generator • Face Animation • Conclusion • Q&A
Previous Work • Face analysis • Detect the facial expression • Draw corresponding model
Objectives • Enrich the functionality of the web-cam • Make net-meeting more interesting • Users are not required to pay for extra specific hardware • Extract the human face and approximate the face shape
Work in Semester Two • Virtual Camera • Make Facial Expression Modelling available in net-meeting software • Face Generator • Approximate the face shape • Generate a 3D face texture • Face Animation • Animate the generated 3D face • Convert it into a standard file format
Review - DirectShow • Filter graph: Source → Transform → Renderer
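The filter-graph idea is that media samples flow from a source filter, through transform filters, to a renderer. A conceptual Python sketch of that push model (real DirectShow filters are COM objects written in C++; the class and filter names here are illustrative only):

```python
class Filter:
    """Base class: each filter processes a sample and pushes it downstream."""
    def __init__(self):
        self.downstream = None

    def connect(self, downstream):
        self.downstream = downstream
        return downstream          # allows chaining connect() calls

    def push(self, sample):
        out = self.process(sample)
        if self.downstream is not None:
            self.downstream.push(out)

    def process(self, sample):
        return sample

class Source(Filter):
    """Source filter: originates samples and pushes them into the graph."""
    def run(self, samples):
        for s in samples:
            self.push(s)

class Grayscale(Filter):
    """Example transform filter: converts an (r, g, b) sample to a luma value."""
    def process(self, sample):
        r, g, b = sample
        return int(0.299 * r + 0.587 * g + 0.114 * b)

class Renderer(Filter):
    """Renderer filter: the end of the graph; here it just collects frames."""
    def __init__(self):
        super().__init__()
        self.frames = []

    def process(self, sample):
        self.frames.append(sample)
        return sample

# Build the graph: source -> transform -> renderer
src, gray, sink = Source(), Grayscale(), Renderer()
src.connect(gray).connect(sink)
src.run([(255, 0, 0), (0, 255, 0)])
print(sink.frames)  # [76, 149]
```

The same three-stage shape is what the virtual camera reproduces: a source that originates frames, transforms that draw the face model, and a renderer that hands frames to the net-meeting application.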
Review - Direct3D • Efficiently processes and renders 3-D scenes to a display, taking advantage of available hardware • Fully compatible with DirectShow
Virtual Camera • Focus on MSN Messenger
Virtual Camera • Two components • 3D model as output • Face Mesh Preview
Virtual Camera • The virtual camera is implemented as a DirectShow source filter
Virtual Camera • Inner filter graph in virtual camera
Demonstration • We are going to play a movie clip which demonstrates the Virtual Camera
3D Face Generator • Aim: to approximate the human face and its shape • It comprises two parts
FaceLab • Adopted from the face analysis project of Zhu Jian Ke, CUHK CSE Ph.D. student • The analysis is decomposed into a training part and a building part • The whole training phase is made up of three steps
FaceLab – Data Acquisition • To acquire human face structure data • In 2D face modelling, 100 feature points are sufficient to represent the face surface • However, thousands of points are demanded to describe the complex structure of a human face • The data can be acquired either by a 3D scanner or by a computer vision algorithm
FaceLab – Data Registration • To normalize the 3D data to the same scale, with correspondences • Problem • The most accurate way is to compute the 3D optical flow • Commercial 3D scanners and 3D registration require specific hardware
FaceLab • To simplify the process, it was decided to use software to generate the human face data • Each generated face has a set of 752 3D vertices describing the shape of the face
FaceLab – Shape Model Building • A shape is defined as the geometry data left after removing the translational, rotational and scaling components • An object containing N vertices is represented as a 3N-dimensional vector x = (x1, y1, z1, …, xN, yN, zN)^T
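Removing the translational and scaling components can be sketched with numpy as below (the rotational component is normally removed with a Procrustes-style alignment, which is omitted here for brevity; the function name is illustrative):

```python
import numpy as np

def normalize_shape(vertices):
    """Remove translation (centre at the origin) and scale (unit RMS size)
    from an (N, 3) array of vertices, then flatten to a 3N-vector."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)                 # remove translation
    scale = np.sqrt((v ** 2).sum() / len(v))
    v = v / scale                          # remove scale
    return v.reshape(-1)                   # 3N-dimensional shape vector

shape = normalize_shape([[1, 1, 0], [3, 1, 0], [2, 3, 0]])
print(shape.shape)  # (9,)
```

Every training face is normalized this way before model building, so that the shape model captures face geometry rather than pose.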
FaceLab – Shape Model Building • The set of P shapes forms a point cloud in 3N-dimensional space, which is a huge domain • A conventional principal component analysis (PCA) is performed
FaceLab – Shape Model Building PCA Implementation • It performs an orthogonal linear transform • This yields a new coordinate system whose axes point in the directions of maximum variation of the point cloud • In this implementation, the covariance method is used
FaceLab – Shape Model Building PCA Implementation • Step 1: Compute the empirical mean, which is the mean shape along each dimension: x̄ = (1/P) Σ xi
FaceLab – Shape Model Building PCA Implementation • Step 2: Calculate the covariance matrix C = (1/P) Σ (xi − x̄)(xi − x̄)^T • The axes of the point cloud are the eigenvectors of the covariance matrix
FaceLab – Shape Model Building PCA Implementation • Step 3: Compute the matrix of eigenvectors V satisfying C V = V D, where D is the diagonal matrix of eigenvalues of C • Each eigenvalue represents the share of the data's energy (variance) along its axis
FaceLab – Shape Model Building PCA Implementation • Final Step: Represent the resulting shape model as x = x̄ + V m, where the entries of m are the shape parameters • Adjusting the values of the shape parameters generates a new face model by recomputing x
FaceLab – Shape Model Building PCA Implementation • An extra step: Select a subset of the eigenvectors • Each eigenvalue represents the variation along the corresponding axis • The first seven eigenvectors are used in the system and capture the majority of the total variance
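The PCA steps above can be sketched end-to-end with numpy. This is a toy example with random shape vectors of length 12; in the real system each shape vector has 3 × 752 entries and seven components are retained:

```python
import numpy as np

rng = np.random.default_rng(0)
P, dim, k = 20, 12, 7          # P training shapes, 3N = 12 here, keep k modes
X = rng.normal(size=(P, dim))  # each row is one flattened shape vector

# Step 1: empirical mean shape along each dimension
mean = X.mean(axis=0)

# Step 2: covariance matrix C of the centred data
centred = X - mean
C = centred.T @ centred / P

# Step 3: eigenvectors V and eigenvalues D of C (eigh: C is symmetric)
D, V = np.linalg.eigh(C)
order = np.argsort(D)[::-1]    # sort by decreasing eigenvalue (variance)
D, V = D[order], V[:, order]

# Extra step: keep only the k leading eigenvectors
V_k = V[:, :k]

# Final step: a new shape is mean + V_k @ m, for shape parameters m
m = np.zeros(k)
m[0] = 2.0                     # move along the first mode of variation
new_shape = mean + V_k @ m
print(new_shape.shape)         # (12,)
```

Setting m to zero reproduces the mean face; varying one entry of m sweeps the face along one mode of variation, which is exactly how the generator produces new face shapes.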
FaceLab – Render the face model • The resulting data set is a 3D face mesh • OpenGL is used to render it
System Overview of the Face Texture Generator • Components: Facial Expression Modelling, Face Texture Generator
Face Texture Generator • Face texture extraction • Three approaches • Largest area triangle aggregation • Human-defined triangles aggregation • Single photo on Effect Face
Largest area triangle aggregation • Each triangle is sampled from one of three photos: front face, left face, or right face
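For each mesh triangle, the photo in which the triangle's projection has the largest area is chosen as its texture source. A minimal sketch of that selection, assuming each view supplies the triangle's projected 2D vertices (function names and sample coordinates are illustrative):

```python
def triangle_area(p0, p1, p2):
    """Area of a 2D triangle via the cross product of two edge vectors."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

def pick_view(projections):
    """projections: dict mapping view name -> projected triangle vertices.
    Returns the view in which the triangle projects to the largest area."""
    return max(projections, key=lambda v: triangle_area(*projections[v]))

views = {
    "front": [(0, 0), (4, 0), (0, 3)],   # area 6.0
    "left":  [(0, 0), (1, 0), (0, 1)],   # area 0.5
    "right": [(0, 0), (2, 0), (0, 2)],   # area 2.0
}
print(pick_view(views))  # front
```

A larger projected area means the triangle faces that camera more directly, so its pixels are sampled at the highest effective resolution; the drawback, noted on the next slides, is fragmentation where neighbouring triangles pick different photos.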
Largest area triangle aggregation • Result
Human-defined triangles aggregation • Divide the face mesh into three parts • Define which photo is sampled for the triangles in each region • Reduces fragmentation
Human-defined triangles aggregation • Redefine the face mesh – Effect Face
Human-defined triangles aggregation • Result
Single photo on Effect Face • Similar to human-defined triangles aggregation • Uses a single photo for pixel sampling • Uses the Effect Face as the outline
Single photo on effect face • Result
Dynamic Texture Generation • To read back the rendered data from the video display card
Dynamic Texture Generation • Lock the video display buffer
Dynamic Texture Generation • When the common buffer content changes • Update the texture buffer to reflect the changes immediately
Dynamic Texture Generation • From 2D face mesh to 3D face mesh
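The steps above can be sketched as: lock (read back) the current render-target contents, then copy them into the texture buffer so the next frame samples the new pixels. A toy version with numpy arrays standing in for the video-memory buffers (the real system locks Direct3D/OpenGL surfaces; all names here are illustrative):

```python
import numpy as np

WIDTH, HEIGHT = 4, 4
render_target = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # display buffer
texture = np.zeros_like(render_target)                        # texture buffer

def render_frame(frame_no):
    """Stand-in for rendering: fill the render target with frame-dependent pixels."""
    render_target[:] = frame_no * 10

def update_texture():
    """'Lock' the display buffer, read its pixels back, and copy them into
    the texture so the change is reflected immediately."""
    locked = render_target.copy()   # read back from the (locked) buffer
    texture[:] = locked             # update the texture in place

render_frame(3)
update_texture()
print(texture[0, 0])  # [30 30 30]
```

Locking is the expensive part on real hardware, since it stalls the GPU pipeline while pixels are copied back, which is why it is done only when the buffer content has actually changed.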
Demonstration • We are going to play a movie clip which demonstrates Face Generator
Generate simple animation – Looking at the mouse cursor • The feature points provide sufficient information to locate the eyes • The two eyes and the mouse cursor form a triangular plane
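Given the two eye feature points, the gaze direction toward the cursor can be derived from the vector between the eye midpoint and the cursor position. A simplified 2D sketch (the actual system works with the 3D triangle formed by the eyes and the cursor; the function name is illustrative):

```python
import math

def gaze_angle(left_eye, right_eye, cursor):
    """Return the angle (degrees) from the midpoint between the two eyes
    to the mouse cursor, measured from the +x axis (screen y points down)."""
    mx = (left_eye[0] + right_eye[0]) / 2.0
    my = (left_eye[1] + right_eye[1]) / 2.0
    return math.degrees(math.atan2(cursor[1] - my, cursor[0] - mx))

# Cursor directly below the eye midpoint: the eyes look straight down.
angle = gaze_angle((100, 100), (140, 100), (120, 160))
print(angle)
```

Feeding this angle into the eye rotation of the generated 3D face each frame makes the model appear to track the cursor.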