
Human Face Modeling and Animation Example of problems in application of multimedia signal processing


Presentation Transcript


  1. Human Face Modeling and Animation Example of problems in application of multimedia signal processing

  2. Introduction • Face is a very important INTERFACE for people • It is an interface which is very natural and very rich in signalling. We know we have special structures in the brain for processing face information. The problem of this lecture: we would like computer interfaces with a face talking to us, for example as a kind of assistant. HOW TO MAKE THIS?

  3. Application Areas • Advanced user interfaces: Social agents and avatars • Education: Pedagogical agents • Medicine: Facial tissue surgical simulation • Criminalistics: Forensic face recognition • Teleconferencing: Transmitting & receiving facial images • Game industry: Realistic games • Media and art: Movies such as Avatar and Alice in Wonderland (faces partly realistic) or Shrek (faces non-realistic)

  4. FACE IS COMPLICATED Face anatomy: • Facial muscles: more than 200, varying in shape, connected to bones & tissues • Skin

  5. Facial Modeling • The idea: USE 3D graphics to render face • Involves determination of geometric descriptions & animation capabilities. • Face has a very complex, flexible 3-D surface. It has color and texture variations and usually contains creases and wrinkles. • Methods for effective animation and efficient rendering: volume representation, surface representation and face features.

  6. Methods for Effective Animation and Efficient Rendering • Volume representation: includes constructive solid geometry (CSG), volume element (voxel) arrays and aggregated volume elements such as octrees. • Surface representation: includes implicit surfaces, parametric surfaces and polygonal surfaces.

  7. Surface Representation • Implicit surfaces: defined by a function F(x,y,z)=0. • Parametric surfaces: generated by three functions of two parametric variables, one function for each of the spatial dimensions. Include B-splines, Beta-splines, Bezier patches, NURBS.
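As an illustrative sketch (not from the slides), a bicubic Bézier patch shows what "three functions of two parametric variables" means in practice: each spatial coordinate of a surface point is a Bernstein-weighted sum of control points in u and v.

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at (u, v).

    ctrl is a 4x4 grid of 3-D control points; effectively one such
    function is evaluated per spatial dimension, as the slide says.
    """
    point = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bernstein(3, i, u) * bernstein(3, j, v)  # weight of P_ij
            for k in range(3):
                point[k] += w * ctrl[i][j][k]
    return point
```

At the parameter corners the patch interpolates the corner control points, which is a quick sanity check for such an evaluator.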

  8. Surface Representation • Polygonal surfaces: include regular polygonal meshes and arbitrary polygon networks. • Face features: a face is usually the sum of many parts and details. A complete approach is to integrate the facial mask, facial feature details and hair into models of the complete head. (Figures: polygonal surface, face mask, detailed features.)

  9. Techniques Used for Surface Measurements Specifying the 3-D surface of a face is a significant challenge. The general approaches to specify the 3-D face are: • 3-D digitizer • Photogrammetric measurement • Modeling based on laser scans

  10. 3-D Digitizer • Special hardware devices that rely on mechanical, electromagnetic or acoustic measurements to locate positions in space. • Works best with sculptures or physical models that, unlike real faces, do not change during the measuring process. • Used to sequentially measure the vertex positions of the sculpture. (Figure: a plastic sculpture digitized as a mesh.)

  11. Photogrammetric Measurement • This method captures facial surface shapes and expressions photographically. • Basic idea is to take multiple simultaneous photographs of the face, each from different views. • If certain constraints are observed when taking the photographs, the desired 3-D surface data points can be computed based on measured data from the multiple 2-D views. • Then a 3-D coordinate system with a coordinate origin near the center of the head is established.

  12. Procedure of Photogrammetric Measurement (Figures: photographs from different views; setup of cameras and projectors; resulting mesh.)

  13. Modeling Based on Laser Scans • Laser based surface scanning devices can be used to measure faces. • These devices typically produce a very large regular mesh of data values in a cylindrical coordinate system. • In addition to the range data, a color camera captures the surface color of the head, so for each range point the corresponding color is also available. This is useful for texture mapping in the final rendering of the talking head.
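To build a mesh from such a scan, each range sample in the cylindrical grid can be converted to Cartesian coordinates. A minimal sketch, assuming an (angle, height, radius) layout for each sample (the exact layout depends on the scanner):

```python
import math

def cylindrical_to_cartesian(theta, y, r):
    """Convert one laser-scan range sample (angle theta in radians,
    height y, radius r) from the scanner's cylindrical grid to
    Cartesian (x, y, z)."""
    return (r * math.cos(theta), y, r * math.sin(theta))
```

The per-sample color captured by the camera stays attached to the same grid index, so it survives the conversion unchanged and can be used directly for texture mapping.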

  14. Procedure of Modeling Based on Laser Scans (Figures: scanned surface range data; scanned surface color data; an adapted mesh overlaid on its surface color data.)

  15. Face Modeling 1. Geometrical graphic model as generic face model to obtain texture & range data of a face 2. Cyber scanner and range data viewed in 3-D 3. Feature points are selected and face model is automatically warped to produce a customized face model

  16. The end result – is it looking realistic?

  17. Facial Animation • Goal: to make the face "alive". Several approaches to facial animation: • Interpolation • Performance driven animation • Direct parametrization • Pseudo muscle based animation • Muscle based animation

  18. Interpolation • Most widely used technique. • Uses key-framing approach. • The desired facial expression is specified for a certain point in time and then again for another point in time some number of frames later. • A computer algorithm then generates the frames in between these key frames. • Facial animation is achieved by digitizing the face in each of the several different expressions and then interpolating between these expressions. • Key-frame animation requires complete specification of the model geometry for each key facial expression.
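The in-between frames of key-frame animation can be sketched as a linear blend of vertex positions between two digitized expressions; the function and variable names below are illustrative, not from the slides:

```python
def interpolate_expression(key_a, key_b, t):
    """Linearly blend two key-frame expressions, each a list of
    3-D vertex positions; t = 0 gives key_a, t = 1 gives key_b."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(key_a, key_b)]

def in_between_frames(key_a, key_b, n):
    """Generate the n in-between frames separating two key frames
    (key frames themselves excluded)."""
    return [interpolate_expression(key_a, key_b, (i + 1) / (n + 1))
            for i in range(n)]
```

This also makes the slide's caveat concrete: both key frames must share one complete model geometry (same vertex count and ordering), or the blend is meaningless.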

  19. Interpolation Between Expressions (Figures: surprised, sad and worried expressions; actual vs. interpolated frames.)

  20. Performance Based Animation • Involves using information derived by measuring real human actions to drive synthetic characters. • Often uses interactive input devices, such as gloves, instrumented body suits and laser or video based motion tracking systems. • One of the approaches is expression mapping. In this different expressions and phoneme poses are digitized directly from a real person.

  21. Examples of Performance Based Animation Human facial movements and phonemes are digitized to be used by an animated character

  22. Direct Parameterized Model • Sets of parameters are used to define facial conformation and to control facial expressions. • Uses local region interpolations, geometric transformations and mapping techniques. • The three basic ideas used in this model are: 1. the fundamental concept of parameterization, 2. development of an appropriate descriptive parameter set, and 3. development of a parameterized model coupled with an image synthesizer to create the desired image.

  23. Pseudo Muscle Based Facial Animation • Muscle actions are simulated using geometric deformation operators. • Facial tissue dynamics are not simulated. • Includes abstract muscle action and free form deformation.

  24. Muscle Based Animation • Uses a mass-and-spring model to simulate facial muscles. • Muscles are of two types: linear muscles that pull and elliptic muscles that squeeze. • Muscle parameters: muscle vector and zone of muscle effect.
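A simplified sketch of a pulling linear muscle in the spirit of this model (the parameter names and the cosine falloff are assumptions for illustration, not from the slides): vertices inside the zone of muscle effect are displaced along the muscle vector toward the bone attachment.

```python
import math

def linear_muscle_pull(vertex, head, tail, contraction, zone_radius):
    """Displace one skin vertex under a linear muscle.

    The muscle vector runs from its bone attachment `head` to its
    skin attachment `tail`; vertices within `zone_radius` of the
    tail are pulled toward the head, with a cosine falloff by
    distance so the deformation fades smoothly at the zone border.
    """
    d = math.dist(vertex, tail)
    if d >= zone_radius:
        return vertex                       # outside the zone of effect
    falloff = math.cos(d / zone_radius * math.pi / 2)
    return tuple(v + contraction * falloff * (h - v)
                 for v, h in zip(vertex, head))
```

An elliptic (squeezing) muscle would replace the single pull direction with a contraction toward a central axis, but follows the same zone-and-falloff pattern.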

  25. Linear and Elliptical Muscles (Figures: a linear muscle and an elliptical muscle; muscle parameters: 1. muscle vector, 2. zone of muscle effect.)

  26. Modeling the Primary Facial Expressions • The following basic facial expressions are considered generic to the human face: Happiness, Anger, Fear, Surprise, Disgust and Sadness. • Facial Action Coding System (FACS) is a widely used notation for the coding of facial articulation. • FACS describes 66 muscle actions (some are muscle blends) which in combination can give rise to thousands of possible facial expressions. • Examples of facial expressions are shown in the following slide.

  27. Examples of Facial Expressions Neutral face Anger Happiness Surprise Fear Disgust

  28. Facial Image Synthesis The next step is to actually generate the sequence of facial images that form the desired animation. Image synthesis includes three major tasks: • Transforming the geometric model and its components into the viewing coordinate system. • Determining which surfaces are visible from the viewing position. • Computing the color values for each image pixel based on the lighting conditions and the properties of the visible surfaces.

  29. Basic Idea • Face Tracker • Piece-wise Bezier Volume Deformation Face Model • Purpose: To design FACS motion units

  30. Warping in 3D • Face expression change: (a) Bézier controlling mesh, (b) the expression “smile”.

  31. Face animation is too hard? • Try to take a REAL face and use it for animation • This may be much easier than generating a complete synthetic natural face BUT HOW TO DO IT?

  32. Let’s say our goal is to generate a natural face, talking and controlled by the computer. To do it: we take video of a real person and change the lip movements and facial expression according to the speech

  33. How to do it: Analysis Stage • Given video of the subject speaking, extract mouth position and lip shape • Hand label training images: • 34 points on mouth (20 outer boundary, 12 inner boundary, 1 at bottom of upper teeth, 1 at top of lower teeth) • 20 points on chin and jaw line • Morph training set to get to 351 images

  34. Audio Analysis • Want to capture visual dynamics of speech • Phonemes are not enough • Consider coarticulation • Lip shapes for many phonemes are modified based on phoneme’s context (e.g. /T/ in “beet” vs. /T/ in “boot”)

  35. Audio Analysis (continued) • Segment speech into triphones • e.g. “teapot” becomes /SIL-T-IY/, /T-IY-P/, /IY-P-AA/, /P-AA-T/, and /AA-T-SIL/ • Emphasize middle of each triphone • Effectively captures forward and backward coarticulation
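The triphone segmentation above can be sketched as a sliding window over the phoneme sequence; padding with /SIL/ at both ends reproduces the “teapot” example from the slide:

```python
def to_triphones(phonemes):
    """Split a phoneme sequence into overlapping triphones,
    padding with silence (SIL) at the utterance boundaries."""
    padded = ["SIL"] + list(phonemes) + ["SIL"]
    # one triphone per window of three consecutive phonemes
    return [tuple(padded[i:i + 3]) for i in range(len(padded) - 2)]
```

Because consecutive triphones overlap in two phonemes, each phoneme is seen with both its left and right context, which is what captures forward and backward coarticulation.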

  36. Audio Analysis (continued) • Training footage audio is labeled with phonemes and associated timing • Use gender-specific segmentation • Convert transcript into triphones

  37. Synthesis Stage • Given some new speech utterance • Mark it with phoneme labels • Determine triphones • Find a video example with the desired transition in the database • Compute a matching distance to each triphone: error = αDp + (1 − α)Ds
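A minimal sketch of that matching distance, assuming Dp and Ds have already been computed for each candidate database example (the default weight α and the helper names are illustrative, not from the slides):

```python
def matching_distance(d_p, d_s, alpha=0.5):
    """Weighted matching error for one candidate example:
    error = alpha * Dp + (1 - alpha) * Ds."""
    return alpha * d_p + (1 - alpha) * d_s

def best_example(candidates, alpha=0.5):
    """Pick the database example with the smallest combined error;
    `candidates` maps example id -> (Dp, Ds)."""
    return min(candidates,
               key=lambda k: matching_distance(*candidates[k], alpha))
```

The weight α trades off phoneme-context agreement against lip-shape agreement when ranking examples.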

  38. Viseme Classes • Cluster phonemes into viseme classes (speech units + face movements) • Use 26 viseme classes (10 consonant, 15 vowel): (1) /CH/, /JH/, /SH/, /ZH/ (2) /K/, /G/, /N/, /L/ … (25) /IH/, /AE/, /AH/ (26) /SIL/
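The phoneme-to-viseme mapping can be sketched as a simple lookup table; only the classes actually listed on the slide are filled in here, the full table would cover all 26:

```python
# Partial table of the 26 viseme classes from the slide.
VISEME_CLASSES = {
    1: ["CH", "JH", "SH", "ZH"],
    2: ["K", "G", "N", "L"],
    25: ["IH", "AE", "AH"],
    26: ["SIL"],
}

# Invert to a phoneme -> class-id lookup.
PHONEME_TO_VISEME = {p: c for c, ps in VISEME_CLASSES.items() for p in ps}

def viseme_of(phoneme):
    """Map a phoneme to its viseme class id."""
    return PHONEME_TO_VISEME[phoneme]
```

Clustering phonemes into visemes shrinks the search space: any example whose mouth shape belongs to the right viseme class can stand in for any phoneme of that class.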

  39. Lip Shape Distance • Ds is the distance between lip shapes in overlapping triphones • E.g. for “teapot”, the contours for /IY/ and /P/ should match between /T-IY-P/ and /IY-P-AA/ • Compute the Euclidean distance between 4-element vectors (lip width, lip height, inner lip height, height of visible teeth) • The solution depends on neighbors in both directions, so use dynamic programming (DP)
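The lip-shape distance and the DP over neighboring triphones can be sketched as follows. This is a simplified version: each candidate carries a single 4-element lip-shape vector, and only the overlap cost Ds is accumulated (the real system would also fold in the phoneme-context distance Dp):

```python
import math

def lip_shape_distance(a, b):
    """Euclidean distance between two 4-element lip-shape vectors
    (lip width, lip height, inner lip height, visible-teeth height)."""
    return math.dist(a, b)

def best_path_cost(candidates):
    """DP over the utterance: choose one database example per
    triphone so lip shapes agree where consecutive triphones
    overlap; returns the minimum total overlap cost.
    `candidates[t]` is the list of lip-shape vectors available for
    triphone t."""
    cost = [0.0] * len(candidates[0])          # best cost ending in each example
    for prev, cur in zip(candidates, candidates[1:]):
        cost = [min(cost[i] + lip_shape_distance(prev[i], c)
                    for i in range(len(prev)))
                for c in cur]
    return min(cost)
```

The DP is what makes the choice depend on neighbors in both directions: the cost of picking an example propagates forward, and the final minimum implicitly accounts for every overlap in the chain.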
