ISR – Institute of Systems and Robotics, University of Coimbra, Portugal
http://paloma.isr.uc.pt
Human-Robot Interaction
Determining face orientation for a robot able to interpret facial expressions
Carlos Simplício, José Prado and Jorge Dias
Presented by José Prado, 2010-03-10
Summary
• Introduction (Interactive Mobile Robots)
• Autonomous Mobile Agent (AMA)
  • Robotic System Controller (RSC)
  • Face Pose Identification System (FPIS)
  • Automatic Facial Expressions Recognition System (AFERS)
    • Structure of a DBN classifying facial expressions
Introduction
We are developing a service/assistant robot, an Autonomous Mobile Agent (AMA). This agent will be used in an ambient assisted living context. The global project addresses the emerging trend of developing new devices for assistance and services.
Introduction
• Human beings express their emotional states through facial expressions, gestures, voice, etc.
• We propose:
  • a technique to determine face orientation based on human face symmetry;
  • a DBN to classify human facial expressions.
Introduction
The AMA must observe and react according to the facial expressions of a person. Facial expression recognition becomes easier when performed on frontal face images. The robotic system is used to follow the person's movements and always keep a frontal view of the face.
Robotic System Controller - RSC
• Robotic platform movements:
  • longitudinal translations;
  • transversal translations;
  • rotations.
• Rotations correspond to an arc of a circle centered on the human being.
• The objective is to follow the rotation performed by the human being, always obtaining an image of a frontal face (a geometric sketch of this idea follows below).
• The robotic head can move in synchronization.
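To make the arc-following idea concrete, here is a minimal geometry sketch in Python; it is not the authors' actual controller. The platform is kept on a circle of an assumed radius centred on the person and slides along that arc as the person turns, so the camera always faces the person frontally. The function name, the radius value, and the use of a single heading angle are illustrative assumptions.

```python
# Minimal sketch of the arc-following viewpoint (assumptions: 2-D world frame,
# person pose given as position + heading, fixed viewing radius).
import math

def frontal_viewpoint(person_x, person_y, person_heading, radius=1.5):
    """Return (robot_x, robot_y, robot_heading) that keeps the robot directly
    in front of the person at the given radius, looking back at the face."""
    # Point on the circle along the person's facing direction.
    robot_x = person_x + radius * math.cos(person_heading)
    robot_y = person_y + radius * math.sin(person_heading)
    # The robot looks back toward the person (opposite direction).
    robot_heading = person_heading + math.pi
    return robot_x, robot_y, robot_heading

# Example: a person at the origin turns from 0 rad to 0.5 rad;
# the target viewpoint slides along the arc accordingly.
print(frontal_viewpoint(0.0, 0.0, 0.0))
print(frontal_viewpoint(0.0, 0.0, 0.5))
```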
Face Pose Identification System - FPIS
In a perfectly symmetric image, pixels positioned symmetrically have the same gray-level value: their difference is zero. We use this principle to verify whether an image is symmetric, i.e. contains a frontal face.
(Figures: Example 1 and Example 2)
Face Pose Identification System - FPIS
In a perfectly symmetric image, pixels positioned symmetrically have the same gray-level value: their difference is zero.
• Problems:
  • by nature, human faces are not perfectly symmetric;
  • there are shadows.
• But it works!
Face Pose Identification System - FPIS
Define a vertical axis (always in the same position);
Calculate the differences of gray levels between symmetrically positioned pixels;
Build the Normalized Gray-level Difference Histogram (NGDH).
In a frontal face, the vertical axis bisects the face and the information collected in the NGDH is strongly concentrated near the mean; otherwise, the information is scattered along the NGDH (a sketch of this computation follows below).
(Figures: NGDH with scattered information; NGDH with concentrated information)
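As an illustration of the histogram just described, here is a minimal Python sketch of an NGDH computation under assumed conventions (signed gray-level differences about the central vertical axis, 8-bit grayscale images); it is not the authors' exact implementation.

```python
# Minimal NGDH sketch (assumptions: 8-bit grayscale face crop, central
# vertical axis, signed differences binned one gray level per bin).
import numpy as np

def ngdh(gray_face, bins=511):
    """gray_face: 2-D uint8 array of the cropped face region.
    Returns a normalized histogram of the signed gray-level differences
    between pixels placed symmetrically about the central vertical axis."""
    h, w = gray_face.shape
    half = w // 2
    left = gray_face[:, :half].astype(int)
    right = np.fliplr(gray_face[:, w - half:]).astype(int)  # mirror the right half
    diffs = (left - right).ravel()                          # signed differences
    hist, _ = np.histogram(diffs, bins=bins, range=(-255, 256))
    return hist / hist.sum()

# For a frontal (nearly symmetric) face the mass of ngdh(...) concentrates in a
# narrow region around the mean; for a rotated face it is scattered.
```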
Face Pose Identification System - FPIS
Algorithm (a sketch of the full loop follows below):
Find and extract the face region in the image;
Define a vertical axis (dividing the region into two parts with an equal number of pixels);
Synthesize face images, using the vertical axis to perform a 3D transformation (rotation);
The synthesized images are "hypotheses" used to find the face's out-of-plane rotation;
Build the NGDHs;
Find the pseudo-mean: the number of occurrences in a narrow region around the NGDH's mean;
The synthesized image with the greatest pseudo-mean contains the frontal face.
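The hypothesis-selection step can be sketched as follows, reusing the ngdh() function from the previous sketch and assuming a hypothetical synthesize(face, angle) routine that renders the face rotated out of plane by a given angle. The candidate angles, window width, and function names are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch of pseudo-mean scoring and frontal-hypothesis selection
# (assumes ngdh() from the previous sketch and a user-supplied synthesize()).
import numpy as np

def pseudo_mean(hist, window=5):
    """Fraction of the (normalized) occurrences falling in a narrow window
    around the histogram mean."""
    values = np.arange(len(hist))
    mean_bin = int(round(np.sum(values * hist)))        # mean bin index
    lo, hi = max(0, mean_bin - window), min(len(hist), mean_bin + window + 1)
    return hist[lo:hi].sum()

def estimate_face_orientation(face, synthesize, candidate_rotations=(-30, 0, 30)):
    """Apply each candidate rotation, score the result with the pseudo-mean,
    and pick the rotation that yields the most frontal (most symmetric) image.
    The face's original out-of-plane angle is then minus that rotation."""
    scores = {r: pseudo_mean(ngdh(synthesize(face, r))) for r in candidate_rotations}
    best_rotation = max(scores, key=scores.get)
    return -best_rotation, scores
```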
Face Pose Identification System - FPIS
Results of applying the synthetic rotations to faces at different original angles:
Original angle 0°: rotation -30° → result -30°; rotation 0° → result 0°; rotation +30° → result +30°
Original angle -30°: rotation -30° → result -60°; rotation 0° → result -30°; rotation +30° → result 0°
Original angle +30°: rotation -30° → result 0°; rotation 0° → result +30°; rotation +30° → result +60°
Facial Expressions
We consider only five emotional states. Each emotional state has a characteristic facial expression. A facial expression is a set of Action Units (AUs). AUs are "distortions" of facial features, e.g. the lips in a smile (an illustrative AU-to-expression sketch follows below).
(Example face images showing: anger, fear, happy, neutral, sad)
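The presentation states that each expression is a set of AUs but does not list the 11 AUs actually used, so the toy mapping below relies on commonly cited FACS prototypes purely as placeholders; it is not the authors' configuration.

```python
# Illustrative only: placeholder expression-to-AU prototypes, NOT the paper's
# 11-AU configuration (which is not listed in these slides).
FACS_PROTOTYPES = {
    "happy":   {6, 12},            # cheek raiser, lip corner puller (smile)
    "sad":     {1, 4, 15},         # inner brow raiser, brow lowerer, lip corner depressor
    "anger":   {4, 5, 7, 23},      # brow lowerer, upper lid raiser, lid tightener, lip tightener
    "fear":    {1, 2, 4, 5, 20},   # raised/knit brows, upper lid raiser, lip stretcher
    "neutral": set(),              # no activated AUs
}

def candidate_expressions(detected_aus):
    """Rank expressions by how many of their prototype AUs were detected."""
    overlap = {e: len(aus & detected_aus) for e, aus in FACS_PROTOTYPES.items()}
    return sorted(overlap, key=overlap.get, reverse=True)

print(candidate_expressions({6, 12, 25}))  # -> 'happy' ranked first
```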
DBN Structure - Level 1
A node (variable) that probabilistically reflects the existence of an emotional state.
The emotional states considered are: anger, fear, happy, sad, neutral, other.
DBN Structure - Level 2
Nodes (variables) that probabilistically reflect the existence of a facial expression.
The expressions considered are: anger, fear, happy, sad, neutral.
DBN Structure - Level 3
11 AUs are considered in each facial expression.
DBN Structure - Level 4
Nodes (variables) that probabilistically reflect the strength of the evidence (positive or negative).
DBN Structure - Level 5
Here, information is propagated between time slices. These nodes (variables) probabilistically combine/fuse, through inertia, the information coming from the lower level in the present time slice with that from the previous instant (a sketch of this fusion follows below).
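A minimal sketch of this inertia-based fusion, under the assumption of a simple convex combination of the previous belief and the current evidence; the actual fusion rule and inertia value used by the authors are not given in the slides.

```python
# Minimal sketch of temporal fusion "through inertia" between time slices
# (assumption: convex combination followed by renormalization).
import numpy as np

def fuse_with_inertia(previous_belief, current_evidence, inertia=0.7):
    """previous_belief, current_evidence: probability vectors over the same
    states (e.g. the facial expressions). Returns the fused, renormalized
    belief for the current time slice."""
    fused = inertia * np.asarray(previous_belief) + (1.0 - inertia) * np.asarray(current_evidence)
    return fused / fused.sum()

# Example: the previous slice favored 'happy'; a noisy current observation
# favors 'neutral'; the inertia keeps the estimate from flipping instantly.
prev = [0.70, 0.10, 0.10, 0.05, 0.05]   # happy, sad, anger, fear, neutral
curr = [0.20, 0.10, 0.10, 0.10, 0.50]
print(fuse_with_inertia(prev, curr))
```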
DBN Structure - Level 6
Nodes (variables) collecting the evidence provided by the sensors (a compact summary of the full structure follows below).
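For orientation, the six levels can be summarised in plain Python as a compact data structure; the wiring shown is a simplified reading of the slides, not the authors' exact graph.

```python
# Compact summary of the six-level DBN structure described in the slides.
# The EDGES list is a simplified, assumed wiring: each level conditions the
# one below it, and level 5 additionally links consecutive time slices.
DBN_LEVELS = {
    1: "emotional state (anger, fear, happy, sad, neutral, other)",
    2: "facial expressions (anger, fear, happy, sad, neutral)",
    3: "Action Units (11 AUs per facial expression)",
    4: "strength of the evidence (positive or negative)",
    5: "temporal fusion nodes (inertia between time slices)",
    6: "sensor evidence nodes",
}

EDGES = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), ("5[t-1]", "5[t]")]

for level, description in DBN_LEVELS.items():
    print(f"Level {level}: {description}")
```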
Conclusions
• We developed:
  • an architecture for an Autonomous Mobile Agent;
  • a face orientation identification technique;
  • a structure for a DBN.
• The face pose identification technique performs well and is very fast.
• Classifying facial expressions using positive and negative evidence is very promising.
END
Thanks for your attention!
Questions?