
Matthias Wimmer, Technische Universität München; Bruce MacDonald, Dinuka Jayamuni, and Arpit Yadav, Department of Electrical and Computer Engineering




    1. 19 February 2008

    2. Outline
    - Motivation
    - Background
    - Facial expression recognition method
    - Results on a data set
    - Results with a robot (the paper contribution)
    - Conclusions

    3. Motivation: Goal
    Our Robotics group goals:
    - To create mobile robotic assistants for humans
    - To make robots easier to customize and to program by end users
    - To enhance interactions between robots and humans
    Applications: healthcare, e.g. aged care; agriculture (e.g. Ian's previous presentation). (Lab visit this afternoon.)

    4. Motivation: robots in human spaces Increasingly, robots live in human spaces and interact closely

    5. Motivation: close interactions RI-MAN

    6. Motivation: different types of robot Robots have many forms; how do people react? (Pictured: Pyxis HelpMate SP Robotic Courier System, Delta Regional Medical Center, Greenville, Mississippi)

    7. Motivation: different robot behaviour

    8. Motivation: supporting the emotion dimension
    Robots must give support with psychological dimensions: home and hospital help, therapy, companionship. We must understand/design the psychology of the exchange, and emotions play a significant role.
    - Robots must respond to and display emotions
    - Emotions support cognition, so robots must have emotional intelligence, e.g. during robot-assisted learning, e.g. security screening robots
    - Humans' anxiety can be reduced if a robot responds well [Rani et al., 2006]

    9. Motivation: functionality of emotion response Not just to be “nice”; the emotion dimension is essential to effective robot functionality [Breazeal]

    10. Motivation: robots must distinguish human emotional state
    However, recognition of human emotions is not straightforward: outward expression differs from internal mood states.
    - People smile when happy AND when they are interacting with humans
    - Olympic medalists don't smile until the presenter appears (e.g. the 1948 football team)
    - Ten-pin bowlers smile when they turn back to their friends

    11. Motivation: deciphering human emotions
    Self-reports are more accurate than observer ratings. Current research attempts to decipher human emotions from:
    - facial expressions
    - speech expression
    - heart rate, skin temperature, skin conductivity

    12. Motivation: our focus is on facial expressions
    Despite the limitations, we focus on facial expression interpretation from visual information:
    - Portable, contactless
    - Needs no special or additional sensors
    - Similar to humans' interpretation of emotions (which is by vision and speech)
    - No interference with normal HRI

    13. Background
    - Six universal facial expressions (Ekman et al.): laughing, surprised, afraid, disgusted, sad, angry
    - Cohn-Kanade facial expression database: 488 sequences, 97 people
    - Expressions are performed and exaggerated, and are determined by shape and muscle motion

    14. Background: Why are they difficult to estimate? Participants could annotate for as long as they wanted.

    15. Background Typical FER process [Pantic & Rothkrantz, 2000]

    16. Background: Challenges
    Face detection and feature extraction challenges:
    - Varying shape, colour, texture, feature location, hair
    - Spectacles, hats
    - Lighting conditions, including shadows
    Facial expression classification challenges:
    - Machine learning

    17. Background: related work
    Cohen et al.: 3D wireframe with 16 surface patches; Bézier volume parameters for patches; Bayesian network classifiers; HMMs model muscle activity over time.
    Bartlett et al.: Gabor filters with AdaBoost and support vector machines; 93% accuracy on the Cohn-Kanade DB, but the system is tuned to that DB.

    18. Background: challenges for robots
    - Less constrained face pose and distance from camera
    - Human may not be facing the robot
    - Human may be moving
    - More difficulty in controlling lighting
    - Robots move away!
    - A real-time result is needed (since the robot moves)

    19. Facial expression recognition (FER) method Matt's model-based approach

    20. FER method
    - Cootes et al.'s statistics-based deformable model (134 points)
    - Translation, scaling, rotation
    - Vector b of 17 face configuration parameters
    - Examples: rotate head (b1), open mouth (b3), change gaze direction (b10)
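    To make the parameterization concrete, here is a minimal sketch of how such a statistics-based deformable model generates a face shape from the configuration vector b. The names, array shapes, and plain-NumPy style are illustrative assumptions, not the authors' implementation.

```python
# A sketch of a Cootes-style point-distribution model: a face shape is the
# mean shape plus a linear combination of learned deformation modes, placed
# in the image by a similarity transform (translation, scaling, rotation).
import numpy as np

N_POINTS = 134   # landmarks in the face model
N_MODES = 17     # face configuration parameters b_1..b_17

def generate_shape(mean_shape, P, b, tx=0.0, ty=0.0, scale=1.0, theta=0.0):
    """Return the model's 2D points for configuration b.

    mean_shape: (2*N_POINTS,) stacked x/y coordinates of the mean face
    P:          (2*N_POINTS, N_MODES) deformation modes (e.g. from PCA)
    b:          (N_MODES,) configuration; per the slide, b[0] rotates the
                head, b[2] opens the mouth, b[9] changes gaze direction
    """
    shape = mean_shape + P @ b                      # deform the mean shape
    pts = shape.reshape(-1, 2)                      # (N_POINTS, 2)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * pts @ R.T + np.array([tx, ty])   # rotate, scale, translate
```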

    21. FER method: Model-based image interpretation
    - The model: contains a parameter vector that represents the model's configuration.
    - The objective function: calculates a value that indicates how accurately a parameterized model matches an image.
    - The fitting algorithm: searches for the model parameters that describe the image best, i.e. it minimizes the objective function.
    Speaker notes: What is model-based image interpretation all about? Interpreting images with a geometric model. The model contains a vector of parameters that affects position, pose, and deformation.
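    The slide's first two components can be sketched as follows. The edge-based objective here is a hypothetical stand-in (the authors learn their objective function, as slide 23 describes), and `mean_shape`/`P` are the model quantities from the sketch above.

```python
# Toy illustration of the model / objective-function split. The model is
# just a parameter vector; the objective scores how well the parameterized
# model matches the image (lower = better). This hand-crafted edge-based
# score is a placeholder, NOT the authors' learned objective.
import numpy as np

def objective(params, image, mean_shape, P):
    """Score a model configuration against a grayscale image."""
    pts = (mean_shape + P @ params).reshape(-1, 2)   # model points in image
    gy, gx = np.gradient(image.astype(float))
    edge_strength = np.hypot(gx, gy)
    h, w = image.shape
    ix = np.clip(pts[:, 0].astype(int), 0, w - 1)
    iy = np.clip(pts[:, 1].astype(int), 0, h - 1)
    # Good fits place model points on strong image edges, so we negate.
    return -edge_strength[iy, ix].mean()
```

    The fitting algorithm then searches for the parameters that minimize this score; a concrete search loop is sketched under slide 23.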

    22. FER method
    Two-step process for skin colour: see [Wimmer et al., 2006].
    - The Viola & Jones technique detects a rectangle around the face
    - Affine transformation parameters of the face model are derived from it
    - The b parameters are then estimated: Viola & Jones is repeated, with features learned to localize facial features
    - An objective function compares an image to a model; a fitting algorithm searches for a good model
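    As an illustration of the face-detection step, a Viola & Jones cascade ships with OpenCV. The bundled frontal-face cascade and the detection parameters below are assumptions, not the paper's exact configuration.

```python
# Illustrative face localization with OpenCV's Viola & Jones detector.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray_image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    faces = cascade.detectMultiScale(
        gray_image, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
    if len(faces) == 0:
        return None
    # The rectangle's position and size give the translation and scale used
    # to place the face model before the b parameters are estimated.
    return max(faces, key=lambda r: r[2] * r[3])
```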

    23. FER method: learned objective function
    Reduce manual processing requirements by learning the objective function [Wimmer et al., 2007a & 2007b]. Fitting method: hill-climbing.
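    A minimal sketch of the hill-climbing fit, assuming the objective takes only the parameter vector (e.g. the learned function, or the toy objective above wrapped in a lambda); the step size and iteration cap are illustrative.

```python
# Greedy hill-climbing: perturb one model parameter at a time and keep any
# change that lowers the objective; stop at a local minimum.
import numpy as np

def hill_climb(objective, params0, step=0.1, max_iters=100):
    """Coordinate-wise local search minimizing objective(params)."""
    params = params0.copy()
    best = objective(params)
    for _ in range(max_iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                score = objective(trial)
                if score < best:       # keep only strict improvements
                    params, best = trial, score
                    improved = True
        if not improved:               # local minimum: stop early
            break
    return params, best
```

    For example, `hill_climb(lambda b: objective(b, image, mean_shape, P), np.zeros(17))` would fit the 17 configuration parameters against the toy objective sketched under slide 21.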

    24. FER method
    - Facial feature extraction: structural (configuration b) and temporal features (2 secs)
    - Expression classification: a binary decision tree classifier is trained on 2/3 of the data set
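    The classification step might look like the following, with scikit-learn's decision tree standing in for the paper's binary decision tree learner, and X/y assumed to be the outputs of the structural + temporal feature extraction.

```python
# Sketch of the classification step: a decision tree trained on two thirds
# of the sequences and tested on the remaining third.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def train_expression_classifier(X, y):
    """X: (n_samples, n_features) features per sequence; y: labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1 / 3, random_state=0)      # 2/3 train, 1/3 test
    clf = DecisionTreeClassifier().fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)           # tree + test accuracy
```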

    25. Results on a dataset

    26. Results on a robot
    - B21r robot, some controlled lighting, human about 1 m away
    - 120 readings of three facial expressions
    - 12 frames per second possible; tests run at 1 frame per second

    27. Conclusions
    - Robots must respond to human emotional states
    - Model-based FER technique (Wimmer)
    - 70% accuracy on the Cohn-Kanade data set (6 expressions)
    - 67% accuracy on a B21r robot (3 expressions)
    Future work: better FER is needed
    - Improved techniques
    - Better integration with robot software
    - Improve accuracy by fusing vital-signs measurements
