Imitation and Social Intelligence for Synthetic Characters Daphna Buchsbaum, MIT Media Lab and Icosystem Corporation; Bruce Blumberg, MIT Media Lab
Socially Intelligent Characters and Robots • Able to learn by observing and interacting with humans, and with each other • Able to interpret others' actions, intentions and motivations - characters with a Theory of Mind • A prerequisite for cooperative behavior
Max and Morris • Max watches Morris using synthetic vision • Can recognize and imitate Morris’s movements, by comparing them to his own movements (using his own movements as the model/example set) • Uses movement recognition to bootstrap identifying simple motivations and goals and learning about new objects in the environment
Infant Imitation • Imitative interactions may help infants learn relationships between self and other • 'like me' experiences • Simulation Theory
Simulation Theory • "To know a man is to walk a mile in his shoes" • Understanding others using our own perceptual, behavioral and motor mechanisms • We want to create a Simulation Theory-based social learning system for synthetic characters
Motor Representation: The Posegraph • Nodes are poses • Edges are allowable transitions • A motor program generates a path through a graph of annotated poses • Paths can be compared and classified • Related Work: Downie 2001 Master's Thesis; Arikan and Forsyth, SIGGRAPH 2002; Lee et al., SIGGRAPH 2002
Motor Representation: The Posegraph • Multi-resolution graphs • Nodes are movements • Blending variants of ‘same’ motion
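The posegraph idea above can be sketched in a few lines: nodes are poses, edges are allowable transitions, and a motor program is valid only if it traces a path along existing edges. This is a minimal illustrative sketch, not code from the thesis; all names (`Posegraph`, the pose labels) are assumptions.

```python
# Minimal posegraph sketch: nodes are poses, edges are allowable transitions.
from collections import defaultdict

class Posegraph:
    def __init__(self):
        self.edges = defaultdict(set)  # pose -> set of poses reachable in one step

    def add_transition(self, a, b):
        self.edges[a].add(b)

    def is_valid_path(self, path):
        """A motor program is valid if every consecutive pose pair
        is an allowable transition in the graph."""
        return all(b in self.edges[a] for a, b in zip(path, path[1:]))

g = Posegraph()
for a, b in [("stand", "crouch"), ("crouch", "sit"), ("sit", "stand")]:
    g.add_transition(a, b)

print(g.is_valid_path(["stand", "crouch", "sit"]))  # True
print(g.is_valid_path(["stand", "sit"]))            # False
```

Because movements are paths over a shared vocabulary of poses, two movements can be compared simply by comparing their paths, which is what makes the later recognition step possible.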
Synthetic Vision • Graphical camera captures Max’s viewpoint • Enforces sensory honesty (occlusion)
Synthetic Vision • Key body parts are color-coded • Max locates them and remembers their positions relative to Morris's root node • People watching a movement attend to end-effector locations
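Storing body-part positions relative to the observed character's root node makes the representation independent of where Morris stands in the world. A hedged sketch of that coordinate change (function and part names are illustrative, not from the system):

```python
# Illustrative only: convert tracked body-part positions from world
# coordinates to coordinates relative to the observed character's root node.
def relative_to_root(part_positions, root_position):
    """part_positions: {part_name: (x, y, z)} in world space."""
    rx, ry, rz = root_position
    return {name: (x - rx, y - ry, z - rz)
            for name, (x, y, z) in part_positions.items()}

world = {"right_hand": (2.0, 1.5, 0.0), "head": (1.0, 2.0, 0.0)}
print(relative_to_root(world, root_position=(1.0, 0.0, 0.0)))
```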
Parsing Motion • Many different movements start and end in the same transitionary poses (Gleicher et al., 2003) • These poses can be used as segment markers • Related Work: Bindiganavale and Badler, CAPTECH 1998; Fod, Mataric and Jenkins, Autonomous Robots 2002; Lieberman, Master's Thesis 2004
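Using transitionary poses as segment markers, a continuous pose stream can be cut into individual movement units wherever a marker pose appears. A minimal sketch under that assumption (pose labels and function name are illustrative):

```python
def segment_motion(pose_stream, segment_markers):
    """Split a continuous pose stream into movement units, cutting at
    known transitionary poses (e.g. a neutral standing pose)."""
    segments, current = [], []
    for pose in pose_stream:
        current.append(pose)
        if pose in segment_markers and len(current) > 1:
            segments.append(current)
            current = [pose]  # the marker pose also starts the next unit
    if len(current) > 1:
        segments.append(current)  # trailing partial movement
    return segments

stream = ["stand", "reach", "grab", "stand", "crouch", "stand"]
print(segment_motion(stream, {"stand"}))
# [['stand', 'reach', 'grab', 'stand'], ['stand', 'crouch', 'stand']]
```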
Movement Recognition • Identify the best matching path through the posegraph • Check if this path closely matches an already existing movement
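The two recognition steps above can be sketched as a nearest-path lookup: compare the observed path against each stored movement and accept the best match only if it is close enough, otherwise treat the movement as novel. The distance measure here (fraction of mismatched positions) and all names are illustrative assumptions, not the system's actual matching algorithm.

```python
def classify_movement(observed_path, known_movements, max_mismatch=0.2):
    """Match an observed posegraph path against stored movement paths;
    return the best match if close enough, else None (novel movement)."""
    def mismatch(a, b):
        # Fraction of positions that disagree, after padding to equal length.
        n = max(len(a), len(b))
        a = list(a) + [None] * (n - len(a))
        b = list(b) + [None] * (n - len(b))
        return sum(x != y for x, y in zip(a, b)) / n

    best_name, best_score = None, 1.0
    for name, path in known_movements.items():
        score = mismatch(observed_path, path)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= max_mismatch else None

known = {"sit_down": ["stand", "crouch", "sit"],
         "wave":     ["stand", "arm_up", "arm_down", "stand"]}
print(classify_movement(["stand", "crouch", "sit"], known))  # sit_down
```

Classifying an observed movement as "one of his own" in this way is what lets Max reuse his existing action system to interpret what Morris is doing.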
Action Identification • Top-level motivation systems • Action tuple components: Trigger, Action, Object, Do-until
Representation of Action: The Action Tuple • Trigger - context in which the action can be performed • Object - optional object to perform the action on • Action - anything from setting an internal variable to making a motor request • Do-until - context in which the action is completed
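The four-part action tuple above maps naturally onto a small data structure. This is a hedged sketch of that structure, not the thesis implementation; the callable-based fields, `run` method, and the hunger example are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionTuple:
    """One unit of behavior, mirroring the slide's four parts.
    Representing each part as a callable is an illustrative choice."""
    trigger: Callable[[], bool]   # context in which the action can be performed
    action: Callable[[], None]    # e.g. set an internal variable or make a motor request
    do_until: Callable[[], bool]  # context in which the action is completed
    obj: Optional[str] = None     # optional object to perform the action on

    def run(self):
        if self.trigger():
            while not self.do_until():
                self.action()

# Hypothetical example: an 'eat' action that runs until hunger is gone.
state = {"hunger": 3}
eat = ActionTuple(
    trigger=lambda: state["hunger"] > 0,
    action=lambda: state.__setitem__("hunger", state["hunger"] - 1),
    do_until=lambda: state["hunger"] == 0,
    obj="food",
)
eat.run()
print(state["hunger"])  # 0
```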
Action Identification • "Should-I" trigger • "Can-I" trigger
Action Identification • Find bottom-level actions that use matched movements
Action Identification • Find all paths through the action hierarchy to the matching action
Action Identification • Check "can-I" triggers to see which actions are possible
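The identification steps on the preceding slides chain together: start from the matched movement, find the bottom-level actions that use it, walk up through the action hierarchy, and filter by "can-I" preconditions. A minimal sketch under assumed data structures (dicts for the action table, parent links, and trigger states; none of these names come from the system):

```python
def identify_actions(matched_movement, actions, hierarchy, can_i):
    """1. Find bottom-level actions whose motor program is the matched movement.
    2. Collect the path from each up through the action hierarchy.
    3. Keep only paths whose actions all satisfy their 'can-I' triggers."""
    # Step 1: bottom-level actions that use the matched movement.
    candidates = [a for a, movement in actions.items()
                  if movement == matched_movement]
    # Step 2: follow parent links toward the top-level motivation systems.
    paths = []
    for a in candidates:
        path = [a]
        while path[-1] in hierarchy:
            path.append(hierarchy[path[-1]])
        paths.append(path)
    # Step 3: filter by 'can-I' preconditions (default True if untracked).
    return [p for p in paths if all(can_i.get(a, True) for a in p)]

actions = {"chew": "chew_motion", "sniff": "sniff_motion"}
hierarchy = {"chew": "eat", "sniff": "explore"}
can_i = {"eat": True}
print(identify_actions("chew_motion", actions, hierarchy, can_i))
# [['chew', 'eat']]
```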
Learning About Objects
Contributions: What Max Can Do • Parse a continuous stream of motion into individual movement units • Classify observed movements as one of his own • Identify observed actions, using his own action system • Identify simple motivations and goals for an action • Learn uses of objects through observation
Future Work: What Max Can't Currently Do • Solve the correspondence problem • Imitate characters with non-identical morphology • Act on knowledge of a partner's goals (no cooperative activity yet) • Learn from novel movements (currently ignored)
Harder Problems • How do you use your knowledge? • Limits of Simulation Theory • Intentions vs. consequences: the problem of the robot that eats for you • What level of granularity do you attend to: wanting the object vs. wanting to eat
Acknowledgements • Members of the Synthetic Characters and Robotic Life Groups at the MIT Media Lab • Advisor: • Bruce Blumberg, MIT Media Lab • Thesis Readers: • Cynthia Breazeal, MIT Media Lab • Andrew Meltzoff, University of Washington • Special Thanks To: • Jesse Gray • Marc Downie