ADAPT IST-2001-37173
Artificial Development Approach to Presence Technologies
2nd Review Meeting, Munich, June 7-9th, 2004
Consortium
• Total cost: 1.335.141 € - Community funding: 469.000 €
• Project start date: October 1st, 2002
• Project duration: 36 months
Goal
We wish to understand the process of building a coherent representation of visual, auditory, haptic, and kinesthetic sensations: a developmental process leading to a dynamic representation.
Perhaps, once we “know” how it works, we can “ask” a machine to use this knowledge to elicit the sense of presence.
So, we are asking…
How do we represent our world and, in particular, the objects we interact with?
Our primary mode of interaction with objects is manipulation, that is, grasping them!
Two-pronged approach
• Study how infants do it
• Implement a “similar” process in an artificial system
Learning by doing: modeling yields abstract principles, which in turn guide the building of new devices.
Scientific prospect
• From the theoretical point of view: studying the nature of “representation”
• From development: the developmental path
  • Interacting with objects: multi-sensory representation, object affordances
  • Interpreting others’ interactions with objects: imitation
• From embodiment and morphology: why do we need a body? How does morphology influence/support computation?
• Computational architecture: how can an artificial system learn representations to support similar behaviors?
Vision and touch
Streri & Gentaz (2003, 2004): cross-modal transfer of shape between hand and eyes in newborn infants.
Result: the transfer of shape is not reversible (it succeeds from touch to vision, but not from vision to touch).
6-month-olds detect a violation of intermodality between face and voice.
A teleprompter device allows the voice or the image to be delayed independently.
Grasping: morphological computation
Robot hand with:
- elastic tendons
- soft fingertips
(developed by Hiroshi Yokoi, AI Lab, Univ. of Zurich and Univ. of Tokyo)
Result: the control of grasping reduces to a simple “close” command; the details are taken care of by the morphology and materials.
…how can the robot grasp an unknown object?
• Use a simple motor synergy to flex the fingers and close the hand
• Exploit the intrinsic elasticity of the hand; the fingers bend and adapt to the shape of the object
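A minimal sketch of this idea (not the project’s controller; joint count, gains, and the contact model are assumptions): a single “close” synergy drives all finger joints toward flexion, and each joint simply stops where the simulated object blocks it, standing in for the passive adaptation provided by elastic tendons and soft materials.

```python
# Toy model of a "close" synergy with passive shape adaptation.
import numpy as np

N_JOINTS = 9                              # e.g. 3 fingers x 3 joints (assumption)
CLOSE_TARGET = np.full(N_JOINTS, 1.0)     # normalized full-flexion posture

def close_hand(contact_angle, steps=100, gain=0.05):
    """Drive all joints with one synergy; each joint saturates once it
    reaches the angle at which it contacts the object (contact_angle)."""
    q = np.zeros(N_JOINTS)                    # current joint angles
    for _ in range(steps):
        q += gain * (CLOSE_TARGET - q)        # single open/close command
        q = np.minimum(q, contact_angle)      # compliance: stop at contact
    return q

# Example: an irregular object blocks each joint at a different angle;
# the resulting posture encodes the object's shape via proprioception.
object_surface = np.array([0.4, 0.7, 0.9, 0.3, 0.8, 1.0, 0.5, 0.6, 0.9])
print(close_hand(object_surface))
```

The point of the sketch is that the controller issues only one command; the final posture, which is what the clustering below operates on, is determined by the object.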
Result of clustering
• 2D Self-Organizing Map (100 neurons)
• Input: proprioception only (hand posture; touch sensors were not used)
The SOM forms 7 classes (6 for the objects plus 1 for the no-object condition).
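For illustration, a compact NumPy sketch of such a self-organizing map (this is not the project’s code; the posture dimensionality and the synthetic data are assumptions), matching the setup above of a 2D map of 100 neurons fed with proprioceptive posture vectors:

```python
# Minimal SOM trained on synthetic hand-posture vectors.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0):
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]               # grid coordinates for neighborhood
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the neuron whose weight vector is closest to x.
        d = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Decaying learning rate and neighborhood radius.
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Hypothetical data: 9-dimensional postures for 6 objects plus a no-object case.
postures = np.vstack([rng.normal(c, 0.05, size=(50, 9))
                      for c in rng.random((7, 9))])
som_weights = train_som(postures)
```

After training, each posture maps to its best-matching unit, and the map regions play the role of the 7 classes reported above.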
Example: learning visual features
• Only one modality is used: non-overlapping areas of the visual field guide each other’s feature extraction
• Learn invariant features from spatial context (it is well known that temporal context can be used to learn such features)
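A toy sketch of the spatial-context idea (an illustrative assumption, not the project’s implementation): two non-overlapping, neighboring patches of the visual field usually show the same object, so a Hebbian-style update can push a set of linear features to respond similarly to both, yielding features invariant to the patch position.

```python
# Hebbian-style feature learning from spatial context between adjacent patches.
import numpy as np

rng = np.random.default_rng(1)
PATCH = 8                                     # patch side length (assumption)
W = rng.normal(scale=0.1, size=(16, PATCH * PATCH))   # 16 linear features

def update(left_patch, right_patch, lr=0.01):
    """Increase agreement between the feature responses of two spatially
    adjacent patches, then renormalize each feature vector."""
    global W
    a, b = W @ left_patch, W @ right_patch
    W += lr * (np.outer(a, right_patch) + np.outer(b, left_patch))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

# Example with a random "image": two side-by-side patches drive one update.
img = rng.random((PATCH, 2 * PATCH))
update(img[:, :PATCH].ravel(), img[:, PATCH:].ravel())
```

The same update rule, applied to temporally adjacent frames instead of spatially adjacent patches, would recover the better-known temporal-context variant mentioned above.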
Future work
• Continue and complete ongoing experiments
• Experiment on affordant vs. non-affordant use of objects (CNRS, UGDIST)
• Investigation of cross-modal transfer in newborn infants (CNRS)
• Experiments on the robot (UGDIST, UNIZH)
  • Learning affordances
  • Learning visuo-motor features by unsupervised learning
  • Feature extraction on videos showing mother-infant interaction
Epirob04, Genoa – August 25-27, 2004
http://www.epigenetic-robotics.org
Invited speakers:
• Luciano Fadiga, Dept. of Biomedical Sciences, University of Ferrara, Italy
• Claes von Hofsten, Dept. of Psychology, University of Uppsala, Sweden
• Jürgen Konczak, Human Sensorimotor Control Lab, University of Minnesota, USA
• Jacqueline Nadel, CNRS, University Pierre & Marie Curie, Paris, France