
Action Causes Perception Causes Action: From Sensory Substitution to Situated Robots


Presentation Transcript


  1. Action Causes Perception Causes Action: From Sensory Substitution to Situated Robots. Lecture 3+4, Unit 5, NUCOG Seminar: Action, Perception, Motivation. Akureyri, Iceland, 10.2.-20.2.2006. Marieke Rohde, Centre for Computational Neuroscience and Robotics, University of Sussex

  2. Recapitulation • Situated and embodied view: • The closed sensorimotor loop • The rejection of (a priori) internal localisation of cognitive function • Sensorimotor coordination as a reciprocally causal process • Empirical research: • Perceptual suppléance (sensory substitution) • Change blindness • Delay experiments • Sensorimotor contingencies • Descriptive concepts for a situated view: • Can be used to explain cognitive phenomena and faculties (e.g. perceptual modalities) without localising them in dedicated internal structures

  3. This module Tuesday: • History and Motivation • The Importance of Situatedness: Empirical Evidence • A Sensorimotor Account Today: • Robotics • The Question of Value • Conclusion

  4. 4.) Robotics

  5. Shakey • “Shakey was the first mobile robot to reason about its actions” • Shakey implemented the classical perceive-plan-act approach • Detailed world model • Three levels of complexity http://www.sri.com/about/timeline/shakey.html Video: http://www.ai.sri.com/movies/Shakey.ram

  6. Braitenberg • Cunningly simple • No internal state – yet “cognitive” behaviour • One controller – radically different behaviours (see the sketch below) Braitenberg, V.: “Vehicles: Experiments in Synthetic Psychology”, illustrations by Maciek Albrecht, MIT Press, 1984
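To make the point concrete, here is a minimal sketch of a two-sensor, two-motor vehicle (my own Python illustration; Braitenberg describes the vehicles in prose and pictures, and all constants here are made up). There is no internal state anywhere: the wiring pattern alone decides whether the vehicle flees or pursues the light.

```python
import numpy as np

def braitenberg_step(pos, heading, light_pos, fear=True, dt=0.05):
    """One step of a two-sensor, two-motor Braitenberg vehicle.
    Ipsilateral wiring gives vehicle 2a ('fear', turns away from the
    light); crossed wiring gives vehicle 2b ('aggression', turns
    toward it)."""
    offset = 0.1  # separation of the two sensors from the body centre
    left = pos + offset * np.array([np.cos(heading + 0.5), np.sin(heading + 0.5)])
    right = pos + offset * np.array([np.cos(heading - 0.5), np.sin(heading - 0.5)])
    # sensor activation falls off with squared distance to the light
    s_left = 1.0 / (1.0 + np.sum((left - light_pos) ** 2))
    s_right = 1.0 / (1.0 + np.sum((right - light_pos) ** 2))
    # direct sensor-to-motor wiring: this is the whole 'controller'
    m_left, m_right = (s_left, s_right) if fear else (s_right, s_left)
    speed = 0.5 * (m_left + m_right)
    heading += dt * (m_right - m_left) / (2 * offset)  # differential drive
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading
```

Iterating this step shows the 'fear' vehicle retreating from the light and the 'aggressive' vehicle homing in on it: two radically different behaviours from one trivial wiring change.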

  7. The Case of Walking Why is walking so easy for us, and so difficult for robots? (see e.g. Honda’s Humanoid ASIMO: http://world.honda.com/ASIMO/ )

  8. Passive Dynamics • Passive dynamic walking at Cornell: http://www-personal.engin.umich.edu/~shc/movies/passive_angle.mov • Slowly introducing activity (power) in simulation: http://www.droidlogic.com/sussex/dphil/movies/fullspine3_front.mov • Take inspiration from airplanes: Gradually add control to gliding.
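The "slowly introducing power" idea can be caricatured with a single damped pendulum standing in for a swing leg (a toy sketch of the principle only, not a model of the walkers linked above; all constants are illustrative). With zero gain the motion is purely passive and decays; a small gain merely replaces frictional losses, so the passive dynamics stay in charge of the movement's shape.

```python
import numpy as np

def swing_leg(theta0=0.4, gain=0.0, steps=2000, dt=0.005):
    """Damped pendulum 'leg' with an optional small energy-injection
    torque pumped in phase with the motion; a gain near the friction
    coefficient b keeps the natural oscillation alive without
    dictating its form."""
    g, l, b = 9.81, 1.0, 0.05          # gravity, leg length, friction
    theta, omega = theta0, 0.0
    for _ in range(steps):
        torque = gain * omega           # minimal 'control': add power only
        alpha = -(g / l) * np.sin(theta) - b * omega + torque
        omega += alpha * dt
        theta += omega * dt
    return theta, omega
```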

  9. A Lesson from Robotics • In Shakey, a theoretical model was “acid tested” (with little success) • The actual problems had not been recognised • A controller that is no controller at all outperforms Shakey effortlessly • Close coupling instead of “dead reckoning” • Detail through coupling with the world instead of through detailed representation • Classical approaches focus on what people are bad at and computers are good at (logic, mathematics, chess...). They fail to account for what people are good at, but computers are bad at. • The rise of behaviour-based robotics: e.g. Brooks: “Intelligence Without Reason”, subsumption architectures

  10. Evolutionary Robotics

  11. Advantages • Integrated sensorimotor systems • Close coupling between agent and environment (which is typically bypassed or modelled very poorly). • Control (and minimise) prior assumptions (prejudices) • About what internal structure is necessary to solve a task • About what kind of functional decomposition underlies the mastery of a task • About which strategy is applied to solve a task • Goes beyond human ingenuity, particularly with respect to complex nonlinear dynamics
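For concreteness, the basic evolutionary robotics loop can be sketched as a mutation-only tournament GA (in the spirit of the microbial GA used in much Sussex work; function names and settings here are illustrative, not those of any specific experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, n_params, pop_size=30, generations=1000, sigma=0.1):
    """fitness maps a controller parameter vector (weights, biases,
    time constants...) to a score obtained by running the agent in a
    closed-loop simulation. No assumption about internal structure,
    decomposition or strategy enters anywhere except through the
    fitness measure itself."""
    pop = rng.normal(0.0, 1.0, (pop_size, n_params))
    for _ in range(generations):
        a, b = rng.choice(pop_size, 2, replace=False)  # random pair
        winner, loser = (a, b) if fitness(pop[a]) >= fitness(pop[b]) else (b, a)
        # the loser is overwritten by a mutated copy of the winner
        pop[loser] = pop[winner] + rng.normal(0.0, sigma, n_params)
    return max(pop, key=fitness)
```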

  12. Minimally Cognitive Behaviour (Beer, 2003) • Minimally cognitive behaviour – what and why • Raises genuine cognitive interest • Minimal complexity on which we can build up systematically • Dynamical explanation of brain, body, world interaction • “Intellectual warm-up”, “frictionless brains”

  13. Categorical Perception • Task: • Circular agent • Continuous-time recurrent neural network controller (CTRNN, sketched below) • Moves left and right • Distance sensor array • Objects fall from the sky • Catch circles, avoid diamonds • (bilaterally symmetric network)
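The controller is a CTRNN governed by tau_i dy_i/dt = -y_i + sum_j w_ji sigma(y_j + theta_j) + I_i. A minimal Euler-integrated sketch (parameters are placeholders; in the experiments they are set by the evolutionary algorithm):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    """Continuous-time recurrent neural network of the form Beer uses:
    tau_i dy_i/dt = -y_i + sum_j w_ji sigma(y_j + theta_j) + I_i."""
    def __init__(self, n, dt=0.1):
        self.dt = dt
        self.y = np.zeros(n)          # neuron states (leaky, with memory)
        self.tau = np.ones(n)         # time constants
        self.theta = np.zeros(n)      # biases
        self.w = np.zeros((n, n))     # w[j, i]: weight from neuron j to i

    def step(self, external_input):
        firing = sigmoid(self.y + self.theta)
        dy = (-self.y + firing @ self.w + external_input) / self.tau
        self.y += self.dt * dy        # Euler integration
        return firing
```

Because the states y decay only gradually, with time constants tau, the network is a dynamical system with genuine internal state, a point that matters for the memory discussion below.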

  14. Behavioural Explanation • Foveate the object • Scan left and right • Circle: scanning movements get smaller and smaller • Diamond: large avoidance movement

  15. What the agent sees...

  16. “Carrot on a stick”

  17. “Psychophysics” • Labelling and discrimination • Discrimination criteria (width!)

  18. Psychophysics • When is the decision made?

  19. Dynamical Explanations • Hardcore mathematics • Dynamics of the agent-environment system

  20. Memory • Izquierdo-Torres & Di Paolo (2005): • Is a reactive agent capable of solving only reactive tasks? • Reactive task: what to do is immediately obvious from the sensory data

  21. Memory • Izquierdo-Torres & Di Paolo (2005): • Same task, same settings (symmetry etc.) • Feedforward network without decay dynamics, i.e. no internal state • Perfect mastery, even with respect to the “psychophysics” • An embodied and situated agent is never purely reactive: agents modify their position with respect to the objects and thereby partially determine their sensory input at the next time step (see the sketch below)
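A toy closed-loop sketch of that last point (an illustrative setup, not the paper's actual simulation): the policy below is completely stateless, yet the agent's behaviour depends on its history, because its position is state stored in the environment.

```python
import numpy as np

def reactive_policy(sensors, w, b):
    """Stateless feedforward map from current sensors to motor output."""
    return np.tanh(sensors @ w + b)

def run_trial(w, b, object_x, agent_x=0.0, steps=200, dt=0.1):
    trajectory = []
    for _ in range(steps):
        sensors = np.array([object_x - agent_x])  # e.g. relative offset
        # every action changes the next sensory input: the loop,
        # not the network, carries the memory
        agent_x += dt * float(reactive_policy(sensors, w, b)[0])
        trajectory.append(agent_x)
    return trajectory

# e.g. run_trial(np.array([[1.0]]), np.zeros(1), object_x=3.0)
```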

  22. More Cases • I will just skim over these...

  23. Active Vision • Coevolution of active vision and feature selection (Floreano et al., 2004) • A feedforward network masters complex behaviour relying on vision • Extension: development under motor disruption (Floreano et al., 2004b) • Analogous to Held’s kitten carousel

  24. Learning Dynamics • Tuci et al. (2002) • Recurrent neural networks are dynamical systems, i.e. they have neural state • Neural network learning is normally thought to happen through a different process (i.e. synaptic change) • Here: learning in a fixed-weight neural network • Floor sensor, light sensor: find out at the beginning of 14 trials whether you are in a “landmark near” or “landmark far” environment

  25. Communication and Social Interaction • Matt Quinn (2001): • Origins of communication: a signal is only worth making if it is understood, but the first time it is made, it cannot yet be understood • Dedicated channels? No prior assumptions about how to communicate • A homogeneous population allocates roles (“leader”, “follower”) • Minimal sensory and motor equipment (distance sensors only, 5 cm range, noisy)

  26. Where Are the Talking Robots? • People tend not to be impressed with this • I am not impressed with AIBO, ASIMO, the Sony Humanoid, ... (well, engineering-wise I am impressed) • The biggest issues in current cognitive robotics (e.g. RoboCup) are still the ones that we get for free: timing issues such as blur, slip, delays... • Manufacturing robots are normally controlled according to dynamical principles

  27. What Does that Prove? • Following up on David’s question • “When an ER experiment replicates some cognitive capacity of a human or animal, typically in simplistic and minimal form, what conclusions can be drawn from this?” (Harvey et al., 2005)

  28. Answers • Harvey et al. (2005): Existence proof • Sufficient conditions to generate behaviour x • Catalysing theoretical re-conceptualisations • Facilitating the production of novel hypotheses • Di Paolo et al. (2000): Opaque thought experiment • Results follow from premises, but in a non-obvious way • Empirical flavour: results must be observed and understood • Goes beyond human ingenuity and thus • Makes a stronger case • Can uncover novel concepts, relations etc. to be incorporated in a theory

  29. Conclusions so far • What all these experiments show is that behavioural and cognitive phenomena that are typically put in distinct boxes in psychology can be realised by a system whose mechanistic structure is very different from (orthogonal to) the structure of the behaviour space. • These mechanisms tend to be more efficient (i.e. computationally cheaper), and the exploitation of a close coupling to the environment is part of this advantage. • My question: why would nature/evolution box up functional mechanisms?

  30. 5.) Values

  31. The Problem • Why is light meaningless to a Braitenberg vehicle? • What constitutes genuine purpose, genuine values, genuine intentionality?

  32. A Look at Biology • Living organisms have genuine purposes. They care for their survival. They have to, otherwise they would not live. • Survival is not “for free” as it is in the case of the robot. • They cannot be reprogrammed. • What is good or bad for them is not down to interpretation. • Can you redefine what is reward and what is punishment for a living organism? • Can there be conventions about what is harmful for a living organism?

  33. I could never put it as nicely… • “The ill person who can no longer express himself, animals, yes, even a paramecium that cramps before it is killed by the picric acid dribbled under the cover slip, the saddening look of a wilting plant, the foetus that defends itself with hands and feet against the instruments of the doctor – they all present the meaning of what happens to them. The meaning is explicitly evident in the gestures.” (Weber, 2003, p. 149, my translation)

  34. Autopoiesis • Maturana and Varela (1980): operational definition of the living. • Definition: a network of processes of production (synthesis and destruction) of components such that these components: • continuously regenerate and realize the network that produces them, and • constitute the system as a distinguishable unity in the domain in which they exist (Weber & Varela, 2002, p. 115)

  35. Is that enough? • Autopoiesis accounts only for robustness. • What is dying? Illness? Stress? • A merely autopoietic system has no reason to improve the conditions for its continued existence.

  36. Adaptivity (Di Paolo, forthcoming) “A system’s capacity, in some circumstances, to regulate its states and its relation to the environment with the result that, if the states are sufficiently close to the boundary of viability, 1. tendencies are distinguished and acted upon depending on whether the states will approach or recede from the boundary and, as a consequence, 2. tendencies of the first kind are moved closer to or transformed into tendencies of the second and so future states are prevented from reaching the boundary with an outward velocity.”
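A toy numerical reading of this definition (my illustration only; all constants are arbitrary): an essential variable drifts inside a viability region, and regulation engages only near the boundary, acting specifically against outward-moving tendencies.

```python
import numpy as np

def adaptive_regulation(x0=0.0, steps=5000, dt=0.01, seed=1):
    """Essential variable x drifts stochastically inside the viability
    region [-1, 1]. Near the boundary (|x| > margin), tendencies are
    distinguished by direction: only outward-moving ones are acted
    upon and pushed back toward the interior."""
    rng = np.random.default_rng(seed)
    x, v = x0, 0.0
    margin = 0.8                     # where 'close to the boundary' begins
    for _ in range(steps):
        v = 0.95 * v + rng.normal(0.0, 0.05)   # drifting tendency
        if abs(x) > margin and np.sign(v) == np.sign(x):
            v -= 0.1 * np.sign(x)    # counteract the outward tendency
        x += v * dt
        if abs(x) >= 1.0:            # boundary crossed: viability lost
            return False, x
    return True, x
```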

  37. Values • Metabolic value, as an end and a criterion for judgement, seems reasonable. • Maybe it is enough to explain the behaviour of the simplest organisms. • However, not all our judgements or all our actions seem to be measurable against metabolic value. • Do all values derive from metabolic value?

  38. Our Definition of Value “We propose to define value as the extent to which a situation affects the viability of a self-sustaining and precarious process that generates an identity” (Di Paolo & Rohde, work in progress) • Note: there is reciprocal causality! • Which other values could there be? • Are non-metabolic values parasitic on metabolism? • Could there be values without a metabolism?

  39. Value System Architectures • Edelman’s theory of neuronal group selection (e.g. Edelman (1989), Sporns & Edelman (1993)) • Neural circuits are selected during the organism’s lifetime according to Darwinian principles • Selection through a value signal • E.g. reaching: “good” if the hand is close to the target • Reinforcement learning with an internally generated reinforcement signal (sketched below) • Very popular with Pfeifer et al. (e.g. Pfeifer and Scheier, 1999)
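The common computational core of these architectures is a scalar value signal gating an otherwise blind learning rule, roughly as in the following sketch (an illustrative form, not the equations of any of the cited papers):

```python
import numpy as np

def value_modulated_update(w, pre, post, value, lr=0.01):
    """Hebbian weight change gated by a scalar value signal, e.g. for
    reaching: value = 1.0 while the hand is near the target, else 0.0.
    The learning rule itself is 'ignorant': only the value signal
    'knows' what counts as good."""
    return w + lr * value * np.outer(post, pre)
```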

  40. The Value System • “[If] the agent is to be autonomous and situated, it has to have a means of ‘judging’ what is good for it and what is not. Such a means is provided by an agent’s value system.” (Pfeifer and Scheier, 1999) • “already specified during embryogenesis as the result of evolutionary selection upon the phenotype” (Sporns and Edelman, 1993)

  41. Rephrasing it: • There is a structure that knows what is good and bad (a priori) • The rest of the organism/learning mechanism is ignorant and obeys blindly • This localised structure, by necessity, needs dedicated input and output channels • Value systems themselves do not learn, they control learning (functional division) • Or, if they do, they learn through a “higher-level” value system (regress ad infinitum?) A “VESTIGIAL GHOST IN THE MACHINE”! (Rutkowska, 1997)

  42. What is wrong with value systems? The principal objection: • Values are arbitrary. • Values are generated separately. • Values are specified a priori. • Values are not subject to change themselves. • What happened to the reciprocal causality? Think about: sensory substitution, goggle experiments... Think about: social/abstract values and the requirement for “simple criteria of salience and adaptiveness”

  43. These are not genuine values!!!

  44. What else is wrong with value systems? Are value systems a good way to model values? • Investigation of “pseudo-values” • Models and idealisation: to remove gravity from a model of balloon flight is simply to do away with the original problem we wished to solve. • So what is wrong? • Vulnerability • Generality/specificity trade-off • Passing the explanatory buck • False dilemmas: analogous to the nature/nurture divide (Oyama, 1985) • The impossibility of novel values

  45. Damasio • “[Somatic markers] help our thinking by illuminating some (dangerous or beneficial) options in the right way so that they are quickly removed from further reasoning. You can imagine this as an automatic system for the evaluation of predictions.” (my translation from the German translation of Descartes’ Error) • “We are born with the neuronal mechanism necessary to generate somatic states in the face of certain classes of stimuli – the apparatus of primary emotions.”

  46. Emotion Systems • Conceived as forming a complementary system to colder or more detached cognitive processes • A kind of “early warning system” that directly monitors bodily conditions to generate states that modulate all kinds of interactions and internal dynamics • Again: • a priori built-in rules • emotional states • functional division between the emotion system and other, emotion-free cognitive processes

  47. Just to make this very clear: • Nobody denies that such mechanisms can and possibly do work in situations that rely on ontogenetically or phylogenetically pre-established value. • Both value and emotion systems provide the other cognitive mechanisms with information about the relative relevance of their activities and future choices. • This cannot account for the generation of novel values.

  48. How else could you model values? • Reciprocal causality between value and value-appraising agent: • Dynamics • Plasticity • Situatedness • No functional separation

  49. Evolutionary Robotics • First step: The phototactic homeostatic robot. (Di Paolo 2000, 2003)
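The mechanism behind this robot can be caricatured as follows (a strong simplification of the published plasticity rules, with illustrative constants): synaptic change is switched on only when a neuron's activation leaves its preferred, homeostatic region, so behavioural stability and internal stability are regulated by one and the same process.

```python
import numpy as np

def homeostatic_plasticity(w, firing, eta=0.01, low=0.2, high=0.8):
    """Hebbian change gated per postsynaptic neuron by how far its
    firing rate has strayed outside the homeostatic region [low, high];
    inside the region, p = 0 and the weights are left alone."""
    p = np.clip(np.maximum(low - firing, firing - high), 0.0, None)
    dw = eta * np.outer(firing, firing * p)  # dw[j, i] = eta * f_j * f_i * p_i
    return w + dw
```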

  50. Trying to get a grip on “value signals” • The fitness-evolving robot • Evolve a robot to perform a task (phototaxis) and, at the same time, a signal that represents a fitness estimate • The fitness estimate is a standard neuron • Give the robot an environment that requires adaptation (e.g. sensor swapping) How does the behaviour relate to the “value signal”? How does the “value signal” relate to the behaviour?
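One illustrative way the selection pressure could be composed (a hypothetical sketch; the setup itself is described above as work in progress): reward task performance and penalise the mismatch between the value neuron's output and the instantaneous fitness it is supposed to estimate.

```python
import numpy as np

def combined_fitness(task_score, value_trace, instantaneous_fitness):
    """task_score: e.g. time spent near the light; value_trace: the
    designated neuron's output over the trial; instantaneous_fitness:
    the externally computed moment-to-moment fitness. The 0.5
    weighting is an arbitrary illustrative choice."""
    err = np.mean((np.asarray(value_trace) - np.asarray(instantaneous_fitness)) ** 2)
    return task_score - 0.5 * err
```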
