
Walking without a Brain: Exploring Non-Symbolic AI and Dynamic Skills

This lecture discusses the concept of walking without reasoning or computation, highlighting the importance of dynamics in natural walking behavior. It also explores the role of representations in perception and challenges traditional AI approaches. The lecture covers topics such as passive dynamic walking, situated and embodied cognition, and the limitations of representation-based cognitive models.



Presentation Transcript


  1. Non-Symbolic AI Lecture 13. Continuing the themes of Lecture 12: first, an interlude on passive dynamic walking; then, a philosophical discussion of representation.

  2. ‘Reasoning all the way down’. The classical AI approach, obsessed with reasoning and computing, assumed that even something as simple as walking across the room while maintaining one’s balance required reasoning and computation: “Sense, Model, Plan, Action”, with the brain controlling the muscles. But look at this …

  3. Passive Dynamic Walking. ‘Natural walking behaviour’, stable to small perturbations, can emerge from ‘all body and no brain’! It is the dynamics that count, whether the dynamics arise with or without a coupled nervous system. Dan Jung’s walker movie: www.msc.cornell.edu/~ruinalab/pdw.html. “Passive Dynamic Walking”, from Tad McGeer.
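The claim that stable walking can fall out of the dynamics alone can be made concrete with the simplest passive walker, a rimless wheel rolling down a gentle slope. The sketch below is illustrative only; the parameter values are invented for this example, not taken from the lecture or from McGeer's papers. Each stance phase is an uncontrolled inverted pendulum, each spoke-to-ground collision dissipates energy, and the touchdown speed nevertheless settles into a stable limit cycle with no controller anywhere.

```python
import math

# Rimless-wheel sketch of passive dynamic walking (parameters assumed).
G = 9.81             # gravity, m/s^2
L = 1.0              # spoke ("leg") length, m
ALPHA = math.pi / 8  # half-angle between adjacent spokes, rad
GAMMA = 0.08         # downhill slope, rad

def stance_phase(omega, dt=1e-4):
    """Integrate the inverted-pendulum stance phase from one touchdown
    to the next; return the angular velocity just before the collision."""
    theta = GAMMA - ALPHA            # stance-spoke angle from vertical
    while theta < GAMMA + ALPHA:     # until the next spoke strikes the ground
        omega += (G / L) * math.sin(theta) * dt
        theta += omega * dt
    return omega

omega = 1.2  # angular velocity just after a touchdown collision
for _ in range(20):
    # Collision rule: angular momentum about the new contact point is
    # conserved, scaling angular velocity by cos(2*ALPHA) -- an energy loss.
    omega = stance_phase(omega) * math.cos(2 * ALPHA)

# With no brain and no motors, omega converges to a fixed point of the
# step-to-step return map: a stable walking cycle.
print(round(omega, 3))
```

The stabilising mechanism is purely mechanical: a faster-than-equilibrium step loses proportionally more energy at the collision, a slower one gains proportionally more from the descent, so the cycle attracts nearby gaits without any sensing or computation.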

  4. Walking without a nervous system

  5. For real …

  6. … from in front …

  7. Compare with the Honda Humanoid

  8. Dynamic skills all the way up? Perhaps rather than ‘Reasoning all the way down’ we should think in terms of ‘Dynamic skills all the way up’.

  9. Robosapien. Mark Tilden, www.gadgetshop.com, £80.

  10. New York, Feb 2004

  11. Sussex – Eric Vaughan (MSc, now DPhil), with Inman Harvey and Ezequiel Di Paolo. Initially using ODE (a physics simulator) to simulate passive dynamic walking. 7 knees. Powered joints. … from simulation to reality …

  12. Two initial lessons – cognition is situated and embodied. Situated: a robot or human is always already in some situation, rather than observing from outside. Embodied: a robot or human is a perceiving body, rather than a disembodied intelligence that happens to have sensors.

  13. Dynamical Systems

  14. Lecture Plan
  • Lec 12: Homeostasis – a new and simplified daisyworld model, with Rein Control.
  • Lec 12: Perception – extending the underlying mathematical insight to a novel form of (en-)active perception.
  • Lec 13: Dynamics of robots.
  • Lec 13: Representation – the rights and wrongs of representation-language in the context of understanding perception.

  15. Representations. GOFAI people run courses in “Knowledge Representation”: how a computer programmer could store knowledge in a computer program. Fair enough. BUT GOFAI people also think that perception in humans is doing the same sort of job – a different sort of operating system in the brain, a different language, but basically the same. What !@*&%!$!!! (Warning: biased opinions here!)

  16. Cartesian Theatre. Everybody denied that they were naïve enough to believe in the Cartesian Theatre … but then everybody used language that bought straight into that metaphor. What was going on?

  17. My position, and the response. Since writing “Untimed and Misrepresented” in 1992, I have maintained that I have no internal representations in the brain – except in the trivial metaphorical sense in which any causal functional explanation appeals to a homuncular metaphor. And clearly a homuncular metaphor cannot explain cognition-as-a-whole! My position met with incomprehension, disbelief and misinterpretation: e.g. wrongly taken to be about mental representations, or to be anti-representational!

  18. Why the incomprehension?
  1. My opponents were stupid and confused.
  2. As 1, but also they were (a) using different definitions of “representation” and (b) asking/answering different questions from me.
  3. As 1 and 2, but also there are special reasons why there is confusion about just these issues: representation is special.

  19. Three classes of reasons
  • The enormous variety of working usages of “representation”, and a reluctance to define it.
  • Even more particular reasons arising from the nature of what an “explanation” is, and the role of representations in the homuncular metaphor.
  • Particular issues when dealing with “reciprocal causality” – and hence fundamental when looking at perception from an enactive perspective.

  20. Working usages of “Representation”
  1. Everyday usage, re-presentation: a picture of a cat is a re-presentation of a real cat.
  2. A stand-in: a Member of Parliament represents his/her constituents.
  3. The act of representing (as opposed to the image/picture etc.).
  4. A variable that correlates with another variable.
  5. As 4, but also needing some causal correlation.
  6. As 5, but also implicitly using homunculi.
  7. A representation needs a consumer.
  8. A representation does not need a consumer.

  21. … more …
  9. Representations are in the brain.
  10. Representations are in the mind.
  11. Internal representations = mental representations.
  12. Representations are in the head, and I reserve the right to use head = brain or head = mind at will.
  13. Reps are in the head when you imagine a cat, not when you see it.
  14. Reps are in the head both when you see and when you imagine a cat.
  15. To try to define representations is a mistake.
  16. No need to define our usage of the term, because it is obvious.
  17. … … …

  22. Representations and Functional Explanations. One common usage of the term representation (which I take to be the core usage) is that of communicating (or signalling) via symbols: “P uses Q to represent R to S” … and this is used metaphorically in the all-pervasive functional metaphor.

  23. What is a Functional Explanation? Explanations are tools we use to explain the strange/complex/unfamiliar in terms of the familiar/unpuzzling. We explain atoms and electrons in terms of billiard balls. The role of an explanation is to stop you asking further questions (cf. the 4-year-old child and the recursive “why?”). And functional explanations buy into a homuncular metaphor, into the familiar social metaphor of people with goals, intentions, roles. [See “role” in the previous sentence.]

  24. Homuncular Explanations – where they work. We use them all the time, and that is not a bad thing! E.g. “this neuron is talking to that neuron”. It is right and proper that we use this metaphor to explain cars and thermostats and hearts and bits of brains, etc. In this sense, representation-talk can be (trivially) appropriate: when I deny “internal representations in the brain”, I am not denying the validity of any functional explanation of anything in the brain.

  25. Homuncular explanations – where they don’t work. But there is one obvious place where they do not work: when trying to explain cognition-as-a-whole, or perception-as-a-whole. Explanations explain the strange in terms of the familiar – you can explain A in terms of B, but you cannot explain A in terms of A. This is the problem with the Cartesian Theatre.

  26. … now I understand … why so many people (philosophers, cognitive scientists, GOFAI types) were so reluctant to define clearly what they meant by “representations” in their explanations. Because they are so familiar with representations in everyday life, they treat them as a primitive needing no further explanation – the billiard ball of the physicist, the everyday currency of functional explanations. So I was right in the first place: if they use this to try to understand cognition and perception, they are stupid and confused!

  27. Reciprocal Causality. Enactive perception, and a dynamical systems approach to cognition, assume essential feedback loops everywhere – both within and outside the organism. If you look at one small section of this, you can talk about a chain of cause and effect, and use a functional explanation (with the accompanying baggage). But an explanation of the whole will be different. Cf. analyses in terms of equilibria, as in the Daisyworld scenario.
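Reciprocal causality and equilibrium-style explanation can be illustrated with a minimal Daisyworld-style sketch. The equations and parameter values below are my own assumptions for illustration, not the simplified daisyworld model from Lecture 12: two daisy populations each pull planetary temperature one way (black cover warms, white cover cools), the temperature feeds back on their growth, and the regulated temperature emerges as an equilibrium of the whole coupled loop. No component anywhere stores or decides the set point.

```python
def growth(local_temp, optimum=22.5, width=17.5):
    """Parabolic growth rate, zero outside the viable temperature window."""
    return max(0.0, 1.0 - ((local_temp - optimum) / width) ** 2)

def settle(forcing, steps=6000, dt=0.02, death=0.3):
    """Run the coupled daisy/temperature dynamics to (near) equilibrium
    and return the resulting planetary temperature."""
    white, black = 0.1, 0.1
    temp = forcing
    for _ in range(steps):
        bare = max(0.0, 1.0 - white - black)
        # Linearised radiation balance: black cover warms, white cools.
        temp = forcing + 40.0 * (black - white)
        # White daisies reflect and so run locally cooler than the mean;
        # black daisies absorb and run locally warmer.
        white += white * (growth(temp - 8.0) * bare - death) * dt
        black += black * (growth(temp + 8.0) * bare - death) * dt
    return temp

for forcing in (5.0, 15.0, 25.0):
    # The equilibrium temperature barely moves as the external forcing
    # varies: the two opposing populations absorb the perturbation.
    print(forcing, round(settle(forcing), 1))
```

This is the point about functional explanation: any one arrow in the loop ("black daisies warm the planet") supports a local causal story, but the regulated temperature belongs to no single arrow – it is a property of the equilibrium of the whole system.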

  28. Soviet incomprehension of “the market”. How do Western markets operate? The newspaper states today’s price for grain – who decided that, if it wasn’t a central committee? What do you mean, “nobody decided it, or everybody decided it, or market forces decided it”? That makes no sense. Tell me who it really was! Where is this market force? Same question, and same incomprehension, with internal representations.

  29. The Rights and Wrongs of Representations. Internal representations: trivially right, if you mean you can give a functional explanation of one bit of some causal chain, just as with thermostats and window latches – a metaphorical sense, just fine as long as you realise what you are doing. Importantly wrong, if you are trying to give the big picture on perception or cognition. “Representing” is something crucially important to humans – it differentiates us from the beasts! To explain something any animal can do (perception) in terms of something only humans can do is seriously confused.

  30. Health Warning. A repeat of the warning given earlier: I am presenting one viewpoint, one side of an ideological argument. You will find that many GOFAI books and lecturers give the other side of the argument as if it were gospel. This is the sort of question you will have to decide for yourselves, rather than taking what any book or lecturer tells you as the ultimate unexamined truth.

  31. Talk Plan
  • Homeostasis – a new and simplified daisyworld model, with Rein Control.
  • Perception – extending the underlying mathematical insight to a novel form of (en-)active perception.
  • Representation – the rights and wrongs of representation-language in the context of understanding perception.
  … so to sum up …

  32. The Big Picture. “What is an Object, that a Creature may perceive it, and a Creature, that it may perceive an Object?”
  • All organisms react to perturbations here and now: “Static Vegetables”, Homeostasis.
  • Some organisms engage with objects distant in space and events distant in time: “Mobile Animals”, Perception.
  • Some organisms engage in a social world: “Social Humans”, Representation.

  33. To End. What is a Representation, that a Person may understand it, and a Person, that s/he may understand a Representation? A really important question, but nothing to do with everyday perception.
