
Machine Consciousness and Cognitive Robotics: Two Research Agendas or One?

This presentation explores consciousness, cognitive robotics, and the implications of merging engineering and philosophical perspectives. By examining perception, imagination, and abstract concepts, it sets the stage for understanding human-like intelligence in machines.



Presentation Transcript


  1. Machine Consciousness and Cognitive Robotics: Two Research Agendas or One? Murray Shanahan Imperial College London Dept. Electrical Engineering

  2. Overview • Consciousness • Cognitive Robotics • Motivation and methodology • Ludwig: a humanoid robot • Perception as abduction • Philosophical Implications

  3. Expect Changing of Hats • Engineer • Philosopher

  4. As a Philosopher…

  5. “Consciousness” • The word “consciousness” seems to conflate several concepts. To be conscious is • To be aware of the physical and social world • To be aware of the self • To have an inner world & an internal monologue • To offer an emotional response to life • If we have a theory that accounts for these things, will the philosophical puzzle that it is “like something” to be a conscious creature disappear? • Such an account will at the same time be an explanation of how the brain generates intelligent behaviour • The engineer’s considerations may be pertinent

  6. A Blueprint for the Human Mind

  7. The Imagination is the Key • The Imagination is a multi-media, virtual reality “theatre”, where stories about the body and the self are played out. These stories • are influenced by the present situation according to Perception, • elicit a secondary response from Emotion, • and thereby influence the decisions taken by Action • The Imagination rehearses trajectories through an abstraction of sensory-motor space. • The inner loop of the Imagination mirrors the outer Action-World-Perception loop • The stories include visual images, sounds and plenty of conversation (the inner monologue)
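A rough computational reading of this inner/outer loop structure is sketched below in Python. It is an illustration only: the function names (forward_model, evaluate, perceive, act) are hypothetical placeholders for whatever mechanisms realise Perception, Emotion, and Action, not components of any system described in the talk.

```python
# Minimal sketch (hypothetical names): the inner Imagination loop mirrors
# the outer Action-World-Perception loop by rehearsing candidate actions
# against an internal model before one is executed in the world.

def choose_action(state, candidate_actions, forward_model, evaluate):
    """Inner loop: rehearse each candidate trajectory in imagination."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        imagined_state = forward_model(state, action)   # simulated, not executed
        score = evaluate(imagined_state)                 # secondary (emotional) response
        if score > best_score:
            best_action, best_score = action, score
    return best_action

def outer_loop(perceive, act, forward_model, evaluate, candidates):
    """Outer loop: Perception -> Imagination -> Action -> World."""
    while True:
        state = perceive()                               # Perception
        action = choose_action(state, candidates(state), # Imagination
                               forward_model, evaluate)
        act(action)                                      # Action on the world
```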

  8. Seeing As • Perceiving an object (or situation or event) entails seeing it as a member of one or more categories • Seeing a collection of edges and surfaces as a rock or a deer • To see X as a Y is to see X as a potential participant in a whole repertoire of little stories in the Imagination involving Ys • Hunting a deer, cooking its meat, using its antlers • The power of the human imagination lies in its talent for projecting things onto novel categories • Seeing a horse as a means of transport • Seeing a rock as a hammer

  9. Little Spatial and Social Stories • All these stories are grounded in small infantile sensory-motor episodes (cf: Lakoff) • Putting things in containers / taking things out of containers • Following a path • Giving things to others / taking things from others • Abstract concepts built on these concrete foundations • For example, experiences relating to containment ground • Bank accounts • Variables in programming languages • Abstract concept of Self built around core representation of the body

  10. By the Way… • A full theory of mind also needs to account for “altered states of consciousness”, such as those experienced during • Dreams • Hallucinations • Meditation

  11. As an Engineer…

  12. What Do I Want to Do? • Build an artefact that has as near to human-level intelligence as possible • Do so in a principled way, by uncovering the fundamental laws that govern the design of such artefacts • If the results impact on psychology and/or philosophy, so much the better • But ultimately the project will stand or fall as an engineering exercise • Albeit engineering with a “philosophical edge”

  13. Motivating Conjecture 1: Abstract Concepts are Grounded in Sensory-Motor Interaction with a World of Spatio-Temporally Located Objects • Consider the order in which locomotion, communication, manipulation, and language appear in evolutionary history • Consider their order of appearance in child development • Classical AI hasn’t delivered using top-down methodology • Symbols have to be grounded to have meaning • “The core of our conceptual systems is directly grounded in perception, body movement, and experience” (G. Lakoff)

  14. Motivating Conjecture 2: The Ability to Handle Uncertainty and Incompleteness Is Central to Human-Level Cognition • Uncertainty is an unavoidable consequence of the limitations of any sensor • Incompleteness is an inevitable consequence of having a point-of-view, which is in turn a consequence of having a spatial location • Can’t see back and front of an object at the same time • In cluttered environment, occlusion is the norm • Most of the world is too distant to be known

  15. Motivating Conjecture 3: Perception, Action and Cognition Form an Inseparable Trinity • Cognition is constrained by need for reactivity • Perception is intractable without help of cognition • Behaviour-based approaches are limited by lack of cognition • Evolutionary pressure behind development of cognition was the benefit of a better closed-loop control system • Fragmentation of AI spawned irreconcilable sub-disciplines that lost sight of the big challenge

  16. Two Design Principles • Intelligent behaviour will result from correct reasoning with correct representation • Adaptive behaviour is the product of closed-loop control • These align with the approaches of classical AI and biologically-inspired robotics (cybernetics) • To reconcile these insights, explicit representation and reasoning are inserted in a fast feedback loop • Use of explicit reasoning must not entail the deployment of computationally unbounded processes • Use of explicit representation must not entail the construction of a complete and accurate model of the world
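The last two bullets can be read as a real-time budget on deliberation. The Python sketch below, with hypothetical function names, shows one way explicit reasoning can sit inside a fast feedback loop: inference is interruptible, so the controller never waits on a computationally unbounded process and acts on whatever partial conclusions are available when the control period ends.

```python
import time

CONTROL_PERIOD = 0.05  # seconds; the loop must close at this rate

def bounded_inference(beliefs, deadline, reasoning_steps):
    """Run explicit reasoning steps until the deadline; return partial results."""
    conclusions = []
    for step in reasoning_steps(beliefs):
        if time.monotonic() >= deadline:
            break                      # reasoning is cut off, never unbounded
        conclusions.append(step)
    return conclusions

def control_loop(sense, update_beliefs, reasoning_steps, select_action, act):
    while True:
        start = time.monotonic()
        beliefs = update_beliefs(sense())                     # explicit representation
        conclusions = bounded_inference(beliefs,
                                        start + 0.8 * CONTROL_PERIOD,
                                        reasoning_steps)      # explicit, bounded reasoning
        act(select_action(beliefs, conclusions))              # close the loop
        time.sleep(max(0.0, CONTROL_PERIOD - (time.monotonic() - start)))
```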

  17. Methodological Conclusion • If we want to engineer an artefact with human-level cognitive skills, central topics include • Representing and reasoning about shape and space • Dealing with incompleteness and uncertainty • So let’s build a robot that perceives, reasons about, and acts upon objects having a variety of different shapes in a messy environment • The architecture should integrate explicit representation and explicit reasoning with closed-loop control

  18. The Present Challenge • Build a humanoid robot • Fill its workspace with everyday objects • Program it to attend to interesting objects given visual cues • Make it construct representations of objects from their appearance • Let it nudge interesting objects • Make the quality of its representations improve with the resulting new information
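Read as a loop, this challenge amounts to an active-perception cycle. The following Python sketch is purely illustrative; every function name stands in for a component of the envisaged system rather than anything implemented on Ludwig.

```python
def active_perception_episode(saliency_cue, attend, perceive, abduce, nudge, refine):
    """One episode: attend to a salient object, explain its appearance,
    nudge it, then refine the explanation with the resulting new sensor data."""
    target = attend(saliency_cue())      # pick an interesting object from visual cues
    data = perceive(target)              # raw sensor data for that object
    hypothesis = abduce(data)            # initial representation of shape & location
    new_data = nudge(target)             # act on the object to gather new information
    return refine(hypothesis, new_data)  # representation quality improves
```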

  19. Ah! Ludwig “What am I supposed to do with these things? All I know about is first-order predicate calculus!”

  20. Ludwig: A Humanoid Robot • Pan-and-tilt head • Stereo vision • Two articulated arms • Three degrees-of-freedom each • Upper torso only

  21. LUDWIG Nudging a Ball This is a simple demonstration of the basic mechanical, electrical, and control systems working together

  22. A View of Robot Perception • The role of robot perception is to transform raw sensor data into a “correct” representation of the external world • The idea of correctness makes sense if the representations are symbolic and have a compositional semantics • To engineer our robot in a principled way, we need a formal account of how this is done • This account will explain how symbols and sentences denote, how they are grounded, how they refer, how they acquire intentionality • Finally we need an implementation that conforms to the theory

  23. Perception as Abduction • Σ describes impact of world on sensors • Δ describes object shapes & locations • Must be consistent & comprise abducible formulae • Need method for handling multiple Δs • Γ describes data from cameras, sonar, etc • Background theory Σ • Sensor data Γ • Objects in the world Δ
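Written out as a formula (a reconstruction from the bullets above, using the symbols that recur on the later slides), the abductive task is: given the background theory Σ and sensor data Γ, find a consistent Δ built from abducible formulae such that

```latex
% Perception as abduction: find an explanation \Delta of the sensor data \Gamma
% relative to the background theory \Sigma.
\Sigma \wedge \Delta \models \Gamma
% \Sigma : background theory (how the world impinges on the sensors)
% \Delta : hypothesised objects in the world (shapes and locations)
% \Gamma : sensor data (cameras, sonar, etc.)
```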

  24. The Character of Σ • Σ comprises first-order predicate calculus formulae describing • Generic laws about actions and change • Expressed in a formalism for describing actions, such as the event calculus • Must incorporate a solution to the frame problem • Generic laws about spatial occupancy, solidity, persistence, occlusion, etc • The effects of specific actions • If you move forwards your location changes in a certain way • If you rotate an object, its aspect changes in a certain way
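As an illustration of the kind of generic law Σ contains, here is a standard event calculus persistence axiom. The exact axiomatisation used for Ludwig is not given on the slide, so this should be read as representative rather than definitive.

```latex
% A fluent f holds at t2 if some event e initiated it at an earlier time t1
% and no event has clipped (terminated) f in between -- the event calculus
% treatment of persistence, which is central to its handling of the frame problem.
HoldsAt(f, t_2) \leftarrow
  Happens(e, t_1) \wedge Initiates(e, f, t_1) \wedge
  t_1 < t_2 \wedge \neg Clipped(t_1, f, t_2)
```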

  25. Desiderata for a Full Theory of Perception • An agent’s perceptual system should be tuned to those aspects of the world that are pertinent to its needs (J. von Uexküll’s Umwelten) • To perceive something should be to perceive its use (J. J. Gibson’s affordances) • Perception should be active: gathering new sensor data facilitates the interpretation of old sensor data • An attention mechanism should be used to manage the volume of sensor data while ensuring its utility • The flow of information between perception and cognition should be bidirectional

  26. The Visual Imagination • Symbolic, sentence-like representations are only half the story • Good for abstraction • Good for handling incompleteness and uncertainty • Topographic, image-like representations offer complementary features • Spatial properties come for free (shape, orientation, etc) • Animation can be used to simulate effects of action and change • Combining symbolic and topographic representations • Pixel arrays • Animated • Multiple levels of granularity • Regions of array labelled with abductively determined categories
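One minimal way to realise the combination described here is a pixel array whose regions carry symbolic labels, with coarser copies for other levels of granularity. The Python sketch below is illustrative only; the class and field names are invented for this example and the coarsening scheme is one arbitrary choice.

```python
import numpy as np

class TopographicLayer:
    """An image-like map whose regions carry abductively determined category labels."""

    def __init__(self, height, width):
        self.occupancy = np.zeros((height, width), dtype=float)  # topographic, image-like part
        self.regions = np.zeros((height, width), dtype=int)      # region id for each pixel
        self.labels = {}                                          # region id -> symbolic category

    def label_region(self, region_id, category):
        """Attach a symbolic category (from the abductive explanation) to a region."""
        self.labels[region_id] = category

    def coarsen(self, factor):
        """Return a coarser level of granularity by block-averaging the occupancy map."""
        h, w = self.occupancy.shape
        ch, cw = h // factor, w // factor
        coarse = TopographicLayer(ch, cw)
        coarse.occupancy = (self.occupancy[:ch * factor, :cw * factor]
                            .reshape(ch, factor, cw, factor)
                            .mean(axis=(1, 3)))
        return coarse
```

For instance, once abduction assigns a region the category "cup", calling label_region on that region ties the symbolic and topographic views of the same object together, while coarsen supplies the multiple levels of granularity mentioned above.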

  27. As a Philosopher…

  28. Engineering with a Philosophical Edge “Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.” (Dennett on the Cog project)

  29. Philosophical ParallelsThe Philosopher and the Engineer • Intentionality • Propositional attitudes • Kantian idealism • Consciousness

  30. Intentionality The Philosopher “How is intentionality possible? How is it that states of mind can be about the world? How are symbols grounded?” The Engineer “A clear and well-defined causal story can be told about how sentences in Ludwig’s memory come to be there. The story starts with the state of the world, moves on to how this causes electrical signals in the robot’s sensors, and concludes with computational processes that transform these into representations. The abductive account spells out exactly the sense in which the resulting representations are correct.”

  31. Propositional Attitudes The Philosopher “What are propositional attitudes? What are beliefs, desires, and intentions, and what part do they play in a (functionalist) theory of mind?” The Engineer “A precise causal story can be told about how representations in Ludwig’s memory arrive there, interact with each other, and issue in robot actions. The story involves well-defined computational processes of sensor data assimilation and planning through logical inference. (You don’t seriously think the human mind is like this do you?)”

  32. Kantian Idealism The Philosopher “The natural world as we know it ... is thoroughly conditioned by [certain] features: our experience is essentially experience of a spatio-temporal world of law-governed objects conceived of as distinct from our temporally successive experiences of them” (Strawson on Kant) The Engineer “Ludwig’s background theory Σ must contain axioms about time, action, change, space, and shape, and the hypotheses Δ that explain sensor data Γ are therefore constrained to be expressed in those terms.”

  33. Consciousness The Philosopher “But what is the nature of consciousness? How does it arise in physical matter?” The Engineer “What? Hmmm, well, let me take a step back…”

  34. Conclusion Machine consciousness and cognitive robotics: two research agendas or one? Two agendas with a very large area of overlap

  35. And Finally ... The Philosopher “Where’s the nearest pub?” The Engineer “Where’s the nearest pub?”
