
  1. Four Paths to AI
  Dr. Jonathan Connell, IBM T.J. Watson Research Center
  Prof. Kenneth Livingston, Cognitive Science Program, Vassar College

  2. Outline
  • What are the major approaches
  • Some examples of each
  • Utility of this classification scheme

  3. How to achieve AI?
  Four broad approaches:
  • Silver bullets – Just add missing mechanism X!
  • Core Values – Just make sure it can do X!
  • Emergence – Just add more X!
  • Emulation – Just faithfully copy X!

  4. Silver Bullets
  Most of the necessary technology is already in place, but some critical special-purpose mechanism is missing. What is this piece?
  Caveat: maybe the wrong “hole” was picked to fill. A silver bullet will kill a werewolf, but not a vampire.

  5. Some silver bullets
  • Fancy logic – The idea is that while first-order logic may be inadequate, AI can be solved by moving to some more complex formal system of symbol manipulation. Techniques include various extensions of logic (e.g. second-order, nonmonotonic, epistemic, deontic, modal, etc.) as well as other mechanisms like circumscription and inductive logic programming.
  • Embodiment – The argument is that you cannot achieve human-like intelligence unless the system has a body and can interact with the real physical world. Being embodied makes you care about objects, space, uncertainty, and the actions needed to get tasks accomplished.
  • Quantum physics – Maybe we have failed to achieve AI simply because we do not have the right sort of hardware. The argument here is that there is some randomness factor or quantum susceptibility that is essential for consciousness (or perhaps "soul").

  6. Inexact Reasoning
  The premise here is that formal symbol manipulation, like first-order logic, is too brittle for the real world. Things are not black-and-white but rather shades of gray, and AI systems need to be able to reason in this manner.
  Example: Saffioti’s fuzzy logic for robots (1998)
  • can create a fancy control surface from simple (verbal) rules:
    IF Target-Left & ~Out-of-Reach THEN Turn-Left(8)
    IF Target-Right & ~Out-of-Reach THEN Turn-Right(8)
    IF Target-Left & Out-of-Reach THEN Turn-Right(4)
    IF Target-Right & Out-of-Reach THEN Turn-Left(4)
    IF Target-Ahead THEN Go-Straight
  • automatically accommodates inexact sensor information
    • position of obstacles not precisely known (especially far away)
  • useful for arbitration of rule / control suggestions
    • blending goal attraction with wall avoidance
    • is an object an obstacle or a target to be grasped?
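
  A minimal sketch (in Python) of how such fuzzy rules might be combined, assuming membership degrees in [0, 1], min-based conjunction, and weighted-average defuzzification; the function names and turn magnitudes are illustrative, not Saffioti's actual controller:

      # Fuzzy-control sketch: each rule's firing strength weights its turn
      # suggestion; the final command is the weighted average of suggestions.
      def fuzzy_and(a, b):
          return min(a, b)          # standard min-based conjunction

      def fuzzy_not(a):
          return 1.0 - a

      def turn_command(target_left, target_right, target_ahead, out_of_reach):
          # Inputs are membership degrees in [0, 1], not hard booleans.
          rules = [
              (fuzzy_and(target_left,  fuzzy_not(out_of_reach)), +8.0),  # Turn-Left(8)
              (fuzzy_and(target_right, fuzzy_not(out_of_reach)), -8.0),  # Turn-Right(8)
              (fuzzy_and(target_left,  out_of_reach),            -4.0),  # Turn-Right(4)
              (fuzzy_and(target_right, out_of_reach),            +4.0),  # Turn-Left(4)
              (target_ahead,                                      0.0),  # Go-Straight
          ]
          total = sum(strength for strength, _ in rules)
          if total == 0.0:
              return 0.0            # no rule fires: go straight
          return sum(strength * turn for strength, turn in rules) / total

      # A target slightly left and nearly in reach yields a gentle left turn,
      # not a hard one: inexact inputs blend into a smooth control surface.
      print(turn_command(target_left=0.6, target_right=0.0,
                         target_ahead=0.4, out_of_reach=0.2))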

  7. Deep Language
  An AI cannot be expected to be fully competent straight “out of the box”; instead it needs to learn from sympathetic humans and/or from reading written material. To do this it must have a deep understanding of human language.
  Example: Dyer’s BORIS program (1983)
  • processes text at multiple levels of understanding
    • lexical / syntactic level concerned with the meaning of actual words (resolves polysemy)
    • semantic level of who was the actor, what object was changed, what event occurred, etc.
    • episodic level of what happened to whom, where, and when over time (personal history)
    • thematic level of why people did certain actions and how they felt about it all
  • special data structures for higher-level concepts (beyond object / event)
    • scenes – persistent spatial / temporal loci for various events
    • scripts – stereotypical sequences of events associated with objects or locations
    • MOPs – standard patterns of interactions (e.g. what “borrowing” means)
    • AFFECT links – how actions influence agent emotions and vice versa
    • TAUs – characterizations of goal-achievement status (e.g. “a stitch in time”)
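
  A rough sketch of how one of these higher-level structures (a script) might be represented; the slot names and restaurant example are invented for illustration, not taken from BORIS itself:

      # Illustrative frame for a script: a stereotyped event sequence with
      # role bindings, in the spirit of the structures listed above.
      RESTAURANT_SCRIPT = {
          "roles":  ["customer", "waiter"],
          "scene":  "restaurant",            # persistent spatial/temporal locus
          "events": ["enter", "order", "eat", "pay", "leave"],
      }

      def instantiate(script, bindings):
          """Bind concrete actors to the script's roles for one episode."""
          assert set(bindings) == set(script["roles"])
          return {"scene": script["scene"],
                  "cast": dict(bindings),
                  "expected": list(script["events"])}

      episode = instantiate(RESTAURANT_SCRIPT, {"customer": "Fred", "waiter": "W1"})
      # Once "order" is observed, the script licenses inferences like "the
      # waiter will bring food" even though the text never states them.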

  8. BORIS demo
  Richard hadn't heard from his college roommate Paul for years. Richard had borrowed money from Paul which was never paid back, but now he had no idea where to find his old friend. When a letter finally arrived from San Francisco, Richard was anxious to find out how Paul was.
  > What happened to Richard at home?
  Richard got a letter from Paul.
  > Who is Paul?
  Richard's friend.
  > Did Richard want to see Paul?
  Yes, Richard wanted to know how Paul was.
  > Had Paul helped Richard?
  Yes, Paul lent money to Richard.
  > Why didn't Richard pay Paul back?
  Richard did not know where Paul was.
  > How did Richard feel when the letter appeared?
  Richard felt glad because Paul and he were friends.

  9. Core Values
  Don't focus on particular mechanisms; instead, work backwards from the desired properties of the system. What are these properties?
  Caveat: heavy dependencies if the core theory is wrong. A fever can be cured by blood-letting (draining the excess humour).

  10. Desired system properties
  • Situatedness – The reason none of our systems have achieved AI is that they are not part of the real world: they have no gut feel for how interactions occur, nor any real stake in the outcome. This topic is concerned with the motivational structure of systems as well as with understanding physical and temporal processes.
  • Hierarchy & Recursion – Many animal-like systems can learn to recognize a certain stimulus or perform a particular task. Where they seem to break down is the "reification" step, where the learned recognition or behavior can be used as an atomic module of a larger learned unit. Humans are very good at this sort of “abstraction”, and the ability to do it recursively marks a qualitative distinction between them and all other creatures.
  • Self-Awareness – As a part of consciousness it is important to be able to recursively access information about one's own states. This gives a sense of a unitary self who deliberately chooses actions and reaps their consequences. It also forms the basis for understanding other agents with respect to prediction, imitation, or empathy.

  11. Emotion & Motivation
  Here the reasoning goes that emotion is not just a vestigial animal left-over or a mere by-product of cognition; it is instead an essential ingredient. Emotion is crucial to regulating attention, mediating memory formation and retrieval, and arbitrating between actions in uncertain circumstances.
  Example: Exploration policies for learning (Wilson 96)
  Fixed policies:
  • Explore a small fraction of the time, exploit the learned function most of the time
  • Explore a lot in the beginning, then progressively less over time
  • Explore until the expected optimal performance is achieved
  State-based policies:
  • Explore more if predictions are failing, less if everything is okay (confusion)
  • Explore less if not learning much, more if there is a lot of updating (excitement)
  • Explore only when no higher-payoff activities are possible (boredom)
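
  These policies can be sketched as simple decision rules; the thresholds and the confusion/boredom signals here are placeholders, not Wilson's actual formulation:

      import random

      def fixed_epsilon(step, epsilon=0.1):
          """Fixed policy: explore a small, constant fraction of the time."""
          return random.random() < epsilon

      def decaying_epsilon(step, start=1.0, decay=0.999):
          """Fixed policy: explore a lot early, progressively less over time."""
          return random.random() < start * decay ** step

      def confusion_driven(prediction_error, gain=2.0):
          """State-based: explore more when predictions are failing."""
          return random.random() < min(1.0, gain * prediction_error)

      def boredom_driven(best_expected_payoff, floor=0.05):
          """State-based: explore only when no high-payoff activity beckons."""
          return best_expected_payoff < floor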

  12. Emergence
  All the essentials are already present and understood; it is just a matter of scaling the system up sufficiently. What fodder needs to be provided?
  Caveat: beware of early asymptotes. A battery will make a frog’s leg twitch, so what will a lightning bolt do?

  13. Some forms of emergence
  • Axiomatization – Classical first-order logic underpins all human thought. It is merely a matter of identifying and formally codifying all the specific forms of reasoning, and then writing the correct axioms for time, space, gravity, emotion, economics, social obligation, self-awareness, etc.
  • Evolution (GA) – This approach posits that the key to AI is self-improving systems. Even if the incremental steps are very small, as long as there is no theoretical bound the system should be able to bootstrap its way to human-level performance (and beyond!). We just need lots of individuals and generations. (A minimal sketch follows this list.)
  • Integration – A human is not just a brain in a box; it has eyes, ears, arms, legs, etc. How can an AI ever truly appreciate the meaning of a word like “red” without grounding it in some bodily sensation? We need to put everything we know how to do together in one place and let this creature experience the real physical world.
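
  As a concrete illustration of the evolutionary route, a bare-bones genetic algorithm; the bit-string genome and toy fitness function are placeholders for whatever representation a real system would evolve:

      import random

      def evolve(fitness, genome_len=20, pop_size=50, generations=100):
          """Minimal GA: tournament selection, one-point crossover, mutation."""
          pop = [[random.randint(0, 1) for _ in range(genome_len)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              def pick():                       # tournament selection, size 2
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              nxt = []
              while len(nxt) < pop_size:
                  mom, dad = pick(), pick()
                  cut = random.randrange(1, genome_len)   # one-point crossover
                  child = mom[:cut] + dad[cut:]
                  child = [g ^ (random.random() < 0.01) for g in child]  # mutate
                  nxt.append(child)
              pop = nxt
          return max(pop, key=fitness)

      # Toy fitness (count of 1-bits): each step is tiny, but with enough
      # individuals and generations the all-ones genome emerges.
      best = evolve(fitness=sum)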

  14. Commonsense
  This point of view says that we simply need to have the system understand the million or so facts that cover everyday existence. Most reasoning can then be based either directly on this pre-existing corpus, or on minimal extensions of it through analogy.
  Example: “Fred told the waiter he wanted some chips.”
  Background knowledge:
  • The word “he” means Fred, not the waiter.
  • This event took place in a restaurant.
  • Fred was a customer dining there.
  • Fred and the waiter were a few feet apart.
  • The waiter was working there, waiting on Fred at that time.
  • Fred wants potato chips, not wood chips – but he does not want some particular set of chips.
  • Both Fred and the waiter are live human beings.
  • Fred accomplished this by speaking words to the waiter.
  • Both of them speak the same language.
  • Both were old enough to talk, and the waiter was old enough to work.
  • Fred is hungry.
  • He wants and expects that in a few minutes the waiter will bring him a typical portion.
  • Fred will start eating soon after he gets the chips.
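
  A toy illustration of reasoning directly off a pre-existing fact corpus; the triple format and the facts themselves are invented for this example:

      # Commonsense store sketch: facts as (subject, relation, object) triples.
      FACTS = {
          ("waiter",       "works_in",    "restaurant"),
          ("customer",     "orders_from", "waiter"),
          ("ordering_food", "implies",    "hunger"),
      }

      def query(subject=None, relation=None, obj=None):
          """Return all stored facts matching a partial pattern (None = any)."""
          return [f for f in FACTS
                  if (subject  is None or f[0] == subject)
                  and (relation is None or f[1] == relation)
                  and (obj      is None or f[2] == obj)]

      # "Fred told the waiter ..." => the event probably took place where
      # waiters work, i.e. a restaurant.
      print(query(subject="waiter", relation="works_in"))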

  15. Learning (RL)
  It is too hard (or even impossible) to program a creature to react appropriately in all situations. A more robust and flexible approach is to provide guidance about which situations are good and which are bad, and let it learn how to respond itself. All it needs is many time steps of experience in successively less sheltered environments.
  Example: Lin’s video game player (1990)
  • simple 25x25 cell world (like PacMan)
    • moving items: I = agent, E = enemy
    • immobile items: O = obstacle, $ = food
    • energy indicator at bottom (H’s)
  • control system
    • only local view of environment
    • at each step pick one of 4 directions
    • tires with each step, bonus for getting food
    • dies if it collides with an enemy or starves
  • ultimate performance
    • learns how to move safely to collect 85% of the food
    • takes 150 trials of about 100 steps each (15K steps!)
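
  A minimal tabular Q-learning loop of the general kind behind such agents; the environment interface and the constants are placeholder assumptions (states are assumed hashable, e.g. the local view as a tuple):

      import random
      from collections import defaultdict

      ACTIONS = ["up", "down", "left", "right"]       # the four move directions

      def q_learning(env, episodes=150, alpha=0.1, gamma=0.9, epsilon=0.1):
          """Assumes env.reset() -> state and env.step(a) -> (state, reward, done)."""
          Q = defaultdict(float)                      # Q[(state, action)] values
          for _ in range(episodes):
              state, done = env.reset(), False
              while not done:
                  if random.random() < epsilon:       # occasionally explore
                      action = random.choice(ACTIONS)
                  else:                               # otherwise exploit
                      action = max(ACTIONS, key=lambda a: Q[(state, a)])
                  nxt, reward, done = env.step(action)
                  best_next = max(Q[(nxt, a)] for a in ACTIONS)
                  Q[(state, action)] += alpha * (reward + gamma * best_next
                                                 - Q[(state, action)])
                  state = nxt
          return Q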

  16. Emulation
  Preconceptions about mechanisms are all likely wrong; existence proofs (systems that demonstrably work) should be copied instead. What system should be emulated, and at what level?
  Caveat: might be copying irrelevant details. Artificial feathers are not needed for flying.

  17. Various emulation levels
  • Neural simulation – All our computer metaphors for the brain may be entirely wrong. We need to simulate, as accurately as possible, the actual neural hardware and see how it responds to various stimuli. Without this detailed modeling we may completely miss key aspects of how humans function.
  • Neural networks – The human mind presumably is a program that runs on hardware comprised of the human brain. However, brains are organized very differently than standard digital computers, so perhaps starting with a more biologically faithful substrate will make the AI problem easier. (A small sketch follows this list.)
  • Human development – How can we expect AIs to spring forth fully competent in several hours when infant development occurs over the course of many years? A lot is known about the various cognitive stages children progress through, and it has been suggested that potential AIs follow this same developmental program.
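
  The neural-network route in its simplest form: computation as layers of weighted-sum units rather than a stored symbolic program. A two-layer sketch with sigmoid units; the sizes and random weights are arbitrary:

      import math, random

      def sigmoid(x):
          return 1.0 / (1.0 + math.exp(-x))

      def layer(inputs, weights):
          """One fully connected layer of sigmoid units (last weight = bias)."""
          return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + ws[-1])
                  for ws in weights]

      # Random 3-input, 2-hidden, 1-output network: knowledge lives in the
      # weights, which a learning rule would normally adjust.
      w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
      w_out    = [[random.uniform(-1, 1) for _ in range(3)]]
      output = layer(layer([0.5, -0.2, 0.9], w_hidden), w_out)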

  18. Animal models
  Arguably humans evolved from “lower” animals, and genetically the difference is quite small. This suggests that many of the mechanisms and behaviors present in animals underlie human intelligence, and that the robust substrate provided by this heritage may be essential for cognition as we understand it.
  Example: Saksida’s recycling robot (1998)
  • uses dog “clicker training”
  • starts with an innate behavioral pattern:
    look around → see object → approach object → grab object
  • each transition has sensor preconditions
    • color of object, size of object, how far robot has traveled, etc.
  • reinforcement adjusts preconditions
    • changes mean values, tolerances, and importance
  • “shaping” gradually molds innate behavior into desired behavior
    • reward as soon as the robot approaches, then move the object further away each time
  • can quickly be taught many variants of the task
    • follow trainer, recycle only green toys, play fetch, etc.
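
  A sketch of how reinforcement might adjust one transition's sensor preconditions, treating each precondition as a mean plus a tolerance; the update rule is illustrative, not Saksida's exact algorithm:

      class Precondition:
          """Gate on one sensor: fires when the reading is near the mean."""
          def __init__(self, mean, tolerance):
              self.mean, self.tolerance = mean, tolerance

          def satisfied(self, reading):
              return abs(reading - self.mean) <= self.tolerance

          def reinforce(self, reading, reward, rate=0.2):
              if reward > 0:
                  # Pull the mean toward the rewarded reading, loosen slightly.
                  self.mean += rate * (reading - self.mean)
                  self.tolerance *= 1.05
              else:
                  # Firing went unrewarded: tighten the gate instead.
                  self.tolerance *= 0.9

      # Shaping: reward "approach" at 1 m, then keep moving the object farther
      # away; the distance precondition's mean migrates with the training.
      gate = Precondition(mean=1.0, tolerance=0.5)
      gate.reinforce(reading=1.5, reward=1)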

  19. Sociality
  An AI cannot be expected to be fully competent straight “out of the box”; instead it needs to learn from sympathetic humans and from cooperation with other robots. To do this it needs to understand how to participate effectively in social interactions such as advice taking, negotiation, and collaboration.
  Example: Breazeal’s Kismet (2002)
  • global emotional variables
    • valence, arousal, and stance (approach / avoid)
    • directly reflected in expression & ears
  • behaviors regulate interaction
    • seek, maintain, or escape particular stimuli
    • also influenced by affective state
  • interacts with user like a child
    • moves head, changes expression, babbles
    • sees faces, color, motion; hears tone of voice
    • wants prolonged interaction at the right level of stimulation
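
  A rough sketch of how global emotional variables might regulate interaction-seeking behavior; the decay rate and thresholds are invented for illustration, not Kismet's actual parameters:

      class Affect:
          """Global emotional state: valence, arousal, stance in [-1, 1]."""
          def __init__(self):
              self.valence = self.arousal = self.stance = 0.0

          def update(self, stimulus, decay=0.9):
              # stimulus is an intensity in [0, 1]; arousal tracks it slowly.
              self.arousal = decay * self.arousal + (1 - decay) * stimulus
              # Moderate stimulation is pleasant; too much or too little is not.
              self.valence = 1.0 - 2.0 * abs(stimulus - 0.5)

      def choose_behavior(affect):
          if affect.arousal > 0.8:           # overstimulated: withdraw
              return "escape_stimulus"
          if affect.arousal < 0.2:           # understimulated: solicit play
              return "seek_stimulus"
          return "maintain_interaction"      # right level: keep engaging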

  20. What good is this classification scheme?
  Commonalities within approaches:
  • Silver Bullets
    • Is the common base substrate symbolic, set-based, fuzzy, or something else?
  • Core Values
    • Are verbal queries needed to evaluate performance?
    • Are responses only available in social situations?
  • Emergence
    • Any way to determine the amount of data needed, based on other examples?
    • Can the asymptote be estimated from incremental performance gains?
  • Emulation
    • How can one determine if a model is “faithful enough”?
    • Any common principles: auto-encoders, entropy reduction, reinforcement?

  21. The ten-thousand-foot view
  1. Helps defuse some arguments
    • Are you objecting to the technology or the methodology?
    • Pacifist vs. Hawk:
      • “A gun won’t solve your problems!”
      • “If you are going to fight, at least a gun is a reasonable weapon.”
  2. Highlights cross-methodology issues
    • necessity for feedback
    • primacy of language
    • …

  22. Final thoughts
  The solution to the AI problem might be a combination of paths:
  • Core Value & Silver Bullet – a self-motivated language learner
  • Emulation & Emergence – a neural net trained with Web data
  • Others …
