
Computing with Abstract Neurons



  1. Computing with Abstract Neurons • McCulloch-Pitts neurons were initially used to model • pattern classification • size = small AND shape = round AND color = green AND location = on_tree => unripe • linking classified patterns to behavior • size = large OR motion = approaching => move_away • size = small AND direction = above => move_above • McCulloch-Pitts neurons can compute logical functions: • AND, NOT, OR
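The logical functions above can be sketched with a minimal threshold unit. This is an illustrative implementation, not code from the slides; the weights and thresholds are the standard choices for each gate.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, and a threshold.
# The unit fires (outputs 1) iff the weighted input sum meets the threshold.

def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: 1 iff sum(w * x) >= threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logical AND: both inputs must be active to reach threshold 2.
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 2)

# Logical OR: a single active input reaches threshold 1.
OR = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 1)

# Logical NOT: an inhibitory (negative) weight flips the input.
NOT = lambda x: mp_neuron([x], [-1], 0)
```

Any Boolean function can then be built by composing such units, which is the sense in which these abstract neurons "compute."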

  2. Distributed vs Localist Rep’n • What are the drawbacks of each representation?

  3. Distributed vs Localist Rep’n • What happens if you want to represent a group? • Localist: how many persons can you represent with n bits? n • Distributed: how many persons can you represent with n bits? 2^n • What happens if one neuron dies?
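The n-versus-2^n contrast can be made concrete by enumerating the codes each scheme allows over n binary units. A small illustrative sketch:

```python
from itertools import product

def localist_patterns(n):
    """Localist code: one dedicated active unit per item -> at most n items."""
    return [tuple(1 if i == j else 0 for j in range(n)) for i in range(n)]

def distributed_patterns(n):
    """Distributed code: any binary pattern over n units -> 2**n items."""
    return list(product([0, 1], repeat=n))

# With n = 4 units: 4 localist codes vs. 16 distributed codes.
```

The trade-off the slide asks about is visible here: the localist code wastes capacity but losing one unit erases exactly one item, while the distributed code is exponentially more compact but a dead unit corrupts many patterns at once.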

  4. Sparse Distributed Representation

  5. Natural Language Understanding • Natural Language Processing (NLP) is the overall category • Search, Machine Translation, Sentiment Analysis, etc. • Natural Language Understanding (NLU) ~ action without human intervention • Google Search vs. Google Car • Current Mainstream Approaches • Templates: Siri, Cortana, Google, Alexa (next slide) • Machine Learning • Natural Language Generation adds more complications • Habitability Problem • FCG – Luc Steels • Language Communication with Autonomous Systems (LCAS) • Focus on Action • Constrained domain of Autonomous System yields tractability

  6. Amazon Alexa Skills ~ Templates developer.amazon.com/alexa-skills-kit • GetHoroscope: what is the horoscope for {Sign} • GetHoroscope: what will the horoscope for {Sign} be on {Date} • GetHoroscope: get me my horoscope • … • MatchSign: do {FirstSign} and {SecondSign} get along • MatchSign: what is the relationship between {FirstSign} and {SecondSign}
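Template-based intent matching of this kind can be sketched by compiling each slotted utterance into a regular expression. This is a hypothetical illustration of the idea, not Amazon's actual implementation; the `TEMPLATES` list and `match_intent` function are invented for the example.

```python
import re

# (intent name, sample utterance with {Slot} placeholders), as in the
# Alexa-style templates above.
TEMPLATES = [
    ("GetHoroscope", "what is the horoscope for {Sign}"),
    ("MatchSign", "do {FirstSign} and {SecondSign} get along"),
]

def match_intent(utterance):
    """Return (intent, slot bindings) for the first matching template."""
    for intent, template in TEMPLATES:
        # Turn each "{Slot}" placeholder into a named capture group.
        pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>\\w+)", template)
        m = re.fullmatch(pattern, utterance, re.IGNORECASE)
        if m:
            return intent, m.groupdict()
    return None, {}
```

The brittleness of this approach is exactly the "habitability problem" mentioned on the previous slide: any phrasing not anticipated by a template fails to match.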

  7. Embodiment • “Of all of these fields, the learning of languages would be the most impressive, since it is the most human of these activities. This field, however, seems to depend rather too much on the sense organs and locomotion to be feasible.” Alan Turing (Intelligent Machinery, 1948)

  8. Actionability in Integrated Cognitive Science 1. All living things act; acting is what living things do. 2. Natural selection constrains the fitness (utility) of these actions. 3. Actionability is an agent's assessment of the expected utility of an external or internal action. 4. Volition is the key concept; agents perform volitional as well as automatic actions. 5. This defines, but does not claim to solve, actionability as an integrating issue for Cognitive Science. Learning can improve actionability estimates. 6. No answers are suggested for hard mind-body problems like subjective agency. 7. Actionability calculation often involves simulation of action and its consequences. Feldman JA (2016) Actionability and Simulation: No Representation without Communication. Front. Psychol. 7:1204. doi:10.3389/fpsyg.2016.01204

  9. Introduction: NTL • NTL’s main tenets • direct neural realization • continuity of thought and language • both of which entail a commitment to parallel processing and spreading activation • importance of language communities • conventional beliefs, grammars • simulation semantics • language understanding involves some of the brain circuitry involved in perception, motion, and emotion • best-fit process • underlying learning, understanding, and production of language

  10. Basic Questions Addressed • How could our brain, a mass of chemical cells, produce language and thought? • How much can we know about our own experience? • How do we learn new concepts? • Does our language determine how we think? • Is language Innate? • How do children learn grammar? • Why make computational brain models of thought? • Will our robots understand us? • How did language evolve? • What is the nature of subjective experience?

  11. Simulation-based language understanding (diagram): the utterance “Harry walked to the cafe.” is processed, using constructions and general knowledge, by an Analysis Process that produces a Simulation Specification (schema: walk; trajector: Harry; goal: cafe), which together with the current Belief State drives a Simulation.

  12. Ideas from Cognitive Linguistics • Embodied Semantics (Lakoff, Johnson, Sweetser, Talmy) • Radial categories (Rosch 1973, 1978; Lakoff 1985) • mother: birth / adoptive / surrogate / genetic, … • Profiling (Langacker 1989, 1991; cf. Fillmore XX) • hypotenuse, buy/sell (Commercial Event frame) • Metaphor and metonymy (Lakoff & Johnson 1980, …) • ARGUMENT IS WAR, MORE IS UP • The ham sandwich wants his check. • Mental spaces (Fauconnier 1994) • The girl with blue eyes in the painting really has green eyes. • Conceptual blending (Fauconnier & Turner 2002, inter alia) • workaholic, information highway, fake guns • “Does the name Pavlov ring a bell?” (from a talk on ‘dognition’!)

  13. Image schemas • Trajector / Landmark (asymmetric) • The bike is near the house • ? The house is near the bike • Boundary / Bounded Region • a bounded region has a closed boundary • Topological Relations • Separation, Contact, Overlap, Inclusion, Surround • Orientation • Vertical (up/down), Horizontal (left/right, front/back) • Absolute (E, S, W, N) (diagram: a trajector TR within a bounded region, with its boundary and a landmark LM)

  14. Schema Formalism
      SCHEMA <name>
        SUBCASE OF <schema>
        EVOKES <schema> AS <local name>
        ROLES
          <self role name>: <role restriction>
          <self role name> <-> <role name>
        CONSTRAINTS
          <role name> <- <value>
          <role name> <-> <role name>
          <setting name> :: <role name> <-> <role name>
          <setting name> :: <predicate> | <predicate>

  15. A Simple Example
      SCHEMA hypotenuse
        SUBCASE OF line-segment
        EVOKES right-triangle AS rt
        ROLES
          (inherited from line-segment)
        CONSTRAINTS
          SELF <-> rt.long-side

  16. Source-Path-Goal
      SCHEMA spg
        ROLES
          source: Place
          path: Directed Curve
          goal: Place
          trajector: Entity

  17. Translational Motion
      SCHEMA translational motion
        SUBCASE OF motion
        EVOKES spg AS s
        ROLES
          mover <-> s.trajector
          source <-> s.source
          goal <-> s.goal
        CONSTRAINTS
          before :: mover.location <-> source
          after :: mover.location <-> goal
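The schema formalism above can be sketched in ordinary code: EVOKES becomes instantiating another schema, and a "<->" binding becomes two roles sharing one value. The class names and fields below are an illustrative rendering, not the ECG implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SPG:
    """Source-Path-Goal schema with its four roles."""
    source: str = None      # role: Place
    path: str = None        # role: Directed Curve
    goal: str = None        # role: Place
    trajector: str = None   # role: Entity

@dataclass
class TranslationalMotion:
    """SUBCASE OF motion; EVOKES spg AS s with role bindings."""
    mover: str
    source: str
    goal: str
    s: SPG = field(init=False)  # the evoked spg schema

    def __post_init__(self):
        # Bindings: mover <-> s.trajector, source <-> s.source, goal <-> s.goal.
        self.s = SPG(source=self.source, goal=self.goal, trajector=self.mover)

# "Harry walked to the cafe" instantiates the schema with bound roles:
walk = TranslationalMotion(mover="Harry", source="home", goal="cafe")
```

The before/after CONSTRAINTS are what the simulation step enforces over time: the mover's location matches the source before execution and the goal after.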

  18. Simulation-based language understanding (diagram, repeated from slide 11): the utterance “Harry walked to the cafe.” is processed, using constructions and general knowledge, by an Analysis Process that produces a Simulation Specification (schema: walk; trajector: Harry; goal: cafe), which together with the current Belief State drives a Simulation.

  19. Simulation specification The analysis process produces a simulation specification that • includes image-schematic, motor control and conceptual structures • provides parameters for a mental simulation

  20. Simulation Semantics • BASIC ASSUMPTION: SAME REPRESENTATION FOR PLANNING AND SIMULATIVE INFERENCE • Evidence for common mechanisms for recognition and action (mirror neurons) in the F5 area (Rizzolatti et al. 1996; Gallese 1996; Buccino 2002) and from motor imagery (Jeannerod 1996) • IMPLEMENTATION: x-schemas affect each other by enabling, disabling, or modifying execution trajectories. Whenever the CONTROLLER schema makes a transition it may set, get, or modify state, leading to triggering or modification of other x-schemas. State is completely distributed (a graph marking) over the network. • RESULT: INTERPRETATION IS IMAGINATIVE SIMULATION!

  21. Active representations • Many inferences about actions derive from what we know about executing them • Representation based on stochastic Petri nets captures the dynamic, parameterized nature of actions • Used for acting, recognition, planning, and language • Walking: bound to a specific walker with a direction or goal; consumes resources (e.g., energy); may have a termination condition (e.g., walker at goal); ongoing, iterative action (diagram: walk x-schema with walker=Harry, goal=home, an energy resource, and a walker-at-goal condition)
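The Petri-net idea behind x-schemas can be sketched in a few lines: state is a distributed marking (tokens on places), and a transition fires only when its input places are marked, consuming and producing tokens. This is an illustrative toy, not the NTL system's stochastic Petri nets; the place names (`ready`, `energy`, `at_goal`) are invented for the example.

```python
class PetriNet:
    """Minimal Petri net: a marking plus transition firing."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count

    def enabled(self, inputs):
        """A transition is enabled iff every input place holds a token."""
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, inputs, outputs):
        """Consume one token per input place, produce one per output place."""
        if not self.enabled(inputs):
            return False
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# A toy "walk" x-schema: each iterated step consumes energy,
# and the action terminates when the walker reaches the goal.
net = PetriNet({"ready": 1, "energy": 3})
while net.fire(["ready", "energy"], ["ready"]):  # ongoing, iterative steps
    pass                                         # stops when energy runs out
net.fire(["ready"], ["at_goal"])                 # termination: walker at goal
```

This makes the slide's point concrete: inferences like "walking consumes energy" and "walking ends at the goal" fall out of the execution structure itself.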

  22. Learning Verb Meanings (David Bailey) A model of children learning their first verbs. Assumes the parent labels the child's actions; the child knows the parameters of the action and associates them with the word. The program learns well enough to: 1) label novel actions correctly, and 2) obey commands using new words (in simulation). The system works across languages, and the mechanisms are neurally plausible.

  23. System Overview

  24. Learning Two Senses of PUSH • Model merging based on Bayesian MDL
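The Bayesian/MDL criterion behind model merging can be sketched as a description-length trade-off: merge two candidate word senses only if one merged model describes the data (model cost plus data cost) more cheaply than two separate models. The cost functions and the fixed per-parameter prior below are simplifying assumptions for illustration, not Bailey's actual model.

```python
import math

def data_cost(counts):
    """Bits to encode the observations under the ML multinomial estimate."""
    total = sum(counts.values())
    return -sum(c * math.log2(c / total) for c in counts.values() if c)

def model_cost(counts, bits_per_param=4.0):
    """Crude prior: a fixed description cost per model parameter."""
    return bits_per_param * len(counts)

def should_merge(a, b):
    """Merge two sense models iff the combined description length shrinks."""
    merged = {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}
    separate = model_cost(a) + model_cost(b) + data_cost(a) + data_cost(b)
    together = model_cost(merged) + data_cost(merged)
    return together < separate
```

With feature counts that look alike, merging wins (one model, no extra data cost); with sharply different counts, the data cost of the blurred merged model outweighs the saved model cost, so the two senses (e.g., of PUSH) stay distinct.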

  25. Event Structure in Language (Srini Narayanan) • Fine-grained • Rich Notion of Contingency Relationships • Phenomena: Aspect, Tense, Force-dynamics, Uncertainty, Modals, Counterfactuals • Event Structure Metaphor: • Phenomena: Abstract Actions are conceptualized as Motion and Manipulation • Schematic Inferences are preserved • Aspect: ways languages describe the structure of events using a variety of lexical and grammatical devices.

  26. Task: Interpret simple discourse fragments / blurbs • France fell into recession. Pulled out by Germany. • US Economy on the verge of falling back into recession after moving forward on an anemic recovery. • Indian Government stumbling in implementing Liberalization plan. • Moving forward on all fronts, we are going to be ongoing and relentless as we tighten the net of justice. • The Government is taking bold new steps. We are loosening the stranglehold on business, slashing tariffs and removing obstacles to international trade.

  27. Event Structure Metaphor • States are Locations • Changes are Movements • Causes are Forces • Causation is Forced Movement • Actions are Self-propelled Movements • Purposes are Destinations • Means are Paths • Difficulties are Impediments to Motion • External Events are Large, Moving Objects • Long-term, Purposeful Activities are Journeys

  28. Results • The model was implemented and tested on discourse fragments from a database of 50 newspaper stories in international economics from standard sources such as the WSJ, NYT, and the Economist. • Results show that motion terms are often the most effective way to provide the following types of information about abstract plans and actions: • Information about uncertain events and dynamic changes in goals and resources (sluggish, fall, off-track, no steam) • Information about evaluations of policies and economic actors and communicative intent (stranglehold, bleed) • Communicating complex, context-sensitive and dynamic economic scenarios (stumble, slide, slippery slope) • Communicating complex event structure and aspectual information (on the verge of, sidestep, giant leap, small steps, ready, set out, back on track) • ALL THESE BINDINGS RESULT FROM REFLEX, AUTOMATIC INFERENCES PROVIDED BY X-SCHEMA BASED INFERENCES.
