
Ontologies of Information Structure and Commonsense Psychology

This presentation explores the use of ontologies in understanding information structure and commonsense psychology. It covers topics such as diagrams, graphs, maps, photographs, and videos as effective ways to answer questions. It also discusses concepts and propositions related to grounding symbols in cognition, intention and convention in communication, mutual belief, symbol systems, speech and text interpretation, maps, diagrams, and more.


Presentation Transcript


  1. Ontologies of Information Structure and Commonsense Psychology. Jerry R. Hobbs, USC/ISI, Marina del Rey, CA

  2. Information Structure

  3. Motivation. The best answer to a question is often a diagram, a graph, a map, a photograph, or a video. What is the Krebs cycle? How has the average height of adult American males varied over the years? How did the Native Americans get to America? What does Silvio Berlusconi look like? What happened on September 11, 2001?

  4. Grounding Symbols in Cognition. cause(perceive(a,x), cognize(a,c)), where x is an object, state, event, process, absence, ... and c is a concept (including propositions).

  5. Grounding Symbols in Cognition. cause(perceive(a,smoke), cognize(a,fire)); cause(perceive(a,cloud), cognize(a,dog)); cause(perceive(a,bell), cognize(a,food)). There is no necessary causal connection between x and c. This schema makes symbols possible -- cause(perceive(a,x), cognize(a,concept-of(x))) -- but other things as well.

  6. Grounding Symbols in Cognition. cause(present(b,x,a), perceive(a,x)); cause(perceive(a,x), cognize(a,c)). Example: cause(car beeps, driver hears beep); cause(driver hears beep, driver remembers seat belt).
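
To make the schema concrete, here is a minimal Python sketch (not part of the original slides) that encodes the slide's cause facts as pairs and follows them forward; the string encodings and the effects helper are illustrative assumptions.

# Hypothetical encoding of the slide's causal chain as (antecedent, consequent) pairs.
CAUSE = [
    ("present(car, beep, driver)", "perceive(driver, beep)"),   # car beeps -> driver hears beep
    ("perceive(driver, beep)", "cognize(driver, seat_belt)"),   # hearing beep -> remembering seat belt
]

def effects(event, facts=CAUSE):
    """Follow cause(e1, e2) links forward from an initiating event."""
    chain = [event]
    while True:
        nxt = [e2 for (e1, e2) in facts if e1 == chain[-1]]
        if not nxt:
            return chain
        chain.append(nxt[0])

print(effects("present(car, beep, driver)"))
# ['present(car, beep, driver)', 'perceive(driver, beep)', 'cognize(driver, seat_belt)']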

  7. Intention and Convention in Communication. Unintentional: fidget --> nervous; “ouch” --> pain. Presenter intends the concept, but not recognition of intent: my door is closed --> I’m not in.

  8. Recognizing Intent. know(b, cause(present(b,x,a), cognize(a,c))); goal(b, cognize(a,c)); goal(b,g1) & know(b, cause(g2,g1)) & etc --> goal(b,g2). So b has the goal present(b,x,a), an executable action, so he does it. a looks for a causal explanation of present(b,x,a) and comes up with exactly this. Intention is recognized.
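
A minimal Python sketch of the two directions described on this slide, under the simplifying assumption that the causal rule is shared as a lookup table: b back-chains from the goal to an executable action, and a explains the observed action with the same rule. All names are illustrative.

# Shared (mutually known) causal rule: presenting x to a causes a to cognize c.
RULES = [("present(b,x,a)", "cognize(a,c)")]

def plan(goal, rules=RULES):
    """b's side: back-chain from a goal to an action that causes it."""
    for action, effect in rules:
        if effect == goal:
            return action          # goal(b, action) -- an executable action
    return None

def explain(observed_action, rules=RULES):
    """a's side: find a goal that would make the observed action rational."""
    for action, effect in rules:
        if action == observed_action:
            return effect          # b must have wanted a to cognize c
    return None

assert explain(plan("cognize(a,c)")) == "cognize(a,c)"   # intention is recognized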

  9. Gricean Nonnatural Meaning If an agent b has a goal g1 and g2 tends to cause g1, then b may have as a goal that g2 cause g1. If an agent b has as a goal that g2 cause g1, then b has the goal g2. When a recognizes this plan, he will recognize not only b’s goal to have a cognize c, but also b’s intention that a do so by virtue of the causal relation between b’s presenting x and a’s cognizing c.

  10. Mutual Belief and Convention. Mutual Belief: mb(s,p) & member(a,s) --> believe(a,p); mb(s,p) --> mb(s,mb(s,p)). Structure of a communicative convention: mb(s, cause(present(b,x,a), cognize(a,c))), where member(a,s) and member(b,s); here x is the symbol and c its content, i.e., represent(x,c,s). E.g., a red flag with a white diagonal, in the community of boaters, means “diver below”.

  11. Composition in Symbol Systems. Symbol System vs. Content Domain: atomic symbols are interpreted as basic concepts/propositions; a composition operation on symbols corresponds to a composition operation on content; composite symbols are interpreted as complex concepts/propositions.

  12. Speech and Text (within sentences). Symbol System vs. Content Domain: words (“a”, “man”, “works”) are interpreted as basic propositions (man(x), work(y)); concatenation corresponds to predicate-argument relations and conjunction; sentences (“a man works”) are interpreted as complex propositions (man(x) & work(x)).
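
As a rough illustration of this composition, here is a hedged Python sketch in which words contribute basic propositions and concatenation composes them by conjunction over a shared variable; the tiny lexicon and the single variable x are simplifying assumptions.

# Illustrative lexicon: each content word contributes a basic proposition over a variable.
LEXICON = {"man": "man", "works": "work", "a": None}   # "a" contributes no predication here

def interpret(words):
    """Compose word meanings: concatenation -> conjunction, shared variable x."""
    props = [f"{LEXICON[w]}(x)" for w in words if LEXICON.get(w)]
    return " & ".join(props)

print(interpret(["a", "man", "works"]))   # man(x) & work(x)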

  13. Speech and Text (Discourse). Symbol System vs. Content Domain: sentences are interpreted as sentence meanings; discourse concatenation corresponds to coherence relations (causality, similarity, figure-ground); an augmented discourse is interpreted as the discourse meaning.

  14. Tables. A cell b in row a under column heading R conveys R(a,b). Spatial arrangement ==> predicate-argument relations.
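
A small Python sketch of this reading of tables, with an invented example table; the headers, rows, and output format are assumptions, not part of the slide.

# A table as column headers plus rows keyed by a row label (all names illustrative).
HEADERS = ["population", "capital"]
ROWS = {"France": ["68M", "Paris"], "Italy": ["59M", "Rome"]}

def predications(headers, rows):
    """Spatial arrangement -> predicate-argument relations R(a, b)."""
    return [f"{col}({a},{b})"
            for a, cells in rows.items()
            for col, b in zip(headers, cells)]

print(predications(HEADERS, ROWS))
# ['population(France,68M)', 'capital(France,Paris)', ...]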

  15. Beeps in Car. Atomic symbol: beep ==> something’s wrong. Composite symbols: beep ..... beep ..... beep ..... ==> fasten seat belt. If car is running: beep beep beep beep ==> door is still open. If car is off: beep beep beep beep ==> lights are still on.
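
A hedged Python sketch of how such context-dependent composite symbols might be interpreted; the exact beep patterns and the car_running flag are illustrative.

def interpret_beeps(pattern, car_running):
    """Interpret beep patterns; the same composite symbol means different
    things in different contexts (a simplified reading of the slide)."""
    if pattern == "beep":
        return "something's wrong"
    if pattern == "beep ... beep ... beep":        # slow, repeated
        return "fasten seat belt"
    if pattern == "beep beep beep beep":           # rapid burst
        return "door is still open" if car_running else "lights are still on"
    return "unknown signal"

print(interpret_beeps("beep beep beep beep", car_running=False))  # lights are still on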

  16. Maps. Underlying regions of a single color/pattern ==> meaningful region; icons ==> entities; overlay of icons on the field ==> location of entities; labels ==> names; icon and label adjacent ==> name of entity; internal structure of icons ==> categories of entities.

  17. Process Diagrams (Futrelle, 1999). Icons ==> entities; adjacent grouped icons ==> states; adjacent groups with arrows between them ==> state transitions.

  18. Documents (Scott & Powers, 2003). Title ==> conveys content of body; body ==> main detailed content; adjacent paragraphs (modulo page and column breaks) ==> read sequentially; diagram near description ==> coreference. Similarly for Web pages, PowerPoint presentations, ...

  19. Face-to-Face Conversation. Atomic elements: speech, prosody, facial expression, gaze direction, body position, gestures with hands and arms. Composition operators: temporal adjacency, temporal synchrony. Need to determine the meaning/function of the various behaviors.

  20. Larger-Scale Communicative Performances: lectures with PowerPoint slides, plays, demos, ...

  21. Coreference: two noun phrases; icon and label; the same icon in two state groups in a diagram; a region of a photo, a noun phrase in its caption, and a phrase in the text; an iconic gesture and a phrase in speech. Useful for image search by keywords.

  22. Modalities and Media (Hovy & Arens, 1990; Allwood, 2002). Channels of perception: optical, acoustic, chemical, pressure, temperature, ...; greatest opportunities for composition. Communication devices -- primary: speech, gesture; secondary: writing, drawing, telephones, videotape, computer terminals, ... Advantages and disadvantages of each; e.g., visual: exploit 2-D structure to convey relations.

  23. Manifestations of Symbolic Entities (Pease & Niles, 2001). We group together classes of symbolic entities sharing the same content and call them first-class entities. manifest(x1,x) & represent(x,c,s) --> represent(x1,c,s) (defeasibly -- if P is in the content of x, then defeasibly it is in the content of x1). Example chain: Hamlet the play; the text of Hamlet; an edition of Hamlet; a copy of that edition; the performance of Hamlet; a particular performance of Hamlet; a videotape of that performance; a copy of that videotape.
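
A minimal Python sketch of the defeasible inheritance rule manifest(x1,x) & represent(x,c,s) --> represent(x1,c,s), walking up an assumed manifestation chain for Hamlet; the entity names and the implicit community are illustrative.

# manifest(x1, x): x1 is a manifestation of x (chains bottom out at the first-class entity).
MANIFEST = {
    "copy_of_videotape": "videotape_of_performance",
    "videotape_of_performance": "particular_performance",
    "particular_performance": "hamlet_the_play",
    "copy_of_edition": "edition_of_hamlet",
    "edition_of_hamlet": "text_of_hamlet",
    "text_of_hamlet": "hamlet_the_play",
}
REPRESENT = {"hamlet_the_play": "hamlet_content"}   # represent(x, c, s), community s left implicit

def content_of(x):
    """Defeasibly inherit represented content up the manifestation chain."""
    while x not in REPRESENT and x in MANIFEST:
        x = MANIFEST[x]
    return REPRESENT.get(x)

print(content_of("copy_of_videotape"))   # hamlet_content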

  24. Commonsense Psychology (work with Andrew Gordon, USC/ICT)

  25. Methodology (Gordon, 2000). Agents plan, so to discover what agents know, investigate strategies. Picked 10 planning domains: politics, warfare, personal relationships, artistic performance, sales, immunology, animal camouflage, ... Interviewed experts to learn strategies, resulting in 372 strategies. Rewrote the strategies in a controlled vocabulary -- 988 terms. Classified the terms into 48 representational areas (space, time, ...): 18 general knowledge, 30 commonsense psychology. Enrich each representational area by text mining. Formalize.

  27. Theories So Far: Memory, Knowledge Management, Envisioning (Thinking), Goals and Planning.

  28. Why is Memory Important? We plan to remember actions/information at the appropriate time. We are responsible for remembering: Why was Mary angry that John forgot her birthday? But forgetting is often a less serious breach than some other reason: “Why didn’t you get me a present?” “I forgot it was your birthday.” vs. “I didn’t want to.”

  29. Naive Model of Memory. Concepts are stored from the Focus of Attention into Memory and retrieved from Memory back into the Focus of Attention. If a concept is in memory, then it was stored.

  30. Accessibility. Concepts in memory have varying accessibility; there is a threshold below which concepts are not retrievable.

  31. Associations and Accessibility [diagram: a network of concepts linked by associations]

  32. Associations and Accessibility. Thinking of concepts makes associated concepts more accessible. This gives agents partial control over memory retrieval. Technique of memorization: rich associations.
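
A hedged Python sketch of the accessibility-plus-associations picture from slides 30-32: concepts carry accessibility scores, only those above a threshold are retrievable, and thinking of a concept boosts its associates. All numbers are invented for illustration.

THRESHOLD = 0.5
accessibility = {"birthday": 0.9, "present": 0.4, "cake": 0.3}
associations = {"birthday": ["present", "cake"]}

def retrievable(concept):
    """A concept can be retrieved only if it is above the accessibility threshold."""
    return accessibility.get(concept, 0.0) >= THRESHOLD

def think_of(concept, boost=0.2):
    """Thinking of a concept makes its associated concepts more accessible."""
    for assoc in associations.get(concept, []):
        accessibility[assoc] = min(1.0, accessibility[assoc] + boost)

print(retrievable("present"))   # False -- below threshold, effectively forgotten
think_of("birthday")
print(retrievable("present"))   # True  -- association raised it above threshold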

  33. “Remember” and “Forget”. In memory above the accessibility threshold --> remember; retrieve --> remember; cause self to retrieve --> remember; cause self to retrieve after some effort --> remember. Forget a concept <--> the concept drops below the accessibility threshold.

  34. Remembering for a Time We store concepts in memory until we need them and then forget them. Where did I park my car today? vs. Where did I park my car on January 4? We use memory to satisfy knowledge prerequisites for planned actions.

  35. Knowledge Management: Belief. Reify agents and propositions: believe(a,p). Reasoning is possible inside belief: believe(a,p) & believe(a,p-->q) & etc --> believe(a,q). Perception causes belief (seeing is believing). Communication tends to cause belief. BDI: we act in ways that maximize satisfaction of our goals, given our beliefs.

  36. Graded Belief (Friedman & Halpern, 2001). 0 <= gb(a,p) <= 1; gb(a, p&q) <= min(gb(a,p), gb(a,q)); gb(a, p & [p-->q]) = gb(a,q); gb(a,p) = 1 <--> believe(a,p). The higher the graded belief, the more likely the agent is to act on it.
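
A small Python sketch of one reading of these rules, assuming min as the bound for conjunction; the particular grades are invented.

gb = {"p": 0.8, "p->q": 0.6}          # graded beliefs in [0, 1]

def gb_and(*props):
    """gb(a, p & q) <= min(gb(a,p), gb(a,q)); take the min as the bound."""
    return min(gb[p] for p in props)

def gb_modus_ponens():
    """gb(a, p & [p->q]) = gb(a,q): belief in q inherits the grade of the premises."""
    return gb_and("p", "p->q")

gb["q"] = gb_modus_ponens()
print(gb["q"])                         # 0.6
print(gb["q"] == 1.0)                  # False -- full belief only at grade 1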

  37. Knowledge Domain. Sentence = a set of propositions + a claim, e.g., king(x,France) & bald(x), where king(x,France) is the propositional content and bald(x) is the claim. A knowledge domain has a set of characteristic predicates and is a set of sentences all of whose claims have predicates in the characteristic set. Expert: an agent is defeasibly an expert in a knowledge domain if the agent knows a sampling of facts in the knowledge domain (tests, inference from displays of knowledge).

  38. Mutual Belief. mb(s,p) & member(a,s) --> believe(a,p); mb(s,p) --> mb(s,mb(s,p)). These rules are themselves mutually believed. One can show that if a knows b is a member of s and a knows s mutually believes p, then a knows that b believes p. This supports inference of who knows what / who is an expert in what from membership in communities.
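
A minimal Python sketch of the membership-based inference, with an invented boaters community; the data structures are assumptions, not the formal theory.

MEMBERS = {"boaters": {"a", "b"}}
MUTUAL_BELIEFS = {"boaters": {"red_flag_means_diver_below"}}

def believes(agent, p):
    """mb(s,p) & member(a,s) --> believe(a,p)."""
    return any(p in props and agent in MEMBERS[s]
               for s, props in MUTUAL_BELIEFS.items())

def knows_other_believes(a, b, p):
    """If a knows b is in s and knows mb(s,p), a can conclude believe(b,p)."""
    return any(a in MEMBERS[s] and b in MEMBERS[s] and p in MUTUAL_BELIEFS[s]
               for s in MEMBERS)

print(believes("b", "red_flag_means_diver_below"))                  # True
print(knows_other_believes("a", "b", "red_flag_means_diver_below")) # True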

  39. Causal Complex. causal-complex(s,e): s = {e1, e2, e3, e4, ...} is a causal complex for the effect e. When every event or state in s happens or holds, then e happens or holds. All eventualities in s are relevant: causally-involved(ei,e).

  40. Cause. In a causal complex, some eventualities are distinguished as causes. Causes are the focus of planning, prediction, explanation, and interpreting discourse (but not diagnosis). Example: power on (presumable) + finger in socket (cause) --> shock. What is presumable depends on the task, context, knowledge base, ...
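
A hedged Python sketch of a causal complex with causes distinguished from presumable background, using the finger-in-socket example; the representation is an assumption.

# causal-complex(s, e): when everything in s holds, e happens.
CAUSAL_COMPLEX = {
    "effect": "shock",
    "eventualities": {"power_on", "finger_in_socket"},
    "causes": {"finger_in_socket"},        # the rest is presumable background
}

def effect_occurs(holding, complex_=CAUSAL_COMPLEX):
    """The effect happens when every eventuality in the causal complex holds."""
    return complex_["eventualities"] <= set(holding)

def causes(complex_=CAUSAL_COMPLEX):
    """Causes are the eventualities worth planning with or citing in explanation."""
    return complex_["causes"]

print(effect_occurs({"power_on"}))                       # False
print(effect_occurs({"power_on", "finger_in_socket"}))   # True
print(causes())                                          # {'finger_in_socket'}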

  41. Envisioning (Thinking). A causal system: a network of eventualities e1, ..., e11 that are causally involved with one another.

  42. Envisioning. Contiguous causal systems: connected subsets of the causal system (e.g., {e4, e5, e6}).

  43. Envisioning. An envisioned causal system slice: the part of the causal system the agent has in focus.

  44. Envisioned Causal System. A sequence of envisioned contiguous causal systems, used for prediction and explanation.
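
A small Python sketch of envisioning over a causal graph, where prediction extends a slice forward along causally-involved links and explanation extends it backward; the graph itself is invented.

# causally-involved(e_i, e_j) links, as a forward adjacency map (illustrative graph).
CAUSAL_LINKS = {"e1": ["e4"], "e4": ["e6"], "e6": ["e9"], "e3": ["e6"]}

def predict(slice_):
    """Extend the envisioned slice forward along causal links (prediction)."""
    return set(slice_) | {e2 for e in slice_ for e2 in CAUSAL_LINKS.get(e, [])}

def explain(slice_):
    """Extend the envisioned slice backward along causal links (explanation)."""
    return set(slice_) | {e1 for e1, outs in CAUSAL_LINKS.items()
                          if any(e in slice_ for e in outs)}

focus = {"e4", "e6"}                     # a contiguous slice the agent has in focus
print(predict(focus))                    # {'e4', 'e6', 'e9'}
print(explain(focus))                    # {'e1', 'e3', 'e4', 'e6'}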

  45. Correspondence with Reality If the events and states in the ECS are believed, the ECS is the “current world understanding” Need an account of how graded belief is increased or decreased as predictions and explanations are verified or falsified.

  46. Goals and Planning. Causal Knowledge: (∀ e1,x)[p'(e1,x) --> (∃ e2)[q'(e2,x) & cause(e1,e2)]], or, p causes q; (∀ e1,x)[p'(e1,x) --> (∃ e2)[q'(e2,x) & enable(e1,e2)]], or, p enables q, where enable(e1,e2) <--> cause(~e1,~e2). Planning Axioms: (∀ a,e1,e2)[goal(a,e2) & cause(e1,e2) & etc --> goal(a,e1)]; (∀ a,e1,e2)[goal(a,e2) & enable(e1,e2) --> goal(a,e1)]; in both cases, subgoal(a,e1,e2).
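
A minimal Python sketch of back-chaining with these planning axioms (ignoring the "etc" side conditions); the causal knowledge base is invented.

# Causal knowledge: (subgoal, relation, goal) triples, all names illustrative.
KNOWLEDGE = [
    ("strike_match", "cause", "flame"),
    ("have_match", "enable", "strike_match"),
]

def subgoals(goal, kb=KNOWLEDGE):
    """goal(a,e2) & cause(e1,e2) --> goal(a,e1); likewise for enable."""
    return [e1 for (e1, rel, e2) in kb if e2 == goal and rel in ("cause", "enable")]

def plan(goal, kb=KNOWLEDGE):
    """Recursively expand a goal into a subgoal tree."""
    return {goal: [plan(g, kb) for g in subgoals(goal, kb)]}

print(plan("flame"))
# {'flame': [{'strike_match': [{'have_match': []}]}]}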

  47. Goals and Planning. Goals can be ... competitive, adversarial, auxiliary, ...

  48. Collective Goals. Groups can have goals: all agents in the group mutually believe the group has the goal; all agents have the individual goal that the group achieve its goal. Collective goals must bottom out in individual agents’ actions. Organizations are such collective plans made concrete; an agent’s role in an organization is the actions the agent carries out as a subgoal in the collective plan.

  49. Where Do Goals Come From? A false mystery. Stipulate: goal(a, thrive(a)). All else is causal knowledge/beliefs about what causes thriving.

  50. Goal Themes. goal-theme(s,t), where s is a group of agents and t is a set of possible eventualities: goal-theme(s,t) <--> (∀ a,e)[member(a,s) & member(e,t) & etc --> goal(a,e)]. From group membership, we can infer beliefs and goals and thus behavior (defeasibly); e.g., he’s a puritan / hedonist / geek / ...
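
A small Python sketch of defeasibly inferring goals from group membership via goal themes; the groups and goals are invented examples.

# goal-theme(s, t): group s -> characteristic set of goals t (illustrative values).
GOAL_THEMES = {
    "puritan":  {"work_hard", "avoid_indulgence"},
    "hedonist": {"maximize_pleasure"},
}
MEMBERSHIP = {"he": {"hedonist"}}

def inferred_goals(agent):
    """member(a,s) & goal-theme(s,t) & member(e,t) --> (defeasibly) goal(a,e)."""
    goals = set()
    for s in MEMBERSHIP.get(agent, set()):
        goals |= GOAL_THEMES.get(s, set())
    return goals

print(inferred_goals("he"))   # {'maximize_pleasure'} -- a defeasible inference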
