
CS 385 Fall 2006 Chapter 7



  1. CS 385, Fall 2006, Chapter 7: Knowledge Representation (7.1.1, 7.1.5, 7.2)

  2. Knowledge Representation
What do we have to represent what we know?
• predicate calculus
• production systems
What other representations do we have?
• mathematical objects (functions, matrices, ...)
• data structures (CS 132)
• use-case diagrams (CS 310)
• ER diagrams (CS 325)
What more do we need? A distinction between
• the representational scheme (predicate calculus)
• the medium it is implemented in (PROLOG)

  3. Categories of Representational Schemes
Logical:
• predicate calculus, with PROLOG to implement it
• there are other logics
Procedural:
• production systems
• rule-based systems (expert systems, later)
Network:
• nodes are objects, arcs are relations (a map)
Structured networks:
• extensions to networks where each node is more complex
This chapter: the last two.

  4. Logic versus Association
Logical representations:
• predicate calculus loses important real-world knowledge
• person(X) → breathes(X) misses a lot of meaning
• OK for checking grammar, but not for interpreting a sentence
Associations (semantic net):
• the meaning of an object is defined in terms of a network of associations with other objects: lungs, oxygen, ...
• this alternative to logic is used by psychologists and linguists to characterize the nature of human understanding
• inheritance hierarchy (canary isa bird; associate properties with each)
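The inheritance idea behind semantic nets can be sketched in a few lines of Python. The canary/bird example follows the slide; the structure (`net`, `lookup`) and the particular properties are illustrative, not from the text:

```python
# Toy semantic network: each node has local properties and "isa" links.
# Property lookup climbs the isa hierarchy, in the spirit of Collins and
# Quillian's model of human information storage.
net = {
    "animal": {"isa": [], "props": {"breathes": True}},
    "bird":   {"isa": ["animal"], "props": {"can_fly": True, "has": "wings"}},
    "canary": {"isa": ["bird"], "props": {"color": "yellow", "sings": True}},
}

def lookup(node, prop):
    """Return a property value, inheriting along isa links; None if absent."""
    if prop in net[node]["props"]:
        return net[node]["props"][prop]
    for parent in net[node]["isa"]:
        value = lookup(parent, prop)
        if value is not None:
            return value
    return None

print(lookup("canary", "color"))     # stored locally: yellow
print(lookup("canary", "can_fly"))   # inherited from bird
print(lookup("canary", "breathes"))  # inherited from animal
```

Note the psychological prediction in the figure: properties stored higher in the hierarchy take more hops, hence longer response times.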

  5. Figure 6.1: Semantic network developed by Collins and Quillian in their research on human information storage and response times (Harmon and King 1985).

  6. Figure 6.2: Network representation of properties of snow and ice.

  7. 7.1.5 Frames
• Networks allow for some inheritance, but no effective aggregation
• Networks can get big and messy
Frames: an object has named slots with
• values
• procedures
• links to other frames
Slots:
• are filled in as information becomes available
• loosely correspond to relations in a conceptual graph
Advantages:
• it is clearer that the main object is a dog
• easier to indicate hierarchies via inheritance (from animal)
• accessories point to collar and bowl frames
• procedural information can be attached, e.g. how to compute default values such as 4 legs, or deductions based on marital status, number of children, ...
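A minimal sketch of a frame system along these lines, using the slide's dog example; the `Frame` class, the slot names, and the procedural default for 4 legs are illustrative inventions, not the book's implementation:

```python
# Minimal frame system: slots hold values, default-computing procedures,
# or links to other frames; an unfilled slot falls back to the superclass.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        v = self.slots.get(slot)
        if callable(v):                       # procedural attachment: compute on demand
            return v(self)
        if v is None and self.parent:
            return self.parent.get(slot)      # inherit from the superclass frame
        return v

animal = Frame("animal", legs=lambda f: 4)    # default value computed by a procedure
collar = Frame("collar", color="red")
dog = Frame("dog", parent=animal, sound="bark", accessories=[collar])

print(dog.get("sound"))                       # own slot: bark
print(dog.get("legs"))                        # inherited procedural default: 4
print(dog.get("accessories")[0].get("color")) # link to another frame: red
```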

  8. Figure 6.12: Part of a frame description of a hotel room. "Specialization" indicates a pointer to a superclass.

  9. 7.2 Conceptual Graphs
A particular representation for semantic nets:
• finite, connected, bipartite graphs (two types of nodes, with edges only from one set to the other)
• nodes are concepts (boxes) or conceptual relations (ellipses)
• each graph represents one concept
Examples:
• a bird flies
• a dog has color brown
• a child has a parent that is its mother and a parent that is its father

  10. Labeling Concept Nodes
A concept node (box) can be labeled with a type and a referent. The referent can be empty, a name, a marker, or a variable:
• empty (generic), dog: "a dog barks"
• name, dog:fido: "fido barks"
• marker, dog:#1232: "a particular (unnamed) dog barks and bites"
• variable, dog:*X: "a dog bit its owner" (the same dog:*X is the agent of bites and the object of owns)

  11. Type Hierarchy
Lattice: a partial ordering on a set of types.
Equivalence relation:
• reflexive: a = a
• symmetric: a = b → b = a
• transitive: a = b and b = c → a = c
Total order:
• reflexive, antisymmetric, transitive
• all elements are comparable
Partial order:
• same properties, but only some elements are comparable
• how would you relate human, student, parent, math student, cs student, anna, zahra, dog, fido?

  12. (Figure: the type lattice, with ┬, the universal type, at the top and ┴, the absurd type, at the bottom; moving up is generalization, moving down is specialization. human and dog sit under ┬; student and parent under human; cs student under student; the individuals anna and zahra under cs student; fido under dog.)
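The lattice can be encoded as a partial order in a few lines; the encoding below is an illustrative sketch (`supertypes`, `is_subtype`, and `comparable` are invented names, and `"T"` stands in for the universal type ┬):

```python
# The type lattice as a partial order: a <= b iff a is a specialization of b,
# i.e. b is reachable from a by climbing supertype links. "T" is above all types.
supertypes = {
    "human": ["T"], "dog": ["T"],
    "student": ["human"], "parent": ["human"],
    "cs student": ["student"],
    "anna": ["cs student"], "zahra": ["cs student"], "fido": ["dog"],
    "T": [],
}

def is_subtype(a, b):
    """Reflexive and transitive: does a specialize b?"""
    return a == b or any(is_subtype(s, b) for s in supertypes[a])

def comparable(a, b):
    """In a partial (not total) order, only some pairs are comparable."""
    return is_subtype(a, b) or is_subtype(b, a)

print(is_subtype("anna", "human"))   # True: anna -> cs student -> student -> human
print(comparable("dog", "student"))  # False: the branches are incomparable
```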

  13. Operations to Create New Graphs
• copy: an exact copy of a graph
• restrict: a concept node is replaced by a node representing its specialization (dog is a specialized animal); note that information is lost
• join: combines two graphs by identifying matching concept nodes, with the substitutions; may leave redundant information
• simplify: removes the redundant information
(Diagram: "an animal eats" restricted to "a dog eats", joined with "a dog barks", then simplified to "a dog eats and barks".)
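A rough sketch of restrict, join, and simplify on a toy graph encoding (a map from concept ids to types, plus a list of relation tuples); this representation is an assumption for illustration, not the book's:

```python
# Graphs as (concepts, relations): concepts maps id -> type; relations is a
# list of (relation_name, concept_ids). restrict specializes a concept's type;
# join merges two graphs on a shared concept; simplify drops duplicate relations.

def restrict(graph, node, subtype):
    concepts, relations = graph
    return ({**concepts, node: subtype}, relations)   # e.g. animal -> dog

def join(g1, g2, node1, node2):
    # Merge g2 into g1, identifying node2 with node1 (their types assumed equal).
    (c1, r1), (c2, r2) = g1, g2
    rename = lambda n: node1 if n == node2 else n
    concepts = {**c1, **{n: t for n, t in c2.items() if n != node2}}
    relations = r1 + [(rel, tuple(rename(n) for n in ns)) for rel, ns in r2]
    return (concepts, relations)

def simplify(graph):
    concepts, relations = graph
    return (concepts, list(dict.fromkeys(relations)))  # drop redundant copies

eats_animal = ({"c1": "animal"}, [("eats", ("c1",))])
g = restrict(eats_animal, "c1", "dog")   # "a dog eats"
barks_dog = ({"d1": "dog"}, [("barks", ("d1",))])
g = simplify(join(g, barks_dog, "c1", "d1"))  # "a dog eats and barks"
print(g)
```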

  14. (continuing the join example)
• the join loses the bone
• what if emma is a cat?
• what if the two dogs are not the same?
These are not sound inference rules, but they are often good enough for plausible, common-sense reasoning; there is no guarantee the results are true.

  15. 7.2.5 Propositional Nodes
Defining relations between propositions: a proposition node contains a whole graph as its referent.
(Diagram: "mary knows that fido barks": a know concept whose experiencer is person:mary and whose object is a proposition node containing the graph for "fido barks".)

  16. 7.2.6 Conceptual Graphs and Logic
Rules:
• Assign a variable to each generic concept: X ↔ dog
• Assign a constant to each individual concept: emma
• Each concept node becomes a unary predicate with the same name as its type, applied to the variable or constant assigned above: dog(X), dog(emma)
• Each n-ary conceptual relation (barks, bites) becomes an n-ary predicate whose name is the same as the relation and whose arguments correspond to the concept nodes linked to the relation
• All variables are existentially quantified
Example: "a dog barks and bites":
∃X (dog(X) ∧ barks(X) ∧ bites(X))
For the individual emma:
dog(emma) ∧ barks(emma) ∧ bites(emma)
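The translation rules above can be sketched directly. The function name `to_logic` and the graph encoding are illustrative assumptions, but the steps follow the slide's algorithm (variables for generic concepts, constants for individuals, unary predicates for concepts, n-ary predicates for relations, existential quantification):

```python
# Conceptual graph -> predicate calculus, following the slide's rules.
def to_logic(concepts, relations):
    """concepts: id -> (type, referent-or-None); relations: [(name, ids)]."""
    term, conjuncts, variables = {}, [], []
    for cid, (ctype, referent) in concepts.items():
        if referent is None:                       # generic concept -> variable
            term[cid] = f"X{len(variables) + 1}"
            variables.append(term[cid])
        else:                                      # individual concept -> constant
            term[cid] = referent
        conjuncts.append(f"{ctype}({term[cid]})")  # concept node -> unary predicate
    for name, ids in relations:                    # relation node -> n-ary predicate
        conjuncts.append(f"{name}({', '.join(term[i] for i in ids)})")
    body = " ∧ ".join(conjuncts)
    return f"∃{', '.join(variables)} ({body})" if variables else body

# "a dog barks and bites" vs. the individual emma:
print(to_logic({"c1": ("dog", None)}, [("barks", ("c1",)), ("bites", ("c1",))]))
print(to_logic({"c1": ("dog", "emma")}, [("barks", ("c1",)), ("bites", ("c1",))]))
```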

  17. 7.2.6 Conceptual Graphs and Logic
"a dog bites a person": X ↔ dog, Y ↔ bites, Z ↔ person
∃X, Y, Z (dog(X) ∧ agent(X, Y) ∧ bites(Y) ∧ object(Y, Z) ∧ person(Z))
Easier, but not following the algorithm:
∃X, Y (dog(X) ∧ bites(X, Y) ∧ person(Y))
"a dog bit its owner":
∃X, Y, Z, W (dog(X) ∧ agent1(X, Y) ∧ bites(Y) ∧ object1(Y, Z) ∧ person(Z) ∧ agent2(Z, W) ∧ owns(W) ∧ object2(W, X))
Easier:
∃X, Y (dog(X) ∧ person(Y) ∧ owns(Y, X) ∧ bites(X, Y))
(Diagrams: the conceptual graphs for the two sentences, using dog:*X.)

  18. 7.2.6 Conceptual Graphs and Logic
Negation: "fido does not bark"
¬(dog(fido) ∧ bark(fido))
equivalently: dog(fido) → ¬bark(fido), and bark(fido) → ¬dog(fido)
"dogs do not bark":
∀X ¬(dog(X) ∧ bark(X))
(Diagrams: a neg relation applied to a proposition node containing the graph for "fido barks", and to one containing the generic "a dog barks".)
