Knowledge Representation and Reasoning Focus on Sections 10.1-10.3, 10.6 Guest Lecturer: Eric Eaton University of Maryland Baltimore County Lockheed Martin Advanced Technology Laboratories Adapted from slides by Tim Finin and Marie desJardins. Some material adopted from notes by Andreas Geyer-Schulz, and Chuck Dyer.
Outline • Approaches to knowledge representation • Situation calculus • Deductive/logical methods • Forward-chaining production rule systems • Semantic networks • Frame-based systems • Description logics • Abductive/uncertain methods • What’s abduction? • Why do we need uncertainty? • Bayesian reasoning • Other methods: Default reasoning, rule-based methods, Dempster-Shafer theory, fuzzy reasoning
Introduction • Real knowledge representation and reasoning systems come in several major varieties. • These differ in their intended use, expressivity, features,… • Some major families are • Logic programming languages • Theorem provers • Rule-based or production systems • Semantic networks • Frame-based representation languages • Databases (deductive, relational, object-oriented, etc.) • Constraint reasoning systems • Description logics • Bayesian networks • Evidential reasoning
Ontological Engineering • Structuring knowledge in a useful fashion • An ontology formally represents concepts in a domain and relationships between those concepts • Using the proper representation is key! • It can be the difference between success and failure • Often costly to formally engineer domain knowledge • Domain experts (a.k.a. subject matter experts) • Commercial ontology, e.g. Cyc (http://www.cyc.com/, http://opencyc.org/)
Representing change • Representing change in the world in logic can be tricky. • One way is just to change the KB • Add and delete sentences from the KB to reflect changes • How do we remember the past, or reason about changes? • Situation calculus is another way • A situation is a snapshot of the world at some instant in time • When the agent performs an action A in situation S1, the result is a new situation S2.
Situation calculus • A situation is a snapshot of the world at an interval of time during which nothing changes • Every true or false statement is made with respect to a particular situation. • Add situation variables to every predicate. • at(hunter,1,1) becomes at(hunter,1,1,s0): at(hunter,1,1) is true in situation (i.e., state) s0. • Alternatively, add a special 2nd-order predicate, holds(f,s), that means “f is true in situation s.” E.g., holds(at(hunter,1,1),s0) • Add a new function, result(a,s), that maps a situation s into a new situation as a result of performing action a. For example, result(forward, s) is a function that returns the successor state (situation) to s • Example: The action agent-walks-to-location-y could be represented by • ∀x,y,s (at(Agent,x,s) ^ ~onbox(s)) => at(Agent,y,result(walk(y),s))
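The result/holds machinery above can be mimicked directly in a few lines of Python. Below is a minimal sketch, assuming simple add/delete effect sets for each action; all fluent and action names are illustrative.

```python
# Minimal sketch of result()/holds(): a situation is the sequence of
# actions applied to the initial situation s0, and holds() replays
# those actions over a set of facts. Names here are illustrative.

s0 = ()  # initial situation: no actions performed yet

def result(action, situation):
    """result(a, s): the successor situation after doing a in s."""
    return situation + (action,)

def holds(fluent, situation, initial_facts, effects):
    """holds(f, s): is fluent f true after replaying s's actions?"""
    facts = set(initial_facts)
    for action in situation:
        added, deleted = effects[action]
        facts = (facts - deleted) | added
    return fluent in facts

initial_facts = {("at", "hunter", 1, 1)}
effects = {"forward": ({("at", "hunter", 1, 2)},   # facts added
                       {("at", "hunter", 1, 1)})}  # facts deleted

s1 = result("forward", s0)
print(holds(("at", "hunter", 1, 2), s1, initial_facts, effects))  # True
```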
Deducing hidden properties • From the perceptual information we obtain in situations, we can infer properties of locations: ∀l,s at(Agent,l,s) ^ Breeze(s) => Breezy(l) and ∀l,s at(Agent,l,s) ^ Stench(s) => Smelly(l) • Neither Breezy nor Smelly need situation arguments because pits and Wumpuses do not move around
Deducing hidden properties II • We need to write some rules that relate various aspects of a single world state (as opposed to across states) • There are two main kinds of such rules: • Causal rules reflect the assumed direction of causality in the world: ∀l1,l2,s At(Wumpus,l1,s) ^ Adjacent(l1,l2) => Smelly(l2) and ∀l1,l2,s At(Pit,l1,s) ^ Adjacent(l1,l2) => Breezy(l2) Systems that reason with causal rules are called model-based reasoning systems • Diagnostic rules infer the presence of hidden properties directly from the percept-derived information. We have already seen two diagnostic rules: ∀l,s At(Agent,l,s) ^ Breeze(s) => Breezy(l) and ∀l,s At(Agent,l,s) ^ Stench(s) => Smelly(l)
Deducing hidden properties III • Why both causal and diagnostic rules? Maybe diagnostic rules are enough? However, it is very tricky to ensure that they derive the strongest possible conclusions from the available information. • For example, the absence of stench or breeze implies that adjacent squares are OK: ∀x,y,g,u,c,s Percept([None,None,g,u,c],s) ^ At(Agent,x,s) ^ Adjacent(x,y) => OK(y) • but sometimes a square can be OK even when smells and breezes abound. Consider the following model-based rule: ∀x,s (~At(Wumpus,x,s) ^ ~Pit(x)) <=> OK(x) • If the axioms correctly and completely describe the way the world works and the way percepts are produced, the inference procedure will correctly infer the strongest possible description of the world state given the available percepts.
Representing change: The frame problem • Frame axiom: If property x doesn’t change as a result of applying action a in state s, then it stays the same. • On(x, z, s) ^ Clear(x, s) => On(x, table, Result(Move(x, table), s)) ^ ~On(x, z, Result(Move(x, table), s)) • On(y, z, s) ^ y ≠ x => On(y, z, Result(Move(x, table), s)) • The proliferation of frame axioms becomes very cumbersome in complex domains
The frame problem II • Successor-state axiom: General statement that characterizes every way in which a particular predicate can become true: • Either it can be made true, or it can already be true and not be changed: • On(x, table, Result(a, s)) <=> [On(x, z, s) ^ Clear(x, s) ^ a = Move(x, table)] v [On(x, table, s) ^ a ≠ Move(x, z)] • In complex worlds, where you want to reason about longer chains of action, even these types of axioms are too cumbersome • Planning systems use special-purpose inference methods to reason about the expected state of the world at any point in time during a multi-step plan
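To make the successor-state axiom concrete, here is a hedged Python sketch that evaluates On(x, table, Result(a, s)) over a toy blocks-world state; the state encoding and names are assumptions for illustration.

```python
# Sketch: evaluating the successor-state axiom for On(x, table, ...)
# in the blocks world. A state is a dict block -> supporting surface.

def clear(x, state):
    """x is clear if nothing is on top of it."""
    return all(support != x for support in state.values())

def on_table_after(x, action, state):
    """On(x, table, Result(a, s)) <=>
       [On(x, z, s) ^ Clear(x, s) ^ a = Move(x, table)]
       v [On(x, table, s) ^ a != Move(x, z)]"""
    made_true = clear(x, state) and action == ("Move", x, "table")
    stays_true = state.get(x) == "table" and not (
        action[0] == "Move" and action[1] == x and action[2] != "table")
    return made_true or stays_true

state = {"A": "B", "B": "table"}          # A is on B, B is on the table
print(on_table_after("A", ("Move", "A", "table"), state))  # True: made true
print(on_table_after("B", ("Move", "A", "table"), state))  # True: unchanged
```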
Qualification problem • Qualification problem: • How can you possibly characterize every single effect of an action, or every single exception that might occur? • When I put my bread into the toaster, and push the button, it will become toasted after two minutes, unless… • The toaster is broken, or… • The power is out, or… • I blow a fuse, or… • A neutron bomb explodes nearby and fries all electrical components, or… • A meteor strikes the earth, and the world as we know it ceases to exist, or…
Ramification problem • Similarly, it’s just about impossible to characterize every side effect of every action, at every possible level of detail: • When I put my bread into the toaster, and push the button, the bread will become toasted after two minutes, and… • The crumbs that fall off the bread onto the tray at the bottom of the toaster will also become toasted, and… • Some of the aforementioned crumbs will become burnt, and… • The outside molecules of the bread will become “toasted,” and… • The inside molecules of the bread will remain more “breadlike,” and… • The toasting process will release a small amount of humidity into the air because of evaporation, and… • The heating elements will become a tiny fraction more likely to burn out the next time I use the toaster, and… • The electricity meter in the house will move up slightly, and…
Knowledge engineering! • Modeling the “right” conditions and the “right” effects at the “right” level of abstraction is very difficult • Knowledge engineering (creating and maintaining knowledge bases for intelligent reasoning) is an entire field of investigation • Many researchers hope that automated knowledge acquisition and machine learning tools can fill the gap: • Our intelligent systems should be able to learn about the conditions and effects, just like we do! • Our intelligent systems should be able to learn when to pay attention to, or reason about, certain aspects of processes, depending on the context!
Preferences among actions • A problem with the Wumpus world knowledge base that we have built so far is that it is difficult to decide which action is best among a number of possibilities. • For example, to decide between a forward and a grab, axioms describing when it is OK to move to a square would have to mention glitter. • This is not modular! • We can solve this problem by separating facts about actions from facts about goals. This way our agent can be reprogrammed just by asking it to achieve different goals.
Preferences among actions • The first step is to describe the desirability of actions independently of each other. • In doing this we will use a simple scale: actions can be Great, Good, Medium, Risky, or Deadly. • Obviously, the agent should always do the best action it can find: ∀a,s Great(a,s) => Action(a,s) ∀a,s Good(a,s) ^ ~∃b Great(b,s) => Action(a,s) ∀a,s Medium(a,s) ^ ~∃b (Great(b,s) v Good(b,s)) => Action(a,s) ...
Preferences among actions • We use this action quality scale in the following way. • Until it finds the gold, the basic strategy for our agent is: • Great actions include picking up the gold when found and climbing out of the cave with the gold. • Good actions include moving to a square that’s OK and hasn't been visited yet. • Medium actions include moving to a square that is OK and has already been visited. • Risky actions include moving to a square that is not known to be deadly or OK. • Deadly actions are moving into a square that is known to have a pit or a Wumpus.
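A small Python sketch of how this quality scale might drive action selection follows; the classify mapping is illustrative, standing in for tiers derived from the agent's knowledge base.

```python
# Sketch: choosing an action with the Great/Good/.../Deadly scale,
# mirroring the axioms: do a Great action if one exists, else a Good
# one, and so on down the scale.

QUALITY_ORDER = ["Great", "Good", "Medium", "Risky", "Deadly"]

def best_action(candidates, classify):
    """Return an action from the best available quality tier."""
    by_tier = {tier: [] for tier in QUALITY_ORDER}
    for action in candidates:
        by_tier[classify(action)].append(action)
    for tier in QUALITY_ORDER:
        if by_tier[tier]:
            return by_tier[tier][0]
    return None

# Illustrative classification for two candidate actions:
tiers = {"grab-gold": "Great", "move-(1,2)": "Good"}
print(best_action(["move-(1,2)", "grab-gold"], tiers.get))  # grab-gold
```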
Goal-based agents • Once the gold is found, it is necessary to change strategies, so now we need a new set of action values. • We could encode this as a rule: • ∀s Holding(Gold,s) => GoalLocation([1,1], s) • We must now decide how the agent will work out a sequence of actions to accomplish the goal. • Three possible approaches are: • Inference: good versus wasteful solutions • Search: formulate a problem with operators and a set of states • Planning: to be discussed later
Semantic Networks • A semantic network is a simple representation scheme that uses a graph of labeled nodes and labeled, directed arcs to encode knowledge. • Usually used to represent static, taxonomic concept dictionaries • Semantic networks are typically used with a special set of accessing procedures that perform “reasoning” • e.g., inheritance of values and relationships • Semantic networks were very popular in the ‘60s and ‘70s but are less frequently used today. • Often much less expressive than other KR formalisms • The graphical depiction associated with semantic networks is a significant reason for their popularity.
Nodes and Arcs • Arcs define binary relationships that hold between objects denoted by the nodes. • [Figure: a semantic network with nodes john, sue, max, 5, and 34, linked by arcs labeled mother, father, wife, husband, and age, encoding:] mother(john,sue), age(john,5), wife(sue,max), age(max,34), ...
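As a sketch, the network above can be stored as a set of labeled, directed arcs (triples); the helper name objects() is illustrative.

```python
# Sketch: a semantic network as a set of (subject, label, object)
# triples, matching the facts in the figure above.

arcs = {
    ("john", "mother", "sue"),
    ("john", "age", "5"),
    ("sue", "wife", "max"),
    ("max", "age", "34"),
}

def objects(subject, relation):
    """All nodes reachable from subject via a given arc label."""
    return {o for (s, r, o) in arcs if s == subject and r == relation}

print(objects("john", "mother"))  # {'sue'}
```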
Semantic Networks • [Figure: an ISA hierarchy: Rusty and Red are Robins, Robin isa Bird, Bird isa Animal, and Bird hasPart Wing.] • The ISA (is-a) or AKO (a-kind-of) relation is often used to link instances to classes, classes to superclasses • Some links (e.g., hasPart) are inherited along ISA paths. • The semantics of a semantic net can be relatively informal or very formal • often defined at the implementation level
Reification • Non-binary relationships can be represented by “turning the relationship into an object” • This is an example of what logicians call “reification” • reify v: consider an abstract concept to be real • We might want to represent the generic give event as a relation involving three things: a giver, a recipient, and an object: give(john,mary,book32) • [Figure: a give node linked by giver, recipient, and object arcs to john, mary, and book32.]
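A minimal sketch of the reified event as triples, assuming a hypothetical node name give18 for this particular giving event:

```python
# Sketch: reifying give(john, mary, book32) as a node with its own
# outgoing arcs. The node name give18 is illustrative.

arcs = {
    ("give18", "instance", "give"),
    ("give18", "giver", "john"),
    ("give18", "recipient", "mary"),
    ("give18", "object", "book32"),
}
print(sorted((r, o) for (s, r, o) in arcs if s == "give18"))
```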
Individuals and Classes • Many semantic networks distinguish • nodes representing individuals and those representing classes • the “subclass” relation from the “instance-of” relation • [Figure: Rusty and Red are instances of Robin; Robin is a subclass of Bird; Bird is a subclass of Animal; Bird hasPart Wing.]
Inference by Inheritance • One of the main kinds of reasoning done in a semantic net is the inheritance of values along the subclass and instance links. • Semantic networks differ in how they handle the case of inheriting multiple different values. • All possible values are inherited, or • Only the “lowest” value or values are inherited
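Here is a hedged sketch of the "lowest value wins" inheritance policy: climb the hierarchy from a node and return the nearest value found. The hierarchy and property names are illustrative.

```python
# Sketch: value inheritance along instance/subclass links; the value
# defined closest to the starting node wins.

parents = {"Rusty": ["Robin"], "Robin": ["Bird"], "Bird": ["Animal"]}
local_props = {"Bird": {"hasPart": "Wing"}, "Animal": {"breathes": True}}

def inherit(node, prop):
    """Depth-first search up the hierarchy; nearest value wins."""
    if prop in local_props.get(node, {}):
        return local_props[node][prop]
    for parent in parents.get(node, []):
        value = inherit(parent, prop)
        if value is not None:
            return value
    return None

print(inherit("Rusty", "hasPart"))  # Wing (inherited from Bird)
```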
Multiple inheritance • A node can have any number of superclasses that contain it, enabling a node to inherit properties from multiple “parent” nodes and their ancestors in the network. • These rules are often used to determine inheritance in such “tangled” networks where multiple inheritance is allowed: • If X<A<B and both A and B have property P, then X inherits A’s property. • If X<A and X<B but neither A<B nor B<A, and A and B have property P with different and inconsistent values, then X does not inherit property P at all.
Nixon Diamond • This was the classic example circa 1980. • [Figure: Quaker and Republican are both subclasses of Person; Quaker carries pacifist = TRUE, Republican carries pacifist = FALSE; Nixon is an instance of both Quaker and Republican.]
From Semantic Nets to Frames • Semantic networks morphed into Frame Representation Languages in the ‘70s and ‘80s. • A frame is a lot like the notion of an object in OOP, but has more meta-data. • A frame has a set of slots. • A slot represents a relation to another frame (or value). • A slot has one or more facets. • A facet represents some aspect of the relation.
Facets • A slot in a frame holds more than a value. • Other facets might include: • current fillers (e.g., values) • default fillers • minimum and maximum number of fillers • type restriction on fillers (usually expressed as another frame object) • attached procedures (if-needed, if-added, if-removed) • salience measure • attached constraints or axioms • In some systems, the slots themselves are instances of frames.
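A minimal Python sketch of a slot with several facets, assuming a plain dictionary encoding (not any particular frame language's API):

```python
# Sketch: a frame whose slots carry facets (value, default, type
# restriction, if-needed procedure). Slot and facet names are
# illustrative.

bird_frame = {
    "isa": "Animal",
    "slots": {
        "flies": {"default": True},
        "legs": {"value": 2, "type": int},
        "wingspan": {"if_needed": lambda frame: "measure the bird"},
    },
}

def get_slot(frame, slot):
    """Prefer an explicit value, then a default, then if-needed."""
    facets = frame["slots"][slot]
    if "value" in facets:
        return facets["value"]
    if "default" in facets:
        return facets["default"]
    if "if_needed" in facets:
        return facets["if_needed"](frame)
    return None

print(get_slot(bird_frame, "flies"))  # True (from the default facet)
```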
Description Logics • Description logics provide a family of frame-like KR systems with a formal semantics. • E.g., KL-ONE, LOOM, Classic, … • An additional kind of inference done by these systems is automatic classification • finding the right place in a hierarchy of objects for a new description • Current systems take care to keep the languages simple, so that all inference can be done in polynomial time (in the number of objects) • ensuring tractability of inference
Abduction • Abduction is a reasoning process that tries to form plausible explanations for abnormal observations • Abduction is distinctly different from deduction and induction • Abduction is inherently uncertain • Uncertainty is an important issue in abductive reasoning • Some major formalisms for representing and reasoning about uncertainty • Mycin’s certainty factors (an early representative) • Probability theory (esp. Bayesian belief networks) • Dempster-Shafer theory • Fuzzy logic • Truth maintenance systems • Nonmonotonic reasoning
Abduction • Definition (Encyclopedia Britannica): reasoning that derives an explanatory hypothesis from a given set of facts • The inference result is a hypothesis that, if true, could explain the occurrence of the given facts • Examples • Dendral, an expert system that constructs the 3D structure of chemical compounds • Fact: mass spectrometer data of the compound and its chemical formula • KB: chemistry, esp. the strength of different types of bonds • Reasoning: form a hypothetical 3D structure that satisfies the chemical formula and that would most likely produce the given mass spectrum
Abduction examples (cont.) • Medical diagnosis • Facts: symptoms, lab test results, and other observed findings (called manifestations) • KB: causal associations between diseases and manifestations • Reasoning: one or more diseases whose presence would causally explain the occurrence of the given manifestations • Many other reasoning processes (e.g., word sense disambiguation in natural language processing, image understanding, criminal investigation) can also be seen as abductive reasoning
Comparing abduction, deduction, and induction • Deduction (reasons from causes to effects): from A => B and A, conclude B. Example: major premise: All balls in the box are black; minor premise: These balls are from the box; conclusion: These balls are black. • Abduction (reasons from effects to causes): from A => B and B, conclude possibly A. Example: rule: All balls in the box are black; observation: These balls are black; explanation: These balls are from the box. • Induction (reasons from specific cases to general rules): from “whenever A then B”, conclude possibly A => B. Example: case: These balls are from the box; observation: These balls are black; hypothesized rule: All balls in the box are black.
Characteristics of abductive reasoning • “Conclusions” are hypotheses, not theorems (may be false even if rules and facts are true) • E.g., misdiagnosis in medicine • There may be multiple plausible hypotheses • Given rules A => B and C => B, and fact B, both A and C are plausible hypotheses • Abduction is inherently uncertain • Hypotheses can be ranked by their plausibility (if it can be determined)
Characteristics of abductive reasoning (cont.) • Reasoning is often a hypothesize-and-test cycle • Hypothesize: Postulate possible hypotheses, any of which would explain the given facts (or at least most of the important facts) • Test: Test the plausibility of all or some of these hypotheses • One way to test a hypothesis H is to ask whether something that is currently unknown, but can be predicted from H, is actually true • If we also know A => D and C => E, then ask if D and E are true • If D is true and E is false, then hypothesis A becomes more plausible (support for A is increased; support for C is decreased)
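The hypothesize-and-test cycle above can be sketched as follows; the rule base and the simple prediction-counting score are illustrative assumptions, not a standard algorithm.

```python
# Sketch of one hypothesize-and-test step: given rules cause => effects
# and an observed effect, collect candidate causes, then adjust their
# plausibility by checking the other effects each cause predicts.

rules = {"A": ["B", "D"], "C": ["B", "E"]}   # illustrative causes/effects

def hypothesize(observation):
    """All causes whose rule covers the observation."""
    return [c for c, effects in rules.items() if observation in effects]

def test(hypotheses, observation, known_true, known_false):
    """Score each hypothesis by its other, checkable predictions."""
    scores = {}
    for h in hypotheses:
        predictions = [e for e in rules[h] if e != observation]
        scores[h] = (sum(p in known_true for p in predictions)
                     - sum(p in known_false for p in predictions))
    return scores

hyps = hypothesize("B")              # ['A', 'C']
print(test(hyps, "B", {"D"}, {"E"}))  # {'A': 1, 'C': -1}
```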
Characteristics of abductive reasoning (cont.) • Reasoning is non-monotonic • That is, the plausibility of hypotheses can increase/decrease as new facts are collected • In contrast, deductive inference is monotonic: it never changes a sentence’s truth value once known • In abductive (and inductive) reasoning, some hypotheses may be discarded, and new ones formed, when new observations are made
Sources of uncertainty • Uncertain inputs • Missing data • Noisy data • Uncertain knowledge • Multiple causes lead to multiple effects • Incomplete enumeration of conditions or effects • Incomplete knowledge of causality in the domain • Probabilistic/stochastic effects • Uncertain outputs • Abduction and induction are inherently uncertain • Default reasoning, even in deductive fashion, is uncertain • Incomplete deductive inference may be uncertain • Probabilistic reasoning only gives probabilistic results (summarizes uncertainty from various sources)
Decision making with uncertainty • Rational behavior: • For each possible action, identify the possible outcomes • Compute the probability of each outcome • Compute the utility of each outcome • Compute the probability-weighted (expected) utility over possible outcomes for each action • Select the action with the highest expected utility (principle of Maximum Expected Utility)
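A minimal sketch of the Maximum Expected Utility computation, with illustrative probabilities and utilities:

```python
# Sketch: each action maps to (probability, utility) pairs over its
# possible outcomes; pick the action with the highest expected utility.

actions = {                                   # illustrative numbers
    "move-forward": [(0.9, 10), (0.1, -50)],  # (probability, utility)
    "stay": [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # move-forward 4.0
```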
Bayesian reasoning • Probability theory • Bayesian inference • Use probability theory and information about independence • Reason diagnostically (from evidence (effects) to conclusions (causes)) or causally (from causes to effects) • Bayesian networks • Compact representation of probability distribution over a set of propositional random variables • Take advantage of independence relationships
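As a sketch of diagnostic reasoning with Bayes' rule (the numbers are illustrative):

```python
# Sketch: P(cause | effect) = P(effect | cause) * P(cause) / P(effect),
# where P(effect) is computed by summing over cause and not-cause.

p_cause = 0.01                 # prior P(cause)
p_effect_given_cause = 0.9     # causal direction: cause -> effect
p_effect_given_not = 0.05

p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_not * (1 - p_cause))
p_cause_given_effect = p_effect_given_cause * p_cause / p_effect

print(round(p_cause_given_effect, 3))  # 0.154: diagnostic direction
```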
Other uncertainty representations • Default reasoning • Nonmonotonic logic: Allow the retraction of default beliefs if they prove to be false • Rule-based methods • Certainty factors (Mycin): propagate simple models of belief through causal or diagnostic rules • Evidential reasoning • Dempster-Shafer theory: Bel(P) is a measure of the evidence for P; Bel(¬P) is a measure of the evidence against P; together they define a belief interval (lower and upper bounds on confidence) • Fuzzy reasoning • Fuzzy sets: How well does an object satisfy a vague property? • Fuzzy logic: “How true” is a logical statement?
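As a small illustration of fuzzy reasoning, here is a sketch using the common min/max/complement operators (one choice among several operator families):

```python
# Sketch: fuzzy truth values are degrees in [0, 1]; conjunction,
# disjunction, and negation use min, max, and complement.

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

tall = 0.7       # "the person is tall" is 0.7 true
heavy = 0.4      # "the person is heavy" is 0.4 true
print(f_and(tall, heavy))        # 0.4
print(f_or(tall, f_not(heavy)))  # 0.7
```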
Uncertainty tradeoffs • Bayesian networks: Nice theoretical properties combined with efficient reasoning make BNs very popular; limited expressiveness and knowledge engineering challenges may limit their use • Nonmonotonic logic: Represents commonsense reasoning, but can be computationally very expensive • Certainty factors: Not semantically well founded • Dempster-Shafer theory: Has nice formal properties, but can be computationally expensive, and intervals tend to grow toward [0,1] (not a very useful conclusion) • Fuzzy reasoning: Semantics are unclear (fuzzy!), but has proved very useful for commercial applications