
Logical inference






  1. Logical inference Chapter 7

  2. Reminders… • Propositional logic: A ‘model’ is a set of propositional statements and truth values. • Easy to implement as a hashtable. • Semantics: rules for determining the truth of a sentence.

  3. For example • Model = {‘P’:true, ‘Q’:false, ‘R’:true} • Is ‘P & Q’ true? • Is ‘P | Q’ true? • Is ‘P & ~Q & (Q | R)’ true? • Is ‘P & ~Q & (~Q | T)’ true?
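The model-as-hashtable idea above can be sketched directly in Python: a model is a dict from proposition names to truth values, and a tiny recursive evaluator applies the semantics. The nested-tuple sentence encoding (tags '~', '&', '|') is an assumption chosen for this example, not notation from the slides.

```python
# Sketch: a model is a dict; sentences are nested tuples whose first
# element is a connective tag ('~', '&', '|') -- an encoding assumed
# here for illustration.

def holds(sentence, model):
    """Return the truth value of a propositional sentence in a model."""
    if isinstance(sentence, str):          # atomic proposition: look it up
        return model[sentence]
    op = sentence[0]
    if op == '~':
        return not holds(sentence[1], model)
    if op == '&':
        return all(holds(s, model) for s in sentence[1:])
    if op == '|':
        return any(holds(s, model) for s in sentence[1:])
    raise ValueError(f"unknown connective: {op}")

model = {'P': True, 'Q': False, 'R': True}
print(holds(('&', 'P', 'Q'), model))                           # P & Q  -> False
print(holds(('|', 'P', 'Q'), model))                           # P | Q  -> True
print(holds(('&', 'P', ('~', 'Q'), ('|', 'Q', 'R')), model))   # P & ~Q & (Q | R) -> True
```

The last question on the slide mentions a symbol T that is not in the model; with a plain dict lookup that would raise a `KeyError`, which is one way semantics make the "every symbol needs a value" point concrete.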

  4. Inference by checking all

  5. Consider the reflex agent… • How can we create a reflex agent that learns, using propositional logic? • Percept list: • [‘mate’, ’flee’, 0] • [‘smallenemy’, ’fight’, 1] • Conclusions: • Mate -> ~Flee • SmallEnemy -> Fight • Flee? Fight? Eat? Mate? • Given the (current) KB, does the KB entail an action?

  6. The well-tempered inference engine • What do we want in an inference engine? • Soundness: every conclusion derived is correct. • Completeness: every correct conclusion is derivable. • Efficiency: at most polynomial complexity. • Sorry, but “every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input,” where “input” is the number of propositions.

  7. So let’s give up? • No, accept some limitations • Soundness? • Completeness? • Efficiency?

  8. Some inferential terms • Logical equivalence (these are meta-statements) • ~~α == α • ~(α | β) == (~α & ~β) • ~(α & β) == (~α | ~β) • Validity • A sentence is valid if it is true in every model. • Satisfiability • If a sentence α is true in model m, then m satisfies α; α is satisfiable if some model satisfies it.
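Since equivalence is "true in exactly the same models," the identities above can be checked by brute-force enumeration. A sketch under the same assumed tuple encoding:

```python
from itertools import product

def holds(s, model):
    """Evaluate a sentence (nested tuples; '~', '&', '|' tags) in a model."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == '~':
        return not holds(args[0], model)
    if op == '&':
        return all(holds(a, model) for a in args)
    if op == '|':
        return any(holds(a, model) for a in args)
    raise ValueError(op)

def equivalent(s1, s2, symbols):
    """Two sentences are equivalent iff they agree in every model."""
    return all(holds(s1, m) == holds(s2, m)
               for m in (dict(zip(symbols, v))
                         for v in product([True, False], repeat=len(symbols))))

# De Morgan: ~(A | B) == (~A & ~B)
print(equivalent(('~', ('|', 'A', 'B')),
                 ('&', ('~', 'A'), ('~', 'B')),
                 ['A', 'B']))                    # True
```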

  9. Some inference theorems • Deduction theorem • For any sentences α and β, α entails β iff the sentence (α -> β) is valid. • Proof by refutation/reductio ad absurdum • α entails β iff (α & ~β) is unsatisfiable. • P -> Q? Assert ~Q, see if it results in a contradiction.
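The refutation idea can be sketched with a brute-force satisfiability check: α entails β exactly when α & ~β has no satisfying model. The tuple sentence encoding is again an assumption for this example.

```python
from itertools import product

def holds(s, model):
    """Evaluate a sentence (nested tuples; '~', '&', '|', '->' tags) in a model."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == '~':
        return not holds(args[0], model)
    if op == '&':
        return all(holds(a, model) for a in args)
    if op == '|':
        return any(holds(a, model) for a in args)
    if op == '->':
        return (not holds(args[0], model)) or holds(args[1], model)
    raise ValueError(op)

def satisfiable(s, symbols):
    """True iff some model over the given symbols makes s true."""
    return any(holds(s, dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

# alpha entails beta  iff  (alpha & ~beta) is unsatisfiable.
alpha = ('&', 'P', ('->', 'P', 'Q'))    # P and P -> Q
beta = 'Q'
print(not satisfiable(('&', alpha, ('~', beta)), ['P', 'Q']))   # True
```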

  10. Again, unfortunately… • This is NP-complete (probably too hard). • But let’s play lots of tricks: • Monotonicity: the set of entailed sentences can only increase as information is added to the KB. • Focus our attention on what we care about. • But it’s still NP-complete.

  11. Completeness through resolution • Does proof by contradiction • I.e., KB entails α by proving (KB & ~α) is unsatisfiable. • Put in “conjunctive normal form” (CNF) • Only a “conjunction of disjunctions of [possibly negated] literals” • See algorithm in book.
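A minimal sketch of the resolution loop the book's algorithm describes: clauses are represented here as frozensets of literal strings (negation marked with a leading '~'), an encoding assumed for this example. Resolution repeatedly cancels complementary literals; deriving the empty clause proves the clause set unsatisfiable.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literal strings)."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith('~') else '~' + lit
        if complement in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def pl_resolution(clauses):
    """Return True iff the clause set is unsatisfiable
    (i.e., the empty clause is derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True       # empty clause: contradiction
                new.add(r)
        if new <= clauses:
            return False              # nothing new: satisfiable
        clauses |= new

# KB = {P -> Q, P}, query Q.  CNF of (KB & ~Q):
clauses = [frozenset({'~P', 'Q'}), frozenset({'P'}), frozenset({'~Q'})]
print(pl_resolution(clauses))         # True: KB entails Q
```

Resolving {~P, Q} with {P} yields {Q}, which then resolves with {~Q} to the empty clause, so the KB does entail Q.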

  12. Back to Reflex Agent

  13. Consider the reflex agent again… • How can we create a reflex agent that learns, using propositional logic? • Percept list: • [‘mate’, ’flee’, 0] • [‘smallenemy’, ’fight’, 1] • Conclusions: • Mate -> ~Flee • SmallEnemy -> Fight • Flee? Fight? Eat? Mate? • Given the (current) KB, does the KB entail an action?
