CS 416 Artificial Intelligence Lecture 11: Logical Agents (Chapter 7)
Midterm Exam • Midterm will be on Thursday, March 13th • It will cover material up until Feb 27th
Reasoning w/ propositional logic • Remember what we’ve developed so far • Logical sentences • And, or, not, implies (=>), iff (<=>) • Entailment; syntax vs. semantics • Truth tables • Satisfiability, proof by contradiction
Logical Equivalences • Know these equivalences
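For reference, these are the standard equivalences from the textbook, written in the slides' notation (a, b, c are arbitrary sentences):

(a ^ b) == (b ^ a) … commutativity of ^
(a V b) == (b V a) … commutativity of V
((a ^ b) ^ c) == (a ^ (b ^ c)) … associativity of ^
((a V b) V c) == (a V (b V c)) … associativity of V
~(~a) == a … double negation
(a => b) == (~b => ~a) … contraposition
(a => b) == (~a V b) … implication elimination
(a <=> b) == ((a => b) ^ (b => a)) … biconditional elimination
~(a ^ b) == (~a V ~b) … de Morgan
~(a V b) == (~a ^ ~b) … de Morgan
(a ^ (b V c)) == ((a ^ b) V (a ^ c)) … distributivity of ^ over V
(a V (b ^ c)) == ((a V b) ^ (a V c)) … distributivity of V over ^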
Reasoning w/ propositional logic • Inference Rules • Modus Ponens: • Whenever sentences of the form a => b and a are given, the sentence b can be inferred • R1: Green => Martian • R2: Green • Inferred: Martian
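As a rule schema, with the premises above the line and the conclusion below:

  a => b,   a
  -----------
       b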
Reasoning w/ propositional logic • Inference Rules • And-Elimination • Any of the conjuncts can be inferred • R1: Martian ^ Green • Inferred: Martian • Inferred: Green • Use truth tables if you want to confirm inference rules
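Schematically, either conjunct follows:

  a ^ b        a ^ b
  -----        -----
    a            b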
Example of a proof • [Wumpus-world grid figure: squares labeled P? (possible pit), B / ~B (breeze percepts), and ~P (proved pit-free)]
Example of a proof • [Wumpus-world grid figure, continued: the ~B percept lets more P? squares be resolved to ~P]
Constructing a proof • Proving is like searching • Find a sequence of inference-rule applications that leads to the desired result • Note the explosion of propositions • Good proof methods ignore the countless irrelevant propositions
Monotonicity of knowledge base • The set of entailed sentences can only grow as the knowledge base grows • Adding new sentences to the knowledge base cannot invalidate old conclusions • If (KB entails a) then ((KB ^ b) entails a) • This is important when constructing proofs • A logical conclusion drawn at one point cannot be invalidated by sentences added later
How many inferences? • The previous example relied on applying inference rules to generate new sentences • When have you drawn enough inferences to prove something? • Too many make the search process take longer • Too few halt the logical progression and leave the proof incomplete
Resolution • Unit Resolution Inference Rule • Given a clause l1 V ... V lk and a unit clause m, where li and m are complementary literals (one is the negation of the other), infer the clause with li removed
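In schema form:

  l1 V ... V li V ... V lk,     m
  ---------------------------------
  l1 V ... V l(i-1) V l(i+1) V ... V lk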
Resolution Inference Rule • The full rule also applies when both premises are multi-literal clauses • But make sure each literal appears only once in the resulting clause (factoring)
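Resolving l1 V ... V lk against m1 V ... V mn on the complementary pair li, mj:

  l1 V ... V lk,     m1 V ... V mn
  -------------------------------------------------------------------
  l1 V ... V l(i-1) V l(i+1) V ... V lk V m1 V ... V m(j-1) V m(j+1) V ... V mn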
Resolution and completeness • Any complete search algorithm, applying only the resolution rule, can derive any conclusion entailed by a knowledge base in propositional logic • More precisely, resolution is refutation-complete • It can confirm or refute any given sentence • It cannot enumerate all entailed sentences
What about “and” clauses? • Resolution only applies to “or” clauses • Every sentence of propositional logic can be transformed into a logically equivalent conjunction of disjunctions of literals • Conjunctive Normal Form (CNF) • A sentence expressed as a conjunction of disjunctions of literals • k-CNF: exactly k literals per clause
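A worked conversion, using the standard textbook example B1,1 <=> (P1,2 V P2,1):

1. Eliminate <=>: (B1,1 => (P1,2 V P2,1)) ^ ((P1,2 V P2,1) => B1,1)
2. Eliminate =>: (~B1,1 V P1,2 V P2,1) ^ (~(P1,2 V P2,1) V B1,1)
3. Move ~ inward (de Morgan): (~B1,1 V P1,2 V P2,1) ^ ((~P1,2 ^ ~P2,1) V B1,1)
4. Distribute V over ^: (~B1,1 V P1,2 V P2,1) ^ (~P1,2 V B1,1) ^ (~P2,1 V B1,1)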
An algorithm for resolution • We wish to prove that KB entails a • Must show (KB ^ ~a) is unsatisfiable • i.e., there is no model in which KB is true and a is false • Proof by contradiction
An algorithm for resolution • Algorithm • KB ^ ~a is put in CNF • Each pair of clauses with complementary literals is resolved to produce a new clause, which is added to the KB (if novel) • Cease if there are no new clauses to add (a is not entailed) • Cease if the resolution rule derives the empty clause (a is entailed)
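A minimal sketch of this loop in Python (the clause representation, function names, and example symbols are illustrative choices, not from the course):

from itertools import combinations

def resolve(ci, cj):
    # All resolvents of clauses ci and cj. A clause is a frozenset of
    # literal strings; a leading '~' marks negation.
    resolvents = []
    for lit in ci:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in cj:
            resolvents.append((ci - {lit}) | (cj - {comp}))
    return resolvents

def pl_resolution(kb_clauses, negated_query_clauses):
    # Decide whether KB entails a by testing KB ^ ~a for unsatisfiability.
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:                  # empty clause derived: contradiction
                    return True            # a is entailed
                new.add(r)
        if new <= clauses:                 # no new clauses to add
            return False                   # a is not entailed
        clauses |= new

# Example: the wumpus KB entails ~P12
kb = [frozenset({'~B11', 'P12', 'P21'}),   # B11 <=> (P12 V P21), in CNF
      frozenset({'~P12', 'B11'}),
      frozenset({'~P21', 'B11'}),
      frozenset({'~B11'})]                 # percept: no breeze in [1,1]
print(pl_resolution(kb, [frozenset({'P12'})]))   # True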
Example of resolution • Proof that there is no pit in [1,2], i.e. ~P1,2 • KB ^ P1,2 leads to the empty clause • Therefore ~P1,2 is entailed
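One way the refutation runs, using the CNF clauses from the earlier conversion plus the percept ~B1,1 (a reconstruction of the textbook derivation):

1. Add the negated query: P1,2
2. Resolve ~B1,1 with (~P1,2 V B1,1), giving ~P1,2
3. Resolve ~P1,2 with P1,2, giving the empty clause
4. Contradiction: KB entails ~P1,2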
Horn Clauses • Horn Clause • Disjunction of literals, at most one of which is positive • (~a V ~b V ~c V d) • (~a V b V c V ~d) is not a Horn clause (two positive literals)
Horn Clauses • Can be written as a special implication • (~a V ~b V c) becomes (a ^ b) => c • (~a V ~b V c) == (~(a ^ b) V c) … de Morgan • (~(a ^ b) V c) == ((a ^ b) => c) … implication elimination
Horn Clauses • Permit straightforward inference determination • Forward chaining • Backward chaining
Horn Clauses • Permit determination of entailment in time linear in the size of the knowledge base
Forward Chaining • Properties • Sound • Complete • All entailed atomic sentences will be derived • Data-driven • Start with what we know • Derive new facts until we discover what we want
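A sketch of this procedure in Python (the (premises, conclusion) clause representation is my own choice for illustration); the per-clause premise counters are what make it run in linear time:

def forward_chaining(clauses, facts, query):
    # Horn-clause forward chaining.
    # clauses: list of (premises, conclusion) pairs, e.g. ({'a', 'b'}, 'c')
    # facts:   set of symbols known to be true
    count = [len(premises) for premises, _ in clauses]   # unproved premises
    inferred = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:         # all premises proved: fire the rule
                    agenda.append(conclusion)
    return False

# e.g. forward_chaining([({'a', 'b'}, 'c')], {'a', 'b'}, 'c')  ->  True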
Backward Chaining • Start with what you want to know, a query (q) • Look for implications that conclude q • Goal-Directed Reasoning
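A matching backward-chaining sketch under the same assumed representation; the 'seen' set keeps a goal from being re-expanded while it is already under consideration:

def backward_chaining(clauses, facts, query, seen=frozenset()):
    # Work backward from the query toward known facts.
    if query in facts:
        return True
    if query in seen:                     # avoid infinite regress on cycles
        return False
    return any(
        all(backward_chaining(clauses, facts, p, seen | {query})
            for p in premises)
        for premises, conclusion in clauses
        if conclusion == query
    )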