Explore logical agents and knowledge-based systems in artificial intelligence. Understand models, entailment, inference rules, and theorem proving. Learn about Propositional Logic, resolution, and efficient model checking.
CS3243: Introduction to Artificial Intelligence Semester 2, 2017/2018
Admin: Midterm Exam (5 Mar) • During lecture time, 2-4pm: starts at 2:05 exactly! • Restricted open book: • AIMA 3rd edition textbook • Lecture notes (minimally annotated) • Tutorial questions & solutions • 1 A4-sized double-sided help sheet • Bring a calculator and a pen! Pencils are not allowed • No make-up exam; worth 20% of CA, please attend!
Logical Agents AIMA Chapter 7
Outline • Knowledge-based agents • Logic in general: models and entailment • Propositional (Boolean) logic • Equivalence, validity, satisfiability • Inference rules and theorem proving • Resolution • Forward chaining • Backward chaining • Efficient model checking
Knowledge-Based Agents • Until now: trying to find an optimal solution via search. • No real model of what the agent knows. • A chess-playing agent does not know that pieces cannot move off the board (it needs to fully search the state space to see which moves are illegal). • This class: represent the agent's domain knowledge using logical formulas.
Knowledge Base (KB) Inference Engine Domain-independent algorithms Knowledge Base Domain-specific content • Knowledge base = set of sentences in a formal language • Declarative approach to building an agent (or other system): • Tell it what it needs to know • Then it can Ask itself what to do; answers should follow from the KB • Agents can be viewed at the knowledge level, i.e., specify knowledge and goals, regardless of implementation • Or at the implementation level, i.e., data structures in the KB and algorithms that manipulate them
What is the best action at time t? What did I perceive at time t? What happened? What have I done?
Wumpus World Performance Measure? Environment? Actuators? Sensors?
Exploring a Wumpus World Agent's view
Exploring a Wumpus World Agent's view No Breeze! No Stench at [1,1]
Exploring a Wumpus World Agent's view
Exploring a Wumpus World Agent's view
Logic in General • Logic: a formal language for knowledge representation and for inferring conclusions • Syntax: defines the sentences in the language • Semantics: defines the "meaning" of sentences, i.e., defines the truth of a sentence in a world • E.g., the language of arithmetic • x + y ≥ 4 is a sentence; x4y+ ≥ is not a sentence • x + y ≥ 4 is true in a world where x = 7, y = 1 • x + y ≥ 4 is false in a world where x = 0, y = 6
Entailment • Models: m is a model of a sentence α if α is true under m. For example, what are the models of a given sentence? • We let M(α) be the set of all models of α • Entailment means that one thing follows from another: α ⊨ β iff every model of α is also a model of β, or equivalently M(α) ⊆ M(β) • For example: x + y = 4 entails 4 = x + y.
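Checking M(α) ⊆ M(β) by enumerating models is the most direct (if expensive) way to test entailment. A minimal Python sketch, assuming two illustrative symbols A and B and the sentences A ∧ B and A ∨ B (chosen for the example, not taken from the slides):

```python
# Enumerate all models over {A, B} and check M(A ∧ B) ⊆ M(A ∨ B),
# i.e. that A ∧ B entails A ∨ B.
from itertools import product

symbols = ["A", "B"]
models = [dict(zip(symbols, values)) for values in product([True, False], repeat=len(symbols))]

models_of_conj = [m for m in models if m["A"] and m["B"]]   # M(A ∧ B)
models_of_disj = [m for m in models if m["A"] or m["B"]]    # M(A ∨ B)

print(all(m in models_of_disj for m in models_of_conj))     # True, so A ∧ B ⊨ A ∨ B
```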
Entailment in the Wumpus World • Situation after detecting nothing in [1,1], moving right, breeze in [2,1] • Consider possible models for the KB assuming only pits in the three unvisited squares [1,2], [2,2], [3,1] • 3 Boolean choices ⇒ 8 possible models
Wumpus Models • KB = wumpus-world rules + percepts
Wumpus Models • KB = wumpus-world rules + percepts • α1 = "[1,2] is safe"; KB ⊨ α1, proved by model checking • The agent can infer that [1,2] is safe
Wumpus Models • KB = wumpus-world rules + percepts • α2 = "[2,2] is safe"; KB ⊭ α2 • The agent cannot infer that [2,2] is safe (or unsafe)!
Inference • Define KB ⊢i α to be "sentence α is derived from KB by inference algorithm i" • Soundness: i is sound if KB ⊢i α implies KB ⊨ α. "Don't infer nonsense" • Completeness: i is complete if KB ⊨ α implies KB ⊢i α. "If it's entailed, it can be inferred" • Inference problem: given KB and a sentence α, can α be derived from KB? • "Entailment is like the needle (α) being in the haystack (KB), and inference is like finding it" • Is a given inference algorithm sound and complete?
Completeness: i is complete if whenever KB ⊨ α, it is also true that KB ⊢i α • An incomplete inference algorithm cannot reach all possible conclusions • Analogous to completeness in search (Chapter 3) • Picture: the sentences derived from the KB using i form a subset of all sentences entailed by the KB, which in turn contains the original KB
Propositional Logic: Syntax • A simple logic that illustrates the basic ideas • Defines the allowable sentences • Atomic sentences are proposition symbols, e.g., P1, P2 • Logical connectives construct complex sentences from simpler ones: • If S is a sentence, ¬S is a sentence (negation) • If S1 and S2 are sentences: • S1 ∧ S2 is a sentence (conjunction) • S1 ∨ S2 is a sentence (disjunction) • S1 ⇒ S2 is a sentence (implication) • S1 ⇔ S2 is a sentence (biconditional)
Propositional Logic: Semantics • A model is then just a truth assignment to the proposition symbols. • If a model has n symbols, how many truth assignments are there? • The truth value of every other sentence is derived according to the rules for the connectives.
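As an illustration of how the truth of a complex sentence is derived from a model, here is a small Python sketch; the tuple-based sentence representation is an assumption made for the example, not course code:

```python
# Sentences are nested tuples; a model is a dict mapping symbol names to truth values.

def pl_true(sentence, model):
    """Evaluate a propositional sentence under a model (truth assignment)."""
    if isinstance(sentence, str):            # atomic proposition symbol, e.g. "P12"
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective: {op}")

# B1,1 ⇔ (P1,2 ∨ P2,1) is true in the model {B11: True, P12: False, P21: True}
sentence = ("iff", "B11", ("or", "P12", "P21"))
print(pl_true(sentence, {"B11": True, "P12": False, "P21": True}))   # True
```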
Knowledge Base for Wumpus World • Pi,j: there is a pit in [i, j] • Bi,j: there is a breeze in [i, j] • Rules: • "Pits cause breezes in adjacent squares", e.g., B1,1 ⇔ (P1,2 ∨ P2,1), B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) • KB is true iff the conjunction of all its sentences (percepts and rules) is true
Inference • Given a knowledge base, infer something non-obvious about the world. • Mimic logical human reasoning • After exploring 3 squares, we have some understanding of the Wumpus world • Inference: deriving new knowledge from percepts • Given KB and α, we want to know whether KB ⊨ α
Truth Table for Inference • Does KB entail α1? • Can we infer that [1,2] is safe from pits?
Inference by Truth-Table Enumeration • Depth-first enumeration of all models is sound and complete • For n symbols, time complexity is O(2^n) and space complexity is O(n) • Check all possible truth assignments
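A hedged sketch of truth-table enumeration in Python; the KB and query are a cut-down version of the wumpus sentences, written as Python predicates over a model (an assumption for the example, not the book's TT-Entails pseudocode):

```python
# KB: ¬P1,1 ∧ ¬B1,1 ∧ (B1,1 ⇔ (P1,2 ∨ P2,1));  query α: ¬P1,2 ("[1,2] has no pit")
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Return True iff KB ⊨ α: α holds in every model in which KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False             # counterexample: a model of KB where α is false
    return True

kb    = lambda m: (not m["P11"]) and (not m["B11"]) and (m["B11"] == (m["P12"] or m["P21"]))
alpha = lambda m: not m["P12"]
print(tt_entails(kb, alpha, ["P11", "B11", "P12", "P21"]))   # True
```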
Validity and Satisfiability • A sentence is valid if it is true in all models, e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B • Validity is connected to entailment via the Deduction Theorem: KB ⊨ α iff (KB ⇒ α) is valid • A sentence is satisfiable if it is true in some model, e.g., A ∨ B, C • A sentence is unsatisfiable if it is true in no models, e.g., A ∧ ¬A • Satisfiability is connected to entailment via the following: KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable
Applying Inference Rules • Equivalent to a search problem • Initial state: the initial KB • States: KBs • Actions: inference rules • Transition model: the result of an action is to add the newly derived sentence to the current KB • Goal: the KB contains the sentence to prove • Examples of inference rules • And-Elimination (A.E.): from α ∧ β, infer α • Modus Ponens (M.P.): from α ⇒ β and α, infer β • Logical equivalences, e.g., (α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α))
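To make the search view concrete, here is a small illustrative sketch that repeatedly applies And-Elimination and Modus Ponens until the KB stops growing; the symbols P, Q, R, S and the tuple syntax are assumptions for the example:

```python
# Sentences are either proposition symbols (str), ("and", a, b), or ("implies", a, b).

def closure(kb):
    """Close a KB under And-Elimination and Modus Ponens for this tiny syntax."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for s in list(kb):
            new = set()
            if isinstance(s, tuple) and s[0] == "and":                    # A.E.
                new |= {s[1], s[2]}
            if isinstance(s, tuple) and s[0] == "implies" and s[1] in kb:  # M.P.
                new.add(s[2])
            if not new <= kb:
                kb |= new
                changed = True
    return kb

# P ∧ Q, P ⇒ R, R ⇒ S: the goal S is reachable by A.E. (giving P) then M.P. twice
kb = [("and", "P", "Q"), ("implies", "P", "R"), ("implies", "R", "S")]
print("S" in closure(kb))   # True
```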
Resolution for Conjunctive Normal Form (CNF) • CNF: a conjunction of "disjunctions of literals" (clauses) • E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D) • Resolution: if a literal appears in one clause and its negation appears in another, the two clauses can be combined into a single clause containing all their remaining literals (delete duplicate literals as necessary) • Resolution is sound and complete for propositional logic
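A minimal sketch of a single resolution step, assuming clauses are represented as sets of string literals with "~" marking negation (this representation is chosen for the example, not prescribed by the slides):

```python
# Resolving two clauses on a complementary pair yields their union minus that pair;
# duplicates vanish automatically because clauses are sets.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all clauses obtainable by resolving c1 with c2."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

# (¬P2,1 ∨ B1,1) resolved with (¬B1,1) gives (¬P2,1)
print(resolve(frozenset({"~P21", "B11"}), frozenset({"~B11"})))   # [frozenset({'~P21'})]
```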
Conversion to CNF: the Rules • Eliminate ⇔: convert α ⇔ β to (α ⇒ β) ∧ (β ⇒ α) • Eliminate ⇒: convert α ⇒ β to ¬α ∨ β • Move ¬ inwards using De Morgan's laws and double negation: • convert ¬(α ∧ β) to ¬α ∨ ¬β • convert ¬(α ∨ β) to ¬α ∧ ¬β • convert ¬¬α to α • Distribute ∨ over ∧: convert α ∨ (β ∧ γ) to (α ∨ β) ∧ (α ∨ γ)
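As a worked example of these rules, the wumpus rule B1,1 ⇔ (P1,2 ∨ P2,1) from the earlier slides is converted step by step in the comments below, followed by a brute-force equivalence check (the lambda encoding is just for the check):

```python
# Worked example, with b = B1,1, p = P1,2, q = P2,1:
#
#   b ⇔ (p ∨ q)
#   ≡ (b ⇒ (p ∨ q)) ∧ ((p ∨ q) ⇒ b)                 eliminate ⇔
#   ≡ (¬b ∨ p ∨ q) ∧ (¬(p ∨ q) ∨ b)                  eliminate ⇒
#   ≡ (¬b ∨ p ∨ q) ∧ ((¬p ∧ ¬q) ∨ b)                 De Morgan
#   ≡ (¬b ∨ p ∨ q) ∧ (¬p ∨ b) ∧ (¬q ∨ b)             distribute ∨ over ∧
#
# Brute-force check that the hand-converted CNF is equivalent to the original:
from itertools import product

original = lambda b, p, q: b == (p or q)
cnf      = lambda b, p, q: ((not b) or p or q) and ((not p) or b) and ((not q) or b)

print(all(original(*m) == cnf(*m) for m in product([True, False], repeat=3)))  # True
```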
Resolution Algorithm • Proof by contradiction: show that KB ∧ ¬α is unsatisfiable • Keep resolving pairs of clauses and adding the resolvents until either the empty clause is derived or no new clause can be added • What does an empty clause imply? It is a disjunction of no literals, i.e., a contradiction, so KB ⊨ α
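A hedged Python sketch of this loop, reusing the clause representation from the resolution-step sketch above; the example KB encodes B1,1 ⇔ (P1,2 ∨ P2,1) and ¬B1,1 in CNF, and the query is α = ¬P1,2:

```python
# Clause representation and helpers as in the resolution-step sketch above.
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    return [frozenset((c1 - {l}) | (c2 - {negate(l)})) for l in c1 if negate(l) in c2]

def pl_resolution(kb_clauses, negated_query_clauses):
    """Return True iff resolution derives the empty clause from KB ∧ ¬α."""
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:            # empty clause: contradiction, so KB ⊨ α
                    return True
                new.add(resolvent)
        if new <= clauses:                   # fixed point: nothing new, so KB ⊭ α
            return False
        clauses |= new

# KB in CNF: B1,1 ⇔ (P1,2 ∨ P2,1) and ¬B1,1;  query α = ¬P1,2, negated to P1,2
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
print(pl_resolution(kb, [frozenset({"P12"})]))   # True: KB ⊨ ¬P1,2
```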
Resolution Example • Negate the query α and add it to the clauses (proof by contradiction)
Forward and Backward Chaining • Horn Form (restricted) • KB = conjunction of Horn clauses • Horn clause = definite clause or goal clause • Definite clause: a single proposition symbol, or (a conjunction of symbols) ⇒ symbol • Goal clause: (a conjunction of symbols) ⇒ False • e.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B) • Inference with Horn clauses: forward chaining or backward chaining algorithms. Easy to interpret, run in linear time • Inference rule is Modus Ponens (for Horn Form): sound and complete for Horn KBs
Forward Chaining (FC) • KB of Horn clauses viewed as an AND-OR graph • Idea: fire any rule whose premise is satisfied in the KB, add its conclusion to the KB, repeat until the query is found
Forward Chaining (FC) Algorithm • For every rule c, let count[c] be the number of symbols in c's premise. • For every symbol s, let inferred[s] initially be false. • Let the agenda be a queue of symbols, initially containing all symbols known to be true in the KB. • While the agenda is not empty: • pop a symbol p from the agenda; if it is the query, we're done • if inferred[p] is false, set inferred[p] = true and, for each clause c such that p is in the premise of c, decrement count[c]; if count[c] reaches 0, add c's conclusion to the agenda. • Forward chaining is sound and complete for Horn KBs (see the sketch below)
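A hedged Python sketch of this procedure; the (premise, conclusion) clause representation and the example KB (an AIMA-style Horn KB) are assumptions made for illustration:

```python
# Horn clauses are (premise_symbols, conclusion) pairs; facts are symbols known true.
from collections import deque

def fc_entails(clauses, facts, query):
    count = {i: len(premise) for i, (premise, _) in enumerate(clauses)}
    inferred = set()                       # symbols already processed
    agenda = deque(facts)                  # queue of symbols known to be true
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premise, conclusion) in enumerate(clauses):
            if p in premise:
                count[i] -= 1
                if count[i] == 0:          # all premises satisfied: fire the rule
                    agenda.append(conclusion)
    return False

# AIMA-style example KB: P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L; facts A, B
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(fc_entails(clauses, ["A", "B"], "Q"))   # True
```

Each symbol is processed at most once and each clause counter is decremented at most once per premise symbol, which is where the linear running time comes from.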
Forward Chaining Example • Trace of FC over 8 iterations on the AND-OR graph (figures omitted)
Proof of Completeness • FC derives every atomic sentence entailed by a Horn KB • Suppose FC reaches a fixed point where no new atomic sentences are derived • Consider the final state as a model m that assigns true to the inferred symbols and false to the rest • Every clause a1 ∧ … ∧ ak ⇒ b in the original KB is true in m: otherwise a1, …, ak would all be inferred (true in m) while b is not, and FC would still fire that rule, contradicting the fixed point • Hence, m is a model of KB • If KB ⊨ q, then q is true in every model of KB, including m, and q is true in m exactly when FC has inferred it
Backward Chaining (BC) • A backtracking depth-first search algorithm • Idea: work backwards from the query q • To prove q by BC: • check if q is known already, or • prove by BC all premises of some rule concluding q • Avoid loops: check if a new subgoal is already on the goal stack • Avoid repeated work: check if a new subgoal • has already been proved true, or • has already failed
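A hedged sketch of the same idea in Python, using the (premise, conclusion) representation from the forward-chaining sketch; loop avoidance is handled with a goal stack, and the repeated-work memoization is omitted to keep the sketch short:

```python
# Same (premise, conclusion) Horn-clause representation as in the FC sketch.

def bc_entails(clauses, facts, query, stack=frozenset()):
    if query in facts:
        return True
    if query in stack:                     # subgoal already on the goal stack: avoid the loop
        return False
    for premise, conclusion in clauses:
        if conclusion == query:
            if all(bc_entails(clauses, facts, p, stack | {query}) for p in premise):
                return True                # all premises of this rule proved
    return False

clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(bc_entails(clauses, {"A", "B"}, "Q"))    # True
```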
Backward Chaining Example • Hit a loop! Try something else
Efficient Propositional Model Checking • Two families of efficient algorithms for propositional model checking: • Complete backtracking search algorithms • DPLL algorithm (Davis, Putnam, Logemann, Loveland) • Incomplete local search algorithms • WalkSAT algorithm • These algorithms test a sentence for satisfiability and can therefore be used for inference • Recall: satisfiability is connected to entailment via KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable
DPLL Algorithm • How are DPLL and CSP backtracking search related? • Determines whether a given CNF formula is satisfiable • Improvements over truth-table enumeration: • Early termination: (a) a clause is true if any literal in it is true; (b) the formula is false if any clause is false • Pure symbol heuristic: a pure symbol always appears with the same "sign" in all clauses, e.g., in (A ∨ ¬B), (¬B ∨ ¬C), (C ∨ A), A and B are pure and C is impure. Make a pure symbol's literal true: doing so can never make a clause false. Ignore clauses that are already true in the model constructed so far (compare the least-constraining-value heuristic) • Unit clause heuristic: a unit clause has only one literal, and that literal must be made true (compare the most-constrained-variable heuristic)
DPLL Algorithm • Early termination • Try to apply the pure-symbol and unit-clause heuristics • If neither applies, branch on a symbol and backtrack (brute force)
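Putting the pieces together, a hedged Python sketch of DPLL over the set-of-literals clause representation used earlier; variable names and the example formula are illustrative:

```python
# Clauses are frozensets of string literals, "~" marking negation; an assignment
# is the set of literals currently assumed true.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def dpll(clauses, assignment):
    # Simplify: drop satisfied clauses, remove literals made false by the assignment
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                                   # clause already true
        remaining = frozenset(l for l in clause if negate(l) not in assignment)
        if not remaining:
            return False                               # early termination: a clause is false
        simplified.append(remaining)
    if not simplified:
        return True                                    # all clauses satisfied
    literals = {l for clause in simplified for l in clause}
    # Pure symbol heuristic: a literal whose negation never appears can be made true
    for lit in literals:
        if negate(lit) not in literals:
            return dpll(simplified, assignment | {lit})
    # Unit clause heuristic: a single-literal clause forces that literal
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment | set(clause))
    # Otherwise branch on some literal (backtracking / brute force)
    lit = next(iter(literals))
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {negate(lit)}))

# (A ∨ ¬B) ∧ (¬B ∨ ¬C) ∧ (C ∨ A) is satisfiable (e.g. A true, B false)
print(dpll([frozenset({"A", "~B"}), frozenset({"~B", "~C"}), frozenset({"C", "A"})], set()))  # True
```

Real SAT solvers build on this skeleton with better branching heuristics and clause learning.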