CSM6120 Introduction to Intelligent Systems Propositional and Predicate Logic 2
Syntax and semantics • Syntax • Rules for constructing legal sentences in the logic • Which symbols we can use (English: letters, punctuation) • How we are allowed to combine symbols • Semantics • How we interpret (read) sentences in the logic • Assigns a meaning to each sentence • Example: “All lecturers are seven foot tall” • A valid sentence (syntax) • And we can understand the meaning (semantics) • This sentence happens to be false (there is a counterexample)
Propositional logic • Syntax • Propositions, e.g. “it is wet” • Connectives: and, or, not, implies, iff (equivalent) • Brackets (), T (true) and F (false) • Semantics (Classical/Boolean) • Define how connectives affect truth • “P and Q” is true if and only if P is true and Q is true • Use truth tables to work out the truth of statements
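As a small illustration (a Python sketch of my own, not part of the slides), the truth-table semantics of a connective can be computed by enumerating every assignment:

from itertools import product

# Truth table for "P and Q" and for "P implies Q" (implication written as (not P) or Q)
for P, Q in product([True, False], repeat=2):
    print(P, Q, '| P and Q =', P and Q, '| P implies Q =', (not P) or Q)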
Important concepts • Soundness, completeness, validity (tautologies) • Logical equivalences • Models/interpretations • Entailment • Inference • Clausal forms (CNF, DNF)
Resolution algorithm • Given a formula in conjunctive normal form, repeat: • Find two clauses with complementary literals, • Apply resolution, • Add the resulting clause (if not already there) • If the empty clause results, the formula is not satisfiable • It must have been obtained by resolving some P and NOT(P) • Otherwise, if we get stuck (and we eventually will, since only finitely many clauses can be formed), the formula is guaranteed to be satisfiable
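A minimal Python sketch of this loop (my own illustration, not from the slides): a clause is a frozenset of string literals, with negation written as a leading '-':

def resolve(c1, c2):
    # Return every resolvent of two clauses that contain complementary literals
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith('-') else '-' + lit
        if complement in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def resolution_satisfiable(clauses):
    # Repeat resolution until the empty clause appears (unsatisfiable) or nothing new is added
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for resolvent in resolve(c1, c2):
                    if not resolvent:
                        return False        # empty clause: not satisfiable
                    new.add(resolvent)
        if new.issubset(clauses):
            return True                     # stuck: guaranteed satisfiable
        clauses |= new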
Example • Our knowledge base: 1) FriendWetBecauseOfSprinklers 2) NOT(FriendWetBecauseOfSprinklers) OR SprinklersOn • Can we infer SprinklersOn? We add: 3) NOT(SprinklersOn) • From 2) and 3), we get 4) NOT(FriendWetBecauseOfSprinklers) • From 4) and 1), we get the empty clause, so SprinklersOn is entailed
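Using the resolution sketch above (again only an illustration), entailment can be tested by adding the negated query and checking satisfiability:

kb = [frozenset({'FriendWetBecauseOfSprinklers'}),
      frozenset({'-FriendWetBecauseOfSprinklers', 'SprinklersOn'})]
# KB entails SprinklersOn iff KB plus NOT(SprinklersOn) is unsatisfiable
print(resolution_satisfiable(kb + [frozenset({'-SprinklersOn'})]))   # False, so SprinklersOn is entailed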
Horn clauses • A Horn clause is a CNF clause with at most one positive literal • The positive literal is called the head, the negative literals are the body • Prolog: head :- body1, body2, body3, … • English: “To prove the head, prove body1, …” • Implication: If (body1 and body2 and …) then head • Horn clauses form the basis of forward and backward chaining • The Prolog language is based on Horn clauses • Modus ponens: complete for Horn KBs • Deciding entailment with Horn clauses is linear in the size of the knowledge base
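For example, the rule used later in the slides, “IF hot AND smoky THEN fire”, is the Horn clause NOT(hot) OR NOT(smoky) OR fire (exactly one positive literal, fire), written in Prolog as fire :- hot, smoky.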
Reasoning with Horn clauses • Chaining: simple methods used by most inference engines to produce a line of reasoning • Forward chaining (FC) • For each new piece of data, generate all new facts, until the desired fact is generated • Data-directed reasoning • Backward chaining (BC) • To prove the goal, find a clause that contains the goal as its head, and prove the body recursively • (Backtrack when the wrong clause is chosen) • Goal-directed reasoning
Forward chaining algorithm • Read the initial facts • Begin • Filter phase: find the fireable rules (those whose conditions match the facts) • While the set of fireable rules is not empty AND the end condition is not reached DO • Choice phase: analyse the conflict set and choose the most appropriate rule • Apply the chosen rule (adding its conclusion to the facts) • End do • End
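A minimal Python sketch of this loop for propositional rules (my own illustration, not from the slides; the choice phase here is simply "take the first rule that can fire and adds something new"):

def forward_chain(rules, facts):
    # rules: list of (premises, conclusion) pairs; facts: set of known atoms
    facts = set(facts)
    fired = True
    while fired:                                  # stop when no rule adds anything new
        fired = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)             # apply the chosen rule
                fired = True
    return facts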
Forward chaining example (1) • Suppose we have three rules: R1: If A and B then D (= A ∧ B → D) R2: If B then C R3: If C and D then E • If facts A and B are present, we infer D from R1 and infer C from R2 • With D and C inferred, we now infer E from R3
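Running the forward chaining sketch above on these three rules (again only an illustration) reproduces this trace:

rules = [({'A', 'B'}, 'D'),   # R1
         ({'B'}, 'C'),        # R2
         ({'C', 'D'}, 'E')]   # R3
print(forward_chain(rules, {'A', 'B'}))   # contains A, B, C, D and E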
Forward chaining example (2) • Rules: R1: IF hot AND smoky THEN fire, R2: IF alarm-beeps THEN smoky, R3: IF fire THEN switch-sprinkler • Facts: alarm-beeps, hot • First cycle: R2 fires, adding smoky • Second cycle: R1 fires, adding fire • Third cycle: R3 fires, giving the action switch-sprinkler
Forward chaining example (3) • Fire any rule whose premises are satisfied in the KB • Add its conclusion to the KB until the query is found
Forward chaining • AND-OR Graph • Multiple links joined by an arc indicate conjunction – every link must be proved • Multiple links without an arc indicate disjunction – any link can be proved
Forward chaining If we want to apply forward chaining to this graph, we first process the known leaves A and B, and then allow inference to propagate up the graph as far as possible
Backward chaining • Idea - work backwards from the query q • To prove q by BC, • Check if q is known already, or • Prove by BC all premises of some rule concluding q (i.e. try to prove each of that rule’s conditions) • Avoid loops • Check if new subgoal is already on the goal stack • Avoid repeated work: check if new subgoal • Has already been proved true, or • Has already failed
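A minimal recursive Python sketch of this (my own illustration; the frozenset argument plays the role of the goal stack used to avoid loops, and for brevity already-proved or already-failed subgoals are not cached):

def backward_chain(rules, facts, goal, stack=frozenset()):
    # rules: list of (premises, conclusion) pairs; facts: set of known atoms
    if goal in facts:
        return True
    if goal in stack:                      # new subgoal already on the goal stack: avoid loops
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(rules, facts, p, stack | {goal}) for p in premises):
            return True
    return False                           # no rule concluding the goal could be proved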
Backward chaining example • The same three rules: R1: If A and B then D R2: If B then C R3: If C and D then E • If E is our hypothesis (goal), R3 reduces it to the subgoals C and D • R2 reduces C to B, and R1 reduces D to A and B; since A and B are known facts, E is proved
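With the backward chaining sketch above, the same hypothesis can be checked (illustration only):

rules = [({'A', 'B'}, 'D'), ({'B'}, 'C'), ({'C', 'D'}, 'E')]
print(backward_chain(rules, {'A', 'B'}, 'E'))   # True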
Example • Rules: R1: IF hot AND smoky THEN fire, R2: IF alarm-beeps THEN smoky, R3: IF fire THEN switch-sprinkler • Facts (evidence): alarm-beeps, hot • Hypothesis: Should I switch the sprinklers on? • IF fire: use R3 (to prove switch-sprinkler, prove fire) • IF hot, IF smoky: use R1 (hot is a known fact; prove smoky) • IF alarm-beeps: use R2 (alarm-beeps is a known fact, so the hypothesis is proved)
Comparison • Forward chaining: from facts to valid conclusions • Good when: less clear hypothesis, very large number of possible conclusions, true facts known at start • Backward chaining: from hypotheses to relevant facts • Good when: limited number of (clear) hypotheses, determining truth of facts is expensive, large number of possible facts, mostly irrelevant
Forward vs. backward chaining • FC is data-driven • Automatic, unconscious processing • E.g. routine decisions • May do lots of work that is irrelevant to the goal • BC is goal-driven • Appropriate for problem solving • E.g. “Where are my keys?”, “How do I start the car?” • The complexity of BC can be much less than linear in the size of the KB
Application • Wide use in expert systems • Backward chaining: diagnosis systems • Start with a set of hypotheses and try to prove each one, asking additional questions of the user when a fact is unknown • Forward chaining: design/configuration systems • See what can be done with available components
Checking models • Sometimes we just want to find any model that satisfies the KB • Propositional satisfiability: Determine if an input propositional logic sentence (in CNF) is satisfiable • We use a backtracking search to find a model • DPLL, WalkSAT, etc – lots of algorithms out there! • This is similar to finding solutions in constraint satisfaction problems • More about CSPs in a later module
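For illustration only, a bare-bones backtracking satisfiability check in Python in the spirit of DPLL (clauses are sets of string literals as in the resolution sketch earlier; a real DPLL implementation adds unit propagation and pure-literal elimination):

def dpll(clauses, assignment=None):
    # Backtracking search for a model of a CNF formula (list of sets of string literals)
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(lit.lstrip('-')) == (not lit.startswith('-')) for lit in clause):
            continue                                   # clause already satisfied
        remaining = {lit for lit in clause if assignment.get(lit.lstrip('-')) is None}
        if not remaining:
            return None                                # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment                              # every clause satisfied: model found
    var = next(iter(simplified[0])).lstrip('-')        # branch on some unassigned variable
    for value in (True, False):
        model = dpll(simplified, {**assignment, var: value})
        if model is not None:
            return model
    return None

print(dpll([{'P', 'Q'}, {'-P'}]))                      # e.g. {'P': False, 'Q': True}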
Predicate logic • Predicate logic is an extension of propositional logic that permits concise reasoning about whole classes of entities • Also termed Predicate Calculus or First Order Logic • The language Prolog is built on a subset of this • Propositional logic treats simple propositions (sentences) as atomic entities • In contrast, predicate logic distinguishes the subject of a sentence from its predicate…
Subjects and predicates • In the sentence “The dog is sleeping”: • The phrase “the dog” denotes the subject - the object or entity that the sentence is about • The phrase “is sleeping” denotes the predicate - a property that is true of the subject • In predicate logic, a predicate is modelled as a function P(·) from objects to propositions • i.e. a function that returns TRUE or FALSE • P(x) = “x is sleeping” (where x is any object) • or, Is_sleeping(dog) • Tree(a) is true if a = oak, false if a = daffodil
More about predicates • Convention: Lowercase variables x, y, z... denote objects/entities; uppercase variables P, Q, R… denote propositional functions (predicates) • Keep in mind that the result of applying a predicate P to an object x is the proposition P(x) • But the predicate P itself (e.g. P=“is sleeping”) is not a proposition (not a complete sentence) • E.g. if P(x) = “x is a prime number”, P(3) is the proposition “3 is a prime number”
Propositional functions • Predicate logic generalizes the grammatical notion of a predicate to also include propositional functions of any number of arguments, each of which may take any grammatical role that a noun can take • E.g. let P(x,y,z) = “x gave y the grade z”, • then if x = “Mike”, y = “Mary”, z = “A” • P(x,y,z) = “Mike gave Mary the grade A”
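In programming terms (a Python analogy of my own, not part of the slides), a predicate is just a function that returns a Boolean, and applying it to objects yields a proposition:

def is_sleeping(x):
    # Hypothetical predicate: true for the objects we record as sleeping
    return x in {'dog', 'cat'}

def gave_grade(x, y, z):
    # Hypothetical three-place predicate: "x gave y the grade z"
    return (x, y, z) in {('Mike', 'Mary', 'A')}

print(is_sleeping('dog'))               # True: the proposition "the dog is sleeping"
print(gave_grade('Mike', 'Mary', 'A'))  # True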
Reasoning • KB: (1) student(S)∧studies(S,ai) → studies(S,prolog) (2) student(T)∧studies(T,expsys) → studies(T,ai) (3) student(joe) (4) studies(joe,expsys) • (1) and (2) are rules, (3) and (4) are facts • With the information in (3) and (4), rule (2) can fire (but rule (1) can't), by matching (unifying) joe with T • This gives a new piece of knowledge, studies(joe, ai) • With this new knowledge, rule (1) can now fire • joe is unified with S
Reasoning • KB: (1) student(S)∧studies(S,ai) → studies(S,prolog) (2) student(T)∧studies(T,expsys) → studies(T,ai) (3) student(joe) (4) studies(joe,expsys) • We can apply modus ponens to this twice (FC), to get • studies(joe, prolog) • This can then be added to our knowledge base as a new fact
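A compact Python sketch of these two forward-chaining steps (my own illustration; upper-case strings stand for variables, and the simple pattern matcher below stands in for full unification since the terms here are flat, with no nested structures):

from itertools import product

def match(pattern, fact, bindings):
    # Try to extend bindings so that pattern matches fact; return None on failure
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p[:1].isupper():              # a variable such as 'S' or 'T'
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:                     # constants and predicate names must be equal
            return None
    return bindings

def forward_chain_fo(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Try every way of matching the premises against known facts
            for combo in product(facts, repeat=len(premises)):
                bindings = {}
                for premise, fact in zip(premises, combo):
                    bindings = match(premise, fact, bindings)
                    if bindings is None:
                        break
                if bindings is not None:
                    new_fact = tuple(bindings.get(t, t) for t in conclusion)
                    if new_fact not in facts:
                        facts.add(new_fact)   # modus ponens with the found bindings
                        changed = True
    return facts

rules = [([('student', 'S'), ('studies', 'S', 'ai')], ('studies', 'S', 'prolog')),
         ([('student', 'T'), ('studies', 'T', 'expsys')], ('studies', 'T', 'ai'))]
facts = {('student', 'joe'), ('studies', 'joe', 'expsys')}
print(forward_chain_fo(rules, facts))
# adds ('studies', 'joe', 'ai') and then ('studies', 'joe', 'prolog')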
Clause form • We can express any predicate calculus statement in clause form • This enables us to work with just simple OR (disjunct: ∨) operators rather than having to deal with implication (→) and AND (∧) • thus allowing us to work towards a resolution proof
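The conversion relies on the standard equivalence P → Q ≡ ¬P ∨ Q together with De Morgan's law; applied to rule (1) of the example on the next slide: student(S) ∧ studies(S,ai) → studies(S,prolog) ≡ ¬(student(S) ∧ studies(S,ai)) ∨ studies(S,prolog) ≡ ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)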
Example • Let's put our previous example in clause form: (1) ¬student(S) ∨ ¬studies(S, ai) ∨ studies(S,prolog) (2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai) (3) student(joe) (4) studies(joe,expsys) • Is there a solution to studies(S, prolog)? • = “is there someone who studies Prolog?” • Negate it... ¬studies(S, prolog)
Example (1) ¬student(S) ∨ ¬studies(S, ai) ∨ studies(S,prolog) (2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai) (3) student(joe) (4) studies(joe,expsys) • Start from the negated query ¬studies(S, prolog) • Resolve with clause (1): ¬student(S) ∨ ¬studies(S,ai) • Resolve with clause (2) (S = T): ¬student(S) ∨ ¬studies(S,expsys) • Resolve with clause (4) (S = joe): ¬student(joe) • Finally, resolve this with clause (3), and we have nothing left: the empty clause
Example • These are all the same logically... (1) student(S) ∧ studies(S,ai) → studies(S,prolog) (2) student(T) ∧ studies(T,expsys) → studies(T,ai) (3) student(joe) (4) studies(joe,expsys) (1) ¬student(S) ∨ ¬studies(S, ai) ∨ studies(S,prolog) (2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai) (3) student(joe) (4) studies(joe,expsys) (1) studies(S,prolog) ← student(S) ∧ studies(S,ai) (2) studies(T,ai) ← student(T) ∧ studies(T,expsys) (3) student(joe) ← (4) studies(joe,expsys) ←
Note the last two... • Joe is a student, and is studying expsys, so it's not dependent on anything – it's a statement/fact • So there is nothing to the right of the ← (3) student(joe) ← (4) studies(joe,expsys) ←
Universes of discourse • The power of distinguishing objects from predicates is that it lets you state things about many objects at once • E.g., let P(x) = “x+1>x”. We can then say, “For any number x, P(x) is true” instead of (0+1>0) ∧ (1+1>1) ∧ (2+1>2) ∧ ... • The collection of values that a variable x can take is called x’s universe of discourse (or simply ‘universe’)
Quantifier expressions • Quantifiers provide a notation that allows us to quantify (count) how many objects in the universe of discourse satisfy a given predicate • “∀” is the FORALL or universal quantifier • ∀x P(x) means for all x in the universe, P(x) holds • “∃” is the EXISTS or existential quantifier • ∃x P(x) means there exists an x in the universe (that is, 1 or more) such that P(x) is true
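Over a finite universe these quantifiers can be checked directly; a small Python illustration (the universe and the second predicate are my own choices):

universe = range(10)                    # a finite universe of discourse
P = lambda x: x + 1 > x                 # P(x) = "x + 1 > x"
Q = lambda x: x * x == 9                # Q(x) = "x squared equals 9"

print(all(P(x) for x in universe))      # forall x P(x)  -> True
print(any(Q(x) for x in universe))      # exists x Q(x)  -> True (x = 3)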