Logic in Artificial Intelligence (this lecture is an informal introduction only)
Human Logic • Humans are, among other things, information processors. • We acquire information about the world and use this information to further our ends. • One of the strengths of human information processing is our ability to represent and manipulate logical information.
A simple puzzle • We are given some facts about the arrangement of five blocks in a stack, and we are asked to determine their exact arrangement. • The first five sentences below constitute the premises of the problem. • Premises: • The red block is on the green block. • The green block is somewhere above the blue block. • The green block is not on the blue block. • The yellow block is on the green block or the blue block. • There is some block on the black block. • Conclusions: • The red block is on the green block. • The green block is on the yellow block. • The yellow block is on the blue block. • The blue block is on the black block. • The black block is directly on the table.
Proof • Unfortunately, it is not always apparent which conclusions may be safely drawn from a given set of premises. • What's more, even when we are given the conclusions, as in this example, their correctness may not be immediately obvious. • In order to persuade others of a conclusion that we have drawn, it is useful to write down a proof, i.e. a sequence of intermediate conclusions, each of which follows obviously from the premises and the conclusions before it. • As an example, consider the following informal proof: • We are told that the yellow block is on the green block or the blue block. • We are also told that the red block is on the green block. • Given the assumption that there can be only one block on another and that a block cannot be two colors at once, we can conclude that the yellow block is not on the green block. • But then, by elimination, the yellow block must be on the blue block.
Aristotle (384-322 B.C.E.) • The concept of proof, in order to be meaningful, requires that we be able to recognize certain reasoning steps as immediately obvious. • In other words, we need to be familiar with the reasoning “atoms” out of which complex proof “molecules” are built. • One of Aristotle's great contributions to philosophy was his recognition that what makes a step of a proof immediately obvious is its form rather than its content. • It does not matter whether you are talking about blocks or stocks or automobiles. What matters is the structure of the facts with which you are working.
Examples • All Accords are Hondas. • All Hondas are Japanese. • Therefore, all Accords are Japanese. • All borogoves are slithy toves. • All slithy toves are mimsy. • Therefore, all borogoves are mimsy. • What is interesting about these examples is that they share the same reasoning structure, namely the pattern shown below. • What's more, in order to reach these conclusions, we do not need to know anything about Hondas and Accords, or borogoves and slithy toves, or what it means to be mimsy. • All x are y. • All y are z. • Therefore, all x are z.
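One way to see why this pattern is sound is to read "All x are y" as a subset relation; the conclusion is then just transitivity of the subset order. A minimal Python sketch, using made-up example sets:

```python
# Reading "All x are y" as x ⊆ y: the syllogism is transitivity of ⊆ (made-up sets).
accords = {"accord_lx", "accord_ex"}
hondas = accords | {"civic", "cr-v"}
japanese_cars = hondas | {"corolla", "camry"}

assert accords <= hondas             # All Accords are Hondas.
assert hondas <= japanese_cars       # All Hondas are Japanese.
assert accords <= japanese_cars      # Therefore, all Accords are Japanese.
print("the pattern holds on this instance")
```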
Questions • Which patterns are correct? • How many patterns are enough?
Unsound Patterns • Pattern • All x are y. • Some y are z. • Therefore, some x are z. • Good Instance • All Toyotas are Japanese cars. • Some Japanese cars are made in America. • Therefore, some Toyotas are made in America. • Not-So-Good Instance • All Toyotas are cars. • Some cars are Porsches. • Therefore, some Toyotas are Porsches.
Induction - Unsound • I have seen 1000 black ravens. • I have never seen a raven that is not black. • Therefore, every raven is black.
Abduction - Unsound • If there is no fuel, the car will not start. • If there is no spark, the car will not start. • There is a spark. • The car will not start. • Therefore, there is no fuel. • What if the car is in a vacuum chamber? Then the premises could all be true while the conclusion is false. • Abduction, or abductive reasoning, is the process of inference to the best explanation.
Soundness • What distinguishes a correct pattern from one that is incorrect is that it must always lead to correct conclusions, i.e. conclusions that are true whenever the premises are true. • As we will see, this is the defining criterion for what we call deduction.
Deduction - Sound • Logical Entailment/Deduction: • Does not say that the conclusion is true in general • The conclusion is true whenever the premises are true • Leibniz: The intellect is freed of all conception of the objects involved, and yet the computation yields the correct result. • Russell: Math may be defined as the subject in which we never know what we are talking about nor whether what we are saying is true.
Formal Logic • Algebra • 1. Formal language for encoding information • 2. Legal transformations • Logic • 1. Formal language for encoding information • 2. Legal transformations
Algebra Problem • Xavier is three times as old as Yolanda. Xavier's age and Yolanda's age add up to twelve. How old are Xavier and Yolanda? • x - 3y = 0 • x + y = 12 • Subtracting the second equation from the first: -4y = -12 • y = 3 • x = 9
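For comparison with the logic machinery in the next slides, the same two equations can be handed to a computer algebra system. A minimal sketch, assuming the SymPy library is available:

```python
# Solve Xavier's and Yolanda's ages symbolically (SymPy assumed to be installed).
from sympy import symbols, Eq, solve

x, y = symbols("x y")                       # Xavier's and Yolanda's ages
equations = [Eq(x - 3*y, 0), Eq(x + y, 12)]
print(solve(equations, [x, y]))             # {x: 9, y: 3}
```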
Logic Problem • If Amy loves Pat, then Amy loves Quincy. • If it is Monday and raining, then Amy loves Pat or Quincy. • If it is Monday and raining, does Amy love Quincy?
Formalization • Simple sentences: • Amy loves Pat. loves(amy,pat) • Amy loves Quincy. loves(amy,quincy) • It is Monday. ismonday • It is raining. raining • Premises: • If Amy loves Pat, Amy loves Quincy. loves(amy,pat) ⇒ loves(amy,quincy) • If it is Monday and raining, Amy loves Pat or Quincy. ismonday ∧ raining ⇒ loves(amy,pat) ∨ loves(amy,quincy) • Question: If it is Monday and raining, does Amy love Quincy?
Rule of Inference (Resolution) • p1 ∧ ... ∧ pk ⇒ q1 ∨ ... ∨ ql • r1 ∧ ... ∧ rm ⇒ s1 ∨ ... ∨ sn • p1 ∧ ... ∧ pk ∧ r1 ∧ ... ∧ rm ⇒ q1 ∨ ... ∨ ql ∨ s1 ∨ ... ∨ sn • If some pi on the left-hand side of one sentence is the same as some sj on the right-hand side of the other sentence, it is okay to drop the two symbols.
Two Tricks • A sentence ⇒ p asserts that p is true (i.e. a fact) • A sentence p ⇒ asserts that p is false
Modus Ponens - special case of Resolution • p ⇒ q • p • q • monday ⇒ alex is teaching AI • monday • alex is teaching AI • Using the tricks: • p ⇒ q • ⇒ p • p ⇒ p ∨ q • ⇒ q, i.e. q
Solving Amy's love life… • p1 ∧ ... ∧ pk ⇒ q1 ∨ ... ∨ ql • r1 ∧ ... ∧ rm ⇒ s1 ∨ ... ∨ sn • p1 ∧ ... ∧ pk ∧ r1 ∧ ... ∧ rm ⇒ q1 ∨ ... ∨ ql ∨ s1 ∨ ... ∨ sn • If some pi on the left-hand side of one sentence is the same as some sj on the right-hand side of the other sentence, drop the two symbols. • loves(amy,pat) ⇒ loves(amy,quincy) • ismonday ∧ raining ⇒ loves(amy,pat) ∨ loves(amy,quincy) • loves(amy,pat) ∧ ismonday ∧ raining ⇒ loves(amy,quincy) ∨ loves(amy,pat) ∨ loves(amy,quincy) • ismonday ∧ raining ⇒ loves(amy,quincy) ∨ loves(amy,quincy) • ismonday ∧ raining ⇒ loves(amy,quincy)
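The same derivation can be mechanized. Below is a minimal Python sketch of the resolution rule for sentences in this implication form; the clause representation and the resolve helper are illustrative choices, not a full theorem prover:

```python
# A sentence p1 ∧ ... ∧ pk ⇒ q1 ∨ ... ∨ ql is represented as a pair
# (antecedents, consequents) of frozensets of proposition names.

def resolve(c1, c2):
    """Return every clause obtained by resolving c1 against c2 on one symbol."""
    results = []
    for (a1, cons1), (a2, cons2) in ((c1, c2), (c2, c1)):
        for sym in cons1 & a2:                       # right-hand side of one, left-hand side of the other
            results.append((a1 | (a2 - {sym}),       # merge antecedents, dropping the matched symbol
                            (cons1 - {sym}) | cons2))  # merge consequents, dropping the matched symbol
    return results

# Amy's premises in this representation
premise1 = (frozenset({"loves_amy_pat"}), frozenset({"loves_amy_quincy"}))
premise2 = (frozenset({"ismonday", "raining"}),
            frozenset({"loves_amy_pat", "loves_amy_quincy"}))

for ante, cons in resolve(premise1, premise2):
    print(sorted(ante), "=>", sorted(cons))
# prints: ['ismonday', 'raining'] => ['loves_amy_quincy']
```

Because each side is a set, the duplicated loves(amy,quincy) on the right collapses automatically, which the slide above does as a separate step.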
Prolog as a Theorem Prover • Prolog is by far the most widely used logic programming language. • However, Prolog programs are sets of implications whose premise is a conjunction of positive literals and whose conclusion is a single positive literal. • An implication with no premises simply asserts a given proposition – sometimes called a fact. • The second premise about Amy has a disjunction of two literals in its conclusion, so it does not fit this form. Hence, Amy's love life problem can't be solved with Prolog; we need a more powerful theorem prover.
Kinship • Art is the parent of Bob and Bud. • Bob is the parent of Cal and Coe. • A grandparent is a parent of a parent. • ⇒ p(art,bob) • ⇒ p(art,bud) • ⇒ p(bob,cal) • ⇒ p(bob,coe) • p(x,y) ∧ p(y,z) ⇒ g(x,z)
Is Art the Grandparent of Coe? • Let's assume g(art,coe) is not true, i.e. g(art,coe) ⇒. • 1. ⇒ p(art,bob) • 2. ⇒ p(art,bud) • 3. ⇒ p(bob,cal) • 4. ⇒ p(bob,coe) • 5. p(x,y) ∧ p(y,z) ⇒ g(x,z) • 6. g(art,coe) ⇒ By assumption (see above) • 7. p(art,y) ∧ p(y,coe) ⇒ 5,6 (bind variables by unification) • 8. p(bob,coe) ⇒ 1,7 (bind variables by unification) • 9. ⇒ 4,8 • Deriving the empty clause (nothing left) means a contradiction, so the assumption was wrong: Art is the grandparent of Coe.
Who Are the Grandchildren of Art? • 1. ⇒ p(art,bob) • 2. ⇒ p(art,bud) • 3. ⇒ p(bob,cal) • 4. ⇒ p(bob,coe) • 5. p(x,y) ∧ p(y,z) ⇒ g(x,z) • 6. g(art,z) ⇒ goal(z) • 7. p(art,y) ∧ p(y,z) ⇒ goal(z) 5,6 • 8. p(bob,z) ⇒ goal(z) 1,7 • 9. p(bud,z) ⇒ goal(z) 2,7 • 10. ⇒ goal(cal) 3,8 • 11. ⇒ goal(coe) 4,8
Who Are the Grandchildren of Art? In SQL • 1. ⇒ p(art,bob) • 2. ⇒ p(art,bud) • 3. ⇒ p(bob,cal) • 4. ⇒ p(bob,coe) • 5. p(x,y) ∧ p(y,z) ⇒ g(x,z) • 6. g(art,z) ⇒ goal(z) • The rule p(x,y) ∧ p(y,z) ⇒ g(x,z) becomes the view: CREATE VIEW g(x,y) AS SELECT p1.x, p2.y FROM p AS p1, p AS p2 WHERE p1.y = p2.x; • The goal g(art,z) ⇒ goal(z) becomes the query: SELECT g.y FROM g WHERE g.x = 'art';
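The same query can also be expressed as an ordinary join over the parent facts. A minimal Python sketch (illustrative tuple sets standing in for the database tables):

```python
# Parent facts p(x,y) as a set of (parent, child) pairs.
parent = {("art", "bob"), ("art", "bud"), ("bob", "cal"), ("bob", "coe")}

# The rule p(x,y) ∧ p(y,z) ⇒ g(x,z) is a self-join on the middle variable y,
# just like the CREATE VIEW above.
grandparent = {(x, z) for (x, y1) in parent for (y2, z) in parent if y1 == y2}

print(("art", "coe") in grandparent)                      # True: Art is a grandparent of Coe
print(sorted(z for (x, z) in grandparent if x == "art"))  # ['cal', 'coe']
```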
Agatha • ; Someone who lives in Dreadsbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadsbury Mansion and are the only people who live therein. A killer always hates his victim, and is never richer than his victim. Charles hates no one that Agatha hates. Agatha hates everyone except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Agatha hates. No one hates everybody. Aunt Agatha is not the butler. Prove that Aunt Agatha killed herself. • => Killed(Agatha,Agatha) | Killed(butler,Agatha) | Killed(Charles,Agatha) • => Is(x,Agatha) | Is(x,butler) | Is(x,Charles) • Killed(x,y) => Hates(x,y) • Killed(x,y) => ~Richer(x,y) • Hates(Agatha,x) => ~Hates(Charles,x) • => Hates(Agatha,Agatha) • => Hates(Agatha,Charles) • => ~Hates(Agatha,butler) • ~Richer(x,Agatha) => Hates(butler,x) • Hates(Agatha,x) => Hates(butler,x) • => ~Hates(x,Agatha) | ~Hates(x,butler) | ~Hates(x,Charles) • => ~Is(Agatha,butler) • => ~Killed(Agatha,Agatha) (negated conclusion)
Agatha into clausal form • Killed(Agatha(),Agatha()) | Killed(butler(),Agatha()) | Killed(Charles(),Agatha()) • Is(x,Agatha()) | Is(x,butler()) | Is(x,Charles()) • ~Killed(x,y) | Hates(x,y) • ~Killed(x,y) | ~Richer(x,y) • ~Hates(Agatha(),x) | ~Hates(Charles(),x) • Hates(Agatha(),Agatha()) • Hates(Agatha(),Charles()) • ~Hates(Agatha(),butler()) • Richer(x,Agatha()) | Hates(butler(),x) • ~Hates(Agatha(),x) | Hates(butler(),x) • ~Hates(x,Agatha()) | ~Hates(x,butler()) | ~Hates(x,Charles()) • ~Is(Agatha(),butler()) • negated_conclusion • ~Killed(Agatha(),Agatha())
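Because the domain contains only three people, the entailment can also be sanity-checked by brute force before running a prover. Below is a minimal model-checking sketch in Python (an illustrative cross-check, not the resolution proof shown next); the "lives in the mansion" clauses are handled by fixing the domain, and only the ground atoms that appear in the remaining premises are enumerated:

```python
# Enumerate truth assignments to the relevant ground atoms and confirm that
# every assignment satisfying the premises also satisfies Killed(Agatha,Agatha).
from itertools import product

PEOPLE = ["Agatha", "butler", "Charles"]
ATOMS = ([("Killed", x) for x in PEOPLE] +                 # Killed(x, Agatha)
         [("Richer", x) for x in PEOPLE] +                 # Richer(x, Agatha)
         [("Hates", x, y) for x in PEOPLE for y in PEOPLE])

def premises_hold(v):
    killed = lambda x: v[("Killed", x)]
    richer = lambda x: v[("Richer", x)]
    hates = lambda x, y: v[("Hates", x, y)]
    return (any(killed(x) for x in PEOPLE)                                  # someone killed Agatha
            and all(hates(x, "Agatha") for x in PEOPLE if killed(x))        # a killer hates the victim
            and all(not richer(x) for x in PEOPLE if killed(x))             # and is never richer
            and all(not hates("Charles", x) for x in PEOPLE if hates("Agatha", x))
            and hates("Agatha", "Agatha") and hates("Agatha", "Charles")
            and not hates("Agatha", "butler")
            and all(hates("butler", x) for x in PEOPLE if not richer(x))    # butler hates the non-richer
            and all(hates("butler", x) for x in PEOPLE if hates("Agatha", x))
            and all(not all(hates(x, y) for y in PEOPLE) for x in PEOPLE))  # no one hates everybody

models = []
for values in product([False, True], repeat=len(ATOMS)):
    v = dict(zip(ATOMS, values))
    if premises_hold(v):
        models.append(v)

print(len(models) > 0 and all(m[("Killed", "Agatha")] for m in models))    # True: Agatha killed herself
```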
Theo – Theorem Prover • Axioms: • 1#>KilledAgathaAgatha KilledbutlerAgatha KilledCharlesAgatha • 2: IsxAgatha Isxbutler IsxCharles • 3 >~Killedxy Hatesxy • 4 >~Killedxy ~Richerxy • 5 >~HatesAgathax ~HatesCharlesx • 6 >HatesAgathaAgatha • 7 >HatesAgathaCharles • 8: ~HatesAgathabutler • 9 >RicherxAgatha Hatesbutlerx • 10 >~HatesAgathax Hatesbutlerx • 11 >~HatesxAgatha ~Hatesxbutler ~HatesxCharles • 12: ~IsAgathabutler • Negated conclusion: • 13S>~KilledAgathaAgatha • Phase 0 clauses used in proof: • 14#>(13a*1a) KilledbutlerAgatha KilledCharlesAgatha • 15 >(9a*4b) Hatesbutlerx ~KilledxAgatha • 17 >(5b*3b) ~HatesAgathax ~KilledCharlesx • 18#>(17b*14b) ~HatesAgathaAgatha KilledbutlerAgatha • 19S>(18a*6a) KilledbutlerAgatha • 20S>(19a*15b) Hatesbutlerbutler • 21#>(20a*11b) ~HatesbutlerAgatha ~HatesbutlerCharles • 22S>(19a*3a) HatesbutlerAgatha • 23S>(22a*21a) ~HatesbutlerCharles • 24S>(23a*10b) ~HatesAgathaCharles • Phases 1 and 2 clauses used in proof: • 25S>(24a,7a) []
History • Theorem Provers have come up with novel mathematical results. • The most famous of these concerns Robbins Algebra. • In 1933, E. V. Huntington presented the following basis for Boolean algebra: • x + y = y + x. [commutativity] • (x + y) + z = x + (y + z). [associativity] • n(n(x) + y) + n(n(x) + n(y)) = x. [Huntington equation] • Shortly thereafter, Herbert Robbins conjectured that the Huntington equation can be replaced with a simpler one: • n(n(x + y) + n(x + n(y))) = x. [Robbins equation]
The problem was first posed in the 1930s by Dr. Herbert Robbins, a professor of mathematics at Rutgers University in New Jersey. • Robbins said that he worked on the problem for some time, and then passed it on to one of the century's most famous logicians, Dr. Alfred Tarski of the University of California, Berkeley. Tarski, who is now dead, worked on the problem, included it in a book, and handed it out to graduate students and visitors. • Stanley Burris (University of Waterloo), for example, said that Tarski suggested the problem to him in the early 1970s, while he was visiting Berkeley for a couple of months. Tarski, he said, "liked to throw out challenging problems to people passing through."
While mathematicians were batting around Robbins's problem, computer scientists were striving to see if they could get computers to reason. • Among them was Larry Wos of Argonne National Laboratory, who started working on automated reasoning in the 1960s. It was a time when computers were primitive, clunky and slow, and researchers were divided on how to proceed. • On October 10, 1996, after eight days of computation, EQP (an equational theorem prover developed at Argonne alongside OTTER) found a proof of the Robbins conjecture.