
Axioms




Presentation Transcript


  1. Axioms • Let W be the set of statements known to be true in a domain • An axiom is a rule presumed to be true • An axiomatic set is a collection of axioms • Given an axiomatic set A, the domain theory of A, domTh(A), is the collection of all statements that can be derived from A

  2. Axioms (II) • A problem frequently studied by mathematicians: given W, can we construct a (finite) axiomatic set A such that domTh(A) = W? • Potential difficulties: • Inconsistency: domTh(A) ⊈ W, i.e., A derives statements that are not true in the domain • Incompleteness: W ⊈ domTh(A), i.e., some true statements cannot be derived from A • Theorem (Gödel): any axiomatic set for arithmetic is either inconsistent or incomplete
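To make domTh(A) concrete, here is a minimal Python sketch, assuming the axioms are encoded as propositional facts and Horn rules (premises → conclusion); the encoding and the name dom_th are illustrative choices, not something from the slides.

```python
# Minimal sketch: domTh(A) for propositional Horn-clause axioms.
# An axiom is either a fact ("rain") or a rule (set of premises, conclusion).
# domTh(A) is computed by forward chaining until nothing new can be derived.

def dom_th(facts, rules):
    """facts: set of atoms; rules: list of (premise_set, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Example axiomatic set A = {rain, rain -> wet, wet -> slippery}
A_facts = {"rain"}
A_rules = [({"rain"}, "wet"), ({"wet"}, "slippery")]
print(sorted(dom_th(A_facts, A_rules)))   # ['rain', 'slippery', 'wet']
```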

  3. Recap from Previous Class • First-order logic is not sufficient for many problems • We have only a degree of belief (a probability) • Decision theory = probability theory + utility theory Today! • Probability distributions • Expected value • Conditional probability • Axioms of probability • Bruno de Finetti’s theorem

  4. Utility of a Decision CSE 395/495 Resources: Russell and Norvig’s book

  5. Two Famous Quotes “To judge what one must do to obtain a good or avoid an evil, it is necessary to consider not only the good and the evil itself, but also the probability that it happens or does not happen” Arnauld, 1692 “… so they go on in a strange paradox, decided only to be undecided, resolved to be irresolute, adamant for drift, solid for fluidity, all powerful to be impotent” Churchill, 1937

  6. Utility Function • The utility function captures an agent’s preferences, U: States → [0, ∞) • Given an action A, let Result1(A), Result2(A), … denote the possible outcomes of A • Let Do(A) indicate that action A is executed, and let E be the available evidence • Then the expected utility is EU(A|E) = Σi P(Resulti(A) | E, Do(A)) U(Resulti(A))
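As an illustration of this formula, a minimal Python sketch of EU(A|E); the outcome probabilities and utilities below are made-up placeholders, not values from the lecture.

```python
# Minimal sketch of EU(A|E) = sum_i P(Result_i(A) | E, Do(A)) * U(Result_i(A)).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs, one per Result_i(A)."""
    return sum(p * u for p, u in outcomes)

# Action A with two possible outcomes under the available evidence E
print(expected_utility([(0.7, 10.0), (0.3, 2.0)]))   # 7.6
```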

  7. Principle of Maximum Expected Utility (MEU) Suppose that taking actions updates the probabilities of states/actions. Which action should be taken? 1. Calculate the probabilities of the current state 2. Calculate the probabilities of the actions 3. Select the action with the highest expected utility • MEU: an agent should choose an action that maximizes EU, i.e., choose A for state S such that, for any other action A’, if E is the known evidence in S, then EU(A|E) ≥ EU(A’|E)
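A small sketch of the MEU selection rule, reusing the expected-utility helper above; the action names and numbers are invented for illustration.

```python
# Minimal sketch of MEU: pick the action whose expected utility is largest.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

actions = {
    "go_left":  [(0.8, 5.0), (0.2, 1.0)],   # (P(Result_i | E, Do(A)), U(Result_i))
    "go_right": [(0.5, 9.0), (0.5, 0.0)],
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)   # 'go_right': EU = 4.5 > EU('go_left') = 4.2
```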

  8. MEU Doesn’t Solve All AI Problems EU(A|E) = Σi P(Resulti(A) | E, Do(A)) U(Resulti(A)) Difficulties: • The state and U(Resulti(A)) might not be known completely • Computing P(Resulti(A) | E, Do(A)) requires a causal model, and computing it is NP-complete However: MEU is adequate if the utility function reflects the performance measure by which one’s behavior is judged. Example? Grade vs. knowledge

  9. Lotteries • We define the semantics of preferences in order to define the utility • Preferences are defined over scenarios called lotteries • A lottery L with two possible outcomes, A with probability p and B with probability (1 – p), is written L = [p, A; (1 – p), B] • The outcome of a lottery can be a state or another lottery
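One possible way to represent a lottery in code, assuming a lottery is just a list of (probability, outcome) branches in which an outcome is either a state label or a nested list; this representation is an assumption made for illustration, not part of the slides.

```python
# L = [0.3, A; 0.7, B] as a list of (probability, outcome) branches.
simple = [(0.3, "A"), (0.7, "B")]

# The outcome of a lottery can itself be a lottery, so branches may nest.
nested = [(0.5, "A"), (0.5, [(0.4, "B"), (0.6, "C")])]
```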

  10. Preferences • Let A and B be states and/or lotteries; then: • A ≻ B denotes that A is preferred to B • A ~ B denotes that A is indifferent to B • A ≿ B denotes either A ≻ B or A ~ B

  11. Axioms of the Utility Theory • Orderability: A ≻ B or B ≻ A or A ~ B • Transitivity: if A ≻ B and B ≻ C then A ≻ C • Continuity: if A ≻ B ≻ C then there exists p such that [p, A; (1 – p), C] ~ B • Substitutability: if A ~ B then for any C, [p, A; (1 – p), C] ~ [p, B; (1 – p), C]

  12. Axioms of the Utility Theory (II) • Monotonicity: if A ≻ B then p ≥ q iff [p, A; (1 – p), B] ≿ [q, A; (1 – q), B] • Decomposability (“no fun in gambling”): [p, A; (1 – p), [q, B; (1 – q), C]] ~ [p, A; (1 – p)q, B; (1 – p)(1 – q), C]

  13. Axioms of the Utility Theory (III) • Utility principle: there exists U: States → [0, ∞) such that A ≻ B iff U(A) > U(B), and A ~ B iff U(A) = U(B) • Maximum Expected Utility principle: MEU([p1,S1; p2,S2; … ; pn,Sn]) = Σi pi U(Si)
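Using the lottery representation sketched earlier, the MEU principle can be written as a recursive sum over branches, which also respects the decomposability axiom for nested lotteries; the function lottery_utility and the example utilities are illustrative assumptions.

```python
# Minimal sketch of MEU([p1,S1; ...; pn,Sn]) = sum_i p_i * U(S_i), applied
# recursively so a branch may itself be a lottery.

def lottery_utility(lottery, state_utility):
    """lottery: list of (probability, outcome); outcome is a state or a nested lottery."""
    total = 0.0
    for p, outcome in lottery:
        if isinstance(outcome, list):          # nested lottery
            total += p * lottery_utility(outcome, state_utility)
        else:                                  # atomic state
            total += p * state_utility[outcome]
    return total

U = {"A": 10.0, "B": 4.0, "C": 0.0}
print(lottery_utility([(0.5, "A"), (0.5, [(0.4, "B"), (0.6, "C")])], U))   # 5.8
```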

  14. Example Suppose that you are on a TV show and you have already earned $1,000,000 so far. Now the host proposes a gamble: he will flip a coin; if the coin comes up heads you will earn $3,000,000, but if it comes up tails you will lose the $1,000,000. What do you decide? First shot: U(winning $X) = X MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000 This utility is called the expected monetary value

  15. Example (II) If we use the expected monetary value of the lottery, should you take the bet? Yes!, because: MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000 > MEU([1, $1,000,000; 0, $3,000,000]) = $1,000,000 But is this really what you would do? Not me!

  16. Example (III) Second shot: let S = “my current wealth”, S’ = “my current wealth” + $1,000,000, S’’ = “my current wealth” + $3,000,000 MEU(Accept) = 0.5 U(S) + 0.5 U(S’’) MEU(Decline) = U(S’) If U(S) = 5, U(S’) = 8, U(S’’) = 10, would you accept the bet? No! MEU(Accept) = 7.5 < MEU(Decline) = 8 [Figure: concave curve of utility U as a function of money $]
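A quick check of the slide’s numbers, assuming only the utilities U(S) = 5, U(S’) = 8, U(S’’) = 10 given above:

```python
# S = current wealth, S1 = S + $1,000,000, S3 = S + $3,000,000.
U_S, U_S1, U_S3 = 5.0, 8.0, 10.0

meu_accept  = 0.5 * U_S + 0.5 * U_S3    # lottery [0.5, S; 0.5, S'']
meu_decline = U_S1                      # keep the $1,000,000 for sure

print(meu_accept, meu_decline)          # 7.5 8.0 -> decline the bet
```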

  17. Human Judgment and Utility • Decision theory is a normative theory: it describes how agents should act • Experimental evidence suggests that people violate the axioms of utility theory • Tversky and Kahneman (1982) and Allais (1953): experiments with people in which a choice was given between A and B and then between C and D: A: 80% chance of $4000 B: 100% chance of $3000 C: 20% chance of $4000 D: 25% chance of $3000

  18. Human Judgment and Utility (II) • The majority choose B over A and C over D • If U($0) = 0: MEU([0.8, $4000; 0.2, $0]) = 0.8 U($4000) and MEU([1, $3000]) = U($3000), so choosing B over A implies 0.8 U($4000) < U($3000) • MEU([0.2, $4000; 0.8, $0]) = 0.2 U($4000) and MEU([0.25, $3000; 0.75, $0]) = 0.25 U($3000), so choosing C over D implies 0.2 U($4000) > 0.25 U($3000) • Multiplying the first inequality by 0.25 gives 0.2 U($4000) < 0.25 U($3000), which contradicts the second: there is no utility function consistent with these choices
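A small sketch that makes the contradiction explicit; the brute-force search over integer utility values is only an illustration of why no assignment can satisfy both inequalities.

```python
# Check that the majority choices (B over A, C over D) cannot come from a single
# utility function with U($0) = 0: the two implied inequalities contradict each other.

def consistent(u4000, u3000):
    prefers_B = 0.8 * u4000 < 1.0 * u3000      # B over A  => 0.8*U($4000) < U($3000)
    prefers_C = 0.2 * u4000 > 0.25 * u3000     # C over D  => 0.2*U($4000) > 0.25*U($3000)
    return prefers_B and prefers_C

# Scaling the first inequality by 0.25 gives 0.2*U($4000) < 0.25*U($3000),
# which directly contradicts the second, so no pair of values satisfies both.
print(any(consistent(u4, u3) for u4 in range(1, 101) for u3 in range(1, 101)))   # False
```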

  19. Human Judgment and Utility (III) • The point is that it is very hard to model an automatic agent that behaves like a human (back to the Turing test) • However, utility theory does give a formal way of modeling decisions, and as such it is used to support users’ decisions • The same can be said for similarity in CBR

  20. Homework You saw the discussion of utility in the “TV show” example. Redo the analysis of the airport-location problem given last class, but this time center your discussion on how to define the utility rather than the expected monetary value.
