
Ramtin Raji Kermani, Mehrdad Rashidi, Hadi Yaghoobian






Presentation Transcript


  1. Uncertainty in Artificial Intelligence Ramtin Raji Kermani, Mehrdad Rashidi, Hadi Yaghoobian

  2. Overview • Uncertainty • Probability • Syntax and Semantics • Inference • Independence and Bayes' Rule

  3. Uncertain Agent [diagram: an agent connected to the environment through sensors and actuators and maintaining an internal model; question marks on each link indicate uncertainty]

  4. Uncertainty !!! • Let action At = leave for airport t minutes before flight • Will At get me there on time? • Problems: • partial observability (road state, other drivers' plans, etc.) • noisy sensors (traffic reports) • uncertainty in action outcomes (flat tire, etc.) • immense complexity of modeling and predicting traffic • Hence a purely logical approach either • risks falsehood: “A25 will get me there on time”, or • leads to conclusions that are too weak for decision making: • “A25 will get me there on time if there's no accident on the bridge and it doesn't rain and my tires remain intact etc etc.” • (A1440 might reasonably be said to get me there on time but I'd have to stay overnight in the airport …)

  5. Truth & Belief • Ontological Commitment: What exists in the world — TRUTH • Epistemological Commitment: What an agent believes about facts — BELIEF

  6. Types of Uncertainty • Uncertainty in prior knowledge. E.g., some causes of a disease are unknown and are not represented in the background knowledge of a medical-assistant agent • Uncertainty in actions. E.g., actions are represented with relatively short lists of preconditions, while these lists are in fact arbitrarily long • Uncertainty in perception. E.g., sensors do not return exact or complete information about the world; a robot never knows exactly its position

  7. Questions • How to represent uncertainty in knowledge? • How to perform inferences with uncertain knowledge? • Which action to choose under uncertainty?

  8. How do we represent Uncertainty? We need to answer several questions: • What do we represent & how do we represent it? • What language do we use to represent our uncertainty? What are the semantics of our representation? • What can we do with the representations? • What queries can be answered? How do we answer them? • How do we construct a representation? • Can we ask an expert? Can we learn from data?

  9. Uncertainty? What we call uncertainty is a summary of all that is not explicitly taken into account in the agent’s KB

  10. Uncertainty? Sources of uncertainty: • Ignorance • Laziness (efficiency?)

  11. Methods for handling uncertainty
  • Default or nonmonotonic logic:
    • Assume my car does not have a flat tire
    • Assume A25 works unless contradicted by evidence
    • Issues: What assumptions are reasonable? How to handle contradiction?
    • Creed: The world is fairly normal; abnormalities are rare. So an agent assumes normality until there is evidence to the contrary. E.g., if an agent sees a bird x, it assumes that x can fly, unless it has evidence that x is a penguin, an ostrich, a dead bird, a bird with broken wings, …
  • Rules with fudge factors:
    • A25 |→0.3 get there on time
    • Sprinkler |→0.99 WetGrass
    • WetGrass |→0.7 Rain
    • Issues: Problems with combination, e.g., does Sprinkler cause Rain?
  • Worst-case (adversarial) reasoning:
    • Creed: Just the opposite! The world is ruled by Murphy’s Law. Uncertainty is defined by sets, e.g., the set of possible outcomes of an action or the set of possible positions of a robot. The agent assumes the worst case and chooses the action that maximizes a utility function in that case. Example: adversarial search.
  • Probability:
    • Model the agent's degree of belief
    • Given the available evidence, A25 will get me there on time with probability 0.04
    • Creed: The world is not divided between “normal” and “abnormal”, nor is it adversarial. Possible situations have various likelihoods (probabilities). The agent has probabilistic beliefs – pieces of knowledge with associated probabilities (strengths) – and chooses its actions to maximize the expected value of some utility function.

  12. More on dealing with uncertainty • Implicit: • Ignore what you are uncertain of when you can • Build procedures that are robust to uncertainty • Explicit: • Build a model of the world that describes uncertainty about its state, dynamics, and observations • Reason about the effects of actions given the model

  13. Making decisions under uncertainty Suppose I believe the following: P(A25 gets me there on time | …) = 0.04 P(A90 gets me there on time | …) = 0.70 P(A120 gets me there on time | …) = 0.95 P(A1440 gets me there on time | …) = 0.9999 • Which action to choose? Depends on my preferences for missing flight vs. time spent waiting, etc.
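The expected-utility comparison behind this slide can be sketched in Python. The success probabilities come from the slide; the miss and waiting penalties are hypothetical numbers introduced only to show how preferences determine the choice.

```python
# Success probabilities from the slide; the cost constants and waiting
# times are hypothetical, introduced only for illustration.
actions = {
    "A25":   {"p_on_time": 0.04,   "wait_minutes": 0},
    "A90":   {"p_on_time": 0.70,   "wait_minutes": 65},
    "A120":  {"p_on_time": 0.95,   "wait_minutes": 95},
    "A1440": {"p_on_time": 0.9999, "wait_minutes": 1415},
}

MISS_COST = 1000  # assumed disutility of missing the flight
WAIT_COST = 1     # assumed disutility per minute spent waiting

def expected_utility(name):
    a = actions[name]
    # EU = -(probability of missing) * miss penalty - waiting penalty
    return -(1 - a["p_on_time"]) * MISS_COST - a["wait_minutes"] * WAIT_COST

best = max(actions, key=expected_utility)
print(best)  # A120 under these assumed costs
```

Changing the assumed penalties shifts the best choice: a much larger miss penalty favors A1440, while cheap waiting time already rules out A25.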

  14. Probability • A well-known and well-understood framework for uncertainty • Clear semantics • Provides principled answers for: • Combining evidence • Predictive & Diagnostic reasoning • Incorporation of new evidence • Intuitive (at some level) to human experts • Can be learned

  15. Probability Probabilistic assertions summarize effects of • laziness: failure to enumerate exceptions, qualifications, etc. • ignorance: lack of relevant facts, initial conditions, etc. Subjective probability: • Probabilities relate propositions to agent's own state of knowledge e.g., P(A25 | no reported accidents) = 0.06 These are not assertions about the world Probabilities of propositions change with new evidence: e.g., P(A25 | no reported accidents, 5 a.m.) = 0.15

  16. Decision Theory • Decision Theory develops methods for making optimal decisions in the presence of uncertainty. • Decision Theory = utility theory + probability theory • Utility theory is used to represent and infer preferences: Every state has a degree of usefulness • An agent is rational if and only if it chooses an action that yields the highest expected utility, averaged over all possible outcomes of the action.

  17. Random variables • A discrete random variable is a variable that takes values from a countable domain; a probability between 0 and 1 is assigned to each value • Example: Weather is a discrete (propositional) random variable with domain <sunny, rain, cloudy, snow> • sunny is an abbreviation for Weather = sunny • P(Weather=sunny)=0.72, P(Weather=rain)=0.1, etc. • Can be written: P(sunny)=0.72, P(rain)=0.1, etc. • Domain values must be exhaustive and mutually exclusive • Other types of random variables: • A Boolean random variable has the domain <true, false>, e.g., Cavity (a special case of a discrete random variable) • A continuous random variable has the real numbers as its domain, e.g., Temp
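A minimal Python sketch of this representation: the distribution of the discrete random variable Weather as a mapping from domain values to probabilities (values taken from the slide).

```python
import math

# Distribution of the discrete random variable Weather (values from the slide).
weather = {"sunny": 0.72, "rain": 0.10, "cloudy": 0.08, "snow": 0.10}

# The domain is exhaustive and its values mutually exclusive,
# so the probabilities must sum to 1.
assert math.isclose(sum(weather.values()), 1.0)

# P(sunny) is shorthand for P(Weather = sunny).
print(weather["sunny"])  # 0.72
```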

  18. Propositions • Elementary propositions are constructed by assignment of a value to a random variable: • e.g., Weather = sunny, Cavity = false (abbreviated as ¬cavity) • Complex propositions are formed from elementary propositions & standard logical connectives • e.g., Weather = sunny ∧ Cavity = false

  19. Atomic Events • Atomic event: • A complete specification of the state of the world about which the agent is uncertain • E.g., if the world consists of only two Boolean variables Cavity and Toothache, then there are 4 distinct atomic events: Cavity = false ∧ Toothache = false, Cavity = false ∧ Toothache = true, Cavity = true ∧ Toothache = false, Cavity = true ∧ Toothache = true • Atomic events are mutually exclusive and exhaustive

  20. Atomic Events, Events & the Universe [Venn diagram: events A and B inside the universe U, overlapping in A ∧ B] • The universe consists of atomic events • An event is a set of atomic events • P: event → [0, 1] • Axioms of Probability • P(true) = 1 = P(U) • P(false) = 0 = P(∅) • P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
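The set-based picture on this slide translates directly into code: atomic events are tuples, an event is a set of atomic events, and P sums the probabilities of a set's members. The per-atomic-event probabilities below are an assumption, chosen to match the P(cavity) = 0.1 and P(toothache) = 0.05 quoted on the following slides.

```python
from itertools import product

# Atomic events for two Boolean variables (Cavity, Toothache);
# the universe U is the set of all of them.
U = set(product([True, False], repeat=2))

# Illustrative probabilities per atomic event (assumed numbers, chosen to
# match P(cavity) = 0.1 and P(toothache) = 0.05 on the later slides).
p_atomic = {
    (True, True):   0.04,  # cavity ∧ toothache
    (True, False):  0.06,  # cavity ∧ ¬toothache
    (False, True):  0.01,  # ¬cavity ∧ toothache
    (False, False): 0.89,  # ¬cavity ∧ ¬toothache
}

def P(event):
    """An event is a set of atomic events; P sums their probabilities."""
    return sum(p_atomic[e] for e in event)

cavity = {e for e in U if e[0]}
toothache = {e for e in U if e[1]}

# Axioms: P(U) = 1, P(∅) = 0, and inclusion-exclusion.
assert abs(P(U) - 1.0) < 1e-12
assert P(set()) == 0
assert abs(P(cavity | toothache)
           - (P(cavity) + P(toothache) - P(cavity & toothache))) < 1e-12
```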

  21. Prior probability • Prior (unconditional) probability • corresponds to belief prior to arrival of any (new) evidence • P(sunny)=0.72, P(rain)=0.1, etc. • Probability distribution gives values for all possible assignments • Vector notation: P(Weather) = <0.72, 0.1, 0.08, 0.1> • Sums to 1 over the domain

  22. Joint probability distribution • Probability assignment to all combinations of values of random variables, e.g. (entries reconstructed from the probabilities quoted on these slides):

               toothache   ¬toothache
    cavity        0.04        0.06
    ¬cavity       0.01        0.89

  • The sum of the entries in this table has to be 1 • Every question about a domain can be answered by the joint distribution • Probability of a proposition is the sum of the probabilities of atomic events in which it holds • P(cavity) = 0.1 [add elements of the cavity row] • P(toothache) = 0.05 [add elements of the toothache column]
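Marginalization, i.e., summing a row or column of the joint table, can be checked in a few lines of Python (table entries as reconstructed from the probabilities quoted on these slides).

```python
import math

# Full joint distribution over (Cavity, Toothache); entries reconstructed
# from P(cavity) = 0.1, P(toothache) = 0.05, P(cavity ∧ toothache) = 0.04.
joint = {
    ("cavity",    "toothache"):    0.04,
    ("cavity",    "no-toothache"): 0.06,
    ("no-cavity", "toothache"):    0.01,
    ("no-cavity", "no-toothache"): 0.89,
}
assert math.isclose(sum(joint.values()), 1.0)

# A proposition's probability is the sum over the atomic events in which
# it holds: sum the cavity row, then the toothache column.
p_cavity = sum(p for (c, _), p in joint.items() if c == "cavity")
p_toothache = sum(p for (_, t), p in joint.items() if t == "toothache")
print(p_cavity, p_toothache)
```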

  23. Conditional Probability [Venn diagram: events A and B inside the universe U, overlapping in A ∧ B] • P(cavity)=0.1 and P(cavity ∧ toothache)=0.04 are both prior (unconditional) probabilities • Once the agent has new evidence concerning a previously unknown random variable, e.g., toothache, we can specify a posterior (conditional) probability • e.g., P(cavity | toothache) • P(A | B) = P(A ∧ B)/P(B) [prob of A w/ U limited to B] • P(cavity | toothache) = 0.04/0.05 = 0.8
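The slide's worked example, computed directly from the definition:

```python
import math

# Numbers from the slide.
p_cavity_and_toothache = 0.04
p_toothache = 0.05

# Definition: P(A | B) = P(A ∧ B) / P(B), i.e., restrict the universe to B.
p_cavity_given_toothache = p_cavity_and_toothache / p_toothache
assert math.isclose(p_cavity_given_toothache, 0.8)
```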

  24. Conditional Probability (continued) • Definition of Conditional Probability: P(A | B) = P(A ∧ B)/P(B) • Product rule gives an alternative formulation: P(A ∧ B) = P(A | B) P(B) = P(B | A) P(A) • A general version holds for whole distributions: P(Weather, Cavity) = P(Weather | Cavity) P(Cavity) • Chain rule is derived by successive application of the product rule: P(X1, …, Xn) = P(X1, …, Xn−1) P(Xn | X1, …, Xn−1) = P(X1, …, Xn−2) P(Xn−1 | X1, …, Xn−2) P(Xn | X1, …, Xn−1) = … = ∏i=1..n P(Xi | X1, …, Xi−1)
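The chain rule can be verified numerically. The sketch below (not from the slides) builds a random joint distribution over three Boolean variables and checks that every atomic probability factors as P(x1) P(x2 | x1) P(x3 | x1, x2).

```python
import itertools
import math
import random

random.seed(0)

# A random joint distribution over three Boolean variables X1, X2, X3
# (illustrative numbers, normalized to sum to 1).
atoms = list(itertools.product([False, True], repeat=3))
weights = [random.random() for _ in atoms]
total = sum(weights)
joint = {a: w / total for a, w in zip(atoms, weights)}

def P(pred):
    """Probability of the event {atomic events satisfying pred}."""
    return sum(p for a, p in joint.items() if pred(a))

def chain_rule_holds():
    # Check P(x1, x2, x3) = P(x1) * P(x2 | x1) * P(x3 | x1, x2)
    # for every atomic event.
    for (x1, x2, x3), p in joint.items():
        p1 = P(lambda a: a[0] == x1)           # P(x1)
        p12 = P(lambda a: a[:2] == (x1, x2))   # P(x1, x2)
        factored = p1 * (p12 / p1) * (p / p12)
        if not math.isclose(p, factored):
            return False
    return True

print(chain_rule_holds())  # True
```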
