Probability Lirong Xia Thursday, Feb 13, 2014
Good news • The whole Project 1 (including Q6 and Q7) will be reassigned • this is the only exception • The autograder will be given to you in all project assignments • but we may still manually check your code • New Project 1 due in one week (Feb 21) • Project 2 due on Feb 28 • Homework 1 before the midterm • Ask questions on piazza, come to OH
Asking questions on piazza • Questions that we do not promise to answer • Here is my code, what’s wrong? • Here is the autograder feedback, what’s wrong? • My heuristic didn’t pass the consistency test, why? • Is your heuristic really consistent? • you need to figure out why, and this is an important part of Q6 and Q7 • Questions that will be answered within 24 hours • My algorithm did 1, 2, 3, but at step 2 it went wrong: the correct answer is X but my output is Y, why? • I have a consistency proof, can you check its correctness? • yes, but you need to come to OH with a draft and explain it to me • do not post your solution on piazza • if you find a bug in the autograder you will get a bonus point (for the first discovery of each type) • More rules will be added
Last class • Principle of maximum expected utility • Expectimax search • search trees with chance nodes • c.f. minimax search • Expectimax search with limited depth • use an evaluation function to estimate the outcome (Q4) • design a better evaluation function (Q5) • c.f. minimax search with limited depth
Expectimax Search Trees • Expectimax search • Max nodes (we) as in minimax search • Chance nodes • Need to compute chance node values as expected utilities • Later, we’ll learn how to formalize the underlying problem as a Markov Decision Process
Maximum expected utility • A random variable represents an event whose outcome is unknown • A probability distribution is an assignment of weights to outcomes • weights sum up to 1 • Example: how long will it take to get to the airport? • Length of driving time as a function of traffic: L(none) = 20, L(light) = 30, L(heavy) = 60 • Probability of each traffic level: p(T) = {none: 0.25, light: 0.5, heavy: 0.25} • What is my expected driving time? Notation: E[L(T)] • E[L(T)] = L(none)*p(none) + L(light)*p(light) + L(heavy)*p(heavy) • E[L(T)] = 20*0.25 + 30*0.5 + 60*0.25 = 35 • Principle of maximum expected utility • an agent should choose the action that maximizes its expected utility, given its knowledge • in expectimax search, the MAX player should choose the chance node with the maximum expected utility
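The driving-time calculation above can be checked with a short sketch; the lengths and probabilities are the slide's numbers:

```python
# Expected driving time E[L(T)] from the slide's model.
length = {"none": 20, "light": 30, "heavy": 60}    # minutes
prob = {"none": 0.25, "light": 0.5, "heavy": 0.25}  # p(T)

# E[L(T)] = sum over traffic levels of L(t) * p(t)
expected = sum(length[t] * prob[t] for t in prob)
print(expected)  # 35.0
```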
Expectimax Search • In expectimax search, we have a probabilistic model of how the opponent (or environment) will behave in any state • chance node for every situation out of our control: opponent or environment • Having a probabilistic belief about an agent’s action does not mean that agent is flipping any coins!
Expectimax Pseudocode • def value(s): if s is a max node, return maxValue(s); if s is a chance node, return expValue(s); if s is a terminal node, return evaluation(s) • def maxValue(s): values = [value(s’) for s’ in successors(s)]; return max(values) • def expValue(s): values = [value(s’) for s’ in successors(s)]; weights = [probability(s, s’) for s’ in successors(s)]; return expectation(values, weights)
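The pseudocode above can be sketched in runnable form; the explicit tree representation here is an illustrative assumption, not the course's project API:

```python
# Expectimax over an explicit tree. A node is a number (terminal value),
# ("max", [children]), or ("chance", [(prob, child), ...]).
def value(node):
    if isinstance(node, (int, float)):       # terminal: static evaluation
        return node
    kind, children = node
    if kind == "max":                        # MAX picks the best successor
        return max(value(c) for c in children)
    if kind == "chance":                     # expected utility of successors
        return sum(p * value(c) for p, c in children)
    raise ValueError(f"unknown node type: {kind}")

# MAX chooses between a safe chance node and a risky one.
tree = ("max", [
    ("chance", [(0.5, 8), (0.5, 8)]),        # expected value 8.0
    ("chance", [(0.5, 24), (0.5, -12)]),     # expected value 6.0
])
print(value(tree))  # 8.0
```

Note that MAX prefers the safe node: the risky node's best outcome (24) is irrelevant, only its expectation matters.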
Expectimax Search with limited depth • Chance nodes • Chance nodes are like min nodes, except the outcome is uncertain • Calculate expected utilities • Chance nodes average successor values (weighted) • Each chance node has a probability distribution over its outcomes (called a model) • For now, assume we’re given the model • Utilities for terminal states • Static evaluation functions give us limited-depth search
Expectimax Evaluation • Evaluation functions quickly return an estimate for a node’s true value (which value, expectimax or minimax?) • For minimax, evaluation function scale doesn’t matter • We just want better states to have higher evaluations • We call this insensitivity to monotonic transformations • For expectimax, we need magnitudes to be meaningful
Mixed Layer Types • E.g. Backgammon • Expectiminimax • MAX nodes take the max value of successors • MIN nodes take the min value of successors • Chance nodes take expectations; otherwise like minimax
Multi-Agent Utilities • Similar to minimax: • Terminals have utility tuples • Node values are also utility tuples • Each player maximizes its own utility
Today’s schedule • Probability • probability space • events • independence • conditional probability • Bayes’ rule • Random variables • Probabilistic inference
Probability space • A sample space Ω • states of the world, • or equivalently, outcomes • only need to model the states that are relevant to the problem • A probability mass function p over the sample space • for all a∈Ω, p(a) ≥ 0 • Σa∈Ω p(a) = 1
Events • An event is a subset of the sample space • can be seen as a “property” of the states of the world • note: defining an event does not require the probability mass function • An atomic event is a single outcome in Ω • Example: Ω = {hot, cold}×{sun, rain} • Atomic events: {(hot,sun), (hot,rain), (cold,sun), (cold,rain)} • Event 1 (hot days): {(hot,sun), (hot,rain)} • Event 2 (sunny days): {(hot,sun), (cold,sun)}
Marginal probability of an event • Given • a probability space (Ω, p) • an event A • The marginal probability of A, denoted by p(A) is Σa∈Ap(a) • Example • p(sun) = 0.6 • p(hot) = 0.5
Joint probability • Given • a probability space • two events A and B • The joint probability of A and B is p(A and B)
Conditional probability • p(A | B) = p(A and B) / p(B) • equivalently, p(A | B)p(B) = p(A and B) = p(B | A)p(A) • p(A | B) = p(B | A)p(A)/p(B) • Bayes’ rule
Example of conditional probability • A = hot days, B = sunny days • p(A) = 0.5, p(B) = 0.6, p(A and B) = 0.4 • p(A|B) = 0.4/0.6 = 2/3 • p(B|A) = 0.4/0.5 = 0.8
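These numbers can be checked directly; the values p(A) = 0.5, p(B) = 0.6, p(A and B) = 0.4 come from the slides:

```python
# A = hot days, B = sunny days (numbers from the slides).
p_A, p_B, p_AB = 0.5, 0.6, 0.4

p_A_given_B = p_AB / p_B    # ≈ 0.667
p_B_given_A = p_AB / p_A    # 0.8

# Bayes' rule recovers one conditional from the other.
assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12
print(p_A_given_B, p_B_given_A)
```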
Independent events • Given • a probability space • two events A and B • A and B are independent, if and only if • p(A and B) = p(A)p(B) • equivalently, p(A|B) = p(A) • Interpretation • Knowing that the state of the world is in B does not affect the probability that the state of the world is in A (vs. not in A) • Different from [A and B are disjoint]
The Monty Hall problem • Suppose you're on a game show, and you're given the choice of three doors: behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? • After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them claiming vos Savant was wrong • Paul Erdős remained unconvinced until he was shown a computer simulation confirming the predicted result
As a probability problem • Sample space: (door with the car, door opened by the host) • {(1,2), (1,3), (2,2), (2,3), (3,2), (3,3)} • Equal probability of the car behind each door: p(1) = p(2) = p(3) = 1/3 • Objective: compare p(1 | host = 2) with p(3 | host = 2), and p(1 | host = 3) with p(2 | host = 3) • Probability mass function? • option 1: all 1/6 • option 2: p(2,2) = p(3,3) = 0, p(2,3) = p(3,2) = 1/3 • Depends on the behavior of the host! • assuming he picks door 2 or 3 uniformly at random (possibly revealing the car), then no need to switch • assuming he always picks a door with a goat, then you should switch
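The simulation that convinced Erdős is easy to reproduce; this sketch assumes the standard version in which the host always opens a goat door:

```python
import random

def monty_hall(trials=100_000, seed=0):
    """Estimate win probabilities for staying vs. switching."""
    rng = random.Random(seed)
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a goat door that is not the player's pick.
        opened = rng.choice([d for d in range(3) if d not in (pick, car)])
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d not in (pick, opened))
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay_p, switch_p = monty_hall()
print(stay_p, switch_p)   # ≈ 1/3 vs. ≈ 2/3: switching wins twice as often
```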
Random variables • all we need to care about in this course
Random Variables • A random variable is some aspect of the world about which we (may) have uncertainty • W = Is it raining? • D = How long will it take to drive to work? • L = Where am I? • We denote random variables with capital letters • Like variables in a CSP, random variables have domains • W in {rain, sun} • D in [0, ∞) • L in possible locations, maybe {(0,0), (0,1), …} • For now let us assume that a random variable is a variable with a domain • Random variables: capital letters, e.g. W, D, L • values: lowercase letters, e.g. w, d, l
Joint Distributions • A joint distribution over a set of random variables X1, X2, …, Xn specifies a real number for each assignment (or outcome) • p(X1 = x1, X2 = x2, …, Xn = xn) • abbreviated p(x1, x2, …, xn) • Size of the distribution for n variables with domain size d? d^n entries • Must obey: p(x1, x2, …, xn) ≥ 0, and the entries sum to 1 • For all but the smallest distributions, impractical to write out • This is a probability space • Sample space Ω: all combinations of values • probability mass function is p
Probabilistic Models • A probabilistic model is a joint distribution over a set of random variables • Probabilistic models: • (Random) variables with domains. • Assignments are called outcomes (atomic events) • Joint distributions: say whether assignments (outcomes) are likely • Normalized: sum to 1.0 Distribution over T,W
Events in probabilistic models • An event in a probabilistic model is a set E of outcomes • From a joint distribution, we can calculate the probability of any event • Probability that it’s hot AND sunny? • Probability that it’s hot? • Probability that it’s hot OR sunny? • Typically, the events we care about are partial assignments, like p(T=hot)
Marginal Distributions • Marginal distributions are sub-tables which eliminate variables • Marginalization (summing out): combine collapsed rows by adding • W: p(sun) = 0.6, p(rain) = 0.4 • T: p(hot) = 0.5, p(cold) = 0.5
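Summing out can be sketched over a joint table; the entries below are an assumption, chosen to be consistent with the marginals above and with p(hot, sun) = 0.4 from the earlier example:

```python
# Joint over (T, W); entries are assumed, matching p(hot)=0.5, p(sun)=0.6.
joint = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
         ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def marginal(joint, index):
    """Sum out all variables except the one at position `index`."""
    out = {}
    for assignment, p in joint.items():
        key = assignment[index]
        out[key] = out.get(key, 0.0) + p   # combine collapsed rows by adding
    return out

print(marginal(joint, 0))   # T: hot 0.5, cold 0.5
print(marginal(joint, 1))   # W: sun ≈ 0.6, rain ≈ 0.4
```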
Conditional Distributions • Conditional distributions are probability distributions over some variables given fixed values of others
Normalization Trick • A trick to get a whole conditional distribution at once: • Select the joint probabilities matching the assignments (evidence) • Normalize the selection (make it sum to one) • Why does this work? The sum of the selection is p(evidence)! (p(r), here)
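A sketch of the trick, using an assumed joint over (T, W) and evidence W = rain:

```python
# Joint over (T, W); the entries are illustrative assumptions.
joint = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
         ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

# Step 1: select the entries consistent with the evidence W = rain.
selected = {t: p for (t, w), p in joint.items() if w == "rain"}

# Step 2: normalize. The selection sums to p(rain), so dividing by the
# sum yields p(T | rain) without computing p(rain) as a separate query.
z = sum(selected.values())
p_T_given_rain = {t: p / z for t, p in selected.items()}
print(p_T_given_rain)   # hot ≈ 0.25, cold ≈ 0.75
```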
Probabilistic Inference • Probabilistic inference: compute a desired probability from other known probabilities (e.g. conditional from joint) • We generally compute conditional probabilities • p(on time | no reported accidents) = 0.9 • These represent the agent’s beliefs given the evidence • Probabilities change with new evidence: • p(on time | no accidents, 5 a.m.) = 0.95 • p(on time | no accidents, 5 a.m., raining) = 0.80 • Observing new evidence causes beliefs to be updated
Inference by Enumeration • p(sun)? • p(sun | winter)? • p(sun | winter, warm)?
Inference by Enumeration • General case: • Evidence variables: E1 = e1, …, Ek = ek • Query variable: Q • Hidden variables: H1, …, Hr • together, these are all the variables • We want: p(Q | e1, …, ek) • First, select the entries consistent with the evidence • Second, sum out H1, …, Hr to get the joint of the query and evidence • Finally, normalize the remaining entries • Obvious problems: • Worst-case time complexity O(d^n) • Space complexity O(d^n) to store the joint distribution
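The three steps — select, sum out, normalize — can be sketched over a full joint table; the table and variable names below are illustrative assumptions:

```python
# Joint over (season, temp, weather); probabilities are assumed.
VARS = ("season", "temp", "weather")
joint = {
    ("winter", "warm", "sun"): 0.05, ("winter", "warm", "rain"): 0.05,
    ("winter", "cold", "sun"): 0.15, ("winter", "cold", "rain"): 0.25,
    ("summer", "warm", "sun"): 0.30, ("summer", "warm", "rain"): 0.05,
    ("summer", "cold", "sun"): 0.10, ("summer", "cold", "rain"): 0.05,
}

def infer(query_var, evidence):
    """p(query_var | evidence) by enumeration over the joint table."""
    qi = VARS.index(query_var)
    out = {}
    for row, p in joint.items():
        # Step 1: keep only rows consistent with the evidence.
        if all(row[VARS.index(v)] == val for v, val in evidence.items()):
            # Step 2: sum out the hidden variables.
            out[row[qi]] = out.get(row[qi], 0.0) + p
    # Step 3: normalize.
    z = sum(out.values())
    return {v: p / z for v, p in out.items()}

print(infer("weather", {}))                     # p(W)
print(infer("weather", {"season": "winter"}))   # p(W | winter)
```

Both time and space are exponential in the number of variables, which is exactly the "obvious problem" noted above.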
The Product Rule • Sometimes we have conditional distributions but want the joint • p(x, y) = p(x | y) p(y) • Example: p(T = hot, W = sun) = p(W = sun | T = hot) p(T = hot)
The Chain Rule • More generally, we can always write any joint distribution as an incremental product of conditional distributions • p(x1, x2, …, xn) = p(x1) p(x2 | x1) p(x3 | x1, x2) ⋯ p(xn | x1, …, xn-1) • Why is this always true? Repeatedly apply the product rule: p(x1, …, xn) = p(xn | x1, …, xn-1) p(x1, …, xn-1), and so on
Bayes’ Rule revisited • Two ways to factor a joint distribution over two variables: • p(x, y) = p(x | y) p(y) = p(y | x) p(x) • Dividing, we get: p(x | y) = p(y | x) p(x) / p(y) • Why is this helpful? • Update the belief about X based on evidence y • when p(y|x) is much easier to compute than p(x|y) • Rev. Thomas Bayes (1702-1761)
Inference with Bayes’ Rule • Example: diagnostic probability from causal probability: p(cause | effect) = p(effect | cause) p(cause) / p(effect) • Example: • F is fire, {f, ¬f} • A is the alarm, {a, ¬a} • p(f) = 0.01, p(a) = 0.1, p(a|f) = 0.9 • p(f | a) = p(a | f) p(f) / p(a) = 0.9 × 0.01 / 0.1 = 0.09 • Note: the posterior probability of fire is still very small • Note: you should still run when hearing an alarm! Why?
Independence • Two variables are independent in a joint distribution if for all x, y, the events X = x and Y = y are independent: • p(x, y) = p(x) p(y), equivalently p(x | y) = p(x) • The joint distribution factors into a product of two simpler ones • Usually variables aren’t independent! • Independence can be used as a simplifying modeling assumption • What could we assume for {Weather, Traffic, Stock price}?
Mathematical definition of random variables • Just for your curiosity • Mathematically, given a sample space Ω, a random variable is a function X: Ω→S • S is the domain of X • for any s∈S, X-1(s) is an event • Example 1 • W: weather • SW = {rain, sun} • W-1(rain) = {(hot, rain), (cold, rain)} • W-1(sun) = {(hot, sun), (cold, sun)} • Example 2 • T: temperature • ST = {hot, cold} • T-1(hot) = {(hot, sun), (hot, rain)} • T-1(cold) = {(cold, sun), (cold, rain)}
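The function view can be sketched directly on the slide's weather example: a random variable maps outcomes to values, and its preimage X-1(s) is an event.

```python
# Sample space from the slide: temperature × weather outcomes.
omega = [("hot", "sun"), ("hot", "rain"), ("cold", "sun"), ("cold", "rain")]

def W(outcome):   # random variable W: weather component of an outcome
    return outcome[1]

def T(outcome):   # random variable T: temperature component of an outcome
    return outcome[0]

def preimage(X, s):
    """The event X^{-1}(s): all outcomes that X maps to the value s."""
    return {o for o in omega if X(o) == s}

print(sorted(preimage(W, "rain")))  # [('cold','rain'), ('hot','rain')]
```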
What is probability, anyway? • Different philosophical positions: • Frequentism: numbers only come from repeated experiments • As we flip a coin lots of times, we see experimentally that it comes out heads ½ the time • Problem: “No man ever steps in the same river twice” • Probability that the Democratic candidate wins the next election? • Objectivism: probabilities are a real part of the universe • Maybe true at level of quantum mechanics • Most of us agree that the result of a coin flip is (usually) determined by initial conditions + classical mechanics • Subjectivism: probabilities merely reflect agents’ beliefs
Recap • Probability • what it is, philosophically, we still do not know • Random variables • which will be our main focus • Probabilistic inference • more in the next class (next Friday) • Reassigned Project 1 due in one week (Feb 21)