
Probabilistic Inference



Presentation Transcript


  1. Probabilistic Inference

  2. Agenda • Random variables • Bayes rule • Intro to Bayesian Networks

  3. Cheat Sheet: Probability Distributions • Event: a boolean proposition that may or may not be true in a state of the world • For any event x: • P(x) ≥ 0 (axiom) • P(¬x) = 1 - P(x) • For any events x, y: • P(x ∧ y) = P(y ∧ x) (symmetry) • P(x ∨ y) = P(y ∨ x) (symmetry) • P(x ∨ y) = P(x) + P(y) - P(x ∧ y) (axiom) • P(x) = P(x ∧ y) + P(x ∧ ¬y) (marginalization) • x and y are independent iff P(x ∧ y) = P(x)P(y). This is an explicit assumption
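
A quick sanity check of these identities in Python, using a small made-up joint distribution over two boolean events x and y (the four numbers below are illustrative, not from the slides):

    # Illustrative joint distribution over two boolean events x and y
    # (made-up numbers; they only need to sum to 1).
    joint = {
        (True, True): 0.20, (True, False): 0.30,
        (False, True): 0.10, (False, False): 0.40,
    }

    def P(pred):
        """Probability that a predicate over (x, y) holds."""
        return sum(p for (x, y), p in joint.items() if pred(x, y))

    # P(x ∨ y) = P(x) + P(y) - P(x ∧ y)   (axiom)
    assert abs(P(lambda x, y: x or y)
               - (P(lambda x, y: x) + P(lambda x, y: y) - P(lambda x, y: x and y))) < 1e-12

    # P(x) = P(x ∧ y) + P(x ∧ ¬y)          (marginalization)
    assert abs(P(lambda x, y: x)
               - (P(lambda x, y: x and y) + P(lambda x, y: x and not y))) < 1e-12

    # P(¬x) = 1 - P(x)
    assert abs(P(lambda x, y: not x) - (1 - P(lambda x, y: x))) < 1e-12
    print("All identities hold on this joint distribution.")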

  4. Cheat Sheet: Conditional Distributions • P(x|y) reads “probability of x given y” • P(x) is equivalent to P(x|true) • P(x|y) ≠ P(y|x) • P(x|y) = P(x ∧ y)/P(y) (definition) • P(x|y ∧ z) = P(x ∧ y|z)/P(y|z) (definition) • P(¬x|y) = 1 - P(x|y), i.e., P(•|y) is a probability distribution

  5. Random Variables

  6. Random Variables • In a possible world, a random variable X can take on one of a set of values Val(X)={x1,…,xn} • Such an event is written ‘X=x’ • Capital: random variable • Lowercase: assignment of variable to value • Truth assignments to boolean random variables may also be expressed as ‘X’ or ‘¬X’

  7. Notation with Random Variables • Capital letters A,B,C denote random variables • Each random variable X can take one of a set of possible values x ∈ Val(X) • Boolean random variable has Val(X)={True,False} • Although the most unambiguous way of writing a probabilistic belief is over an event… • P(X=x) = a number • P(X=x ∧ Y=y) = a number • …it is tedious to list a large number of statements that hold for multiple values x and y • Random variables allow using a shorthand notation. (Unfortunately this is a source of a lot of initial confusion!)

  8. Decoding Probability Notation • Mental rule #1: Lowercase: assignments are often left implicit when unambiguous • P(a) = P(A=a) = a number

  9. Decoding Probability Notation (Boolean variables) • P(X=True) is written P(X) • P(X=False) is written P(¬X) • [Since P(¬X) = 1-P(X), knowing P(X) is enough to specify the whole distribution over X=True or X=False]

  10. Decoding Probability Notation • Mental rule #2: Drop the AND, use commas • P(a,b) = P(a ∧ b) = P(A=a ∧ B=b) = a number

  11. Decoding Probability Notation • Mental rule #3: Uppercase => values left implicit • Suppose Val(X) = {1,2,3} • When I write P(X), it states “the distribution defined over all of P(X=1), P(X=2), P(X=3)” • It is not a single number, but rather a set of numbers • P(X) = [A probability table]

  12. Decoding Probability Notation • P(A,B) = [P(A=a ∧ B=b) for all combinations of a ∈ Val(A), b ∈ Val(B)] • A probability table with |Val(A)|×|Val(B)| entries

  13. Decoding Probability Notation • Mental rule #3: Uppercase => values left implicit • So when you see f(A,B)=g(A,B) this means: • “f(a,b) = g(a,b) for all values of a ∈ Val(A) and b ∈ Val(B)” • f(A,B)=g(A) means: • “f(a,b) = g(a) for all values of a ∈ Val(A) and b ∈ Val(B)” • f(A,b)=g(A,b) means: • “f(a,b) = g(a,b) for all values of a ∈ Val(A)” • Order doesn’t matter. P(A,B) is equivalent to P(B,A)

  14. Another Mnemonic: Functional Equalities • P(X) is treated as a function over a variable X • Operations and relations are on “function objects” • If you say f(x) = g(x) without a value of x, then you can infer f(x) = g(x) holds for all x • Likewise if you say f(x,y) = g(x) without stating a value of x or y, then you can infer f(x,y) = g(x) holds for all x,y

  15. Quiz: What does this mean? • P(A ∨ B) = P(A) + P(B) - P(A ∧ B) • P(A=a ∨ B=b) = P(A=a) + P(B=b) - P(A=a ∧ B=b) • For all a ∈ Val(A) and b ∈ Val(B)

  16. Marginalization • If X, Y are boolean random variables that describe the state of the world, then P(x) = P(x ∧ y) + P(x ∧ ¬y) • This generalizes to multiple variables, e.g. P(x) = P(x ∧ y ∧ z) + P(x ∧ ¬y ∧ z) + P(x ∧ y ∧ ¬z) + P(x ∧ ¬y ∧ ¬z) • Etc.

  17. Marginalization • If X, Y are random variables: P(x) = Σ_y P(x,y), where y ranges over Val(Y) • This generalizes to multiple variables: P(x) = Σ_y Σ_z P(x,y,z) • Etc.

  18. Decoding Probability Notation (Marginalization) • Mental rule #4: domains are usually implicit • Suppose a belief state P(X,Y,Z) is defined over X, Y, and Z • If I write P(X), I am implicitly marginalizing over Y and Z • P(X) = Σ_y Σ_z P(X,y,z) • (should be interpreted as) P(X) = Σ_y Σ_z P(X ∧ Y=y ∧ Z=z) • (should be interpreted as) P(X=x) = Σ_y Σ_z P(X=x ∧ Y=y ∧ Z=z) for all x • By convention, each of y and z is summed over Val(Y), Val(Z)
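
A short illustrative Python sketch of mental rule #4 (the domains and joint below are made up): P(X) is obtained from a table over full assignments by summing out y and z over their domains.

    import itertools

    # Hypothetical domains and a made-up joint distribution P(X,Y,Z),
    # stored as a table indexed by complete assignments (x, y, z).
    val_X, val_Y, val_Z = [1, 2, 3], [True, False], ['a', 'b']
    joint, weight = {}, 1
    for x, y, z in itertools.product(val_X, val_Y, val_Z):
        joint[(x, y, z)] = weight
        weight += 1
    total = sum(joint.values())
    joint = {k: v / total for k, v in joint.items()}   # normalize so it sums to 1

    # "P(X)" with Y and Z left implicit: sum over y ∈ Val(Y) and z ∈ Val(Z).
    P_X = {x: sum(joint[(x, y, z)] for y in val_Y for z in val_Z) for x in val_X}

    print(P_X)                                     # one number per x ∈ Val(X)
    assert abs(sum(P_X.values()) - 1.0) < 1e-12    # still a probability distribution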

  19. Conditional Probability for Random Variables • P(A|B) is the posterior probability of A given knowledge of B • “For each b ∈ Val(B): given that I know B=b, what would I believe is the distribution over A?” • If a new piece of information C arrives, the agent’s new belief (if it obeys the rules of probability) should be P(A|B,C)

  20. Conditional Probability for Random Variables • P(A,B) = P(A|B) P(B) = P(B|A) P(A) • P(A|B) is the posterior probability of A given knowledge of B • Axiomatic definition: P(A|B) = P(A,B)/P(B)

  21. Conditional Probability • P(A,B) = P(A|B) P(B) = P(B|A) P(A) • P(A,B,C) = P(A|B,C) P(B,C) = P(A|B,C) P(B|C) P(C) • P(Cavity) = Σ_t Σ_p P(Cavity,t,p) = Σ_t Σ_p P(Cavity|t,p) P(t,p) = Σ_t Σ_p P(Cavity|t,p) P(t|p) P(p)
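
As a worked illustration of the Cavity line, here is a sketch in Python. Only the prior P(Cavity)=0.2 and the four Toothache entries (0.108, 0.012, 0.016, 0.064) appear in this transcript; the remaining four joint entries are the standard textbook values and are assumed here.

    # Joint P(Cavity, Toothache, PCatch), keyed by (cavity, toothache, pcatch).
    # Entries not quoted in the slides are assumed (standard textbook values).
    joint = {
        (True,  True,  True):  0.108, (True,  True,  False): 0.012,
        (True,  False, True):  0.072, (True,  False, False): 0.008,
        (False, True,  True):  0.016, (False, True,  False): 0.064,
        (False, False, True):  0.144, (False, False, False): 0.576,
    }

    def P(c=None, t=None, p=None):
        """Marginal probability of a partial assignment; None means 'summed out'."""
        return sum(pr for (ci, ti, pi), pr in joint.items()
                   if (c is None or ci == c) and (t is None or ti == t)
                   and (p is None or pi == p))

    # P(Cavity) computed three ways, as in the last bullet above:
    direct    = P(c=True)
    via_joint = sum(P(c=True, t=t, p=p) for t in (True, False) for p in (True, False))
    via_chain = sum((P(c=True, t=t, p=p) / P(t=t, p=p))   # P(Cavity|t,p)
                    * (P(t=t, p=p) / P(p=p))              # P(t|p)
                    * P(p=p)                              # P(p)
                    for t in (True, False) for p in (True, False))
    print(direct, via_joint, via_chain)   # all 0.2 (up to floating-point rounding)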

  22. Independence • Two random variables A and B are independent if P(A,B) = P(A) P(B) hence P(A|B) = P(A) • Knowing B doesn’t give you any information about A • [This equality has to hold for all combinations of values that A,B can take on]

  23. Remember: Probability Notation Leaves Values Implicit • P(A ∨ B) = P(A) + P(B) - P(A ∧ B) means • P(A=a ∨ B=b) = P(A=a) + P(B=b) - P(A=a ∧ B=b) • For all a ∈ Val(A) and b ∈ Val(B) • A and B are random variables. A=a and B=b are events. Random variables indicate many possible combinations of events

  24. Conditional Probability • P(A,B) = P(A|B) P(B) = P(B|A) P(A) • P(A|B) is the posterior probability of A given knowledge of B • Axiomatic definition: P(A|B) = P(A,B)/P(B)

  25. Probabilistic Inference

  26. Conditional Distributions • P(Cavity|Toothache) = P(Cavity ∧ Toothache)/P(Toothache) = (0.108+0.012)/(0.108+0.012+0.016+0.064) = 0.6 • Interpretation: after observing Toothache, the patient is no longer an “average” one, and the prior probability (0.2) of Cavity is no longer valid • P(Cavity|Toothache) is calculated by keeping the ratios of the probabilities of the 4 cases of Toothache unchanged, and normalizing their sum to 1
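
A minimal sketch of this computation in Python, using only the four Toothache entries quoted above:

    # The four Toothache cases from the slide, keyed by (Cavity, PCatch).
    toothache_rows = {
        (True,  True): 0.108, (True,  False): 0.012,   # Cavity
        (False, True): 0.016, (False, False): 0.064,   # ¬Cavity
    }

    p_toothache = sum(toothache_rows.values())                                        # 0.2
    p_cavity_and_toothache = sum(p for (cav, _), p in toothache_rows.items() if cav)  # 0.12
    print(p_cavity_and_toothache / p_toothache)                                       # 0.6

    # Equivalently: keep the ratios of the 4 cases, rescale them to sum to 1,
    # then read off the total mass where Cavity holds.
    posterior = {k: v / p_toothache for k, v in toothache_rows.items()}
    print(sum(p for (cav, _), p in posterior.items() if cav))                         # 0.6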

  27. Updating the Belief State • The patient walks in the dentist’s door • Let D now observe evidence E: Toothache holds with probability 0.8 (e.g., “the patient says so”) • How should D update its belief state?

  28. Updating the Belief State • P(Toothache|E) = 0.8 • We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E) • Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T) • P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)

  29. Updating the Belief State • P(Toothache|E) = 0.8 • We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E) • Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T) • P(C,T,P|E) = P(C,P,T) P(T|E)/P(T) • [In the joint table: the rows where Toothache holds should be scaled to sum to 0.8, and the rows where Toothache does not hold should be scaled to sum to 0.2]
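
A sketch of this update in Python. The joint below reuses the standard textbook values (only the Toothache entries and P(Cavity)=0.2 appear in this transcript); each row is multiplied by P(T|E)/P(T) or P(¬T|E)/P(¬T), which makes the Toothache rows sum to 0.8 and the ¬Toothache rows sum to 0.2.

    # Joint P(Cavity, Toothache, PCatch); entries not quoted in the slides are assumed.
    joint = {
        (True,  True,  True):  0.108, (True,  True,  False): 0.012,
        (True,  False, True):  0.072, (True,  False, False): 0.008,
        (False, True,  True):  0.016, (False, True,  False): 0.064,
        (False, False, True):  0.144, (False, False, False): 0.576,
    }

    p_T   = sum(p for (c, t, pc), p in joint.items() if t)    # P(Toothache) = 0.2
    p_T_E = 0.8                                                # P(Toothache | E)

    # Scale each row by P(T|E)/P(T) if Toothache holds, else by P(¬T|E)/P(¬T).
    updated = {(c, t, pc): p * (p_T_E / p_T if t else (1 - p_T_E) / (1 - p_T))
               for (c, t, pc), p in joint.items()}

    print(sum(p for (c, t, pc), p in updated.items() if t))        # 0.8
    print(sum(p for (c, t, pc), p in updated.items() if not t))    # 0.2
    print(sum(p for (c, t, pc), p in updated.items() if c))        # updated belief P(Cavity|E)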


  31. Issues • If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0) • → Modeling difficulty: many numbers must be entered in the first place • → Computational issue: memory size and time

  32. Independence of events • Two events A=a and B=b are independent if P(A=a ∧ B=b) = P(A=a) P(B=b), hence P(A=a|B=b) = P(A=a) • Knowing B=b doesn’t give you any information about whether A=a is true

  33. Independence of random variables • Two random variables A and B are independent if P(A,B) = P(A) P(B) hence P(A|B) = P(A) • Knowing B doesn’t give you any information about A • [This equality has to hold for all combinations of values that A and B can take on, i.e., all events A=a and B=b are independent]

  34. Significance of independence • If A and B are independent, then P(A,B) = P(A) P(B) • => The joint distribution over A and B can be defined as a product of the distribution of A and the distribution of B • Rather than storing a big probability table over all combinations of A and B, store two much smaller probability tables! • To compute P(A=a ∧ B=b), just look up P(A=a) and P(B=b) in the individual tables and multiply them together
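
A minimal sketch of the saving, with hypothetical domains and made-up numbers: store the two marginal tables and recover any joint entry by a lookup-and-multiply.

    # Hypothetical marginal tables for two independent variables A and B.
    P_A = {'a1': 0.2, 'a2': 0.5, 'a3': 0.3}   # |Val(A)| = 3 entries
    P_B = {'b1': 0.6, 'b2': 0.4}              # |Val(B)| = 2 entries

    # Under independence the 3 × 2 joint table never needs to be stored:
    def P_joint(a, b):
        return P_A[a] * P_B[b]

    print(P_joint('a2', 'b1'))   # 0.5 * 0.6 = 0.3

    # Explicit joint table: |Val(A)| * |Val(B)| = 6 entries;
    # the two marginals: 3 + 2 = 5 (fewer still once the sums-to-1 constraint is used).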

  35. Conditional Independence • Two random variables A and B are conditionally independent given C, if P(A,B|C) = P(A|C) P(B|C), hence P(A|B,C) = P(A|C) • Once you know C, learning B doesn’t give you any information about A • [again, this has to hold for all combinations of values that A,B,C can take on]

  36. Significance of Conditional independence • Consider Rainy, Thunder, and RoadsSlippery • Ostensibly, thunder doesn’t have anything directly to do with slippery roads… • But they happen together more often when it rains, so they are not independent… • So it is reasonable to believe that Thunder and RoadsSlippery are conditionally independent given Rainy • So if I want to estimate whether or not I will hear thunder, I don’t need to think about the state of the roads, just whether or not it’s raining!

  37. Toothache and PCatch are independent given Cavity, but this relation is hidden in the numbers! [Quiz] • Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state

  38. Bayesian Network • Notice that Cavity is the “cause” of both Toothache and PCatch, and represent the causality links explicitly • Give the prior probability distribution of Cavity • Give the conditional probability tables of Toothache and PCatch • P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C) • [Network diagram: Cavity → Toothache, Cavity → PCatch] • 5 probabilities, instead of 7

  39. Conditional Probability Tables • P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C) • [CPTs for the network Cavity → Toothache, Cavity → PCatch] • Columns sum to 1 • If X takes n values, just store n-1 entries
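
A sketch of this factored representation in Python. P(Cavity)=0.2 appears in the earlier slides; the four CPT entries below are the standard textbook values and are assumed here (they reproduce the joint entries quoted on slide 26).

    # 5 numbers instead of 7: a prior on Cavity plus two small CPTs.
    P_C         = {True: 0.2, False: 0.8}    # prior on Cavity (1 free parameter)
    P_T_given_C = {True: 0.6, False: 0.1}    # P(Toothache=True | Cavity=c)  (2 parameters)
    P_P_given_C = {True: 0.9, False: 0.2}    # P(PCatch=True   | Cavity=c)  (2 parameters)

    def P_joint(c, t, p):
        """P(C=c, T=t, P=p) = P(t|c) P(p|c) P(c)."""
        pt = P_T_given_C[c] if t else 1 - P_T_given_C[c]
        pp = P_P_given_C[c] if p else 1 - P_P_given_C[c]
        return pt * pp * P_C[c]

    print(P_joint(True, True, True))   # 0.6 * 0.9 * 0.2 = 0.108
    print(sum(P_joint(c, t, p) for c in (True, False)
              for t in (True, False) for p in (True, False)))   # 1.0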

  40. Significance of Conditional independence • Consider Grade(CS101), Intelligence, and SAT • Ostensibly, the grade in a course doesn’t have a direct relationship with SAT scores • but good students are more likely to get good SAT scores, so they are not independent… • It is reasonable to believe that Grade(CS101) and SAT are conditionally independent given Intelligence

  41. Bayesian Network • Explicitly represent independence among propositions • Notice that Intelligence is the “cause” of both Grade and SAT, and the causality is represented explicitly • P(I,G,S) = P(G,S|I) P(I) = P(G|I) P(S|I) P(I) • [Network diagram: Intelligence → Grade, Intelligence → SAT] • 7 probabilities, instead of 11

  42. Significance of Bayesian Networks • If we know that variables are conditionally independent, we should be able to decompose joint distribution to take advantage of it • Bayesian networks are a way of efficiently factoring the joint distribution into conditional probabilities • And also building complex joint distributions from smaller models of probabilistic relationships • But… • What knowledge does the BN encode about the distribution? • How do we use a BN to compute probabilities of variables that we are interested in?

  43. A More Complex BN • [Directed acyclic graph: Burglary → Alarm, Earthquake → Alarm, Alarm → JohnCalls, Alarm → MaryCalls; Burglary and Earthquake are the causes, JohnCalls and MaryCalls the effects] • Intuitive meaning of arc from x to y: “x has direct influence on y”

  44. A More Complex BN • [Same network: Burglary → Alarm ← Earthquake, Alarm → JohnCalls, Alarm → MaryCalls] • Size of the CPT for a node with k parents: 2^k • 10 probabilities, instead of 31
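
A small sketch of the count, assuming all five variables are boolean so a node with k parents needs 2^k free numbers (one P(X=True | parent assignment) per CPT row):

    # Parents of each node in the Burglary network, as drawn on the slide.
    parents = {
        'Burglary': [], 'Earthquake': [],
        'Alarm': ['Burglary', 'Earthquake'],
        'JohnCalls': ['Alarm'], 'MaryCalls': ['Alarm'],
    }

    # One free number per row of each CPT vs. the full joint over 5 booleans.
    bn_params    = sum(2 ** len(ps) for ps in parents.values())
    joint_params = 2 ** len(parents) - 1    # 2^5 entries, minus the sums-to-1 constraint

    print(bn_params, joint_params)          # 10 31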

  45. What does the BN encode? • [Burglary → Alarm ← Earthquake, Alarm → JohnCalls, Alarm → MaryCalls] • Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm • P(B ∧ J) ≠ P(B) P(J), but P(B ∧ J|A) = P(B|A) P(J|A) • For example, John does not observe any burglaries directly

  46. What does the BN encode? • [Burglary → Alarm ← Earthquake, Alarm → JohnCalls, Alarm → MaryCalls] • The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm • P(B ∧ J|A) = P(B|A) P(J|A), P(J ∧ M|A) = P(J|A) P(M|A) • A node is independent of its non-descendants given its parents • For instance, the reasons why John and Mary may not call if there is an alarm are unrelated

  47. What does the BN encode? • [Burglary → Alarm ← Earthquake, Alarm → JohnCalls, Alarm → MaryCalls] • Burglary and Earthquake are independent • The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm • A node is independent of its non-descendants given its parents • For instance, the reasons why John and Mary may not call if there is an alarm are unrelated

  48. Locally Structured World • A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components • In a sparse world, the CPTs are small and the BN contains far fewer probabilities than the full joint distribution • If the # of entries in each CPT is bounded by a constant, i.e., O(1), then the # of probabilities in a BN is linear in n – the # of propositions – instead of 2^n for the joint distribution

  49. Equations Involving Random Variables Give Rise to Causality Relationships • C = A ∨ B • C = max(A,B) • Constrains joint probability P(A,B,C) • Nicely encoded as the causality relationship A → C ← B, with the conditional probability of C given by the equation rather than a CPT
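
A minimal sketch with made-up priors for A and B: the node C stores no CPT at all; its conditional distribution is the deterministic equation C = A ∨ B.

    # Made-up priors for the (independent) causes A and B.
    P_A, P_B = 0.3, 0.5

    def P_C_given(a, b):
        """Deterministic 'CPT' for C = A ∨ B: probability that C is True."""
        return 1.0 if (a or b) else 0.0

    # Joint P(A,B,C) induced by the equation:
    joint = {(a, b, c): (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)
                        * (P_C_given(a, b) if c else 1 - P_C_given(a, b))
             for a in (True, False) for b in (True, False) for c in (True, False)}

    print(sum(p for (a, b, c), p in joint.items() if c))   # P(C) = P(A ∨ B) = 0.65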

  50. Naïve Bayes Models • P(Cause, Effect1, …, Effectn) = P(Cause) Π_i P(Effecti | Cause) • [Network diagram: Cause → Effect1, Cause → Effect2, …, Cause → Effectn]
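
A short sketch of a naïve Bayes model with made-up parameters (one boolean Cause, three boolean Effects): the joint is the product P(Cause) Π_i P(Effect_i | Cause), and a posterior over Cause given observed effects follows by normalization.

    # Made-up naïve Bayes parameters.
    P_cause = 0.1
    P_effect_given_cause = {   # name: (P(Effect=True | Cause), P(Effect=True | ¬Cause))
        'e1': (0.9, 0.2), 'e2': (0.7, 0.1), 'e3': (0.6, 0.3),
    }

    def joint(cause, effects):
        """P(Cause, Effect_1, ..., Effect_n) = P(Cause) * prod_i P(Effect_i | Cause)."""
        p = P_cause if cause else 1 - P_cause
        for name, value in effects.items():
            p_true = P_effect_given_cause[name][0 if cause else 1]
            p *= p_true if value else 1 - p_true
        return p

    # Posterior over Cause after observing all three effects to be True:
    observed = {'e1': True, 'e2': True, 'e3': True}
    num = joint(True, observed)
    posterior = num / (num + joint(False, observed))
    print(posterior)   # P(Cause | Effect_1, Effect_2, Effect_3)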
