
Uncertainty II CS570 Lecture Note by Jin Hyung Kim Computer Science Department KAIST








  1. Uncertainty II CS570 Lecture Note by Jin Hyung Kim Computer Science Department KAIST

  2. Inference in Network • From Pr(X), compute Pr(X|e) after observing evidence e • Pr(X|e) = Pr(X|e+, e-) • e+: evidence from its parent(s), causal support • e-: evidence from its children, diagnostic support • Boundary of network • variables with either no parents or no children

  3. Inference in Network • Diagnostic inference • from effect to cause • Causal inference • from cause to effects • Intercausal inference • between causes of a common effect • explaining away • Mixed inference • combining two or more of the above
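The inference directions above can be illustrated on a hypothetical two-node network Flu → Fever; all the numbers below are invented for illustration, not from the lecture:

```python
# Hypothetical two-node network Flu -> Fever (all numbers made up).
p_flu = 0.1
p_fever_given_flu = 0.8
p_fever_given_not_flu = 0.2

# Causal inference (cause -> effect): predict the effect's marginal.
p_fever = p_flu * p_fever_given_flu + (1 - p_flu) * p_fever_given_not_flu

# Diagnostic inference (effect -> cause): Bayes' rule.
p_flu_given_fever = p_flu * p_fever_given_flu / p_fever

print(round(p_fever, 4))            # 0.26
print(round(p_flu_given_fever, 4))  # 0.3077
```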

  4. Inference in Tree-structured Network • [Figure: node X with parent Z and children Y1, Y2, …, Yn]

  5. Evidence Fusion • e = { Z=z, Y1=y1, …, Yn=yn } • Pr(X|e) = Pr(X|e+, e-) = Pr(X|e+, e1-, e2-, …, en-) = α π(X) ∏i λi(X), where π(X) = Pr(X|e+) is the causal support from parent Z, λi(X) = Pr(ei-|X) is obtained from the ith child, and α is a normalizing constant
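A minimal sketch of the fusion step, assuming the π message from the parent and the λ messages from two children have already been computed; all message values are made up:

```python
# Evidence fusion at node X (Pearl-style), for a binary variable X.
def fuse(pi, lambdas):
    """Combine causal support pi(X) with diagnostic supports lambda_i(X)."""
    unnorm = []
    for x in range(len(pi)):
        v = pi[x]
        for lam in lambdas:
            v *= lam[x]          # product over children's lambda messages
        unnorm.append(v)
    z = sum(unnorm)
    return [v / z for v in unnorm]   # alpha normalizes to a distribution

# pi from parent Z; two children send lambda messages (made-up numbers)
belief = fuse([0.7, 0.3], [[0.9, 0.2], [0.5, 0.5]])
```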

  6. Evidence Propagation • Evidence propagation computation is local • depends only on a node's children and parent • Diagnostic support (λ) from X to its parent • Causal support (π) from X to its kth child • Two recursive functions

  7. [Figure: node X with parent Z and children Y1, …, Yk, …, Yn]

  8. Evidence Propagation Algorithm • Initialize causal and diagnostic support of all nodes • absence of evidence • Propagate up • until d-support is unchanged or an instantiated root node is reached • Propagate down • until c-support is unchanged or an instantiated terminal node is reached • Complexity of updating Pr(X|e) for all X ∈ V is O(|V|)
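The up/down passes can be sketched on a three-node chain Z → X → Y with evidence Y = 1; the CPT numbers below are hypothetical, chosen only to make the computation concrete:

```python
# One up/down propagation pass on a chain Z -> X -> Y, evidence Y = 1.
p_z = [0.6, 0.4]                    # Pr(Z)
p_x_given_z = [[0.9, 0.1],          # Pr(X | Z=0)
               [0.3, 0.7]]          # Pr(X | Z=1)
p_y_given_x = [[0.8, 0.2],          # Pr(Y | X=0)
               [0.1, 0.9]]          # Pr(Y | X=1)

# Propagate up: diagnostic support lambda(X) = Pr(Y=1 | X)
lam_x = [p_y_given_x[x][1] for x in range(2)]

# Propagate down: causal support pi(X) = sum_z Pr(X|z) Pr(z)
pi_x = [sum(p_x_given_z[z][x] * p_z[z] for z in range(2)) for x in range(2)]

# Fuse and normalize to get Pr(X | Y=1)
unnorm = [pi_x[x] * lam_x[x] for x in range(2)]
belief_x = [v / sum(unnorm) for v in unnorm]
```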

  9. Extension of EPA to Singly Connected Networks (1) • A node can have more than one parent, but there is only one path between any two nodes • Also called a poly-tree • Must handle c-support from more than one parent • A conditional probability (CP) table is needed

  10. Singly Connected Network • [Figure: node X with parents Z1, …, Zm and children Y1, …, Yk, …, Yn]

  11. Extension of EPA to Singly Connected Networks (2) • D-support to the jth parent: λX(Zj) = Σx λ(x) Σ{zk: k≠j} Pr(x | z1, …, zm) ∏k≠j πX(zk) • Message passing and propagation • analogy to neural networks • complexity is linear in the network size • no recurrent computation

  12. Inference on Multiply Connected Networks • More than one path between two variables • Clustering method • transform the network into a probabilistically equivalent poly-tree • Conditioning method • transform by instantiating variables to definite values • Stochastic simulation • generate a large number of concrete models consistent with the network distribution • Likelihood weighting • a variation of stochastic simulation for speed-up

  13. Clustering Method • Transform into a poly-tree by generating meganodes • sprinkler and rain → sprinkler+rain • possible values from sprinkler × rain • Worst case: the CP table can be exponential • [Figure: network A→B, A→C, B→D, C→D becomes A→(B+C)→D]
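The meganode construction can be sketched as follows, assuming hypothetical CPTs Pr(B|A) and Pr(C|A) with B and C conditionally independent given A (as in the A→B, A→C fragment above):

```python
# Merge B and C into a meganode BC: its CPT given A is the product of
# the two original CPTs, and BC ranges over B x C (4 values here).
import itertools

p_b_given_a = {0: 0.7, 1: 0.2}   # Pr(B=1 | A=a), hypothetical
p_c_given_a = {0: 0.4, 1: 0.9}   # Pr(C=1 | A=a), hypothetical

p_bc_given_a = {}
for a, b, c in itertools.product([0, 1], repeat=3):
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    p_bc_given_a[(a, (b, c))] = pb * pc
```

The meganode's value set is the cross product of the merged variables' values, which is why the CP table can grow exponentially in the worst case.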

  14. Cutset Conditioning Method • Transform into several simpler poly-trees • one or more variables instantiated to definite values • Pr(X|E) is computed as a weighted average over the values • Cutset: a set of variables that can be instantiated to yield a poly-tree • The number of poly-trees can be exponential • find as small a cutset as possible • evaluate the most likely poly-trees first • in decreasing order of likelihood
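The weighted average over cutset values can be sketched as follows; the per-poly-tree results and weights are hypothetical:

```python
# Cutset conditioning: with cutset variable C instantiated to each value,
# Pr(X|E) is a weighted average of the per-poly-tree answers.
p_c_given_e = [0.3, 0.7]           # weights Pr(C=c | E), made up
p_x_given_e_c = [0.9, 0.4]         # Pr(X=1 | E, C=c) from each poly-tree
p_x_given_e = sum(w * p for w, p in zip(p_c_given_e, p_x_given_e_c))
# 0.3 * 0.9 + 0.7 * 0.4 = 0.55
```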

  15. Stochastic Simulation • Monte Carlo method • choose a value for each root node, weighted by its prior probability • Dean's book, pages 383-384 • choose values of descendant variables randomly using conditional probabilities • relative frequency approximates the probability • as the number of trials goes to infinity, it converges to the true value • Too much time required • especially for small-probability events
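A sketch of forward sampling with rejection, using the same hypothetical Flu → Fever numbers as earlier (all invented): roots are sampled from priors, descendants from CPTs, and the relative frequency among samples consistent with the evidence estimates the posterior.

```python
import random

random.seed(0)

def sample_model():
    # Sample the root from its prior, the child from its CPT.
    flu = random.random() < 0.1
    p_fever = 0.8 if flu else 0.2
    fever = random.random() < p_fever
    return flu, fever

hits = total = 0
for _ in range(100000):
    flu, fever = sample_model()
    if fever:                 # keep only samples matching the evidence
        total += 1
        hits += flu
estimate = hits / total       # converges to Pr(flu | fever) ~ 0.308
```

The weakness the slide notes is visible here: samples inconsistent with the evidence are thrown away, so rare evidence wastes almost all the work.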

  16. Likelihood Weighting • Each sample carries a weight; the ratio of weight sums is used to compute the probability • Instead of randomly choosing a value for an evidence variable, take its conditional probability as a likelihood weight • Dean's book, page 385 • Much faster than simple stochastic simulation, but still slow for small probability values
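A sketch of likelihood weighting on the same hypothetical Flu → Fever network with evidence Fever = true: the evidence variable is never sampled, and each sample is weighted by the likelihood of the evidence instead.

```python
import random

random.seed(0)

num = den = 0.0
for _ in range(100000):
    flu = random.random() < 0.1    # sample only the non-evidence variable
    weight = 0.8 if flu else 0.2   # Pr(fever | flu) as likelihood weight
    den += weight
    if flu:
        num += weight
estimate = num / den               # converges to Pr(flu | fever)
```

No samples are rejected, which is the speed-up over plain stochastic simulation; very small evidence probabilities still make most weights tiny.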

  17. Probabilistic Reasoning in Medicine • Simple diagnosis: symptoms and diseases • Find the most likely disease given the evidence • find h maximizing Pr(H=h|e) • Assumptions • one disease at a time • symptoms are independent given the disease • In practice, neither assumption holds • Simple but impressive performance • acute abdominal pain such as appendicitis • de Dombal's system: 90% accuracy • expert physicians: 65%-80% on average

  18. A More Complicated Model • Quick Medical Reference (QMR) • knowledge base and diagnosis for internal medicine • Figure 8.11 • 600 diseases, 4,000 findings, 40,000 edges • exact algorithms are impractical • many parents, not Boolean • one success by stochastic sampling

  19. Noisy-OR Relationship • Boolean OR: (H1 ∨ H2 ∨ H3 ∨ H4) → F • In the stochastic case, noisy-OR: Pr(F | H1, …, Hn) = 1 − ∏{i: Hi=true} qi, where qi = 1 − Pr(F | only Hi true) is the probability that cause Hi fails to produce F
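The noisy-OR combination can be sketched directly; the parameters below are the fever example from the next slide, where each present cause independently fails to produce the effect with probability qi:

```python
def noisy_or(p_single, present):
    """P(effect) when each present cause i alone causes it with prob p_single[i]."""
    p_not = 1.0
    for p, on in zip(p_single, present):
        if on:
            p_not *= 1 - p       # cause i fails with probability q_i = 1 - p
    return 1 - p_not

# P(fever|cold) = 0.4, P(fever|flu) = 0.8, P(fever|malaria) = 0.9
p = [0.4, 0.8, 0.9]
noisy_or(p, [True, True, True])   # 1 - 0.6 * 0.2 * 0.1 = 0.988
```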

  20. Noisy-OR Example • P(fever|cold) = 0.4, P(fever|flu) = 0.8, P(fever|malaria) = 0.9

    cold  flu  malaria  P(fever)  P(¬fever)
    F     F    F        0.0       1.0
    F     F    T        0.9       0.1
    F     T    F        0.8       0.2
    F     T    T        0.98      0.02  = 0.2 × 0.1
    T     F    F        0.4       0.6
    T     F    T        0.94      0.06  = 0.6 × 0.1
    T     T    F        0.88      0.12  = 0.6 × 0.2
    T     T    T        0.988     0.012 = 0.6 × 0.2 × 0.1

  21. Decision Theory • Preference as a utility function U(s) of state s • Expected utility of action A given evidence E: EU(A|E) = Σi Pr(Resulti(A) | E, Do(A)) U(Resulti(A)) • Principle of Maximum Expected Utility (MEU) • rational agent behavior • a prescription: what an agent ought to do • One-shot decision vs. sequential decision
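A minimal sketch of choosing the MEU action; the states, utilities, and outcome distributions below are invented for illustration:

```python
# Pick the action whose outcome distribution has the highest expected utility.
def expected_utility(outcome_probs, utility):
    return sum(p * utility[s] for s, p in outcome_probs.items())

utility = {"cured": 100, "same": 50, "worse": 0}   # U(s), made up
actions = {
    "treat":    {"cured": 0.7, "same": 0.1, "worse": 0.2},
    "no_treat": {"cured": 0.1, "same": 0.8, "worse": 0.1},
}
best = max(actions, key=lambda a: expected_utility(actions[a], utility))
# EU(treat) = 75, EU(no_treat) = 50, so the MEU action is "treat"
```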

  22. Rational Preference • Orderability: (A>B) ∨ (B>A) ∨ (A~B) • Transitivity: (A>B) ∧ (B>C) ⇒ (A>C) • Continuity: (A>B>C) ⇒ ∃p [p,A; 1−p,C] ~ B • Substitutability: (A~B) ⇒ [p,A; 1−p,C] ~ [p,B; 1−p,C] • Monotonicity: (A>B) ⇒ (p ≥ q ⇔ [p,A; 1−p,B] >~ [q,A; 1−q,B]) • Decomposability: [p,A; 1−p,[q,B; 1−q,C]] ~ [p,A; (1−p)q,B; (1−p)(1−q),C] • Utility Principle • U(A) > U(B) ⇔ A>B • U(A) = U(B) ⇔ A~B

  23. The Utility of Money • Monotonic preference • (1, $1M) vs. (0.5, $3M; 0.5, $0) • Expected dollar value vs. expected utility • Bernoulli's St. Petersburg paradox • if the first head appears on the nth toss, you win 2^n dollars • logarithmic utility function • risk aversion • insurance • law of large numbers • [Figure: concave utility curve, axes U and $]
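The paradox can be checked numerically: truncating the game at a finite number of tosses, the expected dollar winnings grow linearly with the truncation point (each toss contributes 1), while the expected logarithmic utility converges.

```python
import math

terms = 40   # truncate the infinite game at 40 tosses

# E[dollars] = sum over n of (1/2)^n * 2^n = 1 per term -> diverges
expected_dollars = sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))

# E[log utility] = sum over n of (1/2)^n * log(2^n) -> converges to 2 ln 2
expected_log_utility = sum((0.5 ** n) * math.log(2 ** n)
                           for n in range(1, terms + 1))
```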

  24. Game Analysis • Game against rational opponent • Evaluation (utility) function • minimax strategy • most likely response • maximum expected value strategy • expectiminimax • Game Searching • objective : find best move • minimax algorithm • alpha-beta algorithm

  25. Decision Analysis • decision structuring • decision problems represented as • situation-action pairs • decision tree • goal-oriented AND-OR graph • game against neutral god

  26. Decision Tree Method • Decision node • a choice of action for the decision maker • Chance node • the neutral god's choice • Choose the action of Maximum Expected Value (MEV)
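A sketch of decision-tree evaluation: chance nodes take expectations, decision nodes take the max (the MEV action). The tree and payoffs below are invented:

```python
# Evaluate a tiny decision tree: max at decision nodes, expectation at
# chance nodes, payoff at leaves.
def evaluate(node):
    kind, children = node[0], node[1]
    if kind == "leaf":
        return children                      # payoff value
    values = [evaluate(c) for c in children]
    if kind == "decision":
        return max(values)                   # decision maker chooses
    probs = node[2]                          # chance: neutral god's draw
    return sum(p * v for p, v in zip(probs, values))

tree = ("decision", [
    ("chance", [("leaf", 100), ("leaf", -20)], [0.6, 0.4]),  # risky action
    ("leaf", 30),                                            # safe action
])
evaluate(tree)   # chance: 0.6*100 + 0.4*(-20) = 52; decision: max(52, 30)
```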

  27. Value of Information • The value of decision analysis is knowing what to ask • gain in expected utility from using the information • V(Info) = (MEV with Info) − (MEV without Info) • V(Info) ≥ 0 always • example
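The V(Info) computation can be sketched for a hypothetical perfect test that reveals the state before acting; all payoffs and probabilities are invented:

```python
# Value of (perfect) information = MEV knowing the state - MEV without it.
p_good = 0.4                       # prior that the state is "good"
payoff = {("act", "good"): 100, ("act", "bad"): -50,
          ("skip", "good"): 0,    ("skip", "bad"): 0}

def mev(p):
    """Best expected value when Pr(good) = p."""
    return max(p * payoff[("act", "good")] + (1 - p) * payoff[("act", "bad")],
               p * payoff[("skip", "good")] + (1 - p) * payoff[("skip", "bad")])

# With the test result, the decision maker acts knowing the state:
mev_with_info = p_good * mev(1.0) + (1 - p_good) * mev(0.0)
mev_without = mev(p_good)
voi = mev_with_info - mev_without   # 40 - 10 = 30; never negative
```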

  28. Automated Decision Making in Medicine • choose tests considering risk and cost • an expert system that maximizes the expected utility of the patient • generates recommendations, not conclusions • legal responsibility
