Mean Field Inference in Dependency Networks: An Empirical Study
Daniel Lowd and Arash Shamaei
University of Oregon
Learning and Inference in Graphical Models
We want to learn a probability distribution from data and use it to answer queries.
Applications: medical diagnosis, fault diagnosis, web usage analysis, bioinformatics, collaborative filtering, etc.
[Diagram: Data → Learning → Model (A, B, C) → Inference → Answers!]
One-Slide Summary
• In dependency networks, mean field inference is faster than Gibbs sampling, with similar accuracy.
• Dependency networks are competitive with Bayesian networks.
Outline
• Graphical models: dependency networks vs. others
  • Representation
  • Learning
  • Inference
• Mean field inference in dependency networks
• Experiments
Dependency Networks [Heckerman et al., 2000]
Represents a probability distribution over {X1, …, Xn} as a set of conditional probability distributions, one per variable: $P(X_i \mid X_{-i})$.
Example: [Diagram: a three-variable network over X1, X2, X3]
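As a rough sketch (not the paper's actual data structures), a dependency network over binary variables can be represented as one conditional distribution per variable. The structure and probabilities below are invented for illustration:

```python
# A minimal dependency network over three binary variables.
# Each entry maps a variable index to a function computing
# P(X_i = 1 | x_{-i}) from the other variables' current values.
# All structure and numbers here are made up for illustration.
def make_example_dn():
    return {
        0: lambda x: 0.8 if x[1] == 1 else 0.2,    # P(X0=1 | X1)
        1: lambda x: 0.7 if x[2] == 1 else 0.4,    # P(X1=1 | X2)
        2: lambda x: 0.6 if x[0] == x[1] else 0.3, # P(X2=1 | X0, X1)
    }
```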
Learning Dependency Networks [Heckerman et al., 2000]
For each variable Xi, learn a conditional distribution $P(X_i \mid X_{-i})$.
[Diagram: example probability trees for two variables, splitting on B and C, with leaf distributions such as <0.2, 0.8>, <0.5, 0.5>, <0.7, 0.3>, and <0.4, 0.6>]
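A hedged sketch of the learning step: Heckerman et al. learn each conditional as a probability tree, but simple smoothed counting over a fixed parent set illustrates the same idea. The function and argument names here are illustrative:

```python
from collections import defaultdict

def learn_conditional(data, i, parents, alpha=1.0):
    """Estimate P(X_i = 1 | parent configuration) by smoothed counting.
    `data` is a list of binary assignments (lists of 0/1); `parents`
    lists the indices X_i is conditioned on. Illustrative only: the
    paper learns probability trees rather than full tables."""
    counts = defaultdict(lambda: [alpha, alpha])  # [count of X_i=0, count of X_i=1]
    for x in data:
        key = tuple(x[j] for j in parents)
        counts[key][x[i]] += 1
    return {key: c1 / (c0 + c1) for key, (c0, c1) in counts.items()}

# Example: P(X2 = 1 | X0, X1) from three training instances:
# cond = learn_conditional([[0, 1, 1], [1, 1, 0], [0, 0, 1]], i=2, parents=[0, 1])
```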
Approximate Inference Methods
• Gibbs sampling: Slow but effective
• Mean field: Fast and usually accurate
• Belief propagation: Fast and usually accurate
Gibbs Sampling
Resample each variable in turn, given its neighbors:
$x_i \sim P(X_i \mid x_{-i})$
Use the set of samples to answer queries, e.g.:
$P(X_i = x_i) \approx \frac{1}{N} \sum_{j=1}^{N} \mathbb{1}[x_i^{(j)} = x_i]$
Converges to the true distribution, given enough samples (assuming a positive distribution).
Previously, the only method used to compute probabilities in DNs.
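A minimal sketch of Gibbs sampling in a dependency network, assuming the `make_example_dn`-style interface from the earlier sketch (one function per variable returning P(X_i = 1 | x_{-i})); the helper names and parameters are illustrative:

```python
import random

def gibbs_sample(dn, n_vars, n_samples, burn_in=100):
    """Gibbs sampling in a dependency network: repeatedly resample
    each variable from its conditional given the current state."""
    x = [random.randint(0, 1) for _ in range(n_vars)]  # random initial state
    samples = []
    for t in range(burn_in + n_samples):
        for i in range(n_vars):
            p1 = dn[i](x)                    # P(X_i = 1 | x_{-i})
            x[i] = 1 if random.random() < p1 else 0
        if t >= burn_in:                     # discard burn-in samples
            samples.append(list(x))
    return samples

# Estimate a marginal from the samples, e.g. P(X0 = 1):
# samples = gibbs_sample(make_example_dn(), 3, 1000)
# p = sum(s[0] for s in samples) / len(samples)
```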
Mean Field
Approximate P with a simpler, fully factorized distribution Q:
$Q(X) = \prod_i Q(X_i)$
To find the best Q, optimize the reverse K-L divergence:
$KL(Q \,\|\, P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)}$
Mean field updates converge to a local optimum:
$Q(X_i = x_i) \propto \exp\left( E_Q[\log P(x_i \mid X_{-i})] \right)$
Works for DNs! Never before tested!
Mean Field in Dependency Networks
• Initialize each Q(Xi) to a uniform distribution.
• Update each Q(Xi) in turn:
  $Q(x_i) \propto \exp\left( E_Q[\log P(x_i \mid X_{-i})] \right)$
• Stop when the marginals Q(Xi) converge.
If the DN is consistent (its conditionals agree with a single joint distribution), this is guaranteed to converge. If inconsistent, this always seems to converge in practice.
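A minimal sketch of this update loop, assuming binary variables and the conditional-function interface from the earlier sketches. The expectation $E_Q[\log P(x_i \mid X_{-i})]$ is computed by brute-force enumeration, so this only scales to a handful of variables; the paper's implementation would exploit the local structure of each conditional instead:

```python
import itertools
import math

def mean_field(dn, n_vars, max_iters=100, tol=1e-6):
    """Mean field inference in a dependency network (sketch).
    q[i] approximates P(X_i = 1)."""
    q = [0.5] * n_vars  # uniform initialization
    for _ in range(max_iters):
        max_delta = 0.0
        for i in range(n_vars):
            others = [j for j in range(n_vars) if j != i]
            # Accumulate E_Q[log P(X_i = 1 | x_{-i})] and the X_i = 0 case.
            e1 = e0 = 0.0
            for vals in itertools.product([0, 1], repeat=len(others)):
                x = [0] * n_vars
                w = 1.0  # Q-probability of this configuration of X_{-i}
                for j, v in zip(others, vals):
                    x[j] = v
                    w *= q[j] if v == 1 else 1.0 - q[j]
                p1 = dn[i](x)  # P(X_i = 1 | x_{-i})
                e1 += w * math.log(p1)
                e0 += w * math.log(1.0 - p1)
            new_q = math.exp(e1) / (math.exp(e1) + math.exp(e0))
            max_delta = max(max_delta, abs(new_q - q[i]))
            q[i] = new_q
        if max_delta < tol:  # marginals have converged
            break
    return q

# e.g., marginals = mean_field(make_example_dn(), 3)
```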
Empirical Questions
Q1. In DNs, how does MF compare to Gibbs sampling in speed and accuracy?
Q2. How do DNs compare to BNs in inference speed and accuracy?
Experiments
• Learned DNs and BNs on 12 datasets
• Generated queries from test data
• Varied evidence variables from 10% to 90%
• Score using average CMLL per variable (conditional marginal log-likelihood):
  $\mathrm{CMLL}(x \mid e) = \frac{1}{|Q|} \sum_{i \in Q} \log P(X_i = x_i \mid e)$
  where e is the evidence and Q indexes the query variables.
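A small sketch of how this score could be computed from inferred marginals, assuming binary variables; the function name and inputs are illustrative, not the paper's evaluation code:

```python
import math

def avg_cmll(marginals, true_values):
    """Average conditional marginal log-likelihood per query variable.
    `marginals[i]` is the inferred P(X_i = 1 | evidence) for each query
    variable; `true_values[i]` is that variable's actual value in the
    test instance."""
    total = 0.0
    for p1, v in zip(marginals, true_values):
        total += math.log(p1 if v == 1 else 1.0 - p1)
    return total / len(true_values)

# e.g., with marginals from mean_field() or Gibbs estimates:
# score = avg_cmll([0.9, 0.3], [1, 0])
```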
Results: Accuracy in DNs
[Figure: negative CMLL (lower is better) across datasets]
MF vs. Gibbs in DNs, run for equal time
In DNs, MF is usually more accurate, given equal time.
Gibbs: DN vs. BN
With more evidence, DNs are more accurate.
Experimental Results
Q1. In DNs, how does MF compare to Gibbs sampling in speed and accuracy?
A1. MF is consistently faster with similar accuracy, or more accurate with similar speed.
Q2. How do DNs compare to BNs in inference speed and accuracy?
A2. DNs are competitive with BNs: better with more evidence, worse with less evidence.
Conclusion
• MF inference in DNs is fast and accurate, especially with more evidence.
• Future work:
  • Relational dependency networks (Neville & Jensen, 2007)
  • More powerful approximations
Source code available: http://libra.cs.uoregon.edu/
Learned Models
• Learning time is comparable.
• DNs usually have higher pseudo-likelihood (PLL).
• DNs sometimes have higher log-likelihood (LL).