Unification of Logic and Probability: Markov Logic Overview

An overview of how Markov Logic Networks unify first-order logic's ability to represent complex worlds with probability's ability to handle uncertainty, and of the inference and learning methods this unification enables.

Presentation Transcript


  1. Markov Logic: A Simple and Powerful Unification of Logic and Probability. Pedro Domingos, Dept. of Computer Science & Eng., University of Washington. Joint work with Stanley Kok, Daniel Lowd, Hoifung Poon, Matt Richardson, Parag Singla, Marc Sumner, and Jue Wang

  2. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  3. Motivation • The real world is complex and uncertain • First-order logic handles complexity • Probability handles uncertainty • We need to unify the two

  4. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  5. Markov Networks • Undirected graphical models • Potential functions defined over cliques [figure: example network over Smoking, Cancer, Asthma, Cough]

  6. Markov Networks • Undirected graphical models • Log-linear model: P(x) = (1/Z) exp(∑_i w_i f_i(x)), where w_i is the weight of feature i and f_i(x) is feature i [figure: example network over Smoking, Cancer, Asthma, Cough]
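
To make the log-linear form concrete, here is a minimal Python sketch that scores joint states of the four variables in the figure (Smoking, Cancer, Asthma, Cough). The two features and the weights 1.5 and 1.0 are invented for illustration and are not taken from the talk.

    import itertools, math

    # Hypothetical features over the cliques {Smoking, Cancer} and {Asthma, Cough};
    # the weights are illustrative only.
    features = [
        (lambda s: not (s["Smoking"] and not s["Cancer"]), 1.5),   # "Smoking => Cancer"
        (lambda s: not (s["Asthma"] and not s["Cough"]), 1.0),     # "Asthma => Cough"
    ]

    variables = ["Smoking", "Cancer", "Asthma", "Cough"]
    worlds = [dict(zip(variables, vals))
              for vals in itertools.product([False, True], repeat=len(variables))]

    def weight(world):
        # exp(sum_i w_i * f_i(x)): each feature contributes its weight when it holds
        return math.exp(sum(w * f(world) for f, w in features))

    Z = sum(weight(x) for x in worlds)                      # partition function
    x = {"Smoking": True, "Cancer": True, "Asthma": False, "Cough": False}
    print(weight(x) / Z)                                    # P(x) under the log-linear model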

  7. First-Order Logic • Constants, variables, functions, predicates. E.g.: Anna, X, mother_of(X), friends(X, Y) • Grounding: Replace all variables by constants. E.g.: friends(Anna, Bob) • World (model, interpretation): Assignment of truth values to all ground predicates

  8. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  9. Representations

  10. Markov Logic: Intuition • A logical KB is a set of hard constraints on the set of possible worlds • Let’s make them soft constraints: when a world violates a formula, it becomes less probable, not impossible • Give each formula a weight (higher weight → stronger constraint)

  11. Markov Logic: Definition • A Markov Logic Network (MLN) is a set of pairs (F, w) where • F is a formula in first-order logic • w is a real number • Together with a set of constants, it defines a Markov network with • One node for each grounding of each predicate in the MLN • One feature for each grounding of each formula F in the MLN, with the corresponding weight w

  12.–14. Example: Friends & Smokers (three slides; content beyond the title is not captured in the transcript)

  15. Example: Friends & Smokers Two constants: Anna (A) and Bob (B)

  16. Example: Friends & Smokers. Two constants: Anna (A) and Bob (B). Ground atoms: Smokes(A), Smokes(B), Cancer(A), Cancer(B)

  17.–19. Example: Friends & Smokers. Two constants: Anna (A) and Bob (B). Ground atoms: Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), Smokes(A), Smokes(B), Cancer(A), Cancer(B) (the ground Markov network over these atoms is built up across three slides)
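
To make the grounding step concrete, here is a short Python sketch that enumerates exactly the ground atoms listed above for the constants Anna (A) and Bob (B); the predicate arities mirror the example, and the code itself is only an illustration, not part of the original talk.

    from itertools import product

    constants = ["A", "B"]                                   # Anna and Bob
    predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}    # predicate name -> arity

    # One node per grounding of each predicate, as on the slides above
    ground_atoms = [pred + "(" + ",".join(args) + ")"
                    for pred, arity in predicates.items()
                    for args in product(constants, repeat=arity)]
    print(ground_atoms)
    # ['Smokes(A)', 'Smokes(B)', 'Cancer(A)', 'Cancer(B)',
    #  'Friends(A,A)', 'Friends(A,B)', 'Friends(B,A)', 'Friends(B,B)']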

  20. Markov Logic Networks • MLN is template for ground Markov nets • Probability of a world x: P(x) = (1/Z) exp(∑_i w_i n_i(x)), where w_i is the weight of formula i and n_i(x) is the no. of true groundings of formula i in x • Typed variables and constants greatly reduce size of ground Markov net • Functions, existential quantifiers, etc. • Infinite and continuous domains
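
A minimal sketch of how the probability formula above could be evaluated; the numbers in the example call are illustrative, and computing the grounding counts n_i(x) and the partition function Z is left to the caller.

    import math

    def world_weight(weights, counts):
        # Unnormalized weight of a world x: exp(sum_i w_i * n_i(x)), where weights[i]
        # is w_i and counts[i] is n_i(x), the number of true groundings of formula i
        # in x (how the counts are obtained is left to the caller).
        return math.exp(sum(w * n for w, n in zip(weights, counts)))

    # Illustrative numbers only (not from the slides): two formulas, weights 1.5 and 1.1,
    # with 3 and 2 true groundings in some world x.
    unnormalized = world_weight([1.5, 1.1], [3, 2])
    # P(x) is this value divided by Z, the same sum taken over every possible world.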

  21. Relation to Statistical Models • Special cases (obtained by making all predicates zero-arity): Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields • Markov logic allows objects to be interdependent (non-i.i.d.)

  22. Relation to First-Order Logic • Infinite weights → first-order logic • Satisfiable KB, positive weights → satisfying assignments = modes of distribution • Markov logic allows contradictions between formulas

  23. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  24. Inferring the Most Probable Explanation • Problem: Find most likely state of world given evidence: argmax_y P(y | x), with y the query atoms and x the evidence

  25.–26. Inferring the Most Probable Explanation • Problem: Find most likely state of world given evidence • Since P(y | x) ∝ exp(∑_i w_i n_i(x, y)), this is equivalent to maximizing the weighted sum ∑_i w_i n_i(x, y) over y (derivation built up across two slides)

  27. Inferring the Most Probable Explanation • Problem: Find most likely state of world given evidence • This is just the weighted MaxSAT problem • Use weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997]) • Potentially faster than logical inference (!)

  28. The WalkSAT Algorithm
    for i ← 1 to max-tries do
        solution = random truth assignment
        for j ← 1 to max-flips do
            if all clauses satisfied then return solution
            c ← random unsatisfied clause
            with probability p
                flip a random variable in c
            else
                flip variable in c that maximizes number of satisfied clauses
    return failure

  29. The MaxWalkSAT Algorithm
    for i ← 1 to max-tries do
        solution = random truth assignment
        for j ← 1 to max-flips do
            if ∑ weights(sat. clauses) > threshold then return solution
            c ← random unsatisfied clause
            with probability p
                flip a random variable in c
            else
                flip variable in c that maximizes ∑ weights(sat. clauses)
    return failure, best solution found
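
Below is a compact Python sketch of the MaxWalkSAT loop shown on the last two slides; the clause encoding (signed integer literals), the parameter defaults, and the helper names are assumptions made for illustration, not code from the talk or from Alchemy.

    import random

    def maxwalksat(clauses, n_vars, target, max_tries=10, max_flips=1000, p=0.5):
        # clauses: list of (weight, literals); a literal is +v or -v for variable v (1-based).
        # Returns the best assignment found and its total satisfied weight.
        def sat_weight(assign):
            return sum(w for w, lits in clauses
                       if any((lit > 0) == assign[abs(lit)] for lit in lits))

        best, best_w = None, float("-inf")
        for _ in range(max_tries):
            assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_flips):
                w = sat_weight(assign)
                if w > best_w:
                    best, best_w = dict(assign), w
                if w >= target:                               # weight threshold reached
                    return assign, w
                unsat = [lits for wt, lits in clauses
                         if not any((lit > 0) == assign[abs(lit)] for lit in lits)]
                if not unsat:                                 # everything satisfied, still below target
                    break
                lits = random.choice(unsat)                   # a random unsatisfied clause
                if random.random() < p:                       # random-walk step
                    flip = abs(random.choice(lits))
                else:                                         # greedy step: pick the variable whose
                    def after_flip(v):                        # flip maximizes the satisfied weight
                        assign[v] = not assign[v]
                        w_after = sat_weight(assign)
                        assign[v] = not assign[v]
                        return w_after
                    flip = max((abs(lit) for lit in lits), key=after_flip)
                assign[flip] = not assign[flip]
        return best, best_w

    # WalkSAT is the special case: give every clause weight 1 and set target = len(clauses).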

  30. But … Memory Explosion • Problem: If there are n constants and the highest clause arity is c, the ground network requires O(n^c) memory • Solution: Exploit sparseness; ground clauses lazily → LazySAT algorithm [Singla & Domingos, 2006]
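
To make the blow-up concrete: with n = 1,000 constants and a single clause of arity c = 3, that one clause already has 1,000³ = 10⁹ groundings, which is why grounding everything eagerly is often infeasible (numbers chosen purely for illustration).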

  31. Computing Probabilities • P(Formula | MLN, C) = ? • MCMC: Sample worlds, check formula holds • P(Formula1 | Formula2, MLN, C) = ? • If Formula2 = conjunction of ground atoms • First construct min subset of network necessary to answer query (generalization of KBMC) • Then apply MCMC (or other) • Can also do lifted inference [Braz et al., 2005]

  32. MCMC: Gibbs Sampling
    state ← random truth assignment
    for i ← 1 to num-samples do
        for each variable x
            sample x according to P(x | neighbors(x))
            state ← state with new value of x
    P(F) ← fraction of states in which F is true
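
A small Python sketch of the sampling loop above; cond_prob and query are stand-ins supplied by the caller (computing P(x | neighbors(x)) from the ground network is the part this sketch deliberately leaves abstract).

    import random

    def gibbs_estimate(variables, cond_prob, query, num_samples):
        # variables: the ground atoms; cond_prob(x, state) must return
        # P(x = True | current values of x's neighbors) -- this is the part that
        # actually depends on the MLN and is left abstract here.
        # query(state) -> bool tests whether the formula F of interest holds.
        state = {x: random.random() < 0.5 for x in variables}    # random truth assignment
        hits = 0
        for _ in range(num_samples):
            for x in variables:
                state[x] = random.random() < cond_prob(x, state) # resample x given its neighbors
            hits += query(state)
        return hits / num_samples                                # fraction of samples where F is true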

  33. But … Insufficient for Logic • Problem: Deterministic dependencies break MCMC; near-deterministic ones make it very slow • Solution: Combine MCMC and WalkSAT → MC-SAT algorithm [Poon & Domingos, 2006]

  34. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  35. Learning • Data is a relational database • Closed world assumption (if not: EM) • Learning parameters (weights) • Generatively • Discriminatively • Learning structure (formulas)

  36. Generative Weight Learning • Maximize likelihood • Gradient: ∂/∂w_i log P_w(x) = n_i(x) − E_w[n_i(x)], i.e., no. of true groundings of clause i in data minus expected no. of true groundings according to model • Use gradient ascent or L-BFGS • No local maxima • Requires inference at each step (slow!)
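
The gradient on this slide is the difference between observed and expected counts, so a bare-bones gradient-ascent loop might look like the sketch below; expected_counts is a stand-in for the inference step, and the step size and iteration count are arbitrary.

    def learn_weights_generative(data_counts, expected_counts, steps=100, lr=0.01):
        # data_counts[i]: number of true groundings of clause i in the database (fixed).
        # expected_counts(weights): returns E_w[n_i] for every clause under the current
        # weights -- this is the inference call that makes each step expensive.
        weights = [0.0] * len(data_counts)
        for _ in range(steps):
            expected = expected_counts(weights)
            gradient = [n - e for n, e in zip(data_counts, expected)]  # d log P_w(x) / d w_i
            weights = [w + lr * g for w, g in zip(weights, gradient)]  # gradient ascent step
        return weights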

  37. Pseudo-Likelihood • Likelihood of each variable given its neighbors in the data • Does not require inference at each step • Consistent estimator • Widely used in vision, spatial statistics, etc. • But PL parameters may not work well for long inference chains
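
For reference, the usual definition (not spelled out in the transcript) is

    PL_w(x) = ∏_l P_w(X_l = x_l | MB_x(X_l)),

a product over all ground atoms l, each conditioned on its Markov blanket MB_x(X_l); every factor involves only the ground clauses containing X_l, which is why no global inference is needed.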

  38. Discriminative Weight Learning • Maximize conditional likelihood of query (y) given evidence (x) • Gradient: ∂/∂w_i log P_w(y | x) = n_i(x, y) − E_w[n_i(x, y)], i.e., no. of true groundings of clause i in data minus expected no. of true groundings according to model • Expected counts ≈ counts in most prob. state of y given x, found by MaxWalkSAT

  39. Structure Learning • Generalizes feature induction in Markov nets • Any inductive logic programming approach can be used, but . . . • Goal is to induce any clauses, not just Horn • Evaluation function should be likelihood • Requires learning weights for each candidate • Turns out not to be bottleneck • Bottleneck is counting clause groundings • Solution: Subsampling

  40. Structure Learning • Initial state: Unit clauses or hand-coded KB • Operators: Add/remove literal, flip sign • Evaluation function: Pseudo-likelihood + structure prior • Search: Beam search, shortest-first search
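
As a rough illustration of this search, the Python sketch below runs a beam search over clause sets; refinements (the add/remove-literal and flip-sign operators) and score (pseudo-likelihood plus a structure prior, with weights re-learned per candidate) are hypothetical helpers supplied by the caller.

    def beam_search_structure(initial_clauses, refinements, score, beam_width=5, max_steps=10):
        # initial_clauses: unit clauses or a hand-coded KB.
        # refinements(clauses): candidates obtained by adding/removing a literal or flipping a sign.
        # score(clauses): pseudo-likelihood of the data plus a structure prior.
        best = (score(initial_clauses), initial_clauses)
        beam = [best]
        for _ in range(max_steps):
            candidates = [(score(c), c) for _, current in beam for c in refinements(current)]
            if not candidates:
                break
            beam = sorted(candidates, key=lambda pair: pair[0], reverse=True)[:beam_width]
            if beam[0][0] > best[0]:                     # keep the best clause set seen so far
                best = beam[0]
        return best[1]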

  41. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  42. Alchemy Open-source software including: • Full first-order logic syntax • Generative & discriminative weight learning • Structure learning • Weighted satisfiability and MCMC • Programming language features www.cs.washington.edu/ai/alchemy

  43. Overview • Motivation • Background • Representation • Inference • Learning • Software • Applications • Discussion

  44. Applications • Information extraction • Entity resolution • Link prediction • Collective classification • Web mining • Natural language processing • Computational biology • Social network analysis • Robot mapping • Activity recognition • Probabilistic Cyc • CALO • Etc.

  45. Information Extraction • Example input: the same papers cited in several inconsistent ways (the typos and name variants below are part of the example data) • Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains”(AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006. P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

  46. Segmentation Author Title Venue Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains”(AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006. P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

  47. Entity Resolution Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains”(AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006. P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

  48. Entity Resolution Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains”(AAAI-06). Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press. H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006. P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

  49. State of the Art • Segmentation • HMM (or CRF) to assign each token to a field • Entity resolution • Logistic regression to predict same field/citation • Transitive closure • Alchemy implementation: Seven formulas
