
Markov Logic Networks


Presentation Transcript


  1. A brief introduction to Markov Logic Networks (De Raedt, Kersting, Natarajan, Poole: Statistical Relational AI)

  2. Markov Networks • Undirected graphical models (example variables: Smoking, Cancer, Asthma, Cough) • Potential functions φ_k defined over cliques; the joint distribution is P(x) = (1/Z) ∏_k φ_k(x_{k})

  3. Markov Networks • Undirected graphical models (same example: Smoking, Cancer, Asthma, Cough) • Log-linear model: P(x) = (1/Z) exp(∑_i w_i f_i(x)), where w_i is the weight of feature i and f_i(x) is feature i
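The log-linear form can be sketched in a few lines of Python. The two features and their weights below are illustrative placeholders (the slide does not give numbers); each feature is a 0/1 function over a clique of the Smoking/Cancer/Asthma/Cough network:

```python
import itertools
import math

# Illustrative features over cliques; weights are made-up for the sketch.
features = [
    (1.5, lambda x: x["Smoking"] == x["Cancer"]),  # smoking and cancer co-occur
    (1.1, lambda x: x["Asthma"] == x["Cough"]),    # asthma and cough co-occur
]
variables = ["Smoking", "Cancer", "Asthma", "Cough"]

def score(x):
    """Unnormalized log-linear score: exp(sum_i w_i * f_i(x))."""
    return math.exp(sum(w for w, f in features if f(x)))

# Normalize by summing over all 2^4 possible worlds.
worlds = [dict(zip(variables, vals))
          for vals in itertools.product([False, True], repeat=4)]
Z = sum(score(x) for x in worlds)

def prob(x):
    return score(x) / Z
```

Worlds whose variable assignments satisfy more (or heavier) features get proportionally higher probability; the normalizer Z makes the 16 world probabilities sum to one.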

  4. Markov Logic: Intuition • A logical KB is a set of hard constraints on the set of possible worlds • Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible • Give each formula a weight (higher weight ⇒ stronger constraint)

  5. Markov Logic • A Markov Logic Network (MLN) is a set of pairs (F, w) where • F is a formula in first-order logic • w is a real number • An MLN defines a Markov network with • One node for each grounding of each predicate in the MLN • One feature for each grounding of each formula F in the MLN, with the corresponding weight w

  6. Markov Logic Networks • Probability of a world x: P(x) = (1/Z) exp(∑_i w_i n_i(x)), where w_i is the weight of formula i and n_i(x) is the number of true groundings of formula i in x • Typed variables and constants greatly reduce the size of the ground Markov net • Functions, existential quantifiers, etc. • Infinite and continuous domains
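The world-probability formula is a one-liner once the true-grounding counts are available; a minimal sketch (the weights and counts here are illustrative, not from the slides):

```python
import math

def world_weight(formula_weights, true_grounding_counts):
    """Unnormalized MLN probability of a world: exp(sum_i w_i * n_i(x)),
    where n_i(x) is the number of true groundings of formula i in world x.
    Dividing by Z (this quantity summed over all worlds) gives P(x)."""
    return math.exp(sum(w * n for w, n in zip(formula_weights,
                                              true_grounding_counts)))

# Two formulas with illustrative weights 1.5 and 1.1; in this world,
# formula 1 has 2 true groundings and formula 2 has 4.
print(world_weight([1.5, 1.1], [2, 4]))  # exp(1.5*2 + 1.1*4) = exp(7.4)
```

Note that a world that satisfies more groundings of a positively weighted formula always receives a larger unnormalized weight, which is exactly the "soft constraint" behavior described above.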

  7. A possible worlds view. Say we have two domain elements, Anna and Bob, as well as two predicates, Friends and Happy.

  8. A possible worlds view. Logical formulas such as IF Friends(Anna,Bob) THEN Happy(Bob) exclude possible worlds.

  9. A possible worlds view. Instead of excluding worlds, we want them to become less likely.

  10. A possible worlds view. This is achieved with potentials: a world that satisfies the formula gets potential e^w, a world that violates it gets potential 1.

  11. A possible worlds view. Or, as a log-linear model: P(x) ∝ exp(w · n(x)), where n(x) is 1 if world x satisfies the formula and 0 otherwise. This can also be viewed as building a graphical model.
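The possible-worlds view can be made concrete by enumerating the four worlds over the atoms Friends(Anna,Bob) and Happy(Bob). The weight w = 2.0 is an illustrative placeholder; the point is only that the violating world gets a smaller, but nonzero, probability:

```python
import itertools
import math

w = 2.0  # illustrative weight for: Friends(Anna,Bob) => Happy(Bob)

def potential(friends_ab, happy_b):
    # Worlds satisfying the implication get factor e^w,
    # violating worlds get factor 1 (= e^0): less likely, not impossible.
    satisfied = (not friends_ab) or happy_b
    return math.exp(w) if satisfied else 1.0

worlds = list(itertools.product([False, True], repeat=2))
Z = sum(potential(f, h) for f, h in worlds)
probs = {(f, h): potential(f, h) / Z for f, h in worlds}

# The only violating world is (Friends=True, Happy=False):
print(probs[(True, False)], probs[(True, True)])
```

With a hard constraint the violating world would have probability zero; here it merely has probability 1/(3e^w + 1), shrinking as w grows.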

  12.–14. A graphical example: Friends & Smokers

  15. A graphical example: Friends & Smokers. Two constants: Anna (A) and Bob (B)

  16. A graphical example: Friends & Smokers. Two constants: Anna (A) and Bob (B). Ground atoms: Smokes(A), Smokes(B), Cancer(A), Cancer(B)

  17. A graphical example: Friends & Smokers. Two constants: Anna (A) and Bob (B). Ground atoms: Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), Smokes(A), Smokes(B), Cancer(A), Cancer(B)


  19. A graphical example: Friends & Smokers. Alternatively, this can be viewed as a set of weighted ground clauses.
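The grounding step (one node per grounding of each predicate) is mechanical; a minimal sketch for the two constants above:

```python
from itertools import product

constants = ["A", "B"]  # Anna and Bob
predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}  # name -> arity

# One ground atom per predicate per tuple of constants of the right arity;
# each becomes one node in the ground Markov network.
ground_atoms = [
    f"{pred}({','.join(args)})"
    for pred, arity in predicates.items()
    for args in product(constants, repeat=arity)
]
print(ground_atoms)  # 2 + 2 + 4 = 8 ground atoms
```

This is why typed variables matter: the number of ground atoms grows as (number of constants)^arity per predicate, so restricting which constants a variable ranges over directly shrinks the ground network.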

  20. A graphical example: Friends & Smokers. Can be combined with ideas from graphical models and logic programming to answer queries "efficiently". Two constants: Anna (A) and Bob (B). Query: P(Cancer(B) | Smokes(A), Friends(A,B), Friends(B,A))

  21.–23. A graphical example: Friends & Smokers. Follow the clauses for the query P(Cancer(B) | Smokes(A), Friends(A,B), Friends(B,A)).

  24. A graphical example: Friends & Smokers. We may not have to consider the complete induced Markov network to answer the query.
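For a domain this small, the query can be answered by brute-force enumeration over the 2^8 ground worlds. The sketch below assumes the usual Friends & Smokers formulas, Smokes(x) => Cancer(x) and Friends(x,y) => (Smokes(x) <=> Smokes(y)), with illustrative weights 1.5 and 1.1 (the slides' actual weights are not shown in the transcript):

```python
import itertools
import math

consts = ["A", "B"]
w_smoke_cancer, w_friends_alike = 1.5, 1.1  # illustrative weights

atoms = ([f"Smokes({x})" for x in consts]
         + [f"Cancer({x})" for x in consts]
         + [f"Friends({x},{y})" for x in consts for y in consts])

def world_weight(world):
    # n1: true groundings of Smokes(x) => Cancer(x)
    n1 = sum((not world[f"Smokes({x})"]) or world[f"Cancer({x})"]
             for x in consts)
    # n2: true groundings of Friends(x,y) => (Smokes(x) <=> Smokes(y))
    n2 = sum((not world[f"Friends({x},{y})"])
             or (world[f"Smokes({x})"] == world[f"Smokes({y})"])
             for x in consts for y in consts)
    return math.exp(w_smoke_cancer * n1 + w_friends_alike * n2)

worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
evidence = {"Smokes(A)": True, "Friends(A,B)": True, "Friends(B,A)": True}
consistent = [x for x in worlds
              if all(x[a] == v for a, v in evidence.items())]
p_query = (sum(world_weight(x) for x in consistent if x["Cancer(B)"])
           / sum(world_weight(x) for x in consistent))
print(p_query)  # P(Cancer(B) | Smokes(A), Friends(A,B), Friends(B,A))
```

Note that Z cancels in the conditional, so only evidence-consistent worlds need to be scored; real MLN systems replace this exponential enumeration with techniques such as MCMC or lifted inference.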

  25. Relation to Statistical Models. Special cases: Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields. These are obtained by making all predicates zero-arity. Markov logic allows objects to be interdependent (non-i.i.d.).

  26. Relation to First-Order Logic • Infinite weights ⇒ first-order logic (without function symbols) • Satisfiable KB, positive weights ⇒ satisfying assignments = modes of the distribution • Markov logic allows contradictions between formulas

  27. Applications: Natural language processing, Robotics, Collective Classification, Social Networks, Activity Recognition, …

  28. Information Extraction. Running example, four noisy citation strings (typos preserved, as in real bibliographic data):
  Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06).
  Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press.
  H. Poon & P. Domingos, “Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006.
  P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

  29. Segmentation: label each position of a citation with its field (Author, Title, Venue); same four citation strings as above.

  30.–31. Entity Resolution (same four citation strings as above).

  32. Implementing ER using MLNs: Types and Predicates
  token = {Parag, Singla, and, Pedro, ...}
  field = {Author, Title, Venue}
  citation = {C1, C2, ...}
  position = {0, 1, 2, ...}
  Token(token, position, citation)
  InField(position, field, citation)
  SameField(field, citation, citation)
  SameCit(citation, citation)

  33. Implementing ER using MLNs: Types and Predicates. Optional: the field type can be extended with further fields, field = {Author, Title, Venue, ...}.

  34. Implementing ER using MLNs: Types and Predicates. Token(token, position, citation) will be the evidence.

  35. Implementing ER using MLNs: Types and Predicates. Query predicates: InField does the segmentation; SameField and SameCit do the resolution, i.e., they say which parts are the same.

  36. Implementing ER using MLNs: Formulas
  Encoding an HMM/CRF:
  Token(+t,i,c) => InField(i,+f,c)
  InField(i,+f,c) <=> InField(i+1,+f,c)
  f != f' => (!InField(i,+f,c) v !InField(i,+f',c))
  Encoding the ER:
  Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
  SameField(+f,c,c') <=> SameCit(c,c')
  SameField(f,c,c') ^ SameField(f,c',c'') => SameField(f,c,c'')
  SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'')

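The '+' prefix in formulas like Token(+t,i,c) => InField(i,+f,c) means the formula template is expanded into one separately weighted copy per value of the '+' variables, much like a per-symbol emission table in an HMM/CRF. A minimal sketch of that expansion, reusing the token and field values from the type declarations above (the zero weights are placeholders that would be learned from data):

```python
from itertools import product

tokens = ["Parag", "Singla", "and", "Pedro"]
fields = ["Author", "Title", "Venue"]

# One separately weighted ground formula per (token, field) pair,
# so each word can have its own learned affinity for each field.
per_pair_formulas = {
    (t, f): 0.0  # weight of: Token(t,i,c) => InField(i,f,c)
    for t, f in product(tokens, fields)
}
print(len(per_pair_formulas))  # 4 tokens x 3 fields = 12 weighted formulas
```

Without the '+', the template would contribute a single shared weight; with it, weight learning fits one parameter per instantiation, which is what lets the model learn that "Parag" is strongly associated with the Author field.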

  38. Results: Segmentation on Cora

  39. What have we learnt? • Markov logic networks combine first-order logic and Markov networks • Syntax: first-order logic + weights • Semantics: templates for Markov networks • There are implementations available • Important real-world SRL problems are easily formulated as MLNs • Not touched: continuous RVs, tractable versions, compilation, efficient learning, …
