Markov Logic Parag Singla Dept. of Computer Science University of Texas, Austin
Overview • Motivation • Background • Markov logic • Inference • Learning • Applications
The Interface Layer Applications Interface Layer Infrastructure
Networking • Applications: WWW, Email • Interface Layer: Internet • Infrastructure: Protocols, Routers
Databases • Applications: ERP, CRM, OLTP • Interface Layer: Relational Model • Infrastructure: Transaction Management, Query Optimization
Artificial Intelligence • Applications: Planning, Robotics, NLP, Multi-Agent Systems, Vision • Infrastructure: Representation, Inference, Learning • Interface Layer: ? First-Order Logic? Graphical Models? Statistical + Logical AI ⇒ Markov Logic
Overview • Motivation • Background • Markov logic • Inference • Learning • Applications
Markov Networks • Undirected graphical models • Potential functions defined over cliques • Log-linear model: P(x) = (1/Z) exp( Σ_i w_i f_i(x) ), where w_i is the weight of feature i and f_i(x) is feature i [Figure: network over Smoking, Cancer, Asthma, Cough]
First-Order Logic • Constants, variables, functions, predicates: Anna, x, MotherOf(x), Friends(x,y) • Grounding: Replace all variables by constants, e.g., Friends(Anna, Bob) • Formula: Predicates connected by operators, e.g., Smokes(x) ⇒ Cancer(x) • Knowledge Base (KB): A set of formulas; can be equivalently converted into clausal form • World: Assignment of truth values to all ground predicates
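The grounding step above can be sketched in a few lines of Python (a hypothetical helper, not part of any particular MLN system): it enumerates every substitution of constants for a formula's variables.

```python
from itertools import product

def ground_formula(variables, constants):
    """Enumerate all groundings: every way to substitute constants for variables."""
    return [dict(zip(variables, combo))
            for combo in product(constants, repeat=len(variables))]

# Grounding Friends(x, y) over the constants Anna and Bob
groundings = ground_formula(["x", "y"], ["Anna", "Bob"])
print(len(groundings))  # 2 variables over 2 constants -> 2^2 = 4 groundings
```

With n constants and v variables there are n^v groundings, which is why ground networks grow quickly.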
Overview • Motivation • Background • Markov logic • Inference • Learning • Applications
Markov Logic • A logical KB is a set of hard constraints on the set of possible worlds • Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible • Give each formula a weight (higher weight ⇒ stronger constraint)
Definition • A Markov Logic Network (MLN) is a set of pairs (F, w) where • F is a formula in first-order logic • w is a real number • Together with a finite set of constants, it defines a Markov network with • One node for each grounding of each predicate in the MLN • One feature for each grounding of each formula F in the MLN, with the corresponding weight w
Example: Friends & Smokers Two constants: Ana (A) and Bob (B)
Example: Friends & Smokers Two constants: Ana (A) and Bob (B) Smokes(A) Smokes(B) Cancer(A) Cancer(B)
Example: Friends & Smokers Two constants: Ana (A) and Bob (B) Friends(A,B) Friends(A,A) Smokes(A) Smokes(B) Friends(B,B) Cancer(A) Cancer(B) Friends(B,A)
Example: Friends & Smokers • State of the world: an assignment in {0,1} to the nodes • Two constants: Ana (A) and Bob (B) [Figure: ground network over Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), Smokes(A), Smokes(B), Cancer(A), Cancer(B)]
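The nodes of the ground network above can be enumerated mechanically. This minimal Python sketch (predicate names and arities taken from the example; the helper itself is hypothetical) lists one ground atom per network node.

```python
from itertools import product

constants = ["A", "B"]                                  # Ana (A) and Bob (B)
predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}   # name -> arity

# One ground atom per way of filling a predicate's argument slots with constants
nodes = [f"{name}({','.join(args)})"
         for name, arity in predicates.items()
         for args in product(constants, repeat=arity)]

print(len(nodes))  # 2 + 2 + 4 = 8 ground atoms, matching the figure
```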
Markov Logic Networks • An MLN is a template for ground Markov networks • One feature for each ground formula • Probability of a world x: P(x) = (1/Z) exp( Σ_i w_i n_i(x) ), where w_i is the weight of formula i and n_i(x) is the number of true groundings of formula i in x
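The formula can be checked by brute force on a tiny model. The sketch below uses a single hypothetical formula, Smokes(A) ⇒ Cancer(A), with an assumed weight of 1.5; n(x) is its number of true groundings (0 or 1 here).

```python
import math
from itertools import product

w = 1.5  # hypothetical weight for Smokes(A) => Cancer(A)

def n(world):
    """Number of true groundings of the formula in this world."""
    smokes_a, cancer_a = world
    return 1 if (not smokes_a) or cancer_a else 0

worlds = list(product([0, 1], repeat=2))      # all 2^2 truth assignments
Z = sum(math.exp(w * n(x)) for x in worlds)   # partition function

def prob(world):
    return math.exp(w * n(world)) / Z

# The only world violating the formula, (Smokes=1, Cancer=0), is least probable
print(prob((1, 0)) < prob((1, 1)))  # True
```

Note how a violating world is improbable, not impossible, exactly as the "soft constraints" slide says.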
Relation to Statistical Models • Special cases: Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max. entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields • Obtained by making all predicates zero-arity • Markov logic allows objects to be interdependent (non-i.i.d.)
Relation to First-Order Logic • Infinite weights ⇒ first-order logic • Satisfiable KB, positive weights ⇒ satisfying assignments = modes of the distribution • Markov logic allows contradictions between formulas • Markov logic extends to infinite domains
Overview • Motivation • Background • Markov logic • Inference • Learning • Applications
Inference [Figure: ground network with Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B), and Cancer(A) observed; Smokes(A)?, Smokes(B)?, Cancer(B)? unknown]
Most Probable Explanation (MPE) Inference [Figure: same ground network] • What is the most likely state of Smokes(A), Smokes(B), Cancer(B)?
MPE Inference • Problem: Find the most likely state of the world given evidence: argmax_y P(y | x), with query y and evidence x • argmax_y P(y | x) = argmax_y (1/Z_x) exp( Σ_i w_i n_i(x, y) ) = argmax_y Σ_i w_i n_i(x, y) • This is just the weighted MaxSAT problem • Use a weighted SAT solver, e.g., MaxWalkSAT [Kautz et al., 1997]
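The MaxWalkSAT idea is easy to sketch (this is a simplified reconstruction of the algorithm's structure, not the authors' implementation): repeatedly pick an unsatisfied clause and flip one of its variables, randomly with probability p, greedily otherwise, keeping the best state seen.

```python
import random

def maxwalksat(clauses, n_vars, max_flips=1000, p=0.5, seed=0):
    """Weighted MaxSAT by stochastic local search (sketch).
    clauses: list of (weight, literals); literal +i / -i means var i true / false."""
    rng = random.Random(seed)
    state = [False] + [rng.random() < 0.5 for _ in range(n_vars)]  # index 0 unused

    def sat(lits):
        return any(state[abs(l)] == (l > 0) for l in lits)

    def cost():  # total weight of unsatisfied clauses
        return sum(wt for wt, lits in clauses if not sat(lits))

    best, best_cost = list(state), cost()
    for _ in range(max_flips):
        unsat = [lits for wt, lits in clauses if not sat(lits)]
        if not unsat:
            break
        lits = rng.choice(unsat)
        if rng.random() < p:                       # random-walk step
            v = abs(rng.choice(lits))
        else:                                      # greedy step: cheapest flip in clause
            def flipped_cost(v):
                state[v] = not state[v]
                c = cost()
                state[v] = not state[v]
                return c
            v = min((abs(l) for l in lits), key=flipped_cost)
        state[v] = not state[v]
        if cost() < best_cost:
            best, best_cost = list(state), cost()
    return best, best_cost

# Hypothetical weighted clauses: x1 (weight 2.0) and (not x1 or x2) (weight 1.0)
_, c = maxwalksat([(2.0, [1]), (1.0, [-1, 2])], n_vars=2)
print(c)
```

Both clauses here are jointly satisfiable, so the search drives the unsatisfied weight to zero.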
Computing Probabilities: Marginal Inference [Figure: same ground network] • What is the probability that Smokes(B) = 1?
Marginal Inference • Problem: Find the probability of query atoms y given evidence x: P(y | x) = (1/Z_x) exp( Σ_i w_i n_i(x, y) ) • Computing Z_x takes exponential time!
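On a small enough network the sum can still be done exactly. This sketch conditions on hypothetical evidence Smokes(A) = 1 and computes P(Cancer(A) = 1 | evidence) for a single assumed formula Smokes(A) ⇒ Cancer(A) with weight 1.5; on a network of n query atoms the same loop would range over 2^n states, which is the exponential blow-up in Z_x.

```python
import math

w = 1.5  # hypothetical weight for Smokes(A) => Cancer(A)

def n(smokes_a, cancer_a):
    """True groundings of Smokes(A) => Cancer(A) in this world (0 or 1)."""
    return 1 if (not smokes_a) or cancer_a else 0

# Evidence x: Smokes(A) = 1.  Query y: Cancer(A).
Z_x = sum(math.exp(w * n(1, c)) for c in (0, 1))  # sum over query states only
p = math.exp(w * n(1, 1)) / Z_x

print(round(p, 3))  # e^1.5 / (1 + e^1.5) ~= 0.818
```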
Belief Propagation • Bipartite network of nodes (variables) and features • In Markov logic: Nodes = Ground atoms, Features = Ground clauses • Exchange messages until convergence • Messages = Current approximation to node marginals
Belief Propagation [Figure: bipartite factor graph; nodes (x) are ground atoms such as Smokes(Ana), Smokes(Bob), Friends(Ana, Bob); features (f) are the ground clauses over them]
Belief Propagation • Node-to-feature message: μ_{x→f}(x) = Π_{h ∈ nb(x) \ {f}} μ_{h→x}(x) • Feature-to-node message: μ_{f→x}(x) = Σ_{~{x}} f(x) Π_{y ∈ nb(f) \ {x}} μ_{y→f}(y) [Figure: bipartite graph of features (f) and nodes (x)]
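A single message update can be written out concretely. The numbers below are hypothetical: one symmetric factor over two binary atoms, standing in for a ground clause that rewards agreement between them.

```python
import math

w = 1.1  # assumed clause weight
# Factor over binary variables (x, y): exp(w) when they agree, 1 otherwise
f = {(a, b): math.exp(w) if a == b else 1.0 for a in (0, 1) for b in (0, 1)}

# Node-to-feature message: product of messages from x's *other* features;
# x has none here, so the message is uniform.
mu_x_to_f = {0: 1.0, 1: 1.0}

# Feature-to-node message: sum out x, weighting by the factor value
mu_f_to_y = {b: sum(f[(a, b)] * mu_x_to_f[a] for a in (0, 1)) for b in (0, 1)}

# Belief at y = normalized product of incoming messages (just one here)
Z = sum(mu_f_to_y.values())
belief_y = {b: v / Z for b, v in mu_f_to_y.items()}
print(belief_y)  # uniform: the factor is symmetric in y
```

With no evidence anywhere, the symmetric factor tells y nothing, so its belief stays uniform; clamping x to evidence would tilt it.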
Lifted Belief Propagation • Nodes and features that send and receive identical messages are grouped and updated once • Message updates become functions of edge counts [Figure: lifted bipartite graph of features (f) and nodes (x)]
Overview • Motivation • Background • Markov logic • Inference • Learning • Applications
Learning Parameters Three constants: Ana, Bob, John
Learning Parameters Three constants: Ana, Bob, John Closed World Assumption: Anything not in the database is assumed false.
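Weight learning maximizes the likelihood of the observed database; the gradient for weight w_i is n_i(x) − E_w[n_i], the observed count minus the expected count. A brute-force gradient-ascent sketch on the tiny one-formula model used earlier (weight, data, and learning rate are all hypothetical):

```python
import math
from itertools import product

def n(world):
    """True groundings of the single formula Smokes(A) => Cancer(A)."""
    smokes_a, cancer_a = world
    return 1 if (not smokes_a) or cancer_a else 0

data = (1, 1)      # observed world; the closed-world assumption fixes all atoms
w, lr = 0.0, 0.5   # initial weight and learning rate

for _ in range(100):
    worlds = list(product([0, 1], repeat=2))
    Z = sum(math.exp(w * n(x)) for x in worlds)
    expected_n = sum(n(x) * math.exp(w * n(x)) / Z for x in worlds)
    w += lr * (n(data) - expected_n)   # gradient ascent on log-likelihood

print(w > 0)  # True: the weight grows to favor the observed world
```

In a real MLN the expectation is intractable to enumerate, which is why learners use approximate inference (or pseudo-likelihood) inside this loop.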