Review: Markov Logic Networks Matthew Richardson and Pedro Domingos Presented by Xinran (Sean) Luo, u0866707
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
Markov Networks • Also known as Markov random fields. • Composed of • An undirected graph G • A set of potential functions φk, one per clique of G • Joint distribution: P(X = x) = (1/Z) ∏k φk(x{k}) • where x{k} is the state of the kth clique and Z is the partition function: Z = ∑x ∏k φk(x{k})
Markov Networks • Log-linear models: each clique potential function is replaced by an exponentiated weighted sum of features of the state: P(X = x) = (1/Z) exp(∑j wj fj(x))
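To make the log-linear form concrete, here is a minimal brute-force sketch in Python; the variables, features, and weights are invented for illustration and are not from the paper:

```python
import itertools
import math

# Toy log-linear Markov network over two binary variables; the features
# and weights below are made up for illustration.
features = [
    lambda x: 1.0 if x[0] == x[1] else 0.0,  # f1: the two variables agree
    lambda x: float(x[0]),                   # f2: the first variable is 1
]
weights = [1.5, 0.8]

def unnormalized(x):
    """exp of the weighted feature sum for one state x."""
    return math.exp(sum(w * f(x) for w, f in zip(weights, features)))

# Partition function Z by brute force over all 2^n states (tiny n only).
states = list(itertools.product([0, 1], repeat=2))
Z = sum(unnormalized(x) for x in states)

def prob(x):
    """P(X = x) = exp(sum_j w_j f_j(x)) / Z."""
    return unnormalized(x) / Z

for x in states:
    print(x, round(prob(x), 4))
```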
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
First-order Logic • A knowledge base is a set of sentences or formulas in first-order logic. • Constructed from symbols: connectives, quantifiers, constants, variables, functions, predicates, etc.
Syntax for First-Order Logic • Connective → ∨ | ∧ | ⇒ | ⇔ • Quantifier → ∃ | ∀ • Constant → A | John | Car1 • Variable → x | y | z | ... • Predicate → Brother | Owns | ... • Function → father-of | plus | ...
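As one entirely illustrative way to hold such formulas in a program (not a representation used in the paper), they can be encoded as nested tuples; the Smokes/Cancer formula reappears in the MLN example later:

```python
# Toy encoding of first-order formulas as nested tuples:
#   ('pred', name, args) | ('implies', lhs, rhs) | ('forall', var, body)
formula = (
    'forall', 'x',
    ('implies',
     ('pred', 'Smokes', ('x',)),
     ('pred', 'Cancer', ('x',))),
)

def pretty(f):
    """Render the tuple encoding as a readable first-order string."""
    tag = f[0]
    if tag == 'pred':
        return f"{f[1]}({', '.join(f[2])})"
    if tag == 'forall':
        return f"∀{f[1]} {pretty(f[2])}"
    if tag == 'implies':
        return f"({pretty(f[1])} ⇒ {pretty(f[2])})"
    raise ValueError(f"unknown tag {tag}")

print(pretty(formula))  # ∀x (Smokes(x) ⇒ Cancer(x))
```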
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
Markov Logic Networks • A Markov Logic Network (MLN) L is a set of pairs (Fi, wi) where • Fi is a formula in first-order logic • wi is a real number
Features of a Markov Logic Network • Together with a finite set of constants C, it defines a Markov network ML,C with: • One binary node for each possible grounding of each predicate in L. The node's value is 1 if the ground atom is true, and 0 otherwise. • One feature for each possible grounding of each formula Fi in L. The feature's value is 1 if the ground formula is true, and 0 otherwise.
Ground Terms • A ground term is a term containing no variables; a ground atom is a predicate applied only to ground terms. • The ground Markov networks produced by an MLN share regularities in structure and parameters. • An MLN is a template for ground Markov networks: grounding the predicates over the constants yields the nodes, as the sketch below shows.
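A minimal sketch of this template view, using the Anna/Bob constants from the example on the following slides (the names and dict layout here are illustrative):

```python
import itertools

# The ground network has one binary node per grounding of each
# predicate over the constants.
constants = ['A', 'B']                                  # Anna, Bob
predicates = {'Smokes': 1, 'Cancer': 1, 'Friends': 2}   # name -> arity

def ground_atoms(predicates, constants):
    """Enumerate every ground atom (no variables left)."""
    atoms = []
    for name, arity in predicates.items():
        for args in itertools.product(constants, repeat=arity):
            atoms.append(f"{name}({','.join(args)})")
    return atoms

print(ground_atoms(predicates, constants))
# ['Smokes(A)', 'Smokes(B)', 'Cancer(A)', 'Cancer(B)', 'Friends(A,A)',
#  'Friends(A,B)', 'Friends(B,A)', 'Friends(B,B)']
```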
Example of an MLN • Suppose we have two constants: Anna (A) and Bob (B). • Grounding the unary predicates yields the nodes Smokes(A), Smokes(B), Cancer(A), Cancer(B).
Example of an MLN • Grounding the binary predicate Friends yields the nodes Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B).
Example of an MLN • The full ground network contains all eight nodes: Smokes(A), Smokes(B), Cancer(A), Cancer(B), Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B). • [Figure: the ground Markov network, with edges between atoms that appear together in some ground formula.]
MLNs and First-Order Logic • A first-order KB becomes an MLN by assigning a weight to each formula. • A satisfiable KB with equal positive weights on every formula yields an MLN that, in the limit of infinite weights, represents a uniform distribution over the worlds satisfying the KB. • An MLN can produce useful results even when the KB contains contradictions.
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
Inference • Given that formula F1 holds, what is the probability that formula F2 holds? • Two steps (approximate): • Find the minimal subset of the ground network needed to answer the query. • Run MCMC (Gibbs sampling) over that subnetwork, sampling each ground atom given its Markov blanket (the set of ground atoms that appear in some grounding of a formula with it).
Inference • The probability of a ground atom Xl when its Markov blanket Bl is in state bl is: P(Xl = xl | Bl = bl) = exp(∑fi∈Fl wi fi(Xl = xl, Bl = bl)) / [exp(∑fi∈Fl wi fi(Xl = 0, Bl = bl)) + exp(∑fi∈Fl wi fi(Xl = 1, Bl = bl))] • where Fl is the set of ground formulas in which Xl appears, and fi(Xl = xl, Bl = bl) is the value (0 or 1) of the feature corresponding to the ith ground formula.
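A sketch of one Gibbs update implementing this conditional; the state representation and the single hand-written ground formula are assumptions for illustration:

```python
import math
import random

def gibbs_step(atom, state, ground_formulas):
    """Resample one ground atom from its conditional given the rest of
    the state (its Markov blanket). `ground_formulas` is a list of
    (weight, feature) pairs, where feature(state) returns 1.0 if that
    ground formula is true in `state` and 0.0 otherwise."""
    scores = []
    for value in (0, 1):
        state[atom] = value
        scores.append(sum(w * f(state) for w, f in ground_formulas))
    # P(atom = 1 | blanket): ratio of the two exponentiated sums.
    p1 = math.exp(scores[1]) / (math.exp(scores[0]) + math.exp(scores[1]))
    state[atom] = 1 if random.random() < p1 else 0
    return state

# Tiny usage: one weighted ground formula, Smokes(A) => Cancer(A).
state = {'Smokes(A)': 1, 'Cancer(A)': 0}
implies = lambda s: 1.0 if (not s['Smokes(A)']) or s['Cancer(A)'] else 0.0
print(gibbs_step('Cancer(A)', state, [(1.5, implies)]))
```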
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
Learning • Data comes from a relational database. • Strategy: • Count the number of true groundings of each formula in the DB. • Maximize the pseudo-likelihood; the gradient of the pseudo-log-likelihood is: ∂/∂wi log Pw(X = x) = ∑l [ni(x) − Pw(Xl = 0 | MBx(Xl)) ni(x[Xl=0]) − Pw(Xl = 1 | MBx(Xl)) ni(x[Xl=1])] • where ni(x) is the number of true groundings of the ith formula in x, ni(x[Xl=0]) is that count when we force Xl = 0 and leave the remaining data unchanged, and similarly for ni(x[Xl=1]).
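A minimal sketch of this gradient, assuming the database is a dict of ground-atom truth values and each formula is supplied as a callable that counts its true groundings (these helpers are hypothetical; the paper feeds the gradient to an off-the-shelf optimizer, L-BFGS):

```python
import math

def pll_gradient(i, x, atoms, weights, formulas):
    """Gradient of the pseudo-log-likelihood w.r.t. weight w_i.

    x        -- dict mapping ground atom -> 0/1 (the database)
    formulas -- formulas[j](state) returns n_j(state), the number of
                true groundings of formula j in that state
    """
    grad = 0.0
    for l in atoms:
        # Weighted counts with X_l forced to 0 and to 1 give the
        # conditional P_w(X_l = v | MB_x(X_l)); Z cancels in the ratio.
        s = [sum(w * f({**x, l: v}) for w, f in zip(weights, formulas))
             for v in (0, 1)]
        z = math.exp(s[0]) + math.exp(s[1])
        p0, p1 = math.exp(s[0]) / z, math.exp(s[1]) / z
        grad += (formulas[i](x)
                 - p0 * formulas[i]({**x, l: 0})
                 - p1 * formulas[i]({**x, l: 1}))
    return grad
```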
Overview • Markov Networks • First-order Logic • Markov Logic Networks • Inference • Learning • Experiments
Experiments • Hand-built knowledge base (KB) • ILP: CLAUDIEN • Markov logic networks (MLNs) • Using KB • Using CLAUDIEN • Using KB + CLAUDIEN • Bayesian network learner • Naïve Bayes
Summary • Markov logic networks combine first-order logic and Markov networks • Syntax: First-order logic + positive weights • Semantics: Templates for Markov networks • Inference: Minimal subset + Gibbs • Learning: Pseudo-likelihood