Probabilistic Inference Lecture 1 M. Pawan Kumar pawan.kumar@ecp.fr Slides available online http://cvc.centrale-ponts.fr/personnel/pawan/
About the Course • 7 lectures + 1 exam • Probabilistic Models – 1 lecture • Energy Minimization – 4 lectures • Computing Marginals – 2 lectures • Related Courses • Probabilistic Graphical Models (MVA) • Structured Prediction
Instructor • Assistant Professor (2012 – Present) • Center for Visual Computing • 12 Full-time Faculty Members • 2 Associate Faculty Members • Research Interests • Probabilistic Models • Machine Learning • Computer Vision • Medical Image Analysis
Students • Third year at ECP • Specializing in Machine Learning and Vision • Prerequisites • Probability Theory • Continuous Optimization • Discrete Optimization
Outline • Probabilistic Models • Conversions • Exponential Family • Inference • Example (on board)!!
Outline • Probabilistic Models • Markov Random Fields (MRF) • Bayesian Networks • Factor Graphs • Conversions • Exponential Family • Inference
MRF Unobserved Random Variables Neighbors Edges define a neighborhood over random variables
MRF V1 V2 V3 V4 V5 V6 V7 V8 V9 L = {l1, l2, …, lh} Variable Va takes a value, or label, va from a discrete, finite set L V = v is called a labeling
MRF V1 V2 V3 V4 V5 V6 V7 V8 V9 MRF assumes the Markovian property for P(v)
MRF V1 V2 V3 V4 V5 V6 V7 V8 V9 Va is conditionally independent of every other variable Vb given Va's neighbors Hammersley-Clifford Theorem: for strictly positive P(v), this Markov property is equivalent to the factorization into clique potentials shown next
MRF Potential ψ12(v1,v2) V1 V2 V3 Potential ψ56(v5,v6) V4 V5 V6 V7 V8 V9 Probability P(v) can be decomposed into clique potentials
MRF Potential ψ1(v1,d1) d1 d2 d3 V1 V2 V3 Observed Data d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 Probability P(v) proportional to Π(a,b)ψab(va,vb) Probability P(d|v) proportional to Πaψa(va,da)
MRF d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 Probability P(v,d) = Πa ψa(va,da) Π(a,b) ψab(va,vb) / Z Z is known as the partition function
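The partition function Z sums the unnormalized product over every possible labeling, which is what makes inference hard in general. As a concrete illustration, here is a minimal Python sketch (not from the lecture) that evaluates this distribution by brute force on a 3-variable chain with made-up unary and pairwise potentials, small enough to enumerate:

import itertools
import numpy as np

labels = [0, 1]
edges = [(0, 1), (1, 2)]          # neighborhood structure of a tiny chain
d = [0.2, 0.9, 0.4]               # observed data, one value per variable

def psi_unary(va, da):
    # toy unary potential: prefer labels that match the data
    return np.exp(-(va - da) ** 2)

def psi_pair(va, vb):
    # toy pairwise potential: encourage neighboring labels to agree
    return np.exp(-0.5 * abs(va - vb))

def unnormalized(v):
    score = 1.0
    for a, da in enumerate(d):
        score *= psi_unary(v[a], da)
    for a, b in edges:
        score *= psi_pair(v[a], v[b])
    return score

Z = sum(unnormalized(v) for v in itertools.product(labels, repeat=len(d)))
for v in itertools.product(labels, repeat=len(d)):
    print(v, unnormalized(v) / Z)  # the 2^3 probabilities sum to one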
MRF d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 High-order Potential ψ4578(v4,v5,v7,v8) d7 d8 d9 V7 V8 V9
Pairwise MRF Unary Potential ψ1(v1,d1) d1 d2 d3 V1 V2 V3 d4 d5 d6 Pairwise Potential ψ56(v5,v6) V4 V5 V6 d7 d8 d9 V7 V8 V9 Probability P(v,d) = Πa ψa(va,da) Π(a,b) ψab(va,vb) / Z Z is known as the partition function
MRF d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 A is conditionally independent of B given C if there is no path from A to B when C is removed
Conditional Random Fields (CRF) d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 CRF assumes the Markovian property for P(v|d) Hammersley-Clifford Theorem
CRF d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 Probability P(v|d) proportional to Πaψa(va;d) Π(a,b)ψab(va,vb;d) Clique potentials that depend on the data
CRF d1 d2 d3 V1 V2 V3 d4 d5 d6 V4 V5 V6 d7 d8 d9 V7 V8 V9 Probability P(v|d) = Πa ψa(va;d) Π(a,b) ψab(va,vb;d) / Z Z, which here depends on the data d, is known as the partition function
MRF and CRF V1 V2 V3 V4 V5 V6 V7 V8 V9 Probability P(v) = Πa ψa(va) Π(a,b) ψab(va,vb) / Z
Outline • Probabilistic Models • Markov Random Fields (MRF) • Bayesian Networks • Factor Graphs • Conversions • Exponential Family • Inference
Bayesian Networks V1 V2 V3 V4 V5 V6 V7 V8 Directed Acyclic Graph (DAG) – no directed loops Ignoring directionality of edges, a DAG can have loops
Bayesian Networks V1 V2 V3 V4 V5 V6 V7 V8 Bayesian Network concisely represents the probability P(v)
Bayesian Networks V1 V2 V3 V4 V5 V6 V7 V8 Probability P(v) = Πa P(va|Parents(va)) = P(v1) P(v2|v1) P(v3|v1) P(v4|v2) P(v5|v2,v3) P(v6|v3) P(v7|v4,v5) P(v8|v5,v6)
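Because the joint is a product of local conditionals, evaluating P(v) for any labeling is a single pass over the nodes. A minimal sketch using the parent structure of the 8-node DAG above, with a made-up toy conditional probability rule (not from the lecture):

import itertools

# parent structure of the DAG on the slide
parents = {1: [], 2: [1], 3: [1], 4: [2], 5: [2, 3], 6: [3], 7: [4, 5], 8: [5, 6]}

def cond_prob(va, parent_values):
    # toy CPT: P(Va = 1 | parents) rises with the number of active parents
    p1 = (1 + sum(parent_values)) / (2 + len(parent_values))
    return p1 if va == 1 else 1 - p1

def joint(v):  # v maps node index -> {0, 1}
    p = 1.0
    for a, pa in parents.items():
        p *= cond_prob(v[a], [v[b] for b in pa])
    return p

# sanity check: the joint must sum to one over all 2^8 labelings
total = sum(joint(dict(zip(parents, bits)))
            for bits in itertools.product([0, 1], repeat=8))
print(total)  # ~1.0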
Bayesian Networks [Figure courtesy Kevin Murphy]
Bayesian Networks V1 V2 V3 V4 V5 V6 V7 V8 Va is conditionally independent of its ancestors given its parents
Bayesian Networks Conditional independence of A and B given C [Figure courtesy Kevin Murphy]
Outline • Probabilistic Models • Markov Random Fields (MRF) • Bayesian Networks • Factor Graphs • Conversions • Exponential Family • Inference
Factor Graphs V1 V2 V3 a b c d e V4 V5 V6 f g Two types of nodes: variable nodes and factor nodes Bipartite graph between the two types of nodes
Factor Graphs ψa({v}a) V1 V2 V3 a b c d e V4 V5 V6 f g Each factor node carries a potential over the variable nodes it connects to, e.g. ψa({v}a) = ψa(v1,v2) and ψb({v}b) = ψb(v2,v3) Factor graphs concisely represent the probability P(v)
Factor Graphs ψb({v}b) V1 V2 V3 a b c d e V4 V5 V6 f g Probability P(v) = Πa ψa({v}a) / Z Z is known as the partition function
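A minimal sketch of evaluating this distribution on a small factor graph; the variable count, factor scopes, and potentials below are illustrative assumptions, not the graph drawn on the slide:

import itertools
import math

# each factor: (scope over variable indices, potential function psi)
factors = {
    'a': ((0, 1), lambda v1, v2: math.exp(-abs(v1 - v2))),
    'b': ((1, 2), lambda v2, v3: 1.0 + v2 * v3),
    'c': ((0, 3), lambda v1, v4: 2.0 if v1 == v4 else 0.5),
}
n_vars, labels = 4, (0, 1)

def unnormalized(v):
    score = 1.0
    for scope, psi in factors.values():
        score *= psi(*(v[i] for i in scope))
    return score

Z = sum(unnormalized(v) for v in itertools.product(labels, repeat=n_vars))
for v in itertools.product(labels, repeat=n_vars):
    print(v, unnormalized(v) / Z)  # a proper distribution: sums to one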
Outline • Probabilistic Models • Conversions • Exponential Family • Inference
Motivation Random Variable V Label set L = {l1, l2, …, lh} Samples V1, V2, …, Vm that are i.i.d. Functions ϕα: L → Reals, where α indexes a set of functions Empirical expectations: μα = (Σi ϕα(Vi))/m Expectation wrt distribution P: EP[ϕα(V)] = Σi ϕα(li) P(li) Given the empirical expectations, find a compatible distribution: an underdetermined problem
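As a quick numerical illustration (the label set, samples, and functions ϕα below are made-up placeholders), the empirical expectations are just sample means of the ϕα:

import numpy as np

labels = np.array([0.0, 1.0, 2.0])                            # L = {l1, l2, l3}
samples = np.random.default_rng(0).choice(labels, size=1000)  # i.i.d. samples Vi

phis = [lambda v: v, lambda v: v ** 2]   # two functions phi_alpha: L -> Reals

mu = [np.mean(phi(samples)) for phi in phis]  # empirical expectations mu_alpha
print(mu)  # many distributions P over L match these moments: underdetermined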
Maximum Entropy Principle max Entropy of the distribution s.t. Distribution is compatible
Maximum Entropy Principle max -Σi P(li)log(P(li)) s.t. Distribution is compatible
Maximum Entropy Principle max -Σi P(li) log(P(li)) s.t. Σi ϕα(li) P(li) = μα for all α Σi P(li) = 1 P(v) proportional to exp(-Σα θα ϕα(v))
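The last line follows from the Lagrangian of this program; a sketch of the omitted step, in the slide's notation, with θα and λ the multipliers for the two constraints:

L(P, θ, λ) = -Σi P(li) log P(li) + Σα θα (μα - Σi ϕα(li) P(li)) + λ (1 - Σi P(li))
Setting ∂L/∂P(li) = -log P(li) - 1 - Σα θα ϕα(li) - λ = 0
gives P(li) = exp(-Σα θα ϕα(li) - 1 - λ) ∝ exp(-Σα θα ϕα(li))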
Exponential Family Random Variable V = {V1, V2, …, Vn} Label set L = {l1, l2, …, lh} Labeling V = v, va ∈ L for all a ∈ {1, 2, …, n} Functions Φα: Ln → Reals, where α indexes a set of functions P(v) = exp{-Σα θα Φα(v) - A(θ)} Φα: Sufficient Statistics θα: Parameters A(θ): Normalization Constant
Minimal Representation P(v) = exp{-Σα θα Φα(v) - A(θ)} Φα: Sufficient Statistics θα: Parameters A(θ): Normalization Constant The representation is minimal if there is no non-zero c such that Σα cα Φα(v) is constant for all labelings v
Ising Model P(v) = exp{-ΣαθαΦα(v) - A(θ)} Random Variable V = {V1, V2, …,Vn} Label set L = {l1, l2}
Ising Model P(v) = exp{-Σα θα Φα(v) - A(θ)} Random Variable V = {V1, V2, …, Vn} Label set L = {-1, +1} Neighborhood over variables specified by edges E Sufficient Statistics: va for all Va ∈ V, and va vb for all (Va,Vb) ∈ E Parameters: θa and θab
Ising Model P(v) = exp{-Σa θa va - Σ(a,b)∈E θab va vb - A(θ)} Random Variable V = {V1, V2, …, Vn} Label set L = {-1, +1} Neighborhood over variables specified by edges E Sufficient Statistics: va for all Va ∈ V, and va vb for all (Va,Vb) ∈ E Parameters: θa and θab
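A minimal sketch of this distribution on a 3-variable chain with made-up parameters θa and θab, normalized by brute-force enumeration of the 2^3 labelings:

import itertools
import numpy as np

edges = [(0, 1), (1, 2)]
theta_unary = np.array([0.1, -0.2, 0.3])     # theta_a, one per variable
theta_pair = {(0, 1): -0.5, (1, 2): -0.5}    # with the leading minus sign,
                                             # theta_ab < 0 favors agreement

def energy(v):  # the exponent of P(v), without -A(theta)
    e = float(theta_unary @ np.array(v))
    e += sum(theta_pair[ab] * v[ab[0]] * v[ab[1]] for ab in edges)
    return e

states = list(itertools.product([-1, +1], repeat=3))
Z = sum(np.exp(-energy(v)) for v in states)  # Z = exp(A(theta))
for v in states:
    print(v, np.exp(-energy(v)) / Z)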
Interactive Binary Segmentation Foreground histogram of RGB values (FG) Background histogram of RGB values (BG) '+1' indicates foreground and '-1' indicates background
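A minimal sketch of how such histograms could supply the unary potentials of the pairwise MRF; the histograms, bin size, and helper unary below are illustrative assumptions, not the models used in the lecture:

import numpy as np

bins = 8                                                  # histogram bins per channel
fg_hist = np.full((bins, bins, bins), 1.0 / bins ** 3)    # placeholder FG model
bg_hist = np.full((bins, bins, bins), 1.0 / bins ** 3)    # placeholder BG model
fg_hist[6:, :2, :2] += 0.01                               # pretend FG is reddish
fg_hist /= fg_hist.sum()

def unary(rgb):
    # return potentials for labels (+1 = foreground, -1 = background)
    idx = tuple((np.asarray(rgb) * bins // 256).astype(int).clip(0, bins - 1))
    return {+1: -np.log(fg_hist[idx] + 1e-9),             # low cost if FG likely
            -1: -np.log(bg_hist[idx] + 1e-9)}

print(unary((220, 30, 40)))                               # a red pixel prefers +1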