Data-Driven Markov Chain Monte Carlo Presented by Tomasz Malisiewicz for Advanced Perception 3/1/2006
Overview of Talk • What is image segmentation? Image segmentation in a Bayesian statistical framework • How to find a good segmentation? Markov Chain Monte Carlo for exploring the space of all segmentations, plus data-driven methods for exploiting image data and speeding up MCMC • DDMCMC results
DDMCMC Motivation • Iterative approach: consider many different segmentations and keep the good ones • Few tunable parameters (e.g., the desired number of segments is encoded into the prior) • DDMCMC vs. Ncuts
[Figure: Berkeley Segmentation Database image 326038, comparing the Berkeley ground truth, Ncuts with K=30, and DDMCMC.]
Why a rigorous formulation? • Allows us to define what we want the segmentation algorithm to return • Allows us to assign a score to any candidate segmentation
Formulation #1 (and you thought you knew what image segmentation was) • Image lattice: $\Lambda = \{(i,j) : 1 \le i \le H,\ 1 \le j \le W\}$ • Image: $\mathbf{I}_\Lambda$, assigning an intensity or color $\mathbf{I}_v$ to each point $v \in \Lambda$ • For any point $v \in \Lambda$, either $v \in R_k$ or $v \notin R_k$ • Lattice partition into K disjoint regions: $\Lambda = \bigcup_{k=1}^{K} R_k$, with $R_k \cap R_l = \emptyset$ for $k \neq l$ • A region is a discrete label map: $R_k \subset \Lambda$ • A region boundary is continuous: $\Gamma_k = \partial R_k$ An image partition into disjoint regions is not an image segmentation! Region contents are key!
Formulation #2 (and you thought you knew what image segmentation was) • Each image region $\mathbf{I}_{R_i}$ is a realization from a probabilistic model $p(\mathbf{I}_{R_i};\, \ell_i, \Theta_i)$ • $\Theta_i$ are the parameters of the model indexed by $\ell_i$ • A segmentation is denoted by a vector of hidden variables $W = (K,\ \{R_i, \ell_i, \Theta_i : i = 1, \ldots, K\})$; K is the number of regions • Bayesian framework: over the space $\Omega$ of all segmentations, posterior $\propto$ likelihood $\times$ prior: $p(W \mid \mathbf{I}) \propto p(\mathbf{I} \mid W)\, p(W)$
Prior over segmentations (do you like exponentials?) $p(W) \propto p(K) \prod_{i=1}^{K} p(R_i)\, p(\ell_i)\, p(\Theta_i \mid \ell_i)$ • $p(K) \propto e^{-\lambda_0 K}$: want fewer regions • $p(R_i)$ penalizes boundary length (want round-ish regions) and area, $e^{-\gamma |R_i|^c}$ (want small regions) • $p(\ell_i)$: ~ uniform over model types • $p(\Theta_i \mid \ell_i) \propto e^{-\nu |\Theta_i|}$, where $|\Theta_i|$ is the # of model params: want less complex models
Likelihood for Images • Visual patterns are independent stochastic processes: $p(\mathbf{I} \mid W) = \prod_{i=1}^{K} p(\mathbf{I}_{R_i};\, \ell_i, \Theta_i)$ • $\ell_i$ is the model-type index (grayscale or color) • $\Theta_i$ is the model parameter vector • $\mathbf{I}_{R_i}$ is the image appearance in the i-th region
Four Gray-Level Models • Uniform: constant intensity with Gaussian noise • Clutter: intensity histogram • Texture: filter-bank (FB) response histogram • Shading: B-spline surface • Gray-level model space: the four families above (a toy sketch of the first two follows)
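To make two of these families concrete, here is a minimal Python sketch (an illustration, not the paper's implementation) of per-region log-likelihoods for the uniform model (constant intensity plus Gaussian noise, fit by maximum likelihood) and the clutter model (a nonparametric intensity histogram); the texture and shading models are omitted. The function names and the 8-bit intensity range are assumptions.

```python
import numpy as np

def uniform_model_loglik(pixels):
    """'Uniform' region model: constant intensity mu plus Gaussian
    noise, with (mu, sigma) fit to the region by maximum likelihood."""
    mu, sigma = pixels.mean(), pixels.std() + 1e-6
    return np.sum(-0.5 * ((pixels - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2.0 * np.pi)))

def clutter_model_loglik(pixels, bins=16):
    """'Clutter' region model: a nonparametric intensity histogram
    (density) estimated from the region itself."""
    hist, edges = np.histogram(pixels, bins=bins, range=(0, 256), density=True)
    idx = np.clip(np.digitize(pixels, edges) - 1, 0, bins - 1)
    return np.sum(np.log(hist[idx] + 1e-12))

# Toy usage: a nearly flat patch scores well under the uniform model.
patch = np.full(100, 128.0) + np.random.randn(100)
print(uniform_model_loglik(patch), clutter_model_loglik(patch))
```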
Three Color Models (L*, u*, v*) • Gaussian • Mixture of 2 Gaussians • Bezier spline • Color model space: the three families above
Calibration • Likelihoods are calibrated using an empirical study • Calibration is required to make likelihoods for different models comparable (necessary for model competition) Principled? Or a hack?
What did we just do? • Definition of a segmentation: $W = (K,\ \{R_i, \ell_i, \Theta_i\})$ • Score (probability) of a segmentation: $p(W \mid \mathbf{I}) \propto p(\mathbf{I} \mid W)\, p(W)$ • Likelihood of the image = product of region likelihoods: $p(\mathbf{I} \mid W) = \prod_{i=1}^{K} p(\mathbf{I}_{R_i};\, \ell_i, \Theta_i)$ • Regions are defined by a K-partition: $\pi_K = (R_1, \ldots, R_K)$
What do we do with scores? Search: find the segmentations that score best under the posterior.
Search through what? Anatomy of Solution Space • Space of all K-partitions: $\varpi_K$ • General partition space: $\varpi = \bigcup_K \varpi_K$ • Space of all segmentations (scene space): each K-partition is paired with a model $(\ell_i, \Theta_i)$, drawn from one of the model spaces, for every one of its K regions
Searching through segmentations • Exhaustive enumeration of all segmentations: takes too long! • Greedy search (gradient ascent): gets stuck in local optima! • Pure stochastic search: takes too long • MCMC-based exploration: described in the rest of this talk!
Why MCMC • What is it? A clever way of searching through a high-dimensional space; a general-purpose technique for generating samples from a probability distribution • What does it do? Iteratively searches through the space of all segmentations by constructing a Markov chain which converges to the desired stationary distribution, here the posterior $p(W \mid \mathbf{I})$ (a toy sketch follows)
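To make "generating samples from a probability distribution" concrete, here is a minimal Metropolis-Hastings sketch in Python on a toy 1D target. This is not the paper's sampler (which moves through segmentation space using the five dynamics described next); it only shows the core propose/accept loop.

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps=10000, step=1.0):
    """Random-walk Metropolis-Hastings: samples from the density
    exp(log_p(x)), known only up to a normalizing constant."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + step * np.random.randn()   # symmetric proposal
        # Accept with probability min(1, p(x_new) / p(x));
        # the symmetric proposal cancels out of the ratio.
        if np.log(np.random.rand()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return np.array(samples)

# Toy target: an unnormalized mixture of two Gaussians.
log_p = lambda x: np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)
samples = metropolis_hastings(log_p, x0=0.0)
```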
Designing Markov Chains • Three Markov chain requirements • Ergodic: from an initial segmentation W0, any other state W can be visited in finite time (rules out greedy algorithms); ensured by the jump-diffusion dynamics • Aperiodic: ensured by the random choice of dynamics • Detailed balance: every move is reversible (the condition is written out below)
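For reference, detailed balance requires the transition kernel $K$ to satisfy

$$p(W \mid \mathbf{I})\, K(W \to W') \;=\; p(W' \mid \mathbf{I})\, K(W' \to W) \qquad \forall\, W, W' \in \Omega,$$

which guarantees that the posterior is a stationary distribution of the chain.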
5 Dynamics 1.) Boundary diffusion 2.) Model adaptation 3.) Split region 4.) Merge region 5.) Switch region model At each iteration, we choose a dynamic with probability q(1), q(2), q(3), q(4), q(5).
Dynamics 1: Boundary Diffusion • Diffusion* of the boundary between regions i and j: Brownian motion along the curve normal, with a temperature that decreases over time (a sketch of the motion equation follows) *Movement within partition space
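A sketch of the boundary motion in the spirit of region competition (the exact prior-driven terms are abbreviated as $f_{\text{prior}}$ here): a point $v = \Gamma_{ij}(s)$ on the boundary moves along its normal $\vec{n}(s)$ by

$$d\Gamma_{ij}(s) = \Big[ f_{\text{prior}}(s) + \log \frac{p(\mathbf{I}_v;\, \ell_i, \Theta_i)}{p(\mathbf{I}_v;\, \ell_j, \Theta_j)} \Big] \vec{n}(s)\, dt \;+\; \sqrt{2T(t)}\, \vec{n}(s)\, dB_t$$

The deterministic term pushes the boundary toward whichever region explains pixel $v$ better; the Brownian term, scaled by a decreasing temperature $T(t)$ (simulated annealing), lets the chain escape local optima.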
Dynamics 2: Model Adaptation • Fit the parameters* of a region by steepest ascent on its log-likelihood (written out below) *Movement within cue space
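Written out, the steepest-ascent step is gradient ascent on the region's log-likelihood with the region held fixed:

$$\frac{d\Theta_i}{dt} \;=\; \frac{\partial}{\partial \Theta_i} \log p(\mathbf{I}_{R_i};\, \ell_i, \Theta_i)$$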
Dynamics 3-4: Split and Merge • Split one region into two; the remaining variables are unchanged • $q(W \to W')$ is the conditional probability of how likely the chain proposes to move to W' from W • The probability of the proposed split is computed with data-driven speedup
Dynamics 3-4: Split and Merge • Merge two regions into one; the remaining variables are unchanged • The probability of the proposed merge is likewise computed with data-driven speedup (the accept/reject test that uses these proposals is below)
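Each proposed split or merge is accepted with the standard Metropolis-Hastings probability, which is where the proposal probabilities above enter:

$$\alpha(W \to W') \;=\; \min\!\left(1,\ \frac{q(W' \to W)\; p(W' \mid \mathbf{I})}{q(W \to W')\; p(W \mid \mathbf{I})}\right)$$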
Dynamics 5: Model Switching • Change the model type of a region • The proposal probabilities are computed with data-driven speedup
Motivation for Data-Driven Methods • Region splitting: how do we decide where to split a region? • Model switching: once we switch to a new model, what parameters do we jump to? Model adaptation alone requires some initial parameter vector
Data-Driven Methods • Focus on boundaries and model parameters derived from the data; compute these before MCMC starts • Cue particles: clustering in model space • K-partition particles: edge detection • Particles encode probabilities, Parzen-window style
Cue Particles In Action Clustering in Color Space
K-partition Particles in Action • Edge detection gives us a good idea of where we expect a boundary to be located
Particles or Parzen Window* Locations? • What is this particle business about? • A particle is just the position of a Parzen window, which is used for density estimation (a 1D sketch follows) *Parzen windowing is also known as kernel density estimation, or non-parametric density estimation
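A minimal 1D sketch of the idea: each particle is the center of a Gaussian window, and the density estimate is the (weighted) sum of the windows. The bandwidth `h` and the particle values are illustrative.

```python
import numpy as np

def parzen_density(x, particles, weights=None, h=0.1):
    """Parzen-window (kernel) density estimate: place a Gaussian
    window of width h on each particle and sum them up."""
    if weights is None:
        weights = np.full(len(particles), 1.0 / len(particles))
    kernels = (np.exp(-0.5 * ((x - particles[:, None]) / h) ** 2)
               / (h * np.sqrt(2.0 * np.pi)))
    return weights @ kernels                # weighted sum of windows

# Five 1D particles produce a two-bump density estimate.
particles = np.array([0.0, 0.1, 0.2, 1.0, 1.1])
xs = np.linspace(-1.0, 2.0, 300)
density = parzen_density(xs, particles)
```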
Are you awake? What did we just do? • Scores (the probability of a segmentation) • Search: 5 MCMC dynamics • Data-driven speedup (the key to making MCMC work in finite time) So what type of answer does the Markov chain return? What can we do with this answer? How many answers do we want?
Multiple Solutions • MAP gives us one solution • The output of MCMC sampling is a set of visited segmentations; how do we get multiple solutions out of it? Parzen windows again: scene particles (see below)
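Roughly, the posterior is approximated Parzen-window style by a set of weighted scene particles (kept segmentations):

$$\hat{p}(W \mid \mathbf{I}) \;=\; \sum_{k=1}^{K} \omega_k\, G(W - W_k), \qquad \omega_k \propto p(W_k \mid \mathbf{I}),$$

where $G$ is a window (kernel) in segmentation space and $W_1, \ldots, W_K$ are the kept samples.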
Why multiple solutions? • Segmentation is often not the final stage of computation • A higher-level task such as recognition can utilize a segmentation • We don't want to make any hard decisions before recognition • Multiple segmentations = a good idea
K-adventurers • We want to keep a fixed number K of segmentations, but we don't want to keep trivially different segmentations • Goal: keep the K segmentations that best preserve the posterior probability in the KL sense • Greedy algorithm: add the new particle, then remove the worst particle (a simplified sketch follows)
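A simplified greedy sketch of the bookkeeping (not the paper's exact KL criterion): after adding the new particle, drop whichever particle's posterior mass is best covered by a nearby survivor, so trivially different duplicates are the cheapest to discard. The segmentation distance `dist` is an assumed user-supplied function.

```python
import numpy as np

def k_adventurers_update(particles, weights, new_particle, new_weight, dist):
    """Keep K diverse, high-probability segmentations.

    particles: list of K kept segmentations; weights: their posterior
    probabilities; dist(a, b): distance between two segmentations
    (assumed supplied by the caller).
    """
    particles = particles + [new_particle]
    weights = np.append(weights, new_weight)
    # Cost of dropping particle i: its weight times the distance to
    # its nearest surviving neighbor (duplicates are cheap to drop).
    costs = [w * min(dist(p, q) for j, q in enumerate(particles) if j != i)
             for i, (p, w) in enumerate(zip(particles, weights))]
    worst = int(np.argmin(costs))
    del particles[worst]
    weights = np.delete(weights, worst)
    return particles, weights / weights.sum()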
Results (Color Images) http://www.stat.ucla.edu/~ztu/DDMCMC/benchmark_color/benchmark_color.htm
Conclusions • DDMCMC combines generative (top-down) and discriminative (bottom-up) approaches • Traverses the space of all segmentations via Markov chains • Does your head hurt? • Questions?
References • DDMCMC Paper: http://www.cs.cmu.edu/~efros/courses/AP06/Papers/tu-pami-02.pdf • DDMCMC Website: http://www.stat.ucla.edu/%7Eztu/DDMCMC/DDMCMC_segmentation.htm • MCMC Tutorial by Authors: http://civs.stat.ucla.edu/MCMC/MCMC_tutorial.htm