
Image Parsing: Unifying Segmentation, Detection and Recognition


Presentation Transcript


  1. Image Parsing: Unifying Segmentation, Detection and Recognition • Z. Tu, X. Chen, A. Yuille and S. Zhu • Presented by: Khurram Hassan Shafique

  2. Related Work • Z. Tu and S.C. Zhu, “Image segmentation by Data Driven Markov Chain Monte Carlo,” PAMI, vol. 24, no. 5, 2002. • Z. Tu and S.C. Zhu, “Parsing images into regions and curve processes,” ECCV, June 2002. • S.C. Zhu and A.L. Yuille, “Region Competition,” PAMI, vol. 18, no. 9, 1996.

  3. Organization of Presentation • Problem Definition • Pre-requisites • Probabilistic Inference • Monte Carlo Simulation • Markov Chain Monte Carlo Simulation • Unifying Segmentation, Detection and Recognition.

  4. Problem Definition • Given an image, • Partition the image into generic regions that best describe the image. • Locate and identify faces (if any) in the image. • Locate and identify text (if any) in the image.

  5. Probabilistic Inference • Obtaining a representation of the world parameters from the observed data is known as inference.

  6. Maximum Likelihood Principle Choose the world parameters that maximize the probability of the observed measurements. In the general case, we are choosing arg max P(measurements | parameters), where the maximum is taken only over the world parameters because the measurements are known.

  7. Bayes Theorem P(parameters | data) = P(data | parameters) P(parameters) / P(data), i.e. posterior ∝ likelihood × prior, with the data term P(data) acting as a normalizing constant.

  8. Maximum a posteriori (MAP) inference Choose the world parameters that maximize the conditional probability of the parameters, conditioned on the measurements taking the observed values: arg max P(parameters | measurements).
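To make the contrast between ML and MAP concrete, here is a minimal Python sketch on a toy coin-flip model; the data, the parameter grid, and the Beta(2, 2) prior are illustrative assumptions, not part of the original slides.

```python
import numpy as np

# Toy measurements: coin flips, 1 = heads, 0 = tails (illustrative data).
flips = np.array([1, 1, 1, 0, 1, 0, 1, 1])
heads, tails = flips.sum(), len(flips) - flips.sum()

# Candidate values of the world parameter theta = P(heads).
thetas = np.linspace(0.001, 0.999, 999)

# Log-likelihood log P(measurements | theta).
log_lik = heads * np.log(thetas) + tails * np.log(1 - thetas)

# ML: arg max over theta of P(measurements | theta).
theta_ml = thetas[np.argmax(log_lik)]

# MAP: add a Beta(2, 2) log-prior and maximize log P(theta | measurements).
log_prior = np.log(thetas) + np.log(1 - thetas)
theta_map = thetas[np.argmax(log_lik + log_prior)]

print(f"ML estimate:  {theta_ml:.3f}")   # ~0.75 = 6/8
print(f"MAP estimate: {theta_map:.3f}")  # ~0.70, pulled toward 0.5 by the prior
```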

  9. Monte Carlo Simulation • In Monte Carlo simulation, the random selection process is repeated many times to create multiple scenarios. Each time a value is randomly selected, it forms one possible scenario and solution to the problem. Together, these scenarios give a range of possible solutions, some of which are more probable and some less probable.

  10. Monte Carlo Simulation • Randomly select a location within the rectangle • If it is within the blue area, record this instance as a hit • Generate a new location and repeat 10,000 times

  11. Monte Carlo Simulation What is the area of the blue region? The fraction of hits approximates the ratio of the blue area to the rectangle’s area, so area(blue) ≈ (hits / trials) × area(rectangle).
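Here is a small hit-or-miss sketch of this idea in Python; the “blue region” is taken to be a quarter disk inside the unit square (an assumed stand-in for the slide’s figure), so the true area is π/4.

```python
import random

random.seed(0)
trials, hits = 10_000, 0

for _ in range(trials):
    # Randomly select a location within the unit square (the "rectangle").
    x, y = random.random(), random.random()
    # The "blue region" here is the quarter disk x^2 + y^2 <= 1.
    if x * x + y * y <= 1.0:
        hits += 1

# area(blue) ~= (hits / trials) * area(rectangle); here area(rectangle) = 1.
area = hits / trials
print(f"Estimated area: {area:.4f} (true value pi/4 ~= 0.7854)")
```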

  12. Monte Carlo Simulation Given a set of random variables X = {Xi} taking on values {xi}, the expectation of a function a(X) can be approximated by the sample average E[a(X)] ≈ (1/N) Σt=1..N a(x(t)), where x(1), …, x(N) are samples drawn from the distribution of X.

  13. Monte Carlo Simulation

  14. Monte Carlo Simulation • Use I.I.D. samples x(1), …, x(N) generated by the distribution f(x); then we have Ef[a(X)] ≈ (1/N) Σt=1..N a(x(t)).

  15. Monte Carlo Standard Error The standard error of the Monte Carlo estimate is σ/√N, where σ is the standard deviation of a(X); it can be estimated by the sample standard deviation of the a(x(t)) values divided by √N. The error therefore shrinks as 1/√N as the number of samples grows.
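A minimal sketch of the estimator from slides 12–15, under assumed stand-ins: f(x) is a standard normal and a(X) = X², so the true expectation is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Draw i.i.d. samples x(1), ..., x(N) from f(x); here f is standard normal
# (an illustrative stand-in for the slides' unspecified distribution).
x = rng.standard_normal(N)

# Function whose expectation we want, e.g. a(X) = X^2, so E[a(X)] = 1.
a = x ** 2

# Monte Carlo estimate: E[a(X)] ~= (1/N) * sum_t a(x(t)).
estimate = a.mean()

# Monte Carlo standard error: sample std of a(x(t)) divided by sqrt(N).
std_err = a.std(ddof=1) / np.sqrt(N)

print(f"estimate = {estimate:.4f} +/- {std_err:.4f} (true value 1.0)")
```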

  16. Problems • It is often not possible to obtain a sample of independent points from the distribution defined by f(x). • The probability density defined by f(x) may not only be concentrated in a tiny volume of the parameter space but also be distributed across this space in a complex pattern.

  17. Other Techniques • Rejection Sampling • Importance Sampling (sketched below) • Methods based on finding the modes • Markov Chain Monte Carlo Sampling (MCMC)
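As a taste of one of these alternatives, here is a small importance-sampling sketch; the rare-event target and the shifted-normal proposal are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Target: estimate E_f[a(X)] where f is a standard normal and
# a(X) = 1{X > 3}, a rare tail event that plain Monte Carlo hits rarely.
# Proposal q: a normal shifted into the tail, N(4, 1).
x = rng.normal(loc=4.0, scale=1.0, size=N)

# Importance weights w = f(x) / q(x); the normalizing constants cancel
# because both densities are normals with the same scale.
log_w = (-0.5 * x**2) - (-0.5 * (x - 4.0) ** 2)
w = np.exp(log_w)

# Weighted average estimates the tail probability P(X > 3) under f.
estimate = np.mean(w * (x > 3.0))
print(f"importance-sampling estimate: {estimate:.2e}")  # true ~ 1.35e-3
```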

  18. Markov Chains • A Markov chain is a series of random variables, X(0), X(1), …, in which the influence of the values of X(0), …, X(n) on the distribution of X(n+1) is mediated entirely by the value of X(n). More formally, P(X(n+1) = x | X(0) = x(0), …, X(n) = x(n)) = P(X(n+1) = x | X(n) = x(n)) for all n and all states.

  19. Markov Chains • A Markov chain can be specified by giving the • Initial probabilities p0(x) of the various states x, and • The transition probabilities Tn(x, x’) for one state x’ to follow another state x at time n. • Using the transition probabilities, one can find the probability of state x’ occurring at time n+1: pn+1(x’) = Σx pn(x) Tn(x, x’).

  20. Markov Chains (Basic Definitions) • If the transition probabilities do not depend on the time, the Markov chain is said to be homogeneous or stationary. • A distribution π(x) is invariant with respect to the Markov chain with transition probabilities Tn(x, x’) if, for all n, π(x’) = Σx π(x) Tn(x, x’).

  21. Markov Chains (Basic Definitions) • Detailed Balance: π(x) T(x, x’) = π(x’) T(x’, x) for all x, x’. This implies that π is an invariant distribution: summing both sides over x gives Σx π(x) T(x, x’) = π(x’).

  22. Markov Chains (Basic Definitions) • Ergodic Markov Chains: A Markov chain is ergodic if the probabilities at time n, pn(x), converge to the invariant distribution π(x) as n → ∞, regardless of the choice of initial probabilities. The invariant distribution is then also called the equilibrium distribution.
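To ground these definitions, a minimal sketch with an assumed two-state homogeneous chain: it computes the invariant distribution from π = πT and shows pn converging to it, as ergodicity promises.

```python
import numpy as np

# A homogeneous 2-state Markov chain with transition matrix T,
# where T[x, x'] is the probability that state x' follows state x
# (the numbers are illustrative).
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Invariant distribution: solve pi = pi T (left eigenvector of T
# for eigenvalue 1, normalized to sum to 1).
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
print("invariant pi:", pi)  # [0.8, 0.2] for this T

# Ergodicity in action: p_n converges to pi from any initial p_0.
p = np.array([0.0, 1.0])  # start entirely in state 1
for n in range(50):
    p = p @ T  # p_{n+1}(x') = sum_x p_n(x) T(x, x')
print("p_50:", p)  # ~ [0.8, 0.2]
```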

  23. Markov Chain Monte Carlo

  24. Markov Chain Monte Carlo

  25. MCMC: Metropolis Algorithm • Proposal: Select a candidate state, x*, picked at random from a proposal distribution. • Metropolis Acceptance: Accept this candidate state with probability A(x, x*); otherwise reject it and retain the current state. For a symmetric proposal distribution, A(x, x*) = min(1, f(x*)/f(x)), where f is the target distribution.
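Finally, a minimal sketch of the Metropolis algorithm itself, assuming a standard-normal target f and a symmetric random-walk proposal (both illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Unnormalized target density; a standard normal here as a stand-in
    # for whatever distribution we actually want to sample from.
    return np.exp(-0.5 * x * x)

x = 0.0          # current state
samples = []
for _ in range(50_000):
    # Proposal: candidate x* from a symmetric random-walk proposal.
    x_star = x + rng.normal(scale=1.0)
    # Metropolis acceptance: A(x, x*) = min(1, f(x*)/f(x)).
    if rng.random() < min(1.0, f(x_star) / f(x)):
        x = x_star  # accept the candidate
    # otherwise reject it and retain the current state
    samples.append(x)

samples = np.array(samples[5_000:])  # discard burn-in
print(f"mean ~ {samples.mean():.3f}, std ~ {samples.std():.3f}")  # ~0, ~1
```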
