
OBJ CUT


Presentation Transcript


  1. UNIVERSITY OF OXFORD. OBJ CUT. M. Pawan Kumar, Philip Torr, Andrew Zisserman

  2. Aim • Given an image, to segment the object • Segmentation should (ideally) be: • shaped like the object, e.g. cow-like • obtained efficiently in an unsupervised manner • able to handle self-occlusion [Figure: an object category model applied to a cow image yields a segmented cow]

  3. Challenges • Intra-Class Shape Variability • Intra-Class Appearance Variability • Self-Occlusion

  4. Motivation (Magic Wand) • Current methods require user intervention: • object and background seed pixels (Boykov and Jolly, ICCV 01) • a bounding box of the object (Rother et al., SIGGRAPH 04) [Figure: cow image with object seed pixels]

  5. Motivation (Magic Wand) • Current methods require user intervention: • object and background seed pixels (Boykov and Jolly, ICCV 01) • a bounding box of the object (Rother et al., SIGGRAPH 04) [Figure: cow image with object and background seed pixels]

  6. Motivation (Magic Wand) • Current methods require user intervention: • object and background seed pixels (Boykov and Jolly, ICCV 01) • a bounding box of the object (Rother et al., SIGGRAPH 04) [Figure: the resulting segmented image]


  9. Motivation • Problem: • Manually intensive • Segmentation is not guaranteed to be ‘object-like’ [Figure: non-object-like segmentation]

  10. Our Method • Combine object detection with segmentation (Borenstein and Ullman, ECCV ’02; Leibe and Schiele, BMVC ’03) • Incorporate global shape priors in an MRF • Detection provides: • object localization • global shape priors • Automatically segments the object • Note: our method is completely generic, applicable to any object category model

  11. Outline • Problem Formulation • Form of Shape Prior • Optimization • Results

  12. Problem • Labelling m over the set of pixels D • Shape prior provided by parameter Θ • Energy E(m, Θ) = ∑x [ φx(D|mx) + φx(mx|Θ) ] + ∑x,y [ ψxy(mx,my) + φ(D|mx,my) ] • Unary terms: • likelihood based on colour • unary potential based on distance from Θ • Pairwise terms: • prior • contrast term • Find best labelling m* = arg minm ∑i wi E(m, Θi), where wi is the weight for sample Θi
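
To make the energy concrete, here is a minimal sketch of evaluating E(m, Θ) on a 4-connected pixel grid. It assumes the potentials are precomputed arrays; all names are illustrative, not the authors' code.

```python
import numpy as np

def crf_energy(m, unary_lik, unary_shape, prior_cost, contrast_h, contrast_v):
    """m: (H, W) labels in {0, 1} (0 = background, 1 = object).
    unary_lik, unary_shape: (H, W, 2) costs phi(D|m_x) and phi(m_x|Theta).
    prior_cost: scalar Ising penalty psi paid when neighbours disagree.
    contrast_h: (H, W-1), contrast_v: (H-1, W) contrast costs phi(D|m_x, m_y),
    also paid only across label discontinuities."""
    H, W = m.shape
    rows, cols = np.indices((H, W))
    # Unary terms: each pixel pays the cost of its chosen label.
    e = unary_lik[rows, cols, m].sum() + unary_shape[rows, cols, m].sum()
    # Pairwise terms: prior + contrast paid where neighbouring labels differ.
    dh = m[:, :-1] != m[:, 1:]            # horizontal discontinuities
    dv = m[:-1, :] != m[1:, :]            # vertical discontinuities
    e += (dh * (prior_cost + contrast_h)).sum()
    e += (dv * (prior_cost + contrast_v)).sum()
    return e
```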

  13. MRF • Probability for a labelling consists of: • Likelihood: unary potential based on the colour of the pixel • Prior, which favours the same label for neighbours (pairwise potentials) [Figure: labels m over the image plane D (pixels); unary potential φx(D|mx) at pixel x, pairwise potential ψxy(mx,my) between neighbours x and y]

  14. Example • [Figure: cow image with object and background seed pixels; grid of unary terms φx(D|obj), φx(D|bkg) and pairwise priors ψxy(mx,my); panels show Prior and Likelihood Ratio (Colour)]

  15. Example • [Figure: cow image with object and background seed pixels; panels show Prior and Likelihood Ratio (Colour)]

  16. Contrast-Dependent MRF • Probability of a labelling in addition has: • Contrast term, which favours boundaries that lie on image edges [Figure: contrast term φ(D|mx,my) between neighbouring pixels x and y on the image plane D]
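
A common construction of the contrast term, in the spirit of Boykov and Jolly, penalizes separating neighbours with similar colours so that cheap boundaries coincide with image edges; the exact weighting used in OBJ CUT may differ from this sketch.

```python
import numpy as np

def contrast_weights(img, sigma=None):
    """img: (H, W, 3) float image. Returns horizontal (H, W-1) and vertical
    (H-1, W) contrast weights of the form exp(-||I_x - I_y||^2 / (2 sigma^2)):
    large for similar neighbouring colours, small across strong edges."""
    sq_h = ((img[:, :-1] - img[:, 1:]) ** 2).sum(axis=-1)
    sq_v = ((img[:-1, :] - img[1:, :]) ** 2).sum(axis=-1)
    if sigma is None:
        # Common heuristic: set 2*sigma^2 to the mean squared difference.
        sigma2 = np.concatenate([sq_h.ravel(), sq_v.ravel()]).mean() + 1e-12
    else:
        sigma2 = 2 * sigma ** 2
    return np.exp(-sq_h / sigma2), np.exp(-sq_v / sigma2)
```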

  17. Example • [Figure: cow image with object and background seed pixels; grid of unary terms φx(D|obj), φx(D|bkg) and pairwise terms ψxy(mx,my) + φxy(D|mx,my); panels show Prior + Contrast and Likelihood Ratio (Colour)]

  18. Example • [Figure: cow image with object and background seed pixels; panels show Prior + Contrast and Likelihood Ratio (Colour)]

  19. Our Model • Probability of a labelling in addition has: • Unary potential which depends on distance from Θ (the shape parameter) [Figure: object category specific MRF; unary potential φx(mx|Θ) links the shape parameter Θ to the labels m over the image plane D]
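
One plausible realization of the distance-based unary φx(mx|Θ), assuming the shape sample Θ is rendered as a binary mask; the sigmoid-of-signed-distance form below is an illustrative choice, not necessarily the paper's exact potential.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shape_unary(shape_mask, mu=1.0):
    """shape_mask: (H, W) boolean rendering of the shape Theta.
    Returns (H, W, 2) costs: labelling a pixel 'object' is expensive far
    outside the shape; labelling it 'background' is expensive deep inside."""
    d_out = distance_transform_edt(~shape_mask)   # distance to shape, outside
    d_in = distance_transform_edt(shape_mask)     # distance to boundary, inside
    signed = d_out - d_in                         # > 0 outside, < 0 inside
    cost_obj = 1.0 / (1.0 + np.exp(-mu * signed)) # high far outside the shape
    cost_bkg = 1.0 - cost_obj                     # high deep inside the shape
    return np.stack([cost_bkg, cost_obj], axis=-1)
```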

  20. Example • [Figure: cow image with object and background seed pixels; panels show Shape Prior (Distance from Θ) and Prior + Contrast]

  21. Example • [Figure: cow image with object and background seed pixels; panels show Shape Prior (Likelihood + Distance from Θ) and Prior + Contrast]


  23. Outline • Problem Formulation • Energy E(m, Θ) = ∑x [ φx(D|mx) + φx(mx|Θ) ] + ∑x,y [ ψxy(mx,my) + φ(D|mx,my) ] • Form of Shape Prior • Optimization • Results

  24. Layered Pictorial Structures (LPS) • Generative model • Composition of parts + spatial layout • Spatial layout: pairwise configuration of parts • Parts in Layer 2 can occlude parts in Layer 1 [Figure: Layer 1 and Layer 2 of the cow model]

  25. Layered Pictorial Structures (LPS) • [Figure: cow instance generated by applying transformation Θ1 to Layers 1 and 2; P(Θ1) = 0.9]

  26. Layered Pictorial Structures (LPS) • [Figure: cow instance generated by transformation Θ2; P(Θ2) = 0.8]

  27. Layered Pictorial Structures (LPS) • [Figure: unlikely instance generated by transformation Θ3; P(Θ3) = 0.01]

  28. LPS for Detection • Learning • Learnt automatically using a set of videos • Part correspondence using Shape Context [Figure: shape context matching across multiple shape exemplars]
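
For reference, a minimal version of the shape context descriptor (Belongie et al.) named on the slide: each contour point gets a log-polar histogram of the relative positions of all other points. The bin counts and radii below are common defaults, not the authors' settings.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """points: (N, 2) contour points. Returns (N, n_r * n_theta) descriptors."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]         # offsets from each point
    dist = np.linalg.norm(diff, axis=-1)
    angle = np.arctan2(diff[..., 1], diff[..., 0])   # in (-pi, pi]
    mean_d = dist[dist > 0].mean()                   # scale normalization
    r_edges = np.logspace(np.log10(mean_d / 8),      # log-spaced radial bins
                          np.log10(2 * mean_d), n_r + 1)
    descs = np.zeros((n, n_r, n_theta))
    for i in range(n):
        others = np.arange(n) != i
        r_bin = np.searchsorted(r_edges, dist[i, others]) - 1
        t_bin = ((angle[i, others] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        ok = (r_bin >= 0) & (r_bin < n_r)            # drop points outside radii
        np.add.at(descs[i], (r_bin[ok], t_bin[ok]), 1)
    return descs.reshape(n, -1)
```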

  29. LPS for Detection • Detection • Putative parts found using a tree cascade of classifiers [Figure: candidate part locations (x, y) in the image]

  30. LPS for Detection • MRF over parts • Labels represent putative poses • Prior (pairwise potential): robust truncated model • Match LPS by obtaining the MAP configuration [Figure: linear, quadratic and Potts models compared with the robust truncated model]
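
The robust truncated model can be written as a quadratic capped at a constant, so one badly matched part cannot dominate the prior; this contrasts with the unbounded linear and quadratic models, and with the Potts model, which ignores the size of the deviation entirely. A small illustrative sketch with made-up parameter values:

```python
import numpy as np

def truncated_quadratic(xi, xj, lam=1.0, tau=4.0):
    """Robust truncated pairwise prior on the relative pose of two parts:
    quadratic for small deviations, capped at lam * tau. Values illustrative."""
    d2 = np.sum((np.asarray(xi, float) - np.asarray(xj, float)) ** 2)
    return min(lam * d2, lam * tau)
```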

  31. LPS for Detection • Efficient Belief Propagation • Likelihood φi(xi): tree cascade of classifiers • Prior ψij(xi,xj) = fij(xi,xj) if xi ∈ Ci(xj), a constant otherwise • Pr(x) ∝ ∏i φi(xi) ∏ij ψij(xi,xj) [Figure: parts i, j, k exchanging messages mj→i along edges ij, jk, ki]

  32. LPS for Detection • Efficient Belief Propagation • Likelihood φi(xi): tree cascade of classifiers • Prior ψij(xi,xj) = fij(xi,xj) if xi ∈ Ci(xj), a constant otherwise • Pr(x) ∝ ∏i φi(xi) ∏ij ψij(xi,xj) • Messages calculated as the standard max-product update: mj→i(xi) = maxxj [ φj(xj) ψij(xi,xj) ∏k∈N(j)\i mk→j(xj) ]
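
A generic max-product loopy BP sketch showing the mechanics of this update; the paper's implementation additionally exploits the structure of Ci(xj) and the tree cascade for efficiency, which this version does not.

```python
import numpy as np

def max_product_bp(phi, psi, edges, n_iters=10):
    """Generic max-product loopy BP on a pairwise MRF over parts.
    phi: {node: (L,) array} unary potentials; psi: {(i, j): (L_i, L_j) array}
    pairwise potentials; edges: undirected (i, j) pairs. Returns the
    (approximate) MAP pose index per part."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    # msgs[(j, i)]: message from j to i, a vector over the poses of i.
    msgs = {(j, i): np.ones(phi[i].shape) for (j, i) in directed}
    for _ in range(n_iters):
        new = {}
        for (j, i) in directed:
            # Combine j's unary with all incoming messages except i's.
            prod = phi[j].astype(float)
            for (k, jj) in directed:
                if jj == j and k != i:
                    prod = prod * msgs[(k, j)]
            # m_{j->i}(x_i) = max_{x_j} psi(x_i, x_j) * prod(x_j)
            pair = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            msg = (pair * prod[None, :]).max(axis=1)
            new[(j, i)] = msg / (msg.sum() + 1e-12)   # normalize for stability
        msgs = new
    labels = {}
    for i in phi:
        belief = phi[i].astype(float)
        for (j, ii) in directed:
            if ii == i:
                belief = belief * msgs[(j, i)]
        labels[i] = int(np.argmax(belief))
    return labels
```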

  33. LPS for Detection • Efficient Generalized Belief Propagation • Likelihood φi(xi): tree cascade of classifiers • Prior ψij(xi,xj) = fij(xi,xj) if xi ∈ Ci(xj), a constant otherwise • Pr(x) ∝ ∏i φi(xi) ∏ij ψij(xi,xj) [Figure: cluster messages mk→ij passed among parts i, j, k over edges ij, jk, ki and the triple ijk]

  34. LPS for Detection • Efficient Generalized Belief Propagation • Likelihood φi(xi): tree cascade of classifiers • Prior ψij(xi,xj) = fij(xi,xj) if xi ∈ Ci(xj), a constant otherwise • Pr(x) ∝ ∏i φi(xi) ∏ij ψij(xi,xj) • Messages calculated analogously, over clusters of parts rather than single parts

  35. LPS for Detection • Second Order Cone Programming Relaxations • Likelihood φi(xi): tree cascade of classifiers • Prior ψij(xi,xj) = fij(xi,xj) if xi ∈ Ci(xj), a constant otherwise • Pr(x) ∝ ∏i φi(xi) ∏ij ψij(xi,xj) [Figure: parts i, j, k with candidate poses]

  36. LPS for Detection • Second Order Cone Programming Relaxations • m: concatenation of all binary label vectors, one indicator per putative pose of each part • l: likelihood vector • P: prior matrix [Figure: binary indicator vectors, e.g. (1 0 0 0 0) for part i, shown for parts i, j, k]
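
In this notation, MAP estimation becomes an integer quadratic program, and the SOCP relaxation replaces the quadratic term with a linearized matrix variable constrained by second-order cones. A hedged reconstruction follows; the paper's exact constraint family may differ.

```latex
% MAP estimation as an integer QP over the stacked indicator vector m,
% with m_{i;a} = 1 iff part i takes putative pose a:
\max_{m}\;\; l^{\top} m + m^{\top} P m
\qquad \text{s.t.}\;\; m \in \{0,1\}^{n}, \quad \sum_{a} m_{i;a} = 1 \;\; \forall i .
% Relaxation: introduce M \approx m m^{\top} and replace the nonconvex
% equality with second-order cone constraints, one per chosen PSD matrix C:
\left\lVert C^{1/2} m \right\rVert^{2} \;\le\; C \bullet M .
```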


  40. Outline • Problem Formulation • Form of Shape Prior • Optimization • Results

  41. Optimization • Given image D, find the best labelling m* = arg max p(m|D) • Treat the LPS parameter Θ as a latent (hidden) variable • EM framework • E-step: sample the distribution over Θ • M-step: obtain the labelling m
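
The overall loop, schematically; `sample_shape_posterior` and `segment_with_graph_cut` are hypothetical placeholder names for the E- and M-steps described on the surrounding slides, not the authors' API.

```python
def obj_cut_em(image, lps_model, init_labelling, n_iters=3, n_samples=5):
    """Schematic EM loop. E-step: draw weighted shape samples (Theta_i, w_i)
    from p(Theta | m', D) via sum-product LBP on the LPS (slide 42).
    M-step: minimize sum_i w_i * E(m, Theta_i) with a single graph cut
    (slides 44-48). Helper functions are hypothetical placeholders."""
    m = init_labelling
    for _ in range(n_iters):
        samples = sample_shape_posterior(lps_model, image, m, n_samples)
        m = segment_with_graph_cut(image, samples)
    return m
```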

  42. E-Step • Given an initial labelling m′, determine p(Θ | m′, D) • Problem: efficiently sampling from p(Θ | m′, D) • Solution: we develop efficient sum-product Loopy Belief Propagation (LBP) for matching the LPS, similar to the efficient max-product LBP used for the MAP estimate

  43. Results • Different samples localize different parts well. • We cannot use only the MAP estimate of the LPS.

  44. M-Step • Given samples from p(Θ | m′, D), get a new labelling mnew • Sample Θi provides: • object localization, used to learn RGB distributions of object and background • a shape prior for segmentation • Problem: maximize the expected log likelihood using all samples, and obtain the new labelling efficiently

  45. M-Step • w1 = P(Θ1 | m′, D) [Figure: cow image and shape sample Θ1; RGB histograms for object and background]
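
Learning the colour models from one shape sample Θi might look like the following sketch; quantizing to 16 bins per channel and the smoothing constants are assumptions, and `obj_mask` would come from rendering Θi.

```python
import numpy as np

def rgb_models(image, obj_mask, n_bins=16):
    """image: (H, W, 3) uint8; obj_mask: (H, W) bool localization from Theta_i.
    Returns object/background histograms and a per-pixel colour unary
    (negative log likelihood ratio)."""
    q = (image.astype(int) // (256 // n_bins)).reshape(-1, 3)
    idx = (q[:, 0] * n_bins + q[:, 1]) * n_bins + q[:, 2]   # joint RGB bin
    flat = obj_mask.ravel()
    h_obj = np.bincount(idx[flat], minlength=n_bins ** 3).astype(float)
    h_bkg = np.bincount(idx[~flat], minlength=n_bins ** 3).astype(float)
    h_obj /= h_obj.sum() + 1e-12
    h_bkg /= h_bkg.sum() + 1e-12
    nll_ratio = -np.log((h_obj[idx] + 1e-8) / (h_bkg[idx] + 1e-8))
    return h_obj, h_bkg, nll_ratio.reshape(obj_mask.shape)
```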

  46. M-Step • w1 = P(Θ1 | m′, D) • Best labelling found efficiently using a single graph cut [Figure: shape Θ1, labels m, image plane D (pixels)]

  47. Segmentation using Graph Cuts • [Figure: st-graph with source Obj and sink Bkg over the label grid m; edges from Obj to pixel x weighted φx(D|bkg) + φx(bkg|Θ), edges from pixel z to Bkg weighted φz(D|obj) + φz(obj|Θ), neighbour edges weighted ψxy(mx,my) + φxy(D|mx,my)]

  48. Segmentation using Graph Cuts • [Figure: the minimum cut separating Obj from Bkg yields the object/background labelling m]
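
A sketch of the single cut using the PyMaxflow library, mirroring the graph drawn on slides 47 and 48: terminal edges carry the combined colour + shape unaries, neighbour edges the prior + contrast costs. The plain Python loops favour clarity over speed.

```python
import numpy as np
import maxflow  # PyMaxflow (pip install PyMaxflow)

def single_graph_cut(unary_obj, unary_bkg, contrast_h, contrast_v, prior=1.0):
    """unary_obj / unary_bkg: (H, W) costs phi(D|obj)+phi(obj|Theta) and
    phi(D|bkg)+phi(bkg|Theta). contrast_h: (H, W-1), contrast_v: (H-1, W)."""
    H, W = unary_obj.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((H, W))
    # n-links: Ising prior plus contrast, paid when neighbours are separated.
    for r in range(H):
        for c in range(W - 1):
            w = prior + contrast_h[r, c]
            g.add_edge(nodes[r, c], nodes[r, c + 1], w, w)
    for r in range(H - 1):
        for c in range(W):
            w = prior + contrast_v[r, c]
            g.add_edge(nodes[r, c], nodes[r + 1, c], w, w)
    # t-links: a pixel ending on the Obj (source) side pays its sink capacity,
    # so the sink capacity is the Obj cost and the source capacity the Bkg cost.
    g.add_grid_tedges(nodes, unary_bkg, unary_obj)
    g.maxflow()
    # get_grid_segments is True on the sink (Bkg) side; invert for the object.
    return ~g.get_grid_segments(nodes)
```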

  49. M-Step • w2 = P(Θ2 | m′, D) [Figure: cow image and shape sample Θ2; RGB histograms for object and background]

  50. M-Step • w2 = P(Θ2 | m′, D) • Best labelling found efficiently using a single graph cut [Figure: shape Θ2, labels m, image plane D (pixels)]
