
Probabilistic Object Recognition and Localization


Presentation Transcript


  1. Probabilistic Object Recognition and Localization Bernt Schiele, Alex Pentland, ICCV ‘99 Presenter: Matt Grimes

  2. What they did • Chose a set of local image descriptors whose outputs are robust to object orientation and lighting. • Examples: first-derivative magnitude, Laplacian.
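A minimal sketch of those two descriptor responses, computed with Gaussian-derivative filters (scipy is assumed; the scale parameter is illustrative, not the paper's exact setting):

    import numpy as np
    from scipy import ndimage

    def descriptor_outputs(image, sigma=2.0):
        image = np.asarray(image, dtype=float)
        # First-derivative magnitude: gradient magnitude of a Gaussian-smoothed image.
        dx = ndimage.gaussian_filter(image, sigma=sigma, order=(0, 1))
        dy = ndimage.gaussian_filter(image, sigma=sigma, order=(1, 0))
        grad_mag = np.sqrt(dx**2 + dy**2)
        # Laplacian: second-derivative response, here as a Laplacian-of-Gaussian.
        lap = ndimage.gaussian_laplace(image, sigma=sigma)
        return grad_mag, lap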

  3. What they did • Learn a PDF for the outputs of these descriptors given an image of the object: p(M | o_n, …), where M is the vector of descriptor outputs, o_n is a particular object, and the remaining conditioning parameters cover object orientation, lighting, etc.

  4. What they did • Learn a PDF for the outputs of these descriptors given an image of the object: p(M | o_n), where M is the vector of descriptor outputs and o_n is a particular object.

  5. What they did • Use Bayes’ rule to obtain the posterior… • …which is the probability of an image containing an object, given local image measurements M. • (Not quite this clean)
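In its clean form (the slide's caveat refers to the additional conditioning parameters covered later), Bayes' rule here reads, with o_n an object and M the local measurements:

    p(o_n \mid M) \;=\; \frac{p(M \mid o_n)\, p(o_n)}{p(M)}
                  \;=\; \frac{p(M \mid o_n)\, p(o_n)}{\sum_i p(M \mid o_i)\, p(o_i)}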

  6. History of image-based object recognition Two major genres: • Histogram-based approaches. • Comparison of local image features.

  7. Histogramming approaches • Object recognition by color histograms (Swain & Ballard, IJCV 1991) • Robust to changes in orientation, scale. • Brittle against lighting changes (dependency on color). • Many classes of objects not distinguishable by color distribution alone.

  8. Histogramming approaches • Combat color-brittleness using (quasi-) invariants of color histograms: • Eigenvalues of matrices of moments of color histograms • Derivatives of logs of color channels • “Comprehensive color normalization”

  9. Histogramming approaches • Comprehensive color normalization examples:

  10. Histogramming approaches • Comprehensive color normalization examples:

  11. Localized feature approaches • Approaches include: • Using image “interest-points” to index into a hashtable of known objects. • Comparing large vectors of local filter responses.

  12. Geometric Hashing • An interest point detector finds the same points on an object in different images. Types of “interest points” include corners, T-junctions, sudden texture changes.

  13. Geometric Hashing From Schmid, Mohr, Bauckhage, “Comparing and Evaluating Interest Points,” ICCV ‘98

  14. Geometric Hashing From Schmid, Mohr, Bauckhage, “Comparing and Evaluating Interest Points,” ICCV ‘98

  15. Geometric Hashing • Store points in an affine-transform-invariant representation. • Store all possible triplets of points as keys in a hashtable.

  16. Geometric Hashing • For object recognition, find all triplets of interest points in an image, look for matches in the hashtable, accumulate votes for the correct object. Hashtable approaches support multiple object recognition within the same image.
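A toy sketch of the vote-accumulation step, assuming an already-built hashtable mapping quantized, transform-invariant triplet keys to candidate object labels (the triplet_key function and the interest-point detector are hypothetical placeholders, not the cited papers' exact scheme):

    from collections import Counter
    from itertools import combinations

    def recognize_by_hashing(interest_points, hashtable, triplet_key):
        # interest_points: list of (x, y) detections from the test image.
        # hashtable: dict mapping invariant triplet keys -> list of object labels.
        # triplet_key: function turning three points into a quantized invariant key.
        votes = Counter()
        for triplet in combinations(interest_points, 3):
            key = triplet_key(*triplet)
            for obj in hashtable.get(key, []):
                votes[obj] += 1
        # Several objects can accumulate high vote counts in the same image,
        # which is why hashtable approaches support multiple-object recognition.
        return votes.most_common()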

  17. Geometric hashing weaknesses • Dependent on the consistency of the interest point detector used. From Schmid, Mohr, Bauckhage, “Comparing and Evaluating Interest Points,” ICCV ‘98

  18. Geometric hashing weaknesses • Shoddy repeatability necessitates lots of points. • Lots of points, combined with noise, leads to lots of false positives.

  19. Vectors of filter responses • Typically use vectors of oriented filters at fixed grid points, or at interest points. • Pros: • Very robust to noise. • Cons: • Fixed grid needs large representation, large grid is sensitive to occlusion. • If using an interest point detector instead, the detector must be consistent over a variety of scenes.

  20. Also: eigenpictures • Calculate the eigenpictures of a set of images of objects to be recognized. • Pros: • Efficient representation of images by their eigenpicture coefficients. (Fast searches) • Cons: • Images must be pre-segmented. • Eigenpictures are not local (sensitive to occlusion). • Translation, image-plane rotation must be represented in the eigenpictures.
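A minimal sketch of the eigenpicture idea, assuming pre-segmented, equally sized grayscale images stacked as rows (plain PCA via SVD; a generic illustration, not any one paper's pipeline):

    import numpy as np

    def eigenpictures(images, n_components=20):
        # images: array of shape (n_images, height * width), one flattened image per row.
        mean = images.mean(axis=0)
        centered = images - mean
        # Right singular vectors of the centered data are the eigenpictures.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]
        # Low-dimensional coefficients give the compact representation used for fast search.
        coeffs = centered @ basis.T
        return mean, basis, coeffs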

  21. This paper: • Uses vectors of filter responses, with probabilistic object recognition. • Bayes' rule combines a PDF p(M | o_n) learned from training images with scene-invariant measurements M.

  22. Wins of this paper • Uses hashtables for multiple object recognition. • Unlike geometric hashing, doesn't depend on point correspondence between images. • Uses location-unspecific filter responses, not points. • Inherits the noise robustness of filter-response methods.

  23. Wins of this paper • Uses local filter responses. • Robust to occlusion compared to global methods (e.g. eigenpictures or filter grids). • Probabilistic matching • Theoretically cleaner than voting. • Combined with local filter responses, allows for localization of detected objects.

  24. Details of the PDF • What degrees of freedom are there in the “other” parameters? • o_n: object • R: rotation (3 DOF) • T: translation (3 DOF) • S: scene (occlusions, background) • L: lighting • I: imaging (noise, pixelation/blur)

  25. p(M | o_n, R, T, S, L, I) • Way too many parameters to get a reliable estimate from even a large image library. • The number of examples needed is exponential in the number of dimensions of the PDF. • Solution: choose measurements M that are invariant with respect to as many parameters as possible (except o_n).

  26. Techniques for invariance • Imaging (noise): see Schiele’s thesis. • Lighting: apply an “energy normalization technique” to the filter outputs. • Scene: probabilistic object recognition + local image measurements. • Gives the best estimate using the visible portion of the object.
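One common reading of “energy normalization” is to divide the vector of filter responses at each pixel by its overall magnitude, which cancels multiplicative lighting changes; this is an assumption about the technique, not the paper's exact formula:

    import numpy as np

    def energy_normalize(responses, eps=1e-8):
        # responses: array of shape (..., n_filters), one filter-response vector per pixel.
        # ASSUMPTION: "energy" means the L2 norm of the response vector at each pixel.
        energy = np.sqrt(np.sum(responses**2, axis=-1, keepdims=True))
        return responses / (energy + eps)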

  27. Techniques for invariance • Translation: • Tx, Ty (image-plane translation) are ignored for non-localizing recognition. • Tz is equivalent to scale. For known scales, compensate by scaling the filters’ regions of support.

  28. Techniques for invariance • Fairly robust to unknown scale:

  29. Techniques for invariance • Rotation: • Rz: rotation in the image plane. Filters invariant to image-plane rotation may be used. • Rx, Ry must remain in the PDF. Impossible to have viewpoint-invariant descriptors in the general case.

  30. New PDF • 4 parameters. • Still a large number of training examples needed, but feasible. • Example: the algorithm has been successful after training with 108 images per object. (108 = 16 orientations * 6 scales)

  31. Learning & representation of the PDF • Since the goal is discrimination, overgeneralization is scarier than overfitting. • They chose multidimensional histograms over parametric representations. • They mention that they could’ve used kernel function estimates.

  32. Multidimensional Histograms

  33. Multidimensional Histograms • In their experiments, they use a 6-dimensional histogram. • X and Y derivative, at 3 different scales • …with 24 buckets per axis. • Theoretical max for # of cells: 24^6 ≈ 1.9 × 10^8 • Way too many cells to be meaningfully filled by even 512 × 512 (= 262,144 pixel) images.
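A sketch of such a multidimensional histogram, assuming one 6-D measurement vector (x/y derivatives at 3 scales) per pixel and 24 buckets per axis; the bin range is illustrative, and a dict over occupied cells is used because a single image can fill at most one cell per pixel, far fewer than 24^6:

    import numpy as np
    from collections import defaultdict

    def build_sparse_histogram(responses, bins=24, low=-1.0, high=1.0):
        # responses: array of shape (n_pixels, 6), one 6-D measurement vector per pixel.
        edges = np.linspace(low, high, bins + 1)
        idx = np.clip(np.searchsorted(edges, responses, side="right") - 1, 0, bins - 1)
        hist = defaultdict(float)
        for cell in map(tuple, idx):
            hist[cell] += 1.0
        # Normalize counts so the histogram approximates p(m | o_n).
        total = sum(hist.values())
        return {cell: count / total for cell, count in hist.items()}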

  34. Multidimensional Histograms • Somehow, by exploiting dependencies between histogram axes, and applying a uniform prior bias, they get the number of calculated cells below 10^5. • Factor of 1000 reduction. • Anybody know how they do this?

  35. (Single) object recognition

  36. (Single) object recognition

  37. (Single) object recognition • A single measurement vector m_k is insufficient for recognition.

  38. (Single) object recognition • A single measurement vector m_k is insufficient for recognition.

  39. (Single) object recognition • For k measurement vectors:
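Assuming the k local measurement vectors are treated as independent given the object (the usual form for this kind of combination), the posterior becomes:

    p(o_n \mid m_1, \ldots, m_k) \;=\;
        \frac{\prod_{j=1}^{k} p(m_j \mid o_n)\, p(o_n)}
             {\sum_i \prod_{j=1}^{k} p(m_j \mid o_i)\, p(o_i)}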

  40. (Single) object recognition

  41. (Single) object recognition

  42. (Single) object recognition

  43. (Single) object recognition • Measurement regions covering 10–20% of an object are usually sufficient for discrimination.

  44. (Single) object recognition

  45. Multiple object recognition • We can apply the single-object detector to many small regions in the image.

  46. Multiple object recognition • The algorithm is now O(NKJ) • N = # of known objects • K = # of measurement vectors in each region • J = # of regions
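A schematic of the region-wise scan, assuming a hypothetical single_object_posteriors(region) helper that evaluates p(o_n | measurements in that region) for all N objects; the nested loops make the O(NKJ) cost explicit:

    def recognize_regions(regions, objects, single_object_posteriors, threshold=0.9):
        # regions: J image regions; objects: N known object labels.
        # single_object_posteriors evaluates K measurement vectors per region,
        # so the total work is O(N * K * J).
        detections = []
        for region in regions:                              # J regions
            posteriors = single_object_posteriors(region)   # inner cost O(N * K)
            for obj in objects:                             # N objects
                if posteriors.get(obj, 0.0) >= threshold:
                    detections.append((obj, region))
        return detections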

  47. Multiple object recognition

  48. Multiple object recognition

  49. Multiple object recognition

  50. Multiple object recognition
