
Recognition by Probabilistic Hypothesis Construction



  1. Recognition by Probabilistic Hypothesis Construction P. Moreels, M. Maire, P. Perona California Institute of Technology

  2. Background. Rich features; probabilistic constellations, categories; efficient matching. • Fischler & Elschlager, 1973 • v.d. Malsburg et al. '93 • Burl et al. '96 • Weber et al. '00 • Fergus et al. '03 • Huttenlocher & Ullman, 1990. Rich features, probabilistic, fast learning, efficient matching: • Lowe '99, '04.

  3. Outline. Objective: individual object recognition. • D. Lowe's system and the constellation model. • Hypothesis and score. • Scheduling of matches. • Experiments: comparison with D. Lowe's system.

  4. Lowe’s recognition system Models Test image … Lowe’99,’04

  5. Constellation model Burl’96, Weber’00, Fergus’03

  6. Pros and cons
  Lowe's recognition system. Pros (+): many parts → redundancy; learn from 1 image; fast. Cons (-): manual tuning of parameters; rigid planar objects; sensitive to clutter.
  Constellation model. Pros (+): principled detection/recognition; learn parameters from data; model clutter, occlusion, distortions. Cons (-): high number of parameters (O(n²)); 5-7 parts per model; many training examples needed; learning expensive.

  7. How to adapt the constellation model to our needs?

  8. Reducing degrees of freedom
  1. Common reference frame ([Lowe '99], [Huttenlocher '90]): part positions are expressed relative to the position of model m.
  2. Share parameters ([Schmid '97]).
  3. Use prior information learned on foreground and background ([Fei-Fei '03]).

  9. Parameters and priors (based on [Fergus '03], [Burl '98])
  Constellation model (foreground vs. clutter): Gaussian shape pdf; Gaussian relative-scale pdf in log(scale); Gaussian part appearance pdf; per-part probability of detection (e.g. 0.8, 0.9, 0.75, 0.8); Gaussian background appearance pdf for clutter.
  Sharing parameters (foreground vs. clutter): Gaussian conditional shape pdf; Gaussian relative-scale pdf in log(scale); Gaussian part appearance pdf; probability of detection; Gaussian background appearance pdf for clutter.
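As a rough illustration of how the parameters listed on this slide could be organised in code, here is a minimal sketch; the class and field names are assumptions for illustration, not the authors' notation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ForegroundPartParams:
    """Per-part foreground parameters named on the slide (illustrative names)."""
    appearance_mean: np.ndarray   # Gaussian part appearance pdf
    appearance_cov: np.ndarray
    shape_mean: np.ndarray        # Gaussian (conditional) shape pdf in the common frame
    shape_cov: np.ndarray
    log_scale_mean: float         # Gaussian relative-scale pdf, in log(scale)
    log_scale_var: float
    detect_prob: float            # probability of detection, e.g. 0.8

@dataclass
class ClutterParams:
    """Background/clutter parameters for spurious detections."""
    appearance_mean: np.ndarray   # Gaussian background appearance pdf
    appearance_cov: np.ndarray
    expected_count: float         # expected number of clutter detections
```

Sharing one such parameter set across parts, as the slide suggests, is what keeps the number of free parameters small.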

  10. Hypotheses – feature assignments. (Figure: an interpretation assigns features of the new scene (test image) to features of models from the database.)

  11. Hypotheses – model position. (Figure: models from the database are mapped into the new scene (test image) by a position parameter Θ, an affine transformation.)
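For concreteness, a minimal sketch of applying such a position Θ, assuming it is parameterised as a 2×2 linear part A plus a translation t (one common way to write an affine map; the authors' exact parameterisation is not given here):

```python
import numpy as np

def apply_affine(theta, points):
    """Map model feature locations into the test image under Θ = (A, t)."""
    A, t = theta
    return points @ A.T + t  # (n, 2) array of transformed positions

# Example: scale by 0.5, rotate by 30 degrees, translate by (100, 40)
angle = np.deg2rad(30)
A = 0.5 * np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
theta = (A, np.array([100.0, 40.0]))
print(apply_affine(theta, np.array([[0.0, 0.0], [10.0, 0.0]])))
```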

  12. Score of a hypothesis. A hypothesis consists of a model, its position, and the feature assignments. Given the observed features (geometry + appearance) and the database of models, Bayes' rule factors the probability of the hypothesis into a consistency term and a hypothesis-probability term, up to a constant.
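Written out with assumed notation (h for the hypothesis, i.e. model + position + assignments; O for the observed features; D for the database of models), the decomposition implied by the slide is:

```latex
P(h \mid O, D)
  = \frac{P(O \mid h, D)\, P(h \mid D)}{P(O \mid D)}
  \;\propto\;
  \underbrace{P(O \mid h, D)}_{\text{consistency}}
  \;\underbrace{P(h \mid D)}_{\text{hypothesis probability}}
```

The denominator P(O | D) does not depend on h, which is the constant mentioned on the slide.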

  13. Score of a hypothesis
  • Consistency between observations and hypothesis: geometry and appearance terms for foreground features, and geometry and appearance terms for 'null' assignments.
  • Probability of the number of clutter detections.
  • Probability of detecting the indicated model features.
  • Prior on the pose of the given model.
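A minimal sketch of how these terms might combine into a single log-score; the function interfaces and the Poisson model for the number of clutter detections are assumptions for illustration, not the authors' implementation:

```python
import math

def log_hypothesis_score(assignments, pose_log_prior, clutter_rate, detect_prob,
                         fg_geom_logp, fg_app_logp, bg_geom_logp, bg_app_logp):
    """Combine the per-term log-probabilities listed on the slide.

    assignments:  list of (scene_feature, model_feature) pairs, where
                  model_feature is None for a 'null' (clutter) assignment.
    clutter_rate: expected number of clutter detections (assumed Poisson, > 0).
    detect_prob:  probability that an assigned model feature is detected.
    fg_*/bg_*:    callables returning geometry/appearance log-probabilities.
    """
    score = pose_log_prior                        # prior on the pose of the model
    n_clutter = 0
    for scene_f, model_f in assignments:
        if model_f is None:                       # 'null' assignment: background terms
            score += bg_geom_logp(scene_f) + bg_app_logp(scene_f)
            n_clutter += 1
        else:                                     # foreground geometry + appearance
            score += fg_geom_logp(scene_f, model_f) + fg_app_logp(scene_f, model_f)
            score += math.log(detect_prob)        # the model feature was detected
    # log-probability of observing n_clutter clutter detections (Poisson assumption)
    score += n_clutter * math.log(clutter_rate) - clutter_rate - math.lgamma(n_clutter + 1)
    return score
```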

  14. Efficient matching process

  15. Scheduling – inspired by A* [Pearl '84, Grimson '87]
  (Figure: a search tree over partial hypotheses. The root is the empty hypothesis, with scene features and no assignment done; each level adds one assignment, either to a model feature or to 'null'. The score of a node's 'perfect completion' is an admissible heuristic used as a guide for the search, and makes partial hypotheses comparable.)
  To increase computational efficiency, at each node only a fixed number of sub-branches is searched (this forces termination), and the most promising branches are explored first.
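A minimal best-first search sketch in the spirit of this slide, using a priority queue keyed by the optimistic 'perfect completion' score; the callable interfaces and the fixed branch limit are illustrative assumptions, not the authors' code:

```python
import heapq

def best_first_search(initial_hypotheses, expand, heuristic, is_complete,
                      max_branches=5):
    """Expand the most promising partial hypothesis first.

    expand(h)      -> candidate extensions of h (one additional assignment each)
    heuristic(h)   -> score of the best possible ('perfect') completion of h;
                      optimistic, so partial and complete hypotheses can be compared
    is_complete(h) -> True once every scene feature has an assignment
    max_branches   -> only this many sub-branches are kept per node (forces termination)
    """
    # heapq is a min-heap, so store negated scores to pop the best hypothesis first.
    queue = [(-heuristic(h), i, h) for i, h in enumerate(initial_hypotheses)]
    heapq.heapify(queue)
    counter = len(queue)
    while queue:
        _, _, h = heapq.heappop(queue)
        if is_complete(h):
            return h                               # most promising complete hypothesis
        # keep only a fixed number of the most promising extensions of this node
        for child in sorted(expand(h), key=heuristic, reverse=True)[:max_branches]:
            counter += 1
            heapq.heappush(queue, (-heuristic(child), counter, child))
    return None
```

The branch limit trades completeness of the search for speed, matching the slide's note that it forces termination.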

  16. Recognition: the first match. With no clue regarding geometry, the first match is based on appearance only. (Figure: the best and second-best appearance matches between features of the new scene and the models from the database initialize the hypotheses queue.)
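A hedged sketch of seeding the queue purely on appearance, assuming features carry descriptor vectors compared by Euclidean distance (the matching criterion actually used by the authors is not specified here):

```python
import numpy as np

def seed_hypotheses(scene_descriptors, model_descriptors, k=2):
    """Return the k best appearance matches per scene feature, best first.

    scene_descriptors: (n, d) array of descriptors from the new scene.
    model_descriptors: list of (model_id, (m, d) array) pairs from the database.
    Returns (distance, scene_index, model_id, model_index) tuples sorted by distance.
    """
    seeds = []
    for model_id, descs in model_descriptors:
        # Euclidean distance between every scene/model descriptor pair
        dists = np.linalg.norm(scene_descriptors[:, None, :] - descs[None, :, :], axis=2)
        for i in range(dists.shape[0]):
            for j in np.argsort(dists[i])[:k]:     # best and second-best match
                seeds.append((float(dists[i, j]), i, model_id, int(j)))
    return sorted(seeds)                           # initialization of the hypotheses queue
```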

  17. Scheduling – promising branches first. (Figure: the updated hypotheses queue; the most promising hypothesis is extended next by matching a further feature of the new scene against the models from the database.)

  18. Experiments

  19. Toys database – models 153 model images

  20. Toys database – test images (scenes): 90 test images, containing multiple objects or a different view of a model.

  21. Kitchen database – models 100 model images

  22. Kitchen database – test images • 80 test images • 0-9 models / test image

  23. Examples. (Figure: test image and identified model, shown side by side for our system and for Lowe's method.) Lowe's method implemented using [Lowe '97, '99, '01, '03].

  24. Performance evaluation (test images hand-labeled before the experiments)
  a. Object found, correct pose → detection
  b. Object found, incorrect pose → false alarm
  c. Wrong object found → false alarm
  d. Object not found → non-detection
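A small sketch of how each labeled object could be scored against the recognizer's output using the four outcomes above; the function and argument names are placeholders:

```python
def classify_outcome(found_object, true_object, pose_correct):
    """Map one detection attempt to the categories a-d listed on the slide."""
    if found_object is None:
        return "non-detection"            # d. object not found
    if found_object != true_object:
        return "false alarm"              # c. wrong object found
    if not pose_correct:
        return "false alarm"              # b. object found, incorrect pose
    return "detection"                    # a. object found, correct pose
```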

  25. Results – Toys images. • 153 model images • 90 test images • 0-5 models / test image. (Figure: matches between scenes (test images) and models (database).) • 80% recognition with 0.2 false alarms per test set. • Lower false alarm rate than Lowe's system.

  26. Results – Kitchen images • 100 training images • 80 test images • 0-9 models / test image • 254 objects to be detected • Achieves 77% recognition rate with 0 false alarms

  27. Conclusions
  • Unified treatment, best of both worlds: a probabilistic interpretation of Lowe ['99, '04], and an extension of [Burl, Weber, Fergus '96-'03] to many features, many models, and one-shot learning.
  • Higher performance in a comparison with Lowe ['99, '04].
  • Future work: categories.
