Recognition by Probabilistic Hypothesis Construction
P. Moreels, M. Maire, P. Perona, California Institute of Technology
Background
• Rich features: Fischler & Elschlager 1973; v.d. Malsburg et al. '93; Lowe '99, '04
• Probabilistic constellations, categories: Burl et al. '96; Weber et al. '00; Fergus et al. '03
• Efficient matching: Huttenlocher & Ullman 1990; Lowe '99, '04
• Goal: rich features, probabilistic, fast learning, efficient matching
Outline
Objective: individual object recognition
• Background: D. Lowe's system and the constellation model
• Hypothesis and score
• Scheduling of matches
• Experiments: comparison with D. Lowe's system
Lowe's recognition system [Lowe '99, '04] (figure: features from model images matched against a test image)
Constellation model (Burl '96, Weber '00, Fergus '03)
Pros and cons
Lowe's recognition system
+ Many parts (redundancy), learns from 1 image, fast
- Manual tuning of parameters, rigid planar objects, sensitive to clutter
Constellation model
+ Principled detection/recognition, learns parameters from data, models clutter, occlusion, distortions
- High number of parameters (O(n²)), 5-7 parts per model, many training examples needed, learning expensive
Reducing degrees of freedom
1. Common reference frame: express feature positions relative to the position of model m ([Lowe '99], [Huttenlocher '90])
2. Share parameters across parts ([Schmid '97])
3. Use prior information learned on foreground and background ([Fei-Fei '03])
Parameters and priors (based on [Fergus '03], [Burl '98])
• Constellation model – foreground: Gaussian shape pdf, Gaussian relative scale pdf (over log(scale)), Gaussian part appearance pdf, per-part probability of detection (e.g. 0.8, 0.9, 0.75, 0.8); clutter: Gaussian background appearance pdf.
• Sharing parameters – foreground: Gaussian conditional shape pdf, Gaussian relative scale pdf (over log(scale)), Gaussian part appearance pdf, per-part probability of detection; clutter: Gaussian background appearance pdf.
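A minimal sketch (not from the talk) of how such Gaussian foreground and clutter densities might be evaluated for one feature; the descriptor dimension, variances, and detection probability below are illustrative assumptions.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log density of an isotropic Gaussian (illustrative helper)."""
    x, mean = np.atleast_1d(x), np.atleast_1d(mean)
    d = x.size
    return -0.5 * (d * np.log(2 * np.pi * var) + np.sum((x - mean) ** 2) / var)

# Hypothetical parameters: one model part with an appearance mean/variance,
# a relative log-scale mean/variance, and a probability of detection.
part = {"app_mean": np.zeros(128), "app_var": 0.05,
        "logscale_mean": 0.0, "logscale_var": 0.25, "p_detect": 0.8}
background = {"app_mean": np.zeros(128), "app_var": 1.0}   # broad clutter density

feature_appearance = np.random.randn(128) * 0.2            # observed descriptor (toy)
feature_logscale = 0.1                                      # log(scale) relative to the model

fg_score = (gaussian_logpdf(feature_appearance, part["app_mean"], part["app_var"])
            + gaussian_logpdf(feature_logscale, part["logscale_mean"], part["logscale_var"])
            + np.log(part["p_detect"]))
bg_score = gaussian_logpdf(feature_appearance, background["app_mean"], background["app_var"])

print("foreground vs clutter log-likelihood:", fg_score, bg_score)
```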
Hypotheses – feature assignments: an interpretation of a new scene (test image) assigns each detected feature either to a feature of one of the models from the database or to clutter.
Hypotheses – model position: Θ = affine transformation mapping a model from the database into the new scene (test image).
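To make "Θ = affine transformation" concrete, here is a small sketch (assumed notation, not the authors' code) that maps model feature positions into the scene with a 2×2 linear part A and a translation t.

```python
import numpy as np

def apply_affine(theta, points):
    """Map Nx2 model feature positions into the scene: x' = A x + t."""
    A, t = theta
    return points @ A.T + t

# Hypothetical pose: 10-degree rotation, 1.2x scale, and a translation.
angle, scale = np.deg2rad(10), 1.2
A = scale * np.array([[np.cos(angle), -np.sin(angle)],
                      [np.sin(angle),  np.cos(angle)]])
t = np.array([50.0, 20.0])

model_points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0]])
print(apply_affine((A, t), model_points))
```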
Score of a hypothesis
A hypothesis h consists of a model, its position, and the feature assignments; the observations O are the detected features (geometry + appearance), and D is the database of models. By Bayes' rule,
$P(h \mid O, D) = \frac{P(O \mid h, D)\, P(h \mid D)}{P(O \mid D)}$
where the denominator is constant across hypotheses, the first factor measures the consistency between observations and hypothesis, and the second is the hypothesis probability.
Score of a hypothesis
• Consistency between observations and hypothesis, P(O | h, D): geometry and appearance terms for the foreground features, and geometry and appearance terms for the 'null' (clutter) assignments.
• Hypothesis probability, P(h | D): probability of the number of clutter detections, probability of detecting the indicated model features, and a prior on the pose of the given model.
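Read as log-probabilities, the score is a sum of these terms. The sketch below is illustrative bookkeeping only: the per-term values are made up, and a Poisson model for the clutter count is an assumption, not necessarily the paper's choice.

```python
import math

def hypothesis_log_score(fg_terms, null_terms, n_clutter, p_detect_terms,
                         log_pose_prior, clutter_rate=5.0):
    """Illustrative log P(O | h) + log P(h) bookkeeping (not the paper's exact densities).

    fg_terms / null_terms: per-feature (log p(geometry), log p(appearance)) pairs.
    n_clutter: number of features assigned to clutter (Poisson count model assumed here).
    p_detect_terms: detection probabilities of the model features used by the hypothesis.
    """
    consistency = (sum(lg + la for lg, la in fg_terms)
                   + sum(lg + la for lg, la in null_terms))
    log_clutter_count = (n_clutter * math.log(clutter_rate) - clutter_rate
                         - math.lgamma(n_clutter + 1))
    log_detections = sum(math.log(p) for p in p_detect_terms)
    return consistency + log_clutter_count + log_detections + log_pose_prior

score = hypothesis_log_score(
    fg_terms=[(-1.2, -3.4), (-0.8, -2.9)],   # two foreground assignments
    null_terms=[(-4.0, -5.0)],               # one 'null' (clutter) assignment
    n_clutter=1,
    p_detect_terms=[0.8, 0.75],
    log_pose_prior=-2.0)
print(score)
```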
Scheduling – inspired by A* (Pearl '84, Grimson '87)
• Start from an empty hypothesis: scene features with no assignments made.
• Extend a hypothesis one assignment at a time, each scene feature being matched to a model feature or given a 'null' assignment.
• Each partial hypothesis is scored together with its perfect completion, an admissible heuristic used as a guide for the search, so that hypotheses with different numbers of assignments can be compared.
• To increase computational efficiency, only a fixed number of sub-branches is searched at each node (which forces termination) and the most promising branches are explored first, as sketched below.
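A compact sketch of the A*-style scheduling idea under simplifying assumptions (scene features processed in a fixed order, generic scoring callbacks): partial hypotheses sit in a priority queue keyed by their score plus an optimistic completion bound, and only a fixed number of extensions is tried per node. The function names and toy scores are hypothetical.

```python
import heapq
import itertools

def best_first_search(scene_features, candidate_assignments, score_increment,
                      optimistic_bound, max_branches=3):
    """Best-first expansion of partial hypotheses (illustrative, not the authors' code).

    candidate_assignments(feature) -> possible assignments (None = 'null'/clutter)
    score_increment(assignment, partial) -> log-probability gained by the assignment
    optimistic_bound(partial, remaining) -> admissible estimate of the best completion
    """
    counter = itertools.count()          # tie-breaker so hypotheses are never compared
    start = ()                           # empty hypothesis
    heap = [(-optimistic_bound(start, scene_features), next(counter), 0.0, start)]
    best = (float("-inf"), start)
    while heap:
        _, _, score, partial = heapq.heappop(heap)
        remaining = scene_features[len(partial):]
        if not remaining:                # complete hypothesis: keep the best one
            if score > best[0]:
                best = (score, partial)
            continue
        feature = remaining[0]
        extensions = sorted(candidate_assignments(feature),
                            key=lambda a: score_increment(a, partial),
                            reverse=True)[:max_branches]
        for assignment in extensions:    # only the most promising sub-branches
            new_partial = partial + (assignment,)
            new_score = score + score_increment(assignment, partial)
            priority = -(new_score + optimistic_bound(new_partial, remaining[1:]))
            heapq.heappush(heap, (priority, next(counter), new_score, new_partial))
    return best

# Toy usage: 2 scene features; assignments are hypothetical part ids or None (clutter).
feats = ["f1", "f2"]
incr = lambda a, p: 0.0 if a is None else 1.0      # prefer model assignments
bound = lambda p, rem: float(len(rem))             # optimistic: +1 per remaining feature
print(best_first_search(feats, lambda f: [None, "partA", "partB"], incr, bound))
```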
Recognition: the first match
With no clue yet regarding geometry, the first match is based on appearance alone: the best and second-best appearance matches between scene features and features of the models from the database initialize the hypotheses queue (sketched below).
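One way to picture this appearance-only seeding (a sketch assuming SIFT-like descriptor vectors and Euclidean distance, not necessarily the system's exact matching criterion): for each scene feature, take the nearest and second-nearest database descriptors.

```python
import numpy as np

def initial_matches(scene_desc, model_desc, k=2):
    """For each scene descriptor, return indices of the k closest model descriptors."""
    # scene_desc: (n, d) array, model_desc: (m, d) array of appearance descriptors
    d2 = ((scene_desc[:, None, :] - model_desc[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]           # best and second-best match per feature

rng = np.random.default_rng(0)
scene = rng.normal(size=(5, 128))                   # 5 scene features (toy descriptors)
models = rng.normal(size=(20, 128))                 # 20 database features
print(initial_matches(scene, models))               # seeds for the hypotheses queue
```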
Scheduling – promising branches first
After each new match the hypotheses queue is updated, and the most promising partial hypothesis is extended next by matching a further scene feature against the models from the database.
Toys database – models 153 model images
Toys database – test images (scenes): 90 test images, each containing multiple objects or a different view of a model.
Kitchen database – models 100 model images
Kitchen database – test images • 80 test images • 0-9 models / test image
Examples: side-by-side results of our system and Lowe's method (test image and identified model for each). Lowe's method implemented following [Lowe '97, '99, '01, '03].
Performance evaluation (test images hand-labeled before the experiments)
a. Object found, correct pose: detection
b. Object found, incorrect pose: false alarm
c. Wrong object found: false alarm
d. Object not found: non-detection
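A small sketch of how these four per-object outcomes could be tallied against the hand-labeled ground truth; the label names are assumptions for illustration.

```python
from collections import Counter

def tally(outcomes):
    """Map the four per-object outcomes to detection / false-alarm / non-detection counts."""
    mapping = {"correct_pose": "detection",
               "incorrect_pose": "false alarm",
               "wrong_object": "false alarm",
               "not_found": "non-detection"}
    return Counter(mapping[o] for o in outcomes)

print(tally(["correct_pose", "correct_pose", "incorrect_pose", "not_found"]))
```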
Results – Toys images
• 153 model images, 90 test images, 0-5 models per test image
• 80% recognition rate with 0.2 false alarms per test set
• Lower false alarm rate than Lowe's system
Results – Kitchen images • 100 training images • 80 test images • 0-9 models / test image • 254 objects to be detected • Achieves 77% recognition rate with 0 false alarms
Conclusions
• Unified treatment, best of both worlds:
• A probabilistic interpretation of Lowe ['99, '04].
• An extension of [Burl, Weber, Fergus '96-'03] to many features, many models, and one-shot learning.
• Higher performance in comparison with Lowe ['99, '04].
• Future work: categories.