On Visual Recognition Jitendra Malik UC Berkeley
From Pixels to Perception • [Figure: outdoor wildlife scene with labeled regions (water, grass, sand, tiger) and labeled tiger parts (head, eye, legs, tail, mouth, back, shadow)]
Defining Categories • What is a “visual category”? • Not semantic • Working hypothesis: Two instances of the same category must have “correspondence” (i.e. one can be morphed into the other) • e.g. Four-legged animals • Biederman’s estimate of 30,000 basic visual categories
Facts from Biological Vision • Timing • Abstraction/Generalization • Taxonomy and Partonomy
Detection can be very fast • On a task of judging animal vs. no animal, humans can make mostly correct saccades within 150 ms (Kirchner & Thorpe, 2006) • Comparable to the cumulative synaptic delays along the retina, LGN, V1, V2, V4, IT pathway • This doesn’t rule out feedback, but it shows that feed-forward processing alone is very powerful
As Soon as You Know It Is There, You Know What It Is Grill-Spector & Kanwisher, Psychological Science, 2005
Abstraction/Generalization • Configurations of oriented contours • Considerable tolerance for small deformations
Attneave’s Cat (1954): Line drawings convey most of the information
Taxonomy and Partonomy • Taxonomy: e.g. cats are in the family Felidae, which in turn is in the class Mammalia • Recognition can be at multiple levels of categorization, or be identification at the level of specific individuals, as in faces • Partonomy: objects have parts, which have subparts, and so on. The human body contains the head, which in turn contains the eyes • These notions apply equally well to scenes and to activities • Psychologists have argued that there is a “basic level” at which categorization is fastest (Eleanor Rosch et al.) • In a partonomy, each level contributes useful information for recognition
Matching with Exemplars • Use exemplars as templates • Correspond features between query and exemplar • Evaluate similarity score • [Figure: a query image matched against a database of templates; the best-matching template is a helicopter]
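The exemplar-matching pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration: templates and the query are reduced to fixed-length feature vectors and compared by cosine similarity, whereas a real system would first establish feature correspondences.

```python
import numpy as np

def match_exemplars(query, templates, labels):
    """Return the label of the most similar template (cosine similarity)."""
    q = query / np.linalg.norm(query)
    scores = [t @ q / np.linalg.norm(t) for t in templates]
    return labels[int(np.argmax(scores))]

# toy feature vectors standing in for stored exemplars
templates = [np.array([1.0, 0.0, 0.0]),   # e.g. a "helicopter" exemplar
             np.array([0.0, 1.0, 0.0])]   # e.g. a "tiger" exemplar
labels = ["helicopter", "tiger"]
print(match_exemplars(np.array([0.9, 0.1, 0.0]), templates, labels))  # → helicopter
```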
3D objects using multiple 2D views • View selection algorithm from Belongie, Malik & Puzicha (2001)
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Shape Context (Belongie, Malik & Puzicha, 2001) • Compact representation of the distribution of points relative to each point • Count the number of points inside each log-polar bin, e.g. Count = 4, …, Count = 10
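A minimal sketch of the descriptor for a single reference point: histogram the remaining points over log-polar bins. The 5 × 12 bin layout follows the paper's defaults; the radial normalization here (by the mean distance from the reference point, rather than the mean pairwise distance over the whole shape) is a simplifying assumption.

```python
import numpy as np

def shape_context(points, ref, n_r=5, n_theta=12):
    """Log-polar histogram of points relative to a reference point."""
    d = points - ref
    r = np.hypot(d[:, 0], d[:, 1])
    mask = r > 0                      # exclude the reference point itself
    r, theta = r[mask], np.arctan2(d[mask, 1], d[mask, 0])
    r = r / r.mean()                  # scale-normalize the radii
    # log-spaced radial edges spanning the observed radii
    r_edges = np.logspace(np.log10(r.min() * 0.99),
                          np.log10(r.max() * 1.01), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta), int)
    np.add.at(hist, (r_bin, t_bin), 1)
    return hist

pts = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [2, 2]], float)
h = shape_context(pts, pts[0])
print(h.sum())  # → 4: every other point falls in exactly one bin
```

Matching two shapes then reduces to comparing such histograms (the paper uses a chi-squared cost plus bipartite point assignment).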
Geometric Blur (Local Appearance Descriptor), Berg & Malik ’01 • Compute sparse channels from the image • Extract a patch in each channel • Apply a spatially varying blur and sub-sample • The descriptor is robust to small affine distortions • [Figure: geometric blur descriptor on an idealized signal]
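The spatially varying blur step can be sketched as follows: blur the channel with a Gaussian whose width grows with distance from the feature point, so small affine distortions far from the point are tolerated. The linear sigma schedule (`alpha`, `beta`), the level count, and the FFT-based filtering are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Gaussian filtering in the frequency domain (periodic boundary)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    g = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * g).real

def geometric_blur(channel, center, alpha=0.5, beta=1.0, n_levels=4):
    """Blur a sparse channel with sigma increasing away from `center`."""
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])
    sigmas = alpha * np.linspace(0, r.max(), n_levels) + beta
    stack = np.stack([gaussian_blur_fft(channel, s) for s in sigmas])
    # per pixel, take the blur level whose sigma matches that pixel's radius
    level = np.clip(np.rint(r / r.max() * (n_levels - 1)).astype(int),
                    0, n_levels - 1)
    return np.take_along_axis(stack, level[None], axis=0)[0]

channel = np.zeros((32, 32))
channel[16, 16] = 1.0        # a single sparse "edge" response
out = geometric_blur(channel, (16, 16))
```

Sub-sampling the blurred result at a sparse set of locations around the feature point then yields the final descriptor.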
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Modeling shape variation in a category • D’Arcy Thompson: On Growth and Form, 1917 • studied transformations between shapes of organisms
Matching Example • [Figure: model shape matched to target shape]
Handwritten Digit Recognition • MNIST 600,000 (with distortions): LeNet-5: 0.8%; SVM: 0.8%; boosted LeNet-4: 0.7% • MNIST 60,000: linear: 12.0%; 40 PCA + quadratic: 3.3%; 1,000 RBF + linear: 3.6%; K-NN: 5%; K-NN (deskewed): 2.4%; K-NN (tangent distance): 1.1%; SVM: 1.1%; LeNet-5: 0.95% • MNIST 20,000: K-NN with shape context matching: 0.63%
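The best result on the slide is a plain K-NN classifier whose distance is the shape-context matching cost. A minimal K-NN sketch on toy data, with Euclidean distance standing in for the matching cost:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote over the k nearest training points (Euclidean)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# toy data: two clusters standing in for two digit classes
x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(x, y, np.array([0.05, 0.05]), k=3))  # → 0
```

Swapping the Euclidean distance for a deformation-aware cost (tangent distance, shape context) is what drives the error down in the table above.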
EZ-Gimpy Results • 171 of 192 images correctly identified: 92% • Example words read: horse, spade, smile, join, canvas, here
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Discriminative Learning (Frome, Singer, Malik, 2006) • Learn weights on patch features in training images • Yields distance functions from training images to any other images • Used for browsing, retrieval, classification • [Figure: retrieval examples, 83/400 and 79/400]
Triplets • Learn from relative similarity: for a triplet (focal image i, similar image j, dissimilar image k), want distance(i, j) < distance(i, k) • Compare image-to-image distances • Image-to-image distances are based on feature-to-image distances
Focal image version • For focal image i with comparison images j and k, form x_ijk = d_ik − d_ij, the elementwise difference of the feature-to-image distance vectors from i • [Figure: worked numeric example of the distance vectors]
large-margin formulation • slack variables like soft-margin SVM • w constrained to be positive • L2 regularization
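The formulation on this slide can be sketched as a hinge loss over the triplet difference vectors x_ijk, with L2 regularization and a nonnegativity projection on w. This is an illustrative projected-subgradient solver in the spirit of Frome, Singer & Malik (2006); the hyperparameters (`lam`, `lr`, `epochs`) and toy data are assumptions, not values from the talk.

```python
import numpy as np

def learn_weights(X, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam*||w||^2 + sum_t max(0, 1 - w @ x_t)  s.t.  w >= 0.

    Rows of X are triplet difference vectors x_ijk = d_ik - d_ij; a
    large positive margin w @ x_ijk means i is closer to j than to k.
    """
    w = np.ones(X.shape[1])
    for _ in range(epochs):
        margins = X @ w
        # subgradient: regularizer minus the sum of violated triplets
        grad = 2 * lam * w - X[margins < 1].sum(axis=0)
        w = np.maximum(w - lr * grad, 0.0)   # project onto w >= 0
    return w

# toy triplets: feature 0 is consistently informative, feature 1 is noise
X = np.array([[1.0, -0.2], [0.8, 0.3], [1.2, 0.1]])
w = learn_weights(X)
```

The learned w down-weights the noisy feature, which is the point of the discriminative step: the margin constraints pick out which patch features actually separate similar from dissimilar images.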
Caltech-101 [Fei-Fei et al. 04] • 102 classes, 31-300 images/class
Retrieval results • [Figure: example query image with retrieved images]
Caltech-101 classification results (see Manik Varma’s talks for the best results yet)
Conclusion • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features • Integrating Perceptual Organization and Recognition