The Future of Image Search Jitendra Malik UC Berkeley
The Motivation… • There are now billions of images on the web and in collections such as Flickr. • Suppose I want to find pictures of monkeys.
Google Image Search -- monkey Words are not enough…
Flickr search for the tag "monkey". Even with humans doing the labeling, the data is extremely noisy -- context, polysemy, photo sets. Tags are not enough either!
Content-based Image Retrieval, circa 1990s • QBIC (IBM, 1993): color & color layout • VisualSEEk (Smith & Chang, 1996) • Walrus (Natsev et al., 1999): region color and texture • Blobworld (Carson et al., 1999) • NeTra (Ma et al., 1999)
Color and Texture Models • Color features: histogram of the colors that appear in the image • Texture features: histograms of the responses of the image to a bank of 16 filters (the image convolved with each filter)
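As a rough illustration of these first-generation features, here is a minimal Python sketch of a joint color histogram and filter-bank texture histograms. The bin counts and the choice of filter bank are illustrative assumptions, not the features used by any particular system listed above.

```python
import numpy as np
from scipy.ndimage import convolve

def color_histogram(rgb_image, bins=8):
    """Joint RGB histogram: fraction of pixels falling in each color bin."""
    # rgb_image: H x W x 3 array with values in [0, 255]
    pixels = rgb_image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / pixels.shape[0]      # normalize to sum to 1

def texture_histograms(gray_image, filter_bank, bins=16):
    """Histogram of the responses of the image convolved with each filter."""
    feats = []
    for f in filter_bank:                      # e.g. oriented derivative filters
        response = convolve(gray_image.astype(float), f)
        hist, _ = np.histogram(response, bins=bins)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)
```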
The Semantic Gap • First-generation CBIR systems were based on color and texture; however, these do not capture what users really care about: conceptual or semantic categories. • Perception studies suggest that the most important cue to visual categorization is shape. This was ignored in earlier work (because it was hard!). • Over the last 5-10 years, we have seen rapid progress in capturing shape.
The Research Program.. • Automatically generate annotations corresponding to object labels or activities in video • Combine these with other metadata such as text
From Pixels to Perception: an image annotated at multiple levels -- regions (water, grass, sand, tiger), parts (head, eye, legs, tail, mouth, shadow), and scene-level labels (outdoor, wildlife).
Modeling shape variation in a category • D'Arcy Thompson, On Growth and Form (1917), studied transformations between the shapes of organisms
Attneave's Cat (1954): line drawings convey most of the information
Taxonomy and Partonomy • Taxonomy: e.g., cats are in the family Felidae, which in turn is in the class Mammalia. • Recognition can be at multiple levels of categorization, or be identification at the level of specific individuals, as in faces. • Partonomy: objects have parts, which have subparts, and so on. The human body contains the head, which in turn contains the eyes. The same is true of scenes. • Psychologists have argued that there is a "basic level" at which categorization is fastest (Eleanor Rosch et al.). Biederman has estimated the number of basic visual categories at ~30,000. • In a partonomy, each level contributes useful information for recognition.
Matching with Exemplars • Use exemplars as templates • Correspond features between query and exemplar • Evaluate similarity score (figure: database of templates and a query image; the best matching template is a helicopter)
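A minimal Python sketch of the exemplar-matching pipeline: match each query descriptor to its nearest descriptor in an exemplar, sum the matching costs, and report the label of the best-scoring template. The greedy nearest-descriptor correspondence here is a simplification of the actual correspondence algorithms (shape context, geometric blur) discussed below, and the function names are mine.

```python
import numpy as np

def match_score(query_desc, exemplar_desc):
    """Greedy correspondence: match each query descriptor to its nearest
    exemplar descriptor and sum the costs (lower = more similar)."""
    # query_desc, exemplar_desc: n x d arrays of local descriptors
    dists = np.linalg.norm(query_desc[:, None, :] - exemplar_desc[None, :, :],
                           axis=2)
    return dists.min(axis=1).sum()

def classify_by_exemplar(query_desc, exemplar_db):
    """Return the label of the best-matching exemplar template."""
    # exemplar_db: list of (label, descriptor array) pairs
    scores = [(match_score(query_desc, d), label) for label, d in exemplar_db]
    return min(scores)[1]
```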
3D objects using multiple 2D views. View selection algorithm from Belongie, Malik & Puzicha (2001).
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Shape Context (Belongie, Malik & Puzicha, 2001) • For each point, count the number of other points that fall inside each log-polar bin (e.g., count = 4 in one bin, count = 10 in another) • A compact representation of the distribution of points relative to each point
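A simplified Python sketch of the shape context descriptor: for each point, histogram the remaining points in log-polar (radius x angle) bins. The bin layout and scale normalization here are illustrative choices, not the exact parameters of Belongie, Malik & Puzicha (2001).

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """For each point, a histogram of the other points in log-polar bins
    (relative radius x angle), in the spirit of the shape context descriptor."""
    points = np.asarray(points, dtype=float)        # N x 2 sampled edge points
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]  # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    theta = np.arctan2(diff[..., 1], diff[..., 0])  # angles in (-pi, pi]

    mean_d = dist[dist > 0].mean()                  # normalize for scale
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)

    descriptors = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i                    # exclude the point itself
        h, _, _ = np.histogram2d(dist[i, mask], theta[i, mask],
                                 bins=[r_edges, t_edges])
        descriptors[i] = h
    return descriptors.reshape(n, -1)
```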
Geometric Blur (local appearance descriptor), Berg & Malik '01 • Compute sparse channels from the image • Extract a patch in each channel • Apply a spatially varying blur and sub-sample • The descriptor is robust to small affine distortions
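A rough Python sketch of the spatially varying blur and sub-sampling step: sample a sparse channel on rings around a feature point, with stronger blur at larger radii. The radii, sampling pattern, and blur schedule are assumptions for illustration, not the parameters of Berg & Malik's descriptor.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def geometric_blur_patch(channel, center, radii=(4, 8, 12, 16),
                         alpha=0.5, base_sigma=0.5, n_angles=8):
    """Sample a sparse channel around `center` with blur that grows
    linearly with distance from the center (spatially varying blur)."""
    cy, cx = center
    samples = []
    for r in radii:
        # blur the whole channel by the sigma appropriate for this radius
        blurred = gaussian_filter(channel.astype(float),
                                  sigma=base_sigma + alpha * r)
        for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if 0 <= y < channel.shape[0] and 0 <= x < channel.shape[1]:
                samples.append(blurred[y, x])
            else:
                samples.append(0.0)
    return np.array(samples)        # the sub-sampled descriptor
```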
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Modeling shape variation in a category • D’Arcy Thompson: On Growth and Form, 1917 • studied transformations between shapes of organisms
Matching example: model vs. target shapes (figure).
Handwritten Digit Recognition (MNIST test error rates)
• MNIST 600,000 (with distortions): LeNet-5 0.8%; SVM 0.8%; boosted LeNet-4 0.7%
• MNIST 60,000: linear classifier 12.0%; 40 PCA + quadratic 3.3%; 1000 RBF + linear 3.6%; K-NN 5%; K-NN (deskewed) 2.4%; K-NN (tangent distance) 1.1%; SVM 1.1%; LeNet-5 0.95%
• MNIST 20,000: K-NN with shape context matching 0.63%
EZ-Gimpy results: 171 of 192 images correctly identified (92%). Example words read correctly: horse, spade, smile, join, canvas, here.
Three Big Ideas • Correspondence based on local shape/appearance descriptors • Deformable Template Matching • Machine learning for finding discriminative features
Discriminative learning (Frome, Singer & Malik, 2006) • Learn weights on patch features in training images • This yields distance functions from training images to any other image • Used for browsing, retrieval, classification
Triplets: learn from relative similarity • Want distance(image i, image j) < distance(image i, image k) whenever image j is more similar to the focal image i than image k is • Compare image-to-image distances, where each image-to-image distance is built from feature-to-image distances
Focal image version • For a focal image i, the distance to any image j is a weighted sum of elementary feature-to-image distances: d_ij = w_i · x_ij (and likewise d_ik = w_i · x_ik) • Each triplet contributes a difference vector x_ijk = x_ik - x_ij, so the constraint d_ik - d_ij > 1 becomes w_i · x_ijk > 1 (the figure's numbers illustrate example per-feature distances)
large-margin formulation • slack variables like soft-margin SVM • w constrained to be positive • L2 regularization
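A minimal Python sketch of this kind of large-margin, triplet-based learning for one focal image: hinge loss on the margin between the dissimilar and similar distances, L2 regularization, and projection onto w >= 0. It is a projected-subgradient caricature of the formulation in Frome, Singer & Malik (2006), not their optimizer; all names and hyperparameters are mine.

```python
import numpy as np

def learn_focal_weights(x_sim, x_dis, lam=0.01, lr=0.1, n_iters=500):
    """Learn non-negative weights w for one focal image so that
    w . x_dis > w . x_sim + 1 for each training triplet (margin of 1),
    with hinge loss (slack) and L2 regularization, via projected subgradient."""
    # x_sim[t], x_dis[t]: elementary feature-to-image distance vectors to the
    # similar image j and the dissimilar image k of triplet t (both length m).
    x_sim, x_dis = np.asarray(x_sim, float), np.asarray(x_dis, float)
    m = x_sim.shape[1]
    w = np.zeros(m)
    for _ in range(n_iters):
        margins = 1.0 - (x_dis - x_sim) @ w     # hinge: want d_ik - d_ij >= 1
        active = margins > 0                    # triplets violating the margin
        # subgradient of (lam/2)*||w||^2 + mean hinge loss
        grad = lam * w - (x_dis - x_sim)[active].sum(axis=0) / len(x_sim)
        w -= lr * grad
        w = np.maximum(w, 0.0)                  # project onto w >= 0
    return w
```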
Caltech-101 [Fei-Fei et al. 04] • 102 classes, 31-300 images/class
Retrieval example: a query image and its retrieved results (figure).
Caltech-101 classification results (combining classifiers does better still -- Varma & Ray)
So, what is missing? • These are isolated objects on simple backgrounds; real objects are part of scenes. • The general case has been solved for some categories, e.g. faces.
Face Detection Schneiderman & Kanade (CMU), 2000… Results on various images submitted to the CMU on-line face detector
Detection: Is this an X? Ask this question over and over again, varying position, scale, and category… Speedups -- hierarchical search, early reject, feature sharing, cueing -- but the same underlying question!
Detection: Is this an X? Classifier trade-offs:
• Boosted decision trees / cascades: + very fast evaluation, - slow training (especially multi-class)
• Linear SVM: + fast evaluation, + fast training, - need to find good features
• Non-linear kernelized SVM (this work): + better classification accuracy than linear, · medium training cost, - slow evaluation
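A bare-bones Python sketch of sliding-window detection, the "Is this an X?" loop over positions and scales. The window size, stride, scale set, and scoring function are placeholders; a real detector would add the speedups listed above (cascades, early reject, feature sharing).

```python
import numpy as np

def sliding_window_detect(image, score_fn, window=(64, 64), stride=8,
                          scales=(1.0, 0.75, 0.5), threshold=0.0):
    """Slide a fixed-size window over resized copies of a grayscale image and
    keep every window whose classifier score exceeds the threshold."""
    detections = []
    h_win, w_win = window
    for s in scales:
        # nearest-neighbor resize (illustrative; use a proper resampler in practice)
        hs, ws = int(image.shape[0] * s), int(image.shape[1] * s)
        rows = (np.arange(hs) / s).astype(int).clip(0, image.shape[0] - 1)
        cols = (np.arange(ws) / s).astype(int).clip(0, image.shape[1] - 1)
        resized = image[np.ix_(rows, cols)]
        for y in range(0, hs - h_win + 1, stride):
            for x in range(0, ws - w_win + 1, stride):
                patch = resized[y:y + h_win, x:x + w_win]
                score = score_fn(patch)         # e.g. an SVM decision value
                if score > threshold:
                    # map the window back to original-image coordinates
                    detections.append((y / s, x / s, h_win / s, w_win / s, score))
    return detections
```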
Classification Using Intersection Kernel Support Vector Machines is efficient. Subhransu Maji and Alexander C. Berg and Jitendra Malik. Proceedings of CVPR 2008, Anchorage, Alaska, June 2008. Software and more results available at http://www.cs.berkeley.edu/~smaji/projects/fiksvm/
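A Python sketch of the histogram intersection kernel and of the key observation behind this paper: because the kernel is additive and piecewise linear in each dimension, the SVM decision function can be evaluated from per-dimension sorted support-vector values and prefix sums, rather than a pass over every support vector. The class and its interface are my own simplification of that idea, not the released fiksvm code.

```python
import numpy as np

def intersection_kernel(a, b):
    """Histogram intersection kernel: sum of element-wise minima."""
    return np.minimum(a, b).sum()

class FastIKSVM:
    """Evaluate an intersection-kernel SVM decision function quickly.
    In each dimension d, h_d(s) = sum_i alpha_i*y_i*min(s, x_id) depends only
    on where s falls among the sorted support-vector values, so it can be
    computed with prefix sums plus a binary search."""

    def __init__(self, support_vectors, alpha_y, bias=0.0):
        X = np.asarray(support_vectors, float)       # n_sv x n_dims
        self.ay = np.asarray(alpha_y, float)         # alpha_i * y_i
        self.bias = bias
        self.order = np.argsort(X, axis=0)
        self.sorted_x = np.take_along_axis(X, self.order, axis=0)
        ay_sorted = self.ay[self.order]              # reorder coefficients per dim
        self.cum_ayx = np.cumsum(ay_sorted * self.sorted_x, axis=0)
        self.cum_ay = np.cumsum(ay_sorted, axis=0)

    def decision(self, x):
        score = self.bias
        total_ay = self.cum_ay[-1]
        for d, s in enumerate(np.asarray(x, float)):
            k = np.searchsorted(self.sorted_x[:, d], s, side='right')
            below = self.cum_ayx[k - 1, d] if k > 0 else 0.0
            above = total_ay[d] - (self.cum_ay[k - 1, d] if k > 0 else 0.0)
            score += below + s * above               # = sum_i ay_i*min(s, x_id)
        return score
```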
Support Vector Machines: linear separators (a.k.a. perceptrons)
Support Vector Machines: other possible solutions (many separating hyperplanes)
Support Vector Machines: which one is better, B1 or B2? How do you define "better"?
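The usual answer is the separator with the largest margin. Below is a small, self-contained sketch using scikit-learn (an assumed tool; the slides do not specify one) on synthetic two-class data, reporting the learned hyperplane and its margin 2/||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 2-D data: two linearly separable point clouds (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[-2, -2], size=(50, 2)),
               rng.normal(loc=[2, 2], size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Among all separating hyperplanes, a (hard-margin) linear SVM picks the one
# that maximizes the margin 2 / ||w||; large C approximates the hard margin.
clf = SVC(kernel='linear', C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)
print(f"w = {w}, b = {b:.3f}, margin = {margin:.3f}")
print("support vectors:", clf.support_vectors_)
```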