Chapter 14: Recognition • Scene understanding / visual object categorization • Pose clustering • Object recognition by local features • Image categorization • Bag-of-features models • Large-scale image search
Scene understanding (1) • Scene categorization • outdoor/indoor • city/forest/factory/etc.
Scene understanding (2) • Object detection • find pedestrians
Scene understanding (3) Figure: scene with labeled regions — sky, mountain, building, tree, banner, street lamp, market, people
Visual object categorization (1) Humans can distinguish roughly 10,000 to 30,000 object categories.
Visual object categorization (4) • Recognition is all about modeling variability • camera position • illumination • shape variations • within-class variations
Visual object categorization (5) Within-class variations (why are they chairs?)
Pose clustering (1) • Working in transformation space - main ideas: • Generate many hypotheses of the transformation between image and model, each built from a tuple of image and model features • Correct transformation hypotheses appear many times • Main steps: • Quantize the space of possible transformations • For each tuple of image and model features, solve for the optimal transformation that aligns the matched features • Record a "vote" in the corresponding transformation-space bin • Find the "peak" in transformation space
Pose clustering (2) Example: Rotation only. A pair of one scene segment and one model segment suffices to generate a transformation hypothesis.
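A minimal sketch of this rotation-only case in Python (the segment orientations below are made-up stand-ins): every (model, scene) segment pairing votes for the rotation angle that would align it, and the correct rotation shows up as the peak bin in the quantized transformation space.

```python
import numpy as np

# Hypothetical segment orientations (degrees); the scene is the model rotated by ~30 deg.
model_angles = np.array([10.0, 45.0, 80.0, 130.0])
scene_angles = np.array([40.0, 75.0, 110.0, 160.0])

n_bins = 36                                   # quantize rotation space into 10-degree bins
votes = np.zeros(n_bins, dtype=int)

for m in model_angles:
    for s in scene_angles:                    # every (model, scene) tuple generates a hypothesis
        theta = (s - m) % 360.0
        votes[int(theta // (360.0 / n_bins))] += 1

best_bin = votes.argmax()                     # "peak" in transformation space
print("estimated rotation ~", best_bin * 360.0 / n_bins, "degrees")
```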
Pose clustering (3) C.F. Olson: Efficient pose clustering using a randomized algorithm. IJCV, 23(2): 131-147, 1997. 2-tuples of image and model corner points are used to generate hypotheses. (a) The corners found in an image. (b) The four best hypotheses found with the edges drawn in. The nose of the plane and the head of the person do not appear because they were not in the models.
Object recognition by local features (1) • D.G. Lowe: Distinctive image features from scale-invariant keypoints. IJCV, 60(2): 91-110, 2004 • The SIFT features of training images are extracted and stored • For a query image: • Extract SIFT features • Efficient nearest-neighbor indexing • Pose clustering by Hough transform • For clusters with >2 keypoints (object hypotheses): determine the optimal affine transformation parameters by the least-squares method; geometric verification Figure: input image and stored image
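A hedged OpenCV sketch of the matching front end of this pipeline. The image paths are placeholders, and for brevity the Hough-transform pose clustering plus affine least-squares step of Lowe's method is replaced here by a RANSAC homography fit as the geometric-verification stage.

```python
import cv2
import numpy as np

# "model.png" / "query.png" are placeholder file names.
model = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_m, des_m = sift.detectAndCompute(model, None)   # stored image features
kp_q, des_q = sift.detectAndCompute(query, None)   # query image features

# Nearest-neighbour matching with Lowe's ratio test
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_q, des_m, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Geometric verification: RANSAC homography as a stand-in for Hough pose
# clustering followed by an affine least-squares fit.
if len(good) >= 4:
    src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("verified correspondences:", int(inliers.sum()))
```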
Object recognition by local features (2) Cluster of 3 corresponding feature pairs
Image categorization (1) • Training: training images → image features → classifier training (with training labels) → trained classifier • Testing: test image → image features → trained classifier → prediction (e.g. "Outdoor")
Image categorization (2) Global histogram (distribution): • color, texture, motion, … • histogram matching distance
Image categorization (3) Cars found by color histogram matching. See the chapter "Inhaltsbasierte Suche in Bilddatenbanken" (content-based search in image databases).
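A minimal sketch of such global colour-histogram matching with OpenCV. The file names are placeholders; correlation is used as the similarity score here, but chi-square or histogram intersection work just as well.

```python
import cv2

def colour_hist(path, bins=8):
    """Global HSV colour histogram, normalized and flattened to a feature vector."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

query = colour_hist("query_car.jpg")                       # placeholder query image
database = {name: colour_hist(name) for name in ["img1.jpg", "img2.jpg"]}

# Rank database images by histogram similarity to the query
ranked = sorted(database,
                key=lambda k: cv2.compareHist(query, database[k], cv2.HISTCMP_CORREL),
                reverse=True)
print(ranked)
```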
Bag-of-features models (1) Object → bag of 'words'
Bag-of-features models (2) Origin 1: text retrieval with an orderless document representation (frequencies of words from a dictionary), Salton & McGill (1983). Figure: US Presidential Speeches Tag Cloud, http://chir.ag/phernalia/preztags/
Bag-of-features models (3) • Origin 2: Texture recognition • Texture is characterized by the repetition of basic elements or textons • For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters
Bag-of-features models (4) Figure: textures represented as histograms over a universal texton dictionary
Bag-of-features models (5) We need to build a "visual" dictionary! G. Csurka et al.: Visual categorization with bags of keypoints. Proc. of ECCV Workshop on Statistical Learning in Computer Vision, 2004.
Bag-of-features models (6) Main step 1: Extract features (e.g. SIFT) …
Bag-of-features models (7) Main step 2: Learn visual vocabulary (e.g. using clustering) clustering
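A sketch of this vocabulary-learning step with k-means from scikit-learn. The random array below stands in for the stacked SIFT descriptors of the whole training set, and the vocabulary size of 200 is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for all 128-D SIFT descriptors collected from the training images.
rng = np.random.default_rng(0)
all_descriptors = rng.normal(size=(10000, 128)).astype(np.float32)

vocab_size = 200                                  # dictionary size is an important parameter
kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(all_descriptors)
vocabulary = kmeans.cluster_centers_              # (vocab_size, 128): the "visual words"
print(vocabulary.shape)
```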
Bag-of-features models (8) visual vocabulary (cluster centers)
Bag-of-features models (9) Example: learned codebook (appearance codebook). Source: B. Leibe
Bag-of-features models (10) Main step 3: Quantize features using “visual vocabulary”
Bag-of-features models (11) • Main step 4: Represent images by frequencies of “visual words” (i.e., bags of features)
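Steps 3 and 4 together as a small numpy sketch: each descriptor of an image is assigned to its nearest visual word, and the normalized word frequencies form the image's bag-of-features vector. The `vocabulary` and `descriptors` arrays are random stand-ins; in practice they come from the clustering and feature-extraction steps above.

```python
import numpy as np

rng = np.random.default_rng(0)
vocabulary = rng.normal(size=(200, 128))       # stand-in for the learned cluster centres
descriptors = rng.normal(size=(500, 128))      # stand-in for one image's SIFT descriptors

def bof_histogram(descriptors, vocabulary):
    # squared Euclidean distance from every descriptor to every visual word
    d2 = ((descriptors ** 2).sum(1)[:, None]
          - 2.0 * descriptors @ vocabulary.T
          + (vocabulary ** 2).sum(1)[None, :])
    words = d2.argmin(axis=1)                  # nearest-word assignment (quantization)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                   # normalized word-frequency histogram

print(bof_histogram(descriptors, vocabulary).shape)   # (200,) bag-of-features vector
```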
Bag-of-features models (12) Example: representation based on learned codebook
Bag-of-features models (13) Main step 5: Apply any classifier to the histogram feature vectors
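A sketch of step 5 with an SVM from scikit-learn; the random histograms and labels below are stand-ins for the real bag-of-features vectors and category labels of the training images.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((100, 200))           # 100 training images, 200-word histograms (stand-in)
y_train = rng.integers(0, 6, size=100)     # 6 category labels, e.g. Caltech6-style classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

X_test = rng.random((5, 200))              # histograms of new test images (stand-in)
print(clf.predict(X_test))
```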
Bag-of-features models (14) Example (figure).
Bag-of-features models (15) Caltech6 dataset Dictionary quality and size are very important parameters!
Bag-of-features models (16) Action recognition. J.C. Niebles, H. Wang, L. Fei-Fei: Unsupervised learning of human action categories using spatial-temporal words. IJCV, 79(3): 299-318, 2008.
Large-scale image search (1) J. Philbin, et al.: Object retrieval with large vocabularies and fast spatial matching. Proc. of CVPR, 2007 Query Results from 5k Flickr images
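A toy sketch of the retrieval back end such systems rely on: an inverted file maps each visual word to the images that contain it, and candidate images are ranked by tf-idf-weighted word frequencies. The spatial re-ranking of Philbin et al. is omitted, and the database histograms below are random stand-ins.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
db_histograms = rng.integers(0, 3, size=(1000, 500))   # stand-in: 1000 images, 500 visual words
n_images, n_words = db_histograms.shape

# Inverse document frequency of each visual word
idf = np.log(n_images / (1 + (db_histograms > 0).sum(axis=0)))

inverted = defaultdict(list)                 # word id -> list of (image id, term frequency)
for img, hist in enumerate(db_histograms):
    for word in np.flatnonzero(hist):
        inverted[word].append((img, hist[word]))

def search(query_hist, top_k=5):
    """Score only images sharing at least one visual word with the query."""
    scores = np.zeros(n_images)
    for word in np.flatnonzero(query_hist):
        for img, tf in inverted[word]:
            scores[img] += (query_hist[word] * idf[word]) * (tf * idf[word])
    return np.argsort(scores)[::-1][:top_k]

print(search(db_histograms[42]))             # image 42 will typically rank first
```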
Large-scale image search (2) [Quack, Leibe, Van Gool, CIVR'08] • Mobile tourist guide • self-localization • object/building recognition • photo/video augmentation Figure: Aachen Cathedral
Large-scale image search (4) Application: image auto-annotation (left: Wikipedia image, right: closest match from Flickr). Figure examples: Moulin Rouge, Old Town Square (Prague), Tour Montparnasse, Colosseum, Viktualienmarkt Maypole
Sources • K. Grauman, B. Leibe: Visual Object Recognition. Morgan & Claypool Publishers, 2011 • R. Szeliski: Computer Vision: Algorithms and Applications. Springer, 2010. (Chapter 14 "Recognition") • Course materials from others (G. Bebis, J. Hays, S. Lazebnik, …)