
Review: Intro to recognition


Presentation Transcript


  1. Review: Intro to recognition • Recognition tasks • Machine learning approach: training, testing, generalization • Example classifiers • Nearest neighbor • Linear classifiers
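The nearest-neighbor classifier reviewed above can be written in a few lines of NumPy. This is a minimal sketch with made-up toy data, not a production classifier:

```python
import numpy as np

def nearest_neighbor_predict(train_X, train_y, test_X):
    """Nearest-neighbor classifier: each test point gets the label of its
    closest training point under Euclidean distance."""
    # Pairwise squared distances, shape (n_test, n_train), via broadcasting
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    return train_y[d2.argmin(axis=1)]

# Toy 2-D data: two well-separated classes
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.2, 0.1], [4.8, 5.1]])
print(nearest_neighbor_predict(train_X, train_y, test_X))  # [0 1]
```

Note there is no training step at all: generalization rests entirely on the training set covering the space densely enough.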

  2. Image features • Spatial support: • Pixel or local patch • Segmentation region • Bounding box • Whole image

  3. Image features • We will focus mainly on global image features for whole-image classification tasks • GIST descriptors • Bags of features • Spatial pyramids

  4. GIST descriptors • Oliva & Torralba (2001) http://people.csail.mit.edu/torralba/code/spatialenvelope/
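To convey the structure of the GIST descriptor, here is a highly simplified stand-in: real GIST (Oliva & Torralba, 2001) pools the responses of a Gabor filter bank at several scales and orientations over a coarse grid; the sketch below substitutes plain gradient-orientation energy, keeping only the "coarse spatial layout of oriented energy" idea:

```python
import numpy as np

def gist_like(image, grid=4, n_orient=4):
    """Simplified GIST-style descriptor: average gradient-orientation energy
    over a grid x grid spatial layout. (Real GIST uses a bank of Gabor
    filters at multiple scales; this only illustrates the structure.)"""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Map gradient orientation (mod pi) into n_orient bins
    ori = (np.arctan2(gy, gx) % np.pi) / np.pi * n_orient
    ori = np.minimum(ori.astype(int), n_orient - 1)
    h, w = image.shape
    rows = np.arange(h) * grid // h   # grid cell index of each row
    cols = np.arange(w) * grid // w   # grid cell index of each column
    desc = np.zeros((grid, grid, n_orient))
    for o in range(n_orient):
        energy = np.where(ori == o, mag, 0.0)
        for i in range(grid):
            for j in range(grid):
                desc[i, j, o] = energy[rows == i][:, cols == j].mean()
    return desc.ravel()

img = np.zeros((64, 64)); img[:, 32:] = 1.0  # toy image: one vertical edge
d = gist_like(img)
print(d.shape)  # (64,) -- 4 x 4 cells x 4 orientation bins
```

The reference implementation linked on the slide is the authoritative version; this sketch only shows why GIST is a global, layout-sensitive feature.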

  5. Bags of features

  6. Origin 1: Texture recognition • Texture is characterized by the repetition of basic elements or textons • For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003

  7. Origin 1: Texture recognition • Each texture is represented by a histogram over a universal texton dictionary Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003

  8. Origin 2: Bag-of-words models • Orderless document representation: frequencies of words from a dictionary Salton & McGill (1983)
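The orderless document representation fits in a few lines. The dictionary and document below are invented purely for illustration:

```python
from collections import Counter

def bag_of_words(document, vocabulary):
    """Orderless document representation: count how often each dictionary
    word occurs, ignoring word order entirely."""
    counts = Counter(document.lower().split())
    return [counts[word] for word in vocabulary]

# Invented dictionary and document, purely for illustration
vocab = ["sensor", "image", "camera", "speech", "vote"]
doc = "the camera sensor forms an image the image is digitized"
print(bag_of_words(doc, vocab))  # [1, 2, 1, 0, 0]
```

The visual analogue replaces dictionary words with "visual words": quantized local image features.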

  9. Origin 2: Bag-of-words models US Presidential Speeches Tag Cloud (http://chir.ag/projects/preztags/) • Orderless document representation: frequencies of words from a dictionary Salton & McGill (1983)


  12. Bag-of-features steps • Extract local features • Learn “visual vocabulary” • Quantize local features using visual vocabulary • Represent images by frequencies of “visual words”
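The four steps above can be sketched end to end. Random vectors stand in for real local descriptors (e.g. SIFT), and the tiny k-means here is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Tiny illustrative k-means: returns k cluster centers,
    i.e. the 'visual vocabulary'."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def bof_histogram(features, vocab):
    """Steps 3-4: quantize each local feature to its nearest visual word,
    then represent the image by normalized word frequencies."""
    words = ((features[:, None] - vocab[None]) ** 2).sum(-1).argmin(1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Random 8-D vectors stand in for real local descriptors (e.g. SIFT)
train_feats = rng.standard_normal((200, 8))   # step 1 on training images
vocab = kmeans(train_feats, k=10)             # step 2: learn the vocabulary
img_feats = rng.standard_normal((50, 8))      # step 1 on a new image
h = bof_histogram(img_feats, vocab)           # steps 3-4
print(h.shape)  # (10,)
```

The resulting fixed-length histogram is what gets fed to a classifier such as the nearest-neighbor or linear models from the review.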

  13. 1. Local feature extraction • Regular grid or interest regions

  14. 1. Local feature extraction • Detect patches • Normalize patches • Compute descriptors Slide credit: Josef Sivic

  15. 1. Local feature extraction Slide credit: Josef Sivic
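For the regular-grid option, step 1 amounts to cutting the image into patches. The sketch below uses raw pixel patches as stand-in descriptors; a real system would compute a SIFT-like descriptor per patch:

```python
import numpy as np

def dense_patches(image, patch=8, stride=8):
    """Step 1 on a regular grid: slide a window over the image and flatten
    each patch into a raw feature vector. (Raw pixels stand in for a
    SIFT-like descriptor here.)"""
    h, w = image.shape
    feats, positions = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feats.append(image[y:y + patch, x:x + patch].ravel())
            positions.append((x, y))
    return np.array(feats), np.array(positions)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in "image"
feats, pos = dense_patches(img)
print(feats.shape)  # 8x8 grid of 8x8 patches -> (64, 64)
```

Interest-region detection would replace the fixed grid with detected, scale-normalized patches, but the downstream pipeline is unchanged.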

  16. 2. Learning the visual vocabulary Slide credit: Josef Sivic

  17. 2. Learning the visual vocabulary Clustering Slide credit: Josef Sivic

  18. 2. Learning the visual vocabulary Visual vocabulary Clustering Slide credit: Josef Sivic

  19. Review: K-means clustering • Want to minimize sum of squared Euclidean distances between features xi and their nearest cluster centers mk • Algorithm: • Randomly initialize K cluster centers • Iterate until convergence: • Assign each feature to the nearest center • Recompute each cluster center as the mean of all features assigned to it
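The algorithm on the slide translates directly to code. This is a plain illustrative implementation on made-up 2-D data, not an optimized library routine:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """K-means as on the slide: alternately assign each feature to its
    nearest center and recompute each center as the mean of its assigned
    features, decreasing the sum of squared Euclidean distances."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                      # assignment step
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                   # converged
            break
        centers = new                                   # update step
    return centers, labels

# Two well-separated 2-D blobs: centers should land near (0, 0) and (10, 10)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(10, 0.5, (50, 2))])
centers, labels = kmeans(X, k=2)
print(np.sort(centers[:, 0]))
```

K-means only finds a local minimum of the objective, so in practice it is run from several random initializations and the best result is kept.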

  20. Example codebook Appearance codebook Source: B. Leibe

  21. Appearance codebook Another codebook Source: B. Leibe

  22. Bag-of-features steps • Extract local features • Learn “visual vocabulary” • Quantize local features using visual vocabulary • Represent images by frequencies of “visual words”

  23. Visual vocabularies: Details • How to choose vocabulary size? • Too small: visual words not representative of all patches • Too large: quantization artifacts, overfitting • Right size is application-dependent • Improving efficiency of quantization • Vocabulary trees (Nister and Stewenius, 2005) • Improving vocabulary quality • Discriminative/supervised training of codebooks • Sparse coding, non-exclusive assignment to codewords • More discriminative bag-of-words representations • Fisher Vectors (Perronnin et al., 2007), VLAD (Jegou et al., 2010) • Incorporating spatial information
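Of the more discriminative representations mentioned above, VLAD is the simplest to sketch: instead of counting how many features fall on each visual word, it accumulates the residual between each feature and its word. The vocabulary and features below are random stand-ins:

```python
import numpy as np

def vlad(features, vocab):
    """VLAD sketch: assign each local feature to its nearest visual word,
    but accumulate the residual (feature minus word center) instead of a
    count, then L2-normalize the concatenated result."""
    k, d = vocab.shape
    words = ((features[:, None] - vocab[None]) ** 2).sum(-1).argmin(1)
    v = np.zeros((k, d))
    for f, w in zip(features, words):
        v[w] += f - vocab[w]              # residual, not just +1
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
vocab = rng.standard_normal((16, 8))      # toy 16-word vocabulary, 8-D features
feats = rng.standard_normal((100, 8))
desc = vlad(feats, vocab)
print(desc.shape)  # (128,) -- k * d values instead of k counts
```

Because each word contributes a d-dimensional residual rather than a scalar count, a small vocabulary can yield a much more discriminative descriptor.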

  24. Bags of features for action recognition Space-time interest points Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008.

  25. Bags of features for action recognition Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008.

  26. Spatial pyramids: level 0 Lazebnik, Schmid & Ponce (CVPR 2006)

  27. Spatial pyramids: levels 0 and 1 Lazebnik, Schmid & Ponce (CVPR 2006)

  28. Spatial pyramids: levels 0, 1, and 2 Lazebnik, Schmid & Ponce (CVPR 2006)
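The spatial pyramid can be sketched as follows: concatenate visual-word histograms over successively finer grids, so some spatial layout survives the otherwise orderless representation. Feature positions and word assignments below are made up:

```python
import numpy as np

def spatial_pyramid(positions, words, k, levels=2):
    """Spatial pyramid sketch: concatenate visual-word histograms computed
    over 1x1, 2x2, 4x4, ... grids of image cells.
    `positions` are (x, y) coordinates normalized to [0, 1);
    `words` are visual-word indices in [0, k)."""
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level                              # cells per side
        cx = np.minimum((positions[:, 0] * cells).astype(int), cells - 1)
        cy = np.minimum((positions[:, 1] * cells).astype(int), cells - 1)
        cell = cy * cells + cx                          # flat cell index
        for c in range(cells * cells):
            hists.append(np.bincount(words[cell == c], minlength=k))
    return np.concatenate(hists)

rng = np.random.default_rng(0)
pos = rng.random((200, 2))                 # made-up feature locations
words = rng.integers(0, 10, size=200)      # made-up quantized features, k = 10
desc = spatial_pyramid(pos, words, k=10, levels=2)
print(desc.shape)  # (1 + 4 + 16) cells x 10 words = (210,)
```

The full method also weights each level's histograms (coarser levels count less) before feeding the concatenation to a kernel classifier; the sketch omits the weighting.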

  29. Results: Scene category dataset • Multi-class classification results (100 training images per class)

  30. Results: Caltech101 dataset • Multi-class classification results (30 training images per class)
