
Initial Presentation Wei Li ENGN 2560 March 12, 2013






Presentation Transcript


  1. Initial Presentation Wei Li ENGN 2560 March 12, 2013

  2. Topic & Paper Paper: Learning to Match Images in Large-Scale Collections (Song Cao and Noah Snavely), ECCV Workshop on Web-Scale Vision and Social Media, 2012. http://www.cs.cornell.edu/projects/matchlearn/

  3. Target Problem Given a large image dataset, discover its visual connectivity structure by finding a good approximation of the image connectivity graph as efficiently as possible; i.e., determine which images overlap which other images (in the form of an image graph), and find feature correspondences between matching images.
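The image graph described above can be sketched as an adjacency structure whose edges link image pairs with enough verified feature matches. This is a minimal illustration, not the paper's code; the threshold name and value (`min_inliers=16`) and the toy match counts are assumptions.

```python
def build_image_graph(match_counts, min_inliers=16):
    """Build an undirected image graph: an edge links two images whose
    verified feature-match count reaches a threshold (assumed value)."""
    graph = {}
    for (i, j), n in match_counts.items():
        if n >= min_inliers:
            graph.setdefault(i, set()).add(j)
            graph.setdefault(j, set()).add(i)
    return graph

# Hypothetical verified match counts between four images:
# 0-1 and 2-3 overlap strongly; 1-2 only weakly (below threshold).
counts = {(0, 1): 40, (1, 2): 5, (2, 3): 25}
g = build_image_graph(counts)
print(sorted(g[0]))  # → [1]
```

Connected components of this graph then correspond to groups of mutually overlapping images.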

  4. Importance Scalable image matching? ……

  5. Goal Efficiently compute an image graph on large image datasets, with edges linking overlapping images. Build a model (approximation) and improve efficiency.

  6. Basic Idea to achieve the goal Efficiently compute an image graph on a given set of images using an iterative approach that learns to predict which pairs of images in an input dataset match and which do not, using discriminative learning of BoW models and unsupervised ways to define weights for each visual word.
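The iterative idea above can be sketched as a loop that proposes the most similar untried pairs under the current visual-word weights, verifies them, and then reweights words. Everything here is a toy stand-in under stated assumptions: the data, the `overlap` verifier (standing in for SIFT matching plus geometric verification), and the weight update (standing in for the paper's discriminative SVM learning).

```python
import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def iterative_match(hists, verify, n_rounds=2, k=2):
    """Toy sketch of the iterative loop: propose the k most similar untried
    pairs under the current per-word weights, verify them (the expensive
    step), then upweight visual words that verified pairs agree on."""
    n, d = hists.shape
    w = np.ones(d)                     # per-visual-word weights
    tried, positives = set(), []
    for _ in range(n_rounds):
        candidates = sorted(
            ((cosine(w * hists[i], w * hists[j]), i, j)
             for i in range(n) for j in range(i + 1, n)
             if (i, j) not in tried),
            reverse=True)
        for _, i, j in candidates[:k]:
            tried.add((i, j))
            if verify(i, j):           # stands in for SIFT matching + verification
                positives.append((i, j))
                # crude stand-in for SVM learning: reward shared words
                w += np.minimum(hists[i], hists[j])
    return positives, w

# Four toy "images" as BoW histograms: 0,1 share one scene; 2,3 another.
hists = np.array([[4., 0., 1.], [3., 0., 1.], [0., 4., 1.], [0., 3., 1.]])
overlap = lambda i, j: (i < 2) == (j < 2)   # toy ground-truth verifier
positives, w = iterative_match(hists, overlap)
```

Each round spends expensive verification only on the pairs the current model considers most promising, which is the source of the efficiency gain.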

  7. Improvement compared with traditional methods • Gives high weights to visual words that are more stable across viewpoint and illumination and less sensitive to quantization errors • Reduces time spent compared with the (noisy) plain BoW method • Small datasets can build the model (iteratively) • Free to leverage whatever structure is present in the data • Features are formed from image pairs instead of individual images • Overhead of learning is quite low • Discovers the structure of the database from scratch efficiently

  8. General Steps • Turn a collection of images into a set of BoW histograms • Use an unsupervised similarity measure (e.g., tf-idf) to automatically generate training data, by first finding a small number of image pairs with high similarity • Apply relatively expensive feature matching (e.g., SIFT) and verification steps to these pairs; the result is positive image pairs (successful matches) and negative pairs (unsuccessful matches) • Use discriminative learning (e.g., SVMs) to learn a new similarity measure (e.g., weights) on features derived from these example image pairs • Iterate the discriminative learning, alternating between proposing more images to match and learning a better similarity measure
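The second step above ranks pairs by tf-idf-weighted BoW similarity. A minimal sketch, assuming standard tf-idf cosine similarity (the paper may use a different variant) and toy three-word histograms:

```python
import numpy as np

def tfidf_similarity(hists):
    """Rank image pairs by cosine similarity of tf-idf weighted BoW vectors.
    Standard log-idf weighting is assumed here."""
    n_images, n_words = hists.shape
    df = (hists > 0).sum(axis=0)             # document frequency per visual word
    idf = np.log(n_images / np.maximum(df, 1))
    tfidf = hists * idf                      # down-weight words seen everywhere
    tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True) + 1e-12
    return tfidf @ tfidf.T                   # pairwise cosine similarity matrix

# Toy data: word 2 occurs in every image, so idf sends its weight to zero
hists = np.array([[4., 0., 1.],
                  [3., 0., 1.],
                  [0., 4., 1.]])
sim = tfidf_similarity(hists)
```

Here images 0 and 1 come out highly similar while image 2 does not, because the word shared by all three images carries zero idf weight; the top-ranked pairs would then be handed to SIFT matching and verification.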

  9. Problems & Difficulties Concepts and algorithms needed: • L2-regularized L2-loss SVMs (discriminative learning) • vanilla tf-idf weighted image similarity • SIFT matching • BoW histograms • burstiness • co-occurrence measures • the feature selection method of Turcot and Lowe • …… For the dataset: • The paper is new; until now, no datasets are available directly from it.
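Of the concepts listed above, the L2-regularized L2-loss SVM minimizes lam/2 · ||w||² plus the mean squared hinge loss, mean(max(0, 1 − y·(Xw))²). A minimal NumPy sketch follows; the paper relies on LIBLINEAR-style solvers, so this full-batch gradient-descent optimizer, the toy pair-features, and all hyperparameter values are our assumptions.

```python
import numpy as np

def train_l2l2_svm(X, y, lam=0.01, lr=0.1, n_iter=200):
    """Toy L2-regularized L2-loss (squared hinge) linear SVM, trained by
    full-batch gradient descent on:
        lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w))**2)
    Labels y must be in {+1, -1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        active = np.maximum(1.0 - y * (X @ w), 0.0)   # squared-hinge margins
        grad = lam * w - 2.0 * (X.T @ (active * y)) / len(y)
        w -= lr * grad
    return w

# Hypothetical pair-features: matching pairs (+1) have a large first coordinate
X = np.array([[2.0, 0.1], [1.5, 0.2], [-1.8, 0.1], [-2.2, -0.3]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = train_l2l2_svm(X, y)
pred = np.sign(X @ w)
```

In the paper's setting the learned weight vector plays the role of per-visual-word weights in the similarity measure, so training this SVM on verified positive and negative pairs yields the improved similarity used in the next iteration.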

  10. Time Schedule Week 1 (Mar 12-17): • Matlab • Learn the concepts of this paper • Find the dataset Week 2 (Mar 18-24): • BoW histogram (model) • Continue studying the paper Week 3 (Mar 25-31): • Use tf-idf to generate training data (image pairs with high similarity); run SIFT and verification steps on these pairs Week 4 (Apr 1-7): • L2-regularized L2-loss SVMs (discriminative learning) to get weights (model) Week 5 (Apr 8-14): • L2-regularized L2-loss SVMs (discriminative learning) to get weights (model) Apr 16: • Mid-Presentation

  11. Time Schedule Week 6 (Apr 15-21): • Analyze the problems and fix remaining issues if it still does not work Week 7 (Apr 22-28): • Analyze the problems and fix remaining issues if it still does not work • Improve whatever can be improved Week 8 (Apr 29-May 5): • Improve whatever can be improved Week 9 (May 6-12): • Report & summary May 14: • Final Presentation

  12. Thanks
