Near Duplicate Image Detection: min-Hash and tf-idf weighting
Ondřej Chum, Center for Machine Perception, Czech Technical University in Prague
co-authors: James Philbin and Andrew Zisserman
Outline • Near duplicate detection and large databases (find all groups of near duplicate images in a database) • min-Hash review • Novel similarity measures • Results on TrecVid 2006 • Results on the University of Kentucky database (Nister & Stewenius) • Beyond near duplicates
Scalable Near Duplicate Image Detection • Images perceptually (almost) identical but not identical (noise, compression level, small motion, small occlusion) • Similar images of the same object / scene • Large databases • Fast – linear in the number of duplicates • Store small constant amount of data per image
Image Representation • Feature detector → SIFT descriptors [Lowe’04] → Vector quantization against a visual vocabulary → Bag of words (word frequencies) or set of words
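A minimal sketch of the vector quantization step, assuming the visual vocabulary is given as an array of cluster centres (all names here are illustrative, not from the talk):

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each local SIFT descriptor to its nearest visual word (cluster centre).

    descriptors : (n, 128) array of SIFT descriptors from one image
    vocabulary  : (V, 128) array of cluster centres (the visual vocabulary)
    Returns the bag of words (length-V histogram) and the set of words.
    """
    # squared Euclidean distance from every descriptor to every centre
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                              # nearest centre per descriptor
    bag = np.bincount(words, minlength=len(vocabulary))    # bag of words: term frequencies
    return bag, set(words.tolist())                        # set of words: frequencies dropped
```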
min-Hash • Spatially related images share visual words • Image similarity is measured as set overlap sim(A1, A2) = |A1 ∩ A2| / |A1 ∪ A2| • min-Hash is a locality sensitive hashing (LSH) function m that selects an element m(A1) from set A1 and m(A2) from set A2 so that P{m(A1) == m(A2)} = sim(A1, A2)
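A minimal sketch of this property on sets of visual words (illustrative code; in a real index the random orderings are generated once and shared by all images):

```python
import random

def min_hash(word_set, ordering):
    """The min-Hash of a set: its first element under the given random ordering."""
    return min(word_set, key=lambda w: ordering[w])

def estimate_overlap(set_a, set_b, vocabulary, n_hashes=192, seed=0):
    """Estimate sim(A1, A2) = |A1 ∩ A2| / |A1 ∪ A2| as the fraction of
    independent min-Hash functions on which the two sets agree."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_hashes):
        ordering = {w: rng.random() for w in vocabulary}   # one random ordering ~ one min-Hash
        agree += min_hash(set_a, ordering) == min_hash(set_b, ordering)
    return agree / n_hashes
```

For example, estimate_overlap({'A', 'B', 'C'}, {'B', 'C', 'D'}, 'ABCDEF') converges to the true overlap 1/2 as the number of hash functions grows (compare the worked example on the next slide).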
min-Hash example • Vocabulary {A, B, C, D, E, F}; Set A = {A, B, C}, Set B = {B, C, D}, Set C = {A, E, F} • Each min-Hash function f1 – f4 orders the vocabulary by independent Un(0,1) values; the min-Hash of a set is its first word in that ordering • Estimated overlaps over the four functions (true values in brackets): overlap(A, B) = 3/4 (1/2), overlap(A, C) = 1/4 (1/5), overlap(B, C) = 0 (0)
min-Hash Retrieval • A sketch is an s-tuple of min-Hashes; s – size of the sketch, k – number of hash tables (one independent sketch per table) • Probability of a sketch collision for images A and B: sim(A, B)^s • Probability of retrieval (at least one sketch collision over the k hash tables): 1 – (1 – sim(A, B)^s)^k
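An illustrative sketch of the retrieval structure (the parameter values follow the TrecVid setting quoted later, 3 min-Hashes per sketch and 64 sketches; the helper names are assumptions):

```python
from collections import defaultdict

S, K = 3, 64                                    # s min-Hashes per sketch, k hash tables
tables = [defaultdict(list) for _ in range(K)]  # one hash table per sketch position

def sketches(min_hashes):
    """Group a list of K*S min-Hashes into K sketches (s-tuples of min-Hashes)."""
    return [tuple(min_hashes[i * S:(i + 1) * S]) for i in range(K)]

def insert_image(image_id, min_hashes):
    for table, sketch in zip(tables, sketches(min_hashes)):
        table[sketch].append(image_id)

def query(min_hashes):
    """Candidate near duplicates: every image colliding on at least one sketch.
    For a pair with similarity sim, P(collision in one table) = sim**S and
    P(retrieved at all) = 1 - (1 - sim**S)**K."""
    candidates = set()
    for table, sketch in zip(tables, sketches(min_hashes)):
        candidates.update(table.get(sketch, ()))
    return candidates
```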
Probability of Retrieving an Image Pair • Plot: probability of retrieval vs. similarity (set overlap) for s = 3, k = 512, with the typical similarity ranges of unrelated images, images of the same object, and near duplicate images marked
Document / Image / Object Retrieval • Term Frequency – Inverse Document Frequency (tf-idf) weighting scheme: idf_w = log(# documents / # documents containing X_w) • Words common to many documents are less informative • Frequency of the words is recorded (good for repeated structures, textures, etc.)
[1] Baeza-Yates, Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999. [2] Sivic, Zisserman. Video Google: A text retrieval approach to object matching in videos. ICCV’03. [3] Nister, Stewenius. Scalable recognition with a vocabulary tree. CVPR’06. [4] Philbin, Chum, Isard, Sivic, Zisserman. Object retrieval with large vocabularies and fast spatial matching. CVPR’07.
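A small sketch of the idf part of the weighting, treating each image as the set of visual words it contains (illustrative code, not from the talk):

```python
import math
from collections import Counter

def idf_weights(documents):
    """idf_w = log(# documents / # documents containing word X_w).
    `documents` is a list of sets of visual-word ids; words shared by many
    documents receive a low weight, rare (informative) words a high one."""
    n_docs = len(documents)
    doc_freq = Counter(w for doc in documents for w in doc)
    return {w: math.log(n_docs / df) for w, df in doc_freq.items()}
```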
More Complex Similarity Measures • Set of words representation → weighted set overlap: different importance d_w of visual word X_w • Bag of words representation (frequency is recorded) → histogram intersection similarity measure, and its weighted variant with word importances d_w (both measures are written out below)
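Written out, with d_w the importance of word X_w and t¹_w, t²_w the word frequencies in the two images, the two weighted measures take the following form (a reconstruction; the exact formulas were typeset as graphics on the original slide):

```latex
% weighted set overlap
\mathrm{sim}_w(\mathcal{A}_1,\mathcal{A}_2)
  = \frac{\sum_{X_w \in \mathcal{A}_1 \cap \mathcal{A}_2} d_w}
         {\sum_{X_w \in \mathcal{A}_1 \cup \mathcal{A}_2} d_w}

% weighted histogram intersection
\mathrm{sim}_h(\mathcal{A}_1,\mathcal{A}_2)
  = \frac{\sum_w d_w \,\min(t^1_w,\, t^2_w)}
         {\sum_w d_w \,\max(t^1_w,\, t^2_w)}
```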
Word Weighting for min-Hash • For a hash function that treats all words alike (set overlap similarity), every word X_w in A ∪ B has the same chance of being the min-Hash • For a hash function whose values depend on the word weights, the probability of X_w being the min-Hash is proportional to its importance d_w, yielding the weighted set overlap (see the sketch below)
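The hash functions themselves were equations on the original slide; one standard construction with exactly the stated property (word X_w becomes the min-Hash with probability proportional to d_w) is the exponential trick sketched below, shown here as an assumption rather than the talk's exact definition:

```python
import math
import random

def weighted_min_hash(word_set, weights, u_values):
    """min-Hash whose probability of returning word X_w is proportional to weights[w].

    h(w) = -ln(u_w) / d_w with u_w ~ Un(0,1) is Exponential(d_w) distributed, so the
    minimum over the set falls on word w with probability d_w / (sum of the set's weights).
    P{m(A1) == m(A2)} then equals the weighted set overlap of A1 and A2.
    """
    return min(word_set, key=lambda w: -math.log(u_values[w]) / weights[w])

# one hash function = one shared draw of u_w for every word in the vocabulary
vocabulary = ["A", "C", "E", "J", "Q", "R", "V", "Y", "Z"]   # the words of A U B on the slide
weights = {w: 1.0 for w in vocabulary}                       # word importances d_w (e.g. idf)
rng = random.Random(0)
u_values = {w: rng.random() for w in vocabulary}
print(weighted_min_hash({"A", "C", "E"}, weights, u_values))
```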
Histogram Intersection Using min-Hash • Idea: represent a histogram as a set and reuse the min-Hash set machinery • Each visual word w with frequency t_w is expanded into word instances w1, …, w_{t_w} over an enlarged min-Hash vocabulary • Example: bag of words A with tA = (2, 1, 3, 0) becomes set A’ = {A1, A2, B1, C1, C2, C3}; bag of words B with tB = (0, 2, 3, 1) becomes set B’ = {B1, B2, C1, C2, C3, D1} • The set overlap of A’ and B’ is the histogram intersection of A and B
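A tiny sketch of the expansion (illustrative code using the slide's example frequencies):

```python
def histogram_to_set(bag):
    """Expand a bag of words into a set of word instances: word w with frequency
    t_w becomes (w, 1), ..., (w, t_w).  The set overlap of two expanded sets equals
    the (normalised) histogram intersection of the original bags, so the ordinary
    min-Hash machinery applies unchanged."""
    return {(w, i) for w, t in enumerate(bag) for i in range(1, t + 1)}

A_expanded = histogram_to_set((2, 1, 3, 0))   # {A1, A2, B1, C1, C2, C3}
B_expanded = histogram_to_set((0, 2, 3, 1))   # {B1, B2, C1, C2, C3, D1}
print(len(A_expanded & B_expanded), len(A_expanded | B_expanded))   # 4 8 -> overlap 1/2
```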
Results • Quality of the retrieval • Speed – the number of documents considered as near-duplicates
TRECVid Challenge • 165 hours of news footage, different channels, different countries • 146,588 key-frames, 352×240 pixels • No ground truth on near duplicates
Min-Hash on TrecVid • DoG features • vocabulary of 64,635 visual words • 192 min-Hashes, 3 min-Hashes per sketch, 64 sketches • similarity threshold 35% • Examples of images with 24 – 45 near duplicates, reporting the number of common results / results found by set overlap only / by weighted set overlap only • Quality of the retrieval appears to be similar
Comparison of Similarity Measures • Images only sharing uninformative visual words do not generate sketch collisions for the proposed similarity measures • Plot: number of sketch collisions vs. image pair similarity for set overlap, weighted set overlap, and weighted histogram intersection
University of Kentucky Dataset • 10,200 images in groups of four • Querying by each image in turn • Average number of correct retrievals in top 4 is measured
Evaluation • Vocabulary sizes: 30k and 100k • Number of min-Hashes: 512, 640, 768, and 896; 2 min-Hashes per sketch • Number of sketches: 0.5, 1, 2, and 3 times the number of min-Hashes • Score on average: weighted histogram intersection 4.6% better than weighted set overlap; weighted set overlap 1.5% better than set overlap • Number of considered documents on average: weighted histogram intersection 1.7 times fewer than weighted set overlap; weighted set overlap 1.5 times fewer than set overlap • Absolute numbers for weighted histogram intersection vs. retrieval with tf-idf flat scoring [Nister & Stewenius]: score 3.16, number of considered documents (non-zero tf-idf) 10,089.9 (30k) and 9,659.4 (100k)
Query Examples • Query image and retrieved results for set overlap, weighted set overlap, and weighted histogram intersection
Discovery of Spatially Related Images • Find and match ALL groups (clusters) of spatially related images in a large database, using only visual information, i.e. not using (Flickr) tags, EXIF info, GPS, … • Chum, Matas: Large Scale Discovery of Spatially Related Images, TR May 2008, available at http://cmp.felk.cvut.cz/~chum/Publ.htm
Probability of Retrieving an Image Pair • Plot: probability of retrieval vs. similarity (set overlap), with the similarity ranges of images of the same object and near duplicate images marked
Image Clusters as Connected Components • Randomized clustering method:
1. Seed Generation – hashing (fast, low recall): characterize images by pseudo-random numbers stored in a hash table; time complexity equal to the sum of second moments of a Poisson random variable – linear for database sizes up to D ≈ 2^40
2. Seed Growing – retrieval (thorough, high recall): complete the clusters only for cluster members c << D, complexity O(cD)
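A simplified sketch of the two steps, with the connected components kept in a union–find structure; the seed pairs and the retrieve() query are placeholders for the min-Hash sketch collisions and the thorough (e.g. tf-idf plus spatial verification) retrieval:

```python
class UnionFind:
    """Image clusters as connected components of the 'spatially related' graph."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def cluster(n_images, seed_pairs, retrieve):
    """1. Seed generation: `seed_pairs` are image pairs that collided in the hash table.
       2. Seed growing: every seeded image issues a thorough retrieval query to
          complete its cluster (only c << D images are grown, hence O(cD))."""
    uf = UnionFind(n_images)
    seeded = set()
    for i, j in seed_pairs:
        uf.union(i, j)
        seeded.update((i, j))
    for i in seeded:
        for j in retrieve(i):
            uf.union(i, j)
    components = {}
    for i in range(n_images):
        components.setdefault(uf.find(i), []).append(i)
    return [c for c in components.values() if len(c) > 1]
```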
Clustering of 100k Images • Images downloaded from Flickr • Includes 11 Oxford landmarks with manually labelled ground truth: All Souls, Ashmolean, Balliol, Bodleian, Christ Church, Cornmarket, Hertford, Keble, Magdalen, Pitt Rivers, Radcliffe Camera
Results on 100k Images • Plot: Component Recall (CR) for Chum, Matas (TR, May 2008), compared with Philbin, Sivic, Zisserman (BMVC 2008) (5,062 ?) • Number of images: 104,844 • Timing: 17 min + 16 min = 0.019 sec / image
Conclusions • New similarity measures were derived for the min-Hash framework • Weighted set overlap • Histogram intersection • Weighted histogram intersection • Experiments show that the similarity measures are superior to the state of the art • in the quality of the retrieval (up to 7% on University of Kentucky dataset) • in the speed of the retrieval (up to 2.5 times) • min-Hash is a very useful tool for randomized image clustering