Learning Image Similarities via Probabilistic Feature Matching Ziming Zhang*, Ze-Nian Li, Mark Drew School of Computing Science, Simon Fraser University, Vancouver, B.C., Canada {zza27, li, mark}@cs.sfu.ca *This work was done while the author was at SFU.
Outline • Introduction • Probabilistic-matching based similarity learning • Probabilistic feature matching function • Probabilistic feature matching learning • Experiments • Conclusion
Introduction • Object-based image similarity • Ideally, two images should have higher similarity if they contain similar objects. • Feature matching • A natural way to measure image similarity • Many different matching criteria exist • In this talk, we match features based only on their appearance information
Introduction • Several relevant feature matching approaches • Summation kernel • Max-selection kernel • Optimal assignment kernel
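For reference, these three kernels are commonly written as follows (standard forms from the matching-kernel literature; the normalizations shown are one common convention, not necessarily the exact ones used in the paper):

```latex
K_{\mathrm{sum}}(X,Y) = \frac{1}{|X||Y|} \sum_{i=1}^{|X|} \sum_{j=1}^{|Y|} k(x_i, y_j)

K_{\mathrm{max}}(X,Y) = \frac{1}{2|X|} \sum_{i=1}^{|X|} \max_{j} k(x_i, y_j)
                      + \frac{1}{2|Y|} \sum_{j=1}^{|Y|} \max_{i} k(x_i, y_j)

K_{\mathrm{OA}}(X,Y) = \max_{\pi} \sum_{i} k\!\left(x_i, y_{\pi(i)}\right)
```

where k(·,·) is a base similarity between individual features and π ranges over one-to-one matchings between the two feature sets.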
Introduction • Our approach is a generalization of a family of SIMILARITY learning approaches, including the three kernels above. • Similarity matrix ≠ Kernel • A kernel matrix can be considered a special similarity matrix (i.e., symmetric and positive semi-definite) • Classification with Support Vector Machines (SVM) — see the sketch below
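As a minimal sketch of this classification pipeline (my own illustration assuming scikit-learn, not the paper's experimental code): a learned similarity matrix can be fed to an SVM as a precomputed kernel, even though it need not be a true (PSD) kernel.

```python
import numpy as np
from sklearn.svm import SVC

def train_with_similarity(S_train, y_train):
    """Train an SVM on a precomputed n_train x n_train similarity matrix.

    Note: SVC formally expects a positive semi-definite kernel; a general
    learned similarity matrix is used here heuristically, as is common in
    the similarity-learning literature.
    """
    clf = SVC(kernel="precomputed")
    clf.fit(S_train, y_train)
    return clf

def predict_with_similarity(clf, S_test_train):
    """Predict from an n_test x n_train matrix of test-vs-train similarities."""
    return clf.predict(S_test_train)

# Toy usage with a random symmetric "similarity" matrix.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
S = (A + A.T) / 2          # symmetric, but not necessarily PSD
y = np.array([0, 0, 0, 1, 1, 1])
clf = train_with_similarity(S, y)
print(predict_with_similarity(clf, S))
```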
Probabilistic-matching based similarity learning • Key question: how do we learn the feature matching probabilities?
Feature Matching Function • Given two images X = {x_1, …, x_{|X|}} and Y = {y_1, …, y_{|Y|}}, a feature matching function α assigns a weight α_{ij} to each feature pair (x_i, y_j), inducing the image similarity S(X,Y) = Σ_{i=1}^{|X|} Σ_{j=1}^{|Y|} α_{ij} k(x_i, y_j), where k is the base feature similarity • The matching processes of the Summation, Max-selection, and Optimal Assignment kernels can be explained with feature matching functions (see the sketch below): • K_sum: α_{ij} is uniform over all pairs (every feature matches every feature equally) • K_max: for each feature, α puts all of its weight on the best-matching feature in the other image • K_OA: α encodes the one-to-one assignment that maximizes the total matched similarity
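A compact sketch of the three matching schemes, given a precomputed base-similarity matrix K between the features of two images (my own illustration; the normalizations are one plausible choice, and the optimal assignment uses the Hungarian algorithm from SciPy):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sim_sum(K):
    """Summation matching: every feature pair matched with uniform weight."""
    return K.mean()

def sim_max(K):
    """Max-selection matching: each feature matched to its best counterpart
    in the other image (symmetrized over both directions)."""
    return 0.5 * (K.max(axis=1).mean() + K.max(axis=0).mean())

def sim_oa(K):
    """Optimal assignment: one-to-one matching maximizing total similarity."""
    rows, cols = linear_sum_assignment(K, maximize=True)
    return K[rows, cols].mean()

rng = np.random.default_rng(0)
K = rng.random((5, 7))   # |X| = 5, |Y| = 7 base feature similarities
print(sim_sum(K), sim_max(K), sim_oa(K))
```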
Probabilistic Feature Matching Function • Our probabilistic feature matching function α is defined over a convex set built from three ingredients: each matching probability α_{ij} ≥ 0; the total matching probability of one feature, Σ_j α_{ij}; and the total matching probability of all features, Σ_{i,j} α_{ij} = 1. (Which per-feature totals are constrained depends on the index set H introduced below.)
Probabilistic Feature Matching Learning • Data-dependent optimization: maximize, over α in the convex set above, the image similarity Σ_{i,j} α_{ij} k(x_i, y_j) plus C · Ω(α), where Ω(α) is a regularizer controlling the sparseness of the matching distribution.
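The slide does not reproduce the exact regularizer. As a purely illustrative stand-in, an entropy regularizer Ω(α) = −Σ α_{ij} log α_{ij} with the single constraint Σ α_{ij} = 1 (i.e., H = ∅) admits the closed-form solution α_{ij} ∝ exp(k_{ij}/C), which interpolates between max-style matching (C → 0) and uniform summation-style matching (C → ∞):

```python
import numpy as np

def pfm_similarity(K, C):
    """Entropy-regularized probabilistic matching (an illustrative assumption,
    not necessarily the paper's regularizer), with H = {} so the only
    constraint is that all matching probabilities sum to 1.

    Solving  max_a <a, K> + C * H(a)  s.t.  sum(a) = 1, a >= 0
    gives the softmax  a_ij = exp(K_ij / C) / Z.
    """
    if C == 0:
        # Degenerate limit: all mass on the single best-matching pair.
        return float(K.max())
    logits = K / C
    logits -= logits.max()              # for numerical stability
    alpha = np.exp(logits)
    alpha /= alpha.sum()
    return float((alpha * K).sum())

rng = np.random.default_rng(0)
K = rng.random((5, 7))
for C in [0.0, 0.01, 0.1, 1.0, 100.0]:
    print(C, pfm_similarity(K, C))     # similarity decreases as C grows
```

The printed loop also previews the proposition on the next slide: as C increases, α spreads out and the resulting similarity decreases monotonically.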
Probabilistic Feature Matching Learning • Theorems • Consider max f(x) over x ∈ X, where f(x) is convex and X is a closed convex set. If the optimum exists, it is attained at a boundary point of X. • If a convex function f(x) attains its maximum on a convex polyhedron X with some extreme points, then this maximum is attained at an extreme point of X. • Relation to K_sum, K_max, and K_OA (a worked instance follows) • K_sum: C = +∞ and H = {i,j} • K_max: C = 0 and H = {i}, and C = 0 and H = {j} • K_OA: C = 0 and H = {i,j}
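As a worked instance of how the second theorem is applied (my reconstruction of the argument, assuming per-feature totals of 1/|X|): with C = 0 and H = {i}, the objective is linear in α and the feasible set factorizes into one scaled simplex per feature x_i:

```latex
\max_{\alpha}\ \sum_{i,j} \alpha_{ij}\, k(x_i, y_j)
\quad \text{s.t.} \quad \alpha_{ij} \ge 0,\ \ \sum_{j} \alpha_{ij} = \tfrac{1}{|X|}\ \ \forall i.
```

Because the constraints decouple over i, each row independently solves a linear program over a scaled simplex, whose extreme points are the scaled standard basis vectors; hence the optimum sets α_{ij} = 1/|X| iff j = argmax_{j'} k(x_i, y_{j'}), recovering K_max.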
Probabilistic Feature Matching Learning • Proposition • For two images X and Y, both the sparseness of α and their similarity decrease monotonically as the parameter C increases.
Experiments • Datasets: Graz-01 and Graz-02 • Descriptor: SIFT with dense sampling • Image representation: 3×3 spatial Bag-of-Words histograms with 200 codewords • Feature similarity: RBF kernel with χ² distance • 50 runs
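A sketch of the feature-similarity computation described above, assuming L1-normalized Bag-of-Words histograms (the bandwidth heuristic γ = 1 / mean χ² distance is a common default, not a detail stated on the slide):

```python
import numpy as np

def chi2_distance(x, y, eps=1e-10):
    """Chi-squared distance between two (normalized) histograms."""
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def chi2_rbf_kernel(H1, H2, gamma=None):
    """RBF kernel with chi-squared distance: k(x, y) = exp(-gamma * chi2(x, y)).

    H1, H2: arrays of shape (n1, d) and (n2, d) holding histograms as rows.
    """
    D = np.array([[chi2_distance(x, y) for y in H2] for x in H1])
    if gamma is None:
        gamma = 1.0 / max(D.mean(), 1e-10)   # common bandwidth heuristic
    return np.exp(-gamma * D)

# Toy usage: random L1-normalized 200-bin histograms.
rng = np.random.default_rng(0)
H1 = rng.random((4, 200)); H1 /= H1.sum(axis=1, keepdims=True)
H2 = rng.random((6, 200)); H2 /= H2.sum(axis=1, keepdims=True)
print(chi2_rbf_kernel(H1, H2).shape)   # (4, 6)
```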
Experiments • Graz-01 results [figure]: (a) PFM1 with H = {i,j}, (b) PFM2 with H = {i} or H = {j}, (c) PFM3 with H = ∅
Experiments • Table 1. Comparison results between different approaches on Graz-01 (%) • [1] Lazebnik et al., "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR'06. • [2] Ling and Soatto, "Proximity distribution kernels for geometric context in category recognition," in ICCV'07.
Experiments • Graz-02 results [figure]: (a) PFM1 with H = {i,j}, (b) PFM2 with H = {i} or H = {j}, (c) PFM3 with H = ∅
Experiments • Table 2. Comparison results between different approaches on Graz-02 (%) • Opelt et al., "Generic object recognition with boosting," PAMI, 2006.
Conclusion • Probabilistic feature matching (PFM) scheme • Generalizes a rich family of feature matching approaches, including K_sum, K_max, and K_OA • The parameter C makes it easy to control the sparseness of the matching probability distributions and the corresponding image similarities