Data-dependent Hashing for Similarity Search Alex Andoni (Columbia University)
Nearest Neighbor Search (NNS) • Preprocess: a set P of n points in a d-dimensional space • Query: given a query point q, report a point p* in P with the smallest distance to q
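For concreteness, here is a minimal brute-force baseline (a Python/NumPy sketch; the dataset and query below are synthetic placeholders) that the data structures in this talk aim to beat:

```python
import numpy as np

def linear_scan_nn(P, q):
    """Exact NNS by brute force: O(n*d) work per query."""
    dists = np.linalg.norm(P - q, axis=1)   # Euclidean distance to every point
    i = int(np.argmin(dists))
    return i, dists[i]

# toy usage: n = 1000 points in d = 128 dimensions
rng = np.random.default_rng(0)
P = rng.standard_normal((1000, 128))
q = rng.standard_normal(128)
print(linear_scan_nn(P, q))
```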
Motivation • Generic setup: • Points model objects (e.g. images), for instance as bit strings such as 000000, 011100, 010100, … • Distance models a (dis)similarity measure • Application areas: • machine learning: k-NN rule • speech/image/video/music recognition, vector quantization, bioinformatics, etc. • Distances: Hamming, Euclidean, edit distance, earth-mover distance, etc. • Core primitive for: closest pair, clustering, etc.
Curse of Dimensionality • Hard when the dimension d is large (say d = ω(log n)) • e.g., answering n exact NNS queries over n points substantially faster than quadratic total time would refute the Strong ETH [Wil'04]
Approximate NNS • c-approximate r-near neighbor: given a query point q, report a point p in P with dist(p, q) ≤ cr, as long as there is some point within distance r of q • Practice: use c-approximate NNS even for exact NNS • Filtering: gives a set of candidates (hopefully small)
Plan • Space decompositions • Structure in data • PCA-tree • No structure in data • Data-dependent hashing • Towards practice
It's all about space decompositions • Approximate space decompositions (for low-dimensional): [Arya-Mount'93], [Clarkson'94], [Arya-Mount-Netanyahu-Silverman-Wu'98], [Kleinberg'97], [Har-Peled'02], [Arya-Fonseca-Mount'11], … • Random decompositions == hashing (for high-dimensional): [Kushilevitz-Ostrovsky-Rabani'98], [Indyk-Motwani'98], [Indyk'01], [Gionis-Indyk-Motwani'99], [Charikar'02], [Datar-Immorlica-Indyk-Mirrokni'04], [Chakrabarti-Regev'04], [Panigrahy'06], [Ailon-Chazelle'06], [A.-Indyk'06], [Terasawa-Tanaka'07], [Kapralov'15], [Pagh'16], [Becker-Ducas-Gama-Laarhoven'16], [Christiani'17], …
Low-dimensional • kd-trees, approximate Voronoi diagrams, … • Usual factor: (1+ε)-approximation, with space/time growing like (1/ε)^O(d)
High-dimensional • Locality-Sensitive Hashing (LSH) • Space partitions: fully random • Johnson-Lindenstrauss Lemma: project to a random subspace of dimension about log(n)/ε² • LSH runtime: n^ρ for approximation c, with exponent ρ < 1
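A minimal sketch of the JL-style random projection mentioned above, assuming a Gaussian projection matrix and an illustrative target dimension k (not parameters from the talk):

```python
import numpy as np

def jl_project(P, k, rng):
    """Johnson-Lindenstrauss-style projection: multiply by a random Gaussian
    matrix scaled by 1/sqrt(k); pairwise distances are preserved up to
    (1 +/- eps) w.h.p. when k is on the order of log(n)/eps^2."""
    d = P.shape[1]
    G = rng.standard_normal((d, k)) / np.sqrt(k)
    return P @ G

rng = np.random.default_rng(1)
P = rng.standard_normal((1000, 512))
P_low = jl_project(P, k=64, rng=rng)   # 64 is an illustrative target dimension
```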
Locality-Sensitive Hashing [Indyk-Motwani'98] • Random hash function h on the space, satisfying: • for a close pair (dist(p, q) ≤ r): Pr[h(p) = h(q)] = P1 is "high" • for a far pair (dist(p, q) > cr): Pr[h(p) = h(q)] = P2 is "small" (but not-so-small) • Use several hash tables: n^ρ of them, where ρ = log(1/P1) / log(1/P2) < 1
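A possible toy instantiation of this definition in Hamming space, in the spirit of bit-sampling LSH from [Indyk-Motwani'98] (the parameters d and k below are illustrative):

```python
import numpy as np

def sample_bit_lsh(d, k, rng):
    """One Hamming-space hash g = (h_1, ..., h_k): each h_i reads a random
    coordinate.  For points at Hamming distance r, Pr[h_i(p) = h_i(q)] = 1 - r/d,
    so close pairs collide more often than far pairs."""
    coords = rng.integers(0, d, size=k)
    return lambda p: tuple(p[coords])

rng = np.random.default_rng(2)
d = 64
g = sample_bit_lsh(d, k=8, rng=rng)
p = rng.integers(0, 2, size=d)
q = p.copy(); q[:3] ^= 1           # a "close" point at Hamming distance 3
print(g(p) == g(q))                # collides with probability (1 - 3/64)^8
```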
LSH Algorithms • Hamming space: exponent ρ = 1/c [IM'98], which is optimal for LSH [MNP'07, OWZ'11] • Euclidean space: exponent ρ = 1/c² [A.-Indyk'06], also optimal for LSH [OWZ'11]
vs Practice • My dataset is not worst-case! • Data-dependent partitions: • optimize the partition to your dataset • PCA-tree [S'91, M'01, VKD'09] • randomized kd-trees [SAH08, ML09] • spectral/PCA/semantic/WTA hashing [WTF08, CKW09, SH09, YSRL11] • …
Practice vs Theory • Data-dependent partitions often outperform (vanilla) random partitions • But no guarantees (correctness or performance) • Hard to measure progress, understand trade-offs • JL is generally optimal [Alon'03, JW'13] • Even for some NNS regimes! [AIP'06] • Why do data-dependent partitions outperform random ones?
Plan • Space decompositions • Structure in data • PCA-tree • No structure in data • Data-dependent hashing • Towards practice
How to study this phenomenon? • Why do data-dependent partitions outperform random ones? • One answer: because of further special structure in the dataset • "Intrinsic dimension is small" • [Clarkson'99, Karger-Ruhl'02, Krauthgamer-Lee'04, Beygelzimer-Kakade-Langford'06, Indyk-Naor'07, Dasgupta-Sinha'13, …] • But the dataset is really high-dimensional…
Towards higher-dimensional datasets [Abdullah-A.-Krauthgamer-Kannan'14] • Model: "low-dimensional signal + high-dimensional noise" • Signal: points lie in a subspace U of dimension k much smaller than d • Data: each point is then perturbed by full-dimensional Gaussian noise
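A schematic generator for this model, as I read it (the parameters n, d, k, sigma and the choice of a random signal subspace are illustrative assumptions, not the exact setup of [Abdullah-A.-Krauthgamer-Kannan'14]):

```python
import numpy as np

def signal_plus_noise(n, d, k, sigma, rng):
    """Schematic 'low-dimensional signal + high-dimensional noise' data:
    points live in a random k-dimensional subspace U of R^d, then each point
    is perturbed by full-dimensional Gaussian noise of scale sigma."""
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal basis of the signal subspace
    signal = rng.standard_normal((n, k)) @ U.T         # points inside U
    noise = sigma * rng.standard_normal((n, d))        # full-dimensional perturbation
    return signal + noise

rng = np.random.default_rng(3)
X = signal_plus_noise(n=10000, d=1000, k=20, sigma=0.1, rng=rng)
```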
Model properties • Data P and query q s.t.: • the distance to the "nearest neighbor" is small • the distance to everybody else is larger (by the approximation factor) • Noise: up to poly factors, roughly as large as possible while the nearest neighbor stays the same • Noise is large: • the displacement of each point is large • the top k dimensions of the noise capture only sub-constant mass • JL would not work: after adding the noise, the gap is very close to 1
NNS in this model • Best we can hope for: the dataset contains a "worst-case" k-dimensional instance • so we aim for a reduction from dimension d to dimension k • Goal: NNS performance as if we were in k dimensions? • Spoiler: yes, we can do it!
Tool: PCA • Principal Component Analysis • like Singular Value Decomposition • Top PCA direction: direction maximizing variance of the data
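A short sketch of computing the top PCA direction via the SVD (standard NumPy; nothing here is specific to the algorithms of the talk):

```python
import numpy as np

def top_pca_direction(X):
    """Top principal component: the unit vector maximizing the variance of
    the (centered) data, read off from the SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]              # right singular vector of the largest singular value

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 50))
v = top_pca_direction(X)      # unit vector in R^50
```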
Naïve: just use PCA? • Use PCA to find the "signal subspace"? • find the top-k PCA subspace • project everything onto it and solve NNS there • Does NOT work: • sparse signal directions can be overpowered by noise directions • PCA is good only on "average", not "worst-case"
Our algorithms: intuition • Extract "well-captured points": • points whose signal lies mostly inside the top PCA space • should work for a large fraction of the points • Iterate on the rest • Algorithm 1: iterative PCA • Algorithm 2: (modified) PCA-tree • Analysis: via matrix perturbation bounds [Davis-Kahan'70, Wedin'72]
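A schematic of the "iterative PCA" intuition, assuming an illustrative capture threshold and round limit (a sketch of the idea, not the analyzed algorithm of [AAKK'14]):

```python
import numpy as np

def iterative_pca_partition(X, k, capture=0.9, max_rounds=10):
    """Repeatedly take the top-k PCA subspace, peel off the points that are
    'well-captured' by it (most of their squared norm lies inside the
    subspace), and iterate on the remainder."""
    groups, rest = [], X
    for _ in range(max_rounds):
        if len(rest) == 0:
            break
        Xc = rest - rest.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:k].T                               # top-k PCA subspace
        proj = Xc @ V                              # coordinates inside the subspace
        frac = (proj ** 2).sum(1) / np.maximum((Xc ** 2).sum(1), 1e-12)
        captured = frac >= capture
        groups.append(rest[captured])              # solve low-dimensional NNS on these
        rest = rest[~captured]                     # iterate on the rest
    return groups, rest

rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 200))
groups, leftover = iterative_pca_partition(X, k=20)
```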
Plan • Space decompositions • Structure in data • PCA-tree • No structure in data • Data-dependent hashing • Towards practice • What if the data has no structure? (worst-case)
Back to worst-case NNS • Random hashing is not optimal! • Hamming space: LSH gives ρ = 1/c, data-dependent hashing gives ρ = 1/(2c−1) • Euclidean space: LSH gives ρ = 1/c², data-dependent hashing gives ρ = 1/(2c²−1)
Beyond Locality Sensitive Hashing • Data-dependent hashing (even when the data has no structure): • a random hash function, chosen after seeing the given dataset • efficiently computable
Construction of the hash function [A.-Indyk-Nguyen-Razenshteyn'14] • Two components: • nice geometric structure: admits a better, data-dependent LSH • reduction to such structure • like a (weak) "regularity lemma" for a set of points
Nice geometric structure • Pseudo-random configuration (as when the dataset is random on the sphere): • a far pair is (near) orthogonal, at distance cr • a close pair is at distance r • Lemma: exponent ρ = 1/(2c²−1), via Cap Carving
Reduction to nice configuration [A.-Razenshteyn'15] • Idea: iteratively make the pointset more isotropic by narrowing a ball around it • Algorithm (see the sketch below): • find dense clusters: smaller radius, containing a large fraction of the points • recurse on the dense clusters • apply Cap Hashing on the rest and recurse in each "cap" • (e.g., dense clusters might reappear inside a cap) • Why is this OK? • no dense clusters remain, so the remainder is near-orthogonal • even better: the effective radius keeps shrinking
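A schematic of this reduction in Python; the densest_ball and cap_hash helpers, as well as all thresholds, are illustrative stand-ins rather than the exact procedure of [A.-Razenshteyn'15]:

```python
import numpy as np

def densest_ball(P, r):
    """Brute-force helper (O(n^2)): the point whose ball of radius r contains
    the most points, plus the indices of those points."""
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    counts = (D <= r).sum(axis=1)
    c = int(np.argmax(counts))
    return c, np.where(D[c] <= r)[0]

def cap_hash(P, num_caps, rng):
    """Toy cap-carving stand-in: sample random directions and send each point
    to the cap (direction) with which it has the largest inner product."""
    G = rng.standard_normal((num_caps, P.shape[1]))
    return np.argmax(P @ G.T, axis=1)

def build_tree(P, radius, rng, frac=0.1, shrink=0.8, depth=0, max_depth=4):
    """Schematic reduction: peel off dense clusters (>= frac*|P| points in a
    ball of radius shrink*radius), recurse on them with the smaller radius,
    then cap-hash the near-orthogonal remainder."""
    node = {"clusters": [], "cap_labels": None}
    if depth >= max_depth or len(P) == 0:
        return node
    remaining = P
    while len(remaining) > 0:
        c, members = densest_ball(remaining, shrink * radius)
        if len(members) < frac * len(P):
            break
        node["clusters"].append(build_tree(remaining[members], shrink * radius,
                                           rng, frac, shrink, depth + 1, max_depth))
        remaining = np.delete(remaining, members, axis=0)
    if len(remaining) > 0:
        node["cap_labels"] = cap_hash(remaining, num_caps=16, rng=rng)
    return node

rng = np.random.default_rng(9)
P = rng.standard_normal((300, 32))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # put points on the sphere
tree = build_tree(P, radius=1.0, rng=rng)
```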
Hash function • Described by a tree (queried like a hash table)
Dense clusters • Current dataset: a ball of some radius • A dense cluster: • contains a large fraction of the points • has noticeably smaller radius • After we remove all dense clusters: • for any point on the surface, only few points remain within the "close" distance • the other points are essentially orthogonal! • When applying Cap Carving afterwards: • the empirical number of far points colliding with the query is small • as long as it stays small, this "impurity" doesn't matter!
Tree recap • During a query: • recurse into all dense clusters • follow just one bucket in each CapCarving node • so we will look in more than one leaf! • How much branching? • Claim: bounded • each time we branch there are only few clusters • a cluster reduces the radius by a constant factor, so the cluster-depth is bounded • Progress in 2 ways: • clusters reduce the radius • CapCarving nodes reduce the (empirical) number of far points • A single tree succeeds with non-trivial probability, so several trees are used
Plan • Space decompositions • Structure in data • PCA-tree • No structure in data • Data-dependent hashing • Towards practice
Towards (provable) practicality 1 • Two components: • nice geometric structure: the spherical case → practical variant [A.-Indyk-Laarhoven-Razenshteyn-Schmidt'15] • reduction to such structure
Classic LSH: Hyperplane [Charikar'02] • Charikar's construction: • sample a direction r uniformly (a random Gaussian vector) • hash p to the sign of ⟨r, p⟩ • Performance: Pr[h(p) = h(q)] = 1 − angle(p, q)/π • Query time: n^ρ for the resulting exponent ρ
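A minimal sketch of hyperplane LSH (the collision-probability fact in the comment is the standard one for unit vectors; the parameters here are illustrative):

```python
import numpy as np

def hyperplane_lsh(k, d, rng):
    """Charikar's hyperplane LSH: sample k random Gaussian directions; the
    hash of p is the k sign bits sign(<r_i, p>).  For unit vectors,
    Pr[sign(<r, p>) = sign(<r, q>)] = 1 - angle(p, q) / pi."""
    R = rng.standard_normal((k, d))
    return lambda p: tuple((R @ p > 0).tolist())

rng = np.random.default_rng(6)
h = hyperplane_lsh(k=16, d=128, rng=rng)
p = rng.standard_normal(128); p /= np.linalg.norm(p)
q = p + 0.1 * rng.standard_normal(128); q /= np.linalg.norm(q)
print(h(p) == h(q))   # close points share all sign bits with decent probability
```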
Our algorithm [A.-Indyk-Laarhoven-Razenshteyn-Schmidt'15] • Based on the cross-polytope • introduced in [Terasawa-Tanaka'07] • Construction of h: • pick a random rotation R • h(p) = index of the maximal coordinate of Rp, together with its sign • Theorem: nearly the same exponent as the (optimal) CapCarving LSH for caps!
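A sketch of the cross-polytope hash, using a dense random rotation for clarity (the fast Hadamard-based rotation discussed on the next slide would replace the QR step in practice; parameters are illustrative):

```python
import numpy as np

def crosspolytope_lsh(d, rng):
    """Cross-polytope LSH (schematic): apply a random rotation, then hash to
    the index and sign of the largest-magnitude coordinate, i.e. to the
    closest of the 2d vertices {+/- e_i} of the cross-polytope."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # (approximately) random rotation
    def h(p):
        x = Q @ p
        i = int(np.argmax(np.abs(x)))
        return i, int(np.sign(x[i]))
    return h

rng = np.random.default_rng(7)
h = crosspolytope_lsh(d=128, rng=rng)
p = rng.standard_normal(128); p /= np.linalg.norm(p)
q = p + 0.1 * rng.standard_normal(128); q /= np.linalg.norm(q)
print(h(p), h(q))   # close points land on the same vertex with good probability
```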
Comparison: Hyperplane vs CrossPolytope • Fix the probability of collision for far points • Equivalently: partition the space into T pieces • Hyperplane LSH: • with k hyperplanes • implies T = 2^k parts • exponent determined by k • CrossPolytope LSH: • with T = 2d parts per rotation • better exponent for the same number of parts
Complexity of our scheme • Time: dominated by the rotation • rotations are expensive! O(d²) time • Idea: pseudo-random rotations • compute H·D₃·H·D₂·H·D₁·p, where H is a "fast rotation" based on the Hadamard transform and the Dᵢ are random ±1 diagonal matrices • O(d log d) time! • used in [Ailon-Chazelle'06, Dasgupta-Kumar-Sarlos'11, Le-Sarlos-Smola'13, …] • Less space: can adapt Multi-probe LSH [Lv-Josephson-Wang-Charikar-Li'07]
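A sketch of such a pseudo-random rotation, assuming d is a power of two and using three (HD) rounds as an illustrative choice:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, O(d log d); len(x) must be a power of 2.
    Normalized by 1/sqrt(d) so that the transform is orthogonal."""
    x = x.copy()
    d, h = len(x), 1
    while h < d:
        x = x.reshape(-1, 2 * h)
        a, b = x[:, :h].copy(), x[:, h:].copy()
        x[:, :h], x[:, h:] = a + b, a - b
        x = x.reshape(-1)
        h *= 2
    return x / np.sqrt(d)

def pseudo_random_rotation(d, rng, rounds=3):
    """Pseudo-random rotation: a few rounds of (random +/-1 diagonal, then
    Hadamard), each O(d log d) instead of the O(d^2) dense rotation."""
    signs = [rng.choice([-1.0, 1.0], size=d) for _ in range(rounds)]
    def rotate(p):
        x = np.asarray(p, dtype=float)
        for s in signs:
            x = fwht(s * x)
        return x
    return rotate

rng = np.random.default_rng(8)
rot = pseudo_random_rotation(d=128, rng=rng)
p = rng.standard_normal(128)
print(np.linalg.norm(rot(p)), np.linalg.norm(p))   # norms match: the map is orthogonal
```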
Code & Experiments • FALCONN package: http://falconn-lib.org • ANN_SIFT1M data (n = 10⁶ points, d = 128): • linear scan: 38 ms • vanilla: hyperplane 3.7 ms, cross-polytope 3.1 ms • after some clustering and re-centering (data-dependent): hyperplane 2.75 ms, cross-polytope 1.7 ms (1.6x speedup) • Pubmed data: • use feature hashing • linear scan: 3.6 s • hyperplane: 857 ms • cross-polytope: 213 ms (4x speedup)
Towards (provable) practicality 2 • Two components: • nice geometric structure: spherical case • reduction to such structure → a partial step [A.-Razenshteyn-Shekel-Nosatzki'17]
LSH Forest++ [A.-Razenshteyn-Shekel-Nosatzki'17] • Setting: Hamming space • random trees of bounded height; in each node, split on a uniformly random coordinate (see the sketch below) • LSH Forest [Bawa-Condie-Ganesan'05]: in a tree, split until the bucket size is small • ++: additionally store the points closest to the mean of the node's pointset P; the query is compared against these stored points • Theorem: for large enough parameters, matches the performance of [IM'98] on random data
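A sketch of one random tree in this style (the bucket size and the omission of the "++" stored points are illustrative simplifications):

```python
import numpy as np

def build_lsh_tree(P, idx, rng, bucket_size=8):
    """One random tree as in LSH Forest over binary points: at each node,
    split on a uniformly random coordinate until buckets are small.
    (The '++' modification would additionally store, at each node, a few
    points closest to the mean of the node's pointset; omitted here.)"""
    if len(idx) <= bucket_size:
        return {"leaf": idx}
    coord = int(rng.integers(P.shape[1]))
    mask = P[idx, coord] == 0
    left, right = idx[mask], idx[~mask]
    if len(left) == 0 or len(right) == 0:          # degenerate split: stop
        return {"leaf": idx}
    return {"coord": coord,
            "children": (build_lsh_tree(P, left, rng, bucket_size),
                         build_lsh_tree(P, right, rng, bucket_size))}

rng = np.random.default_rng(10)
P = rng.integers(0, 2, size=(1000, 64))            # 1000 random 64-bit points
tree = build_lsh_tree(P, np.arange(1000), rng)
```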
Follow-ups • Dynamic data structure? • dynamization techniques [Overmars-van Leeuwen'81] • Better bounds? • in some dimension regimes, one can do better! [Becker-Ducas-Gama-Laarhoven'16] • in high dimension: our exponent is optimal even for data-dependent hashing! [A.-Razenshteyn'16] • in the right formalization (to rule out the Voronoi diagram) • Time-space trade-offs: query time n^ρ_q with space n^(1+ρ_s) • [A.-Laarhoven-Razenshteyn-Waingarten'17] • simpler algorithm/analysis • optimal* bound! [ALRW'17, Christiani'17]
Finale: Data-dependent hashing • A reduction from "worst-case" to "random case" • Algorithms with worst-case guarantees, but great performance if the dataset is "nice"? • Instance-optimality (beyond worst-case)? • Other applications? E.g., closest pair can be solved much faster in the random case [Valiant'12, Karppa-Kaski-Kohonen'16] • Tool to understand NNS for harder metrics? • Classic hard distance: ℓ∞ • no non-trivial sketch/random hash [BJKS04, AN'12] • but [Indyk'98] gets efficient NNS with O(log log d) approximation • tight in the relevant models [ACP'08, PTW'10, KP'12] • can be seen as data-dependent hashing! • NNS for any norm (e.g., matrix norms, EMD) or metric (edit distance)?