Tighter local versus global properties of metric spaces
Moses Charikar
Joint work with Konstantin Makarychev and Yury Makarychev
Princeton University
Local versus Global
• Local properties: properties of subsets
• Global properties: properties of the entire set
• What do local properties tell us about global properties?
• Property of interest: embeddability in normed spaces
Motivations
• Natural mathematical question
• Questions of similar flavor
  • embedding into ℓ2^n
  • characterization of tree metrics
  • Helly's theorem
  • Ramsey theory
• Graph minors work
  • minor exclusion is a local property; what does it imply for the entire graph?
• Property testing
  • infer properties of the entire set from a sample
• Lift-and-project methods in optimization
  • can guarantee local properties
  • need a guarantee on the global property
Local versus global distortion
• Metric on n points
• Property: embeddability into ℓ1
• Dloc: distortion for embedding any subset of size k
• Dglob: distortion for embedding the entire metric
• What is the relationship between Dloc and Dglob?
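To make the definitions concrete, here is a minimal Python sketch (not from the talk; all names are illustrative) that computes the distortion of a candidate ℓ1 embedding of a finite metric, and the local quantity Dloc by taking the worst subset of size k:

```python
# Illustrative sketch, not from the paper. `metric` is a dict-of-dicts of
# pairwise distances, `embedding` maps each point to a coordinate vector.
from itertools import combinations

def l1_dist(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def distortion(metric, embedding, points):
    """max expansion * max contraction over all pairs in `points`.
    Assumes the embedding does not collapse any pair."""
    expansion = contraction = 1.0
    for u, v in combinations(points, 2):
        ratio = l1_dist(embedding[u], embedding[v]) / metric[u][v]
        expansion = max(expansion, ratio)
        contraction = max(contraction, 1.0 / ratio)
    return expansion * contraction

def local_distortion(metric, embedding_for_subset, points, k):
    """Dloc: worst distortion over all subsets of size k (each subset may use
    its own embedding, supplied by the callable `embedding_for_subset`)."""
    return max(distortion(metric, embedding_for_subset(S), S)
               for S in combinations(points, k))
```

Dglob is the same quantity computed for the full point set with the best global embedding; the talk asks how much larger Dglob can be than Dloc.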
Results
• Upper bound: Dglob ≤ O(Dloc · log(n/k))
• Lower bound, near-isometric local embeddings: metrics where every subset of size k embeds into ℓ1 with distortion 1+δ, yet Dglob ≥ Ω(log(n/k) / log(1/δ))
• Lower bound, isometric local embeddings: metrics where every subset of size k embeds isometrically into ℓ1, yet Dglob ≥ Ω(log n / (log k + log log n))
Lower bound: Roadmap
• Constant-degree expander
• High global distortion
• Subgraphs of the expander are sparse
• Sparse graphs embed well
• Different metric on the expander
Sparse graphs
• G is (1+ε)-sparse if every subgraph on k vertices has at most (1+ε)k edges
• G is Δ-path decomposable if
  • every 2-connected subgraph H contains a path of length Δ
  • whose vertices have degree 2 in H
• [ABLT] a (1+O(1/Δ))-sparse graph with girth Ω(Δ) is Δ-path decomposable
Embedding sparse graphs
• G: Δ-path decomposable, L = Δ/9, μ ≥ 1/L; then ρ(u,v) = 1-(1-μ)^{d(u,v)} embeds into ℓ1 with distortion 1+O(e^{-μL})
• Distribution on multicuts:
  • d(u,v) ≤ L: Pr(u,v separated) = 1-(1-μ)^{d(u,v)}
  • d(u,v) > L: Pr(u,v separated) ≥ 1-(1-μ)^{L}
• Distortion ≤ 1/(1-(1-μ)^{L}) ≤ 1+O(e^{-μL})
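Spelled out, the distortion bound follows from the two multicut properties: pairs with d(u,v) ≤ L are embedded exactly, and only far pairs contribute. A short worked version of the calculation, assuming μL is bounded below by a constant:

```latex
% For a pair with d(u,v) > L, both the embedded distance Pr(u,v separated)
% and rho(u,v) lie in the interval [1-(1-\mu)^L, 1], so the pair is distorted
% by a factor of at most
\[
  \frac{1}{1-(1-\mu)^{L}} \;\le\; 1 + 2(1-\mu)^{L} \;\le\; 1 + 2e^{-\mu L},
\]
% using 1/(1-x) \le 1+2x for x \le 1/2, and (1-\mu)^{L} \le e^{-\mu L}.
```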
Distribution on multicuts
• d(u,v) ≤ L: Pr(u,v separated) = 1-(1-μ)^{d(u,v)}
• d(u,v) > L: Pr(u,v separated) ≥ 1-(1-μ)^{L}
• Can be done for a path of length 3L (endpoints separated with probability 1)
  • cut edges independently with probability μ
  • decisions for P1 and P3 not independent
• By induction:
  • G has a cut vertex
  • G has a path of length Δ = 9L
(Figure: a path of length 3L split into segments P1, P2, P3 of length L each)
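As a sanity check on the base case, the following Python sketch (illustrative only, not from the talk) cuts each edge of a path independently with probability μ and compares the empirical separation probability with 1-(1-μ)^{d(u,v)}:

```python
# Monte Carlo check: independent edge cuts on a path give the required
# separation probabilities exactly for pairs along the path.
import random

def sample_multicut(path_len, mu):
    """Return the set of cut edge indices on a path with `path_len` edges;
    edge i joins vertices i and i+1."""
    return {i for i in range(path_len) if random.random() < mu}

def separated(u, v, cuts):
    """Vertices u, v are separated iff some cut edge lies between them."""
    return any(min(u, v) <= e < max(u, v) for e in cuts)

mu, path_len, trials = 0.1, 30, 200_000
u, v = 5, 17                      # d(u, v) = 12 edges
hits = sum(separated(u, v, sample_multicut(path_len, mu)) for _ in range(trials))
print(hits / trials, 1 - (1 - mu) ** abs(u - v))   # the two numbers should agree
```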
Distribution on multicuts
• G has a cut vertex c
• Sample multicuts independently in each piece Si
• Pr[u,v not separated] = Pr[u,c not separated] · Pr[v,c not separated] = (1-μ)^{d(u,c)} · (1-μ)^{d(v,c)} = (1-μ)^{d(u,v)}
(Figure: cut vertex c splitting G into pieces S1, S2, S3, with u and v in different pieces)
Distribution on multicuts
• G has a path of length Δ = 9L
• Divide the path into 3 parts P1, P2, P3
• Sample multicuts independently in H, P1, P2, P3
(Figure: the path, split into P1, P2, P3, hanging off the rest of the graph H)
Expanders have sparse subgraphs
• [ABLT] 3-regular expander, girth Ω(log n): every subset of size k is (1+O(1/log(n/k)))-sparse
• hence Ω(log(n/k))-path decomposable
Local versus global distortion
• Every embedding of (X,ρ) into ℓ1 requires distortion Ω(log(n/k) / log(1/δ))
• Every subset of X of size k embeds into ℓ1 with distortion 1+δ
• Expander from [ABLT] with a new metric
• ρ(u,v) = 1-(1-μ)^{d(u,v)}
Picking parameters
• 3-regular expander
• Subset X of size k
• H: vertices within distance Δ of X; |H| ≤ k·3^Δ
• Pick Δ = Θ(log(n/k)) so that log(n/(k·3^Δ)) = Ω(Δ)
• H is Δ-path decomposable
• Metric ρ(u,v) = 1-(1-μ)^{d(u,v)}, with μ = c·log(1/δ)/Δ
Bounding local distortion
• Subset X of size k
• H: vertices within distance Δ of X
• For u,v ∈ X:
  • dH(u,v) ≥ dG(u,v)
  • dH(u,v) = dG(u,v) if dG(u,v) ≤ Δ
• H is Δ-path decomposable
• Embedding φ of H into ℓ1:
  • dH(u,v) ≤ L: ||φ(u)-φ(v)||1 = 1-(1-μ)^{dH(u,v)}
  • dH(u,v) > L: ||φ(u)-φ(v)||1 ≥ 1-(1-μ)^{L}
• Gives an embedding of ρ(u,v) = 1-(1-μ)^{dG(u,v)} with
• Distortion ≤ 1/(1-(1-μ)^{L}) ≤ 1+O(e^{-μL}) ≤ 1+δ
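A quick numeric illustration of how the parameters interact (the constant c below is my own choice for the illustration, not taken from the paper): with L = Δ/9 and μ = c·log(1/δ)/Δ, the bound 1/(1-(1-μ)^L) indeed stays below 1+δ.

```python
# Illustrative parameter check; Delta plays the role of Theta(log(n/k)).
import math

delta = 0.01
Delta = 300
L = Delta / 9
c = 18                           # assumed constant, large enough for the check
mu = c * math.log(1 / delta) / Delta

local_distortion_bound = 1 / (1 - (1 - mu) ** L)
print(mu, local_distortion_bound, 1 + delta)   # bound should be <= 1 + delta
```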
Global distortion
• Min distortion for embedding an expander into ℓ1 is Ω(average distance / length of an edge)
• Under ρ, the average distance is Ω(1) while each edge has length μ
• Distortion ≥ Ω(1/μ) = Ω(log(n/k) / log(1/δ))
Isometric local embeddings
• Every subset of size k embeds isometrically into ℓ1
• The entire metric requires distortion Ω(log n / (log k + log log n))
• Modification of the distortion-(1+δ) construction with δ = 1/(k·log n), so that log(1/δ) = log k + log log n
Near-isometric to isometric
• Metric space (X,ρ)
• M: ratio of largest to smallest distance
• Every subset of (X,ρ) of size k embeds into ℓ1 with distortion 1+1/(2kM)
• ε: smallest distance
• Metric ρ'(u,v) = ρ(u,v) + ε
• Every subset of (X,ρ') of size k embeds isometrically into ℓ1
• Original embedding + almost uniform metric
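A minimal illustration of the "almost uniform metric" ingredient, using the standard fact that the uniform metric on k points embeds isometrically into ℓ1 via scaled basis vectors (the helper below is my own, not from the paper):

```python
# Uniform metric on k points (all pairwise distances eps) embeds isometrically
# into l1 by mapping point i to (eps/2) * e_i. This is the extra "uniform" part
# contributed by the shift rho'(u,v) = rho(u,v) + eps.
from itertools import combinations

def uniform_embedding(k, eps):
    return [[eps / 2 if j == i else 0.0 for j in range(k)] for i in range(k)]

k, eps = 5, 0.3
pts = uniform_embedding(k, eps)
l1 = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
assert all(abs(l1(pts[i], pts[j]) - eps) < 1e-12
           for i, j in combinations(range(k), 2))
```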
Upper bound
• If every size-k subset of (X,d) embeds into ℓ1 with distortion D, then (X,d) embeds into ℓ1 with distortion O(D·log(n/k))
• Sum of two embeddings
• Handle large and small distances separately
Upper bound: Overview
• For x ∈ X, let m = n/k
• Rx,m = distance from x to its m-th closest point
• Pick a subset S of size k such that every x ∈ X is within distance 2Rx,m of some point in S
• First embedding: distortion-D embedding of S + random mapping of X to S
• Second embedding: first log(n/k) scales of Bourgain's embedding
Hitting set construction
• Goal: subset S of size k such that every x ∈ X is within distance 2Rx,m of some point in S
• U = {B(x, Rx,m) : x ∈ X}
• Repeat:
  • pick the ball of minimum radius in U
  • delete from U all balls that intersect the chosen ball
• S: centers of chosen balls
• At least n/k balls deleted in each step, so |S| ≤ k
• g: X → S with d(x, g(x)) ≤ 2Rx,m
(Figure: two intersecting balls of radii Rx,m and Ry,m)
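A sketch of the greedy procedure above in Python (my rendering; the data layout, a full distance matrix `dist` and an array `R` of the Rx,m values, is illustrative):

```python
def hitting_set(dist, R):
    """Greedily pick disjoint balls of minimum radius; return their centers S."""
    n = len(dist)
    alive = set(range(n))                      # balls B(x, R[x]) still in U
    centers = []
    while alive:
        x = min(alive, key=lambda v: R[v])     # ball of minimum radius
        centers.append(x)
        # delete every ball that intersects B(x, R[x])
        alive = {y for y in alive if dist[x][y] > R[x] + R[y]}
    return centers

def assign(dist, centers):
    """g: X -> S, mapping x to its nearest chosen center. Some chosen center's
    ball intersected B(x, R[x]) while having no larger radius, so the nearest
    center satisfies d(x, g(x)) <= 2 * R[x]."""
    return {x: min(centers, key=lambda c: dist[x][c]) for x in range(len(dist))}
```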
Randomized clustering
• Random mapping f: X → X
• d(x, f(x)) ≤ Rx,m (always)
• Pr[f(x) ≠ f(y)] ≤ O(log m) × d(x,y) / (Rx,m + Ry,m)
• [CKR, FRT] construction:
  • pick R ∈ (0,1) at random
  • pick a random order of X
  • f(x) = minimum point, in the random order, inside B(x, R·Rx,m)
• Random mapping f: X → X
• g: X → S, |S| ≤ k, d(x, g(x)) ≤ 2Rx,m
• h(x) = g(f(x))
• d(x, h(x)) ≤ 5Rx,m (always)
• Pr[h(x) ≠ h(y)] ≤ O(log m) × d(x,y) / (Rx,m + Ry,m)
• E[d(h(x), h(y))] ≤ O(log m) · d(x,y)
• E[d(h(x), h(y))] ≥ d(x,y) − 5(Rx,m + Ry,m)
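A sketch of the clustering step and the composed map h = g∘f, in the spirit of the [CKR, FRT] rule on the previous slide (the tie-breaking details and data layout are my rendering, not a quote of the paper):

```python
# `dist` is an n x n distance matrix, R[x] the distance to the m-th closest point.
import random

def ckr_clustering(dist, R):
    """f: X -> X with d(x, f(x)) <= R[x] always (x itself is always in its ball)."""
    n = len(dist)
    r = random.random()                        # random scale R in (0,1)
    order = list(range(n))
    random.shuffle(order)                      # random order on X
    rank = {v: i for i, v in enumerate(order)}
    # f(x) = first point in the random order inside B(x, r * R[x])
    return {x: min((y for y in range(n) if dist[x][y] <= r * R[x]),
                   key=lambda y: rank[y])
            for x in range(n)}

def compose(f, g):
    """h = g o f; per the slide, d(x, h(x)) <= 5 * R[x] and h takes at most
    |S| = k distinct values, so h snaps X onto the k-point subset S."""
    return {x: g[f[x]] for x in f}
```

Combined with the hitting-set sketch above, h = compose(ckr_clustering(dist, R), assign(dist, centers)) is the random map of X onto S used by the first embedding.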
Embedding large scales
• Every size-k subset of (X,d) embeds into ℓp with distortion D
• Embedding Φ: X → ℓp with
  • ||Φ(x) − Φ(y)||p ≤ D·O(log m)·d(x,y)
  • ||Φ(x) − Φ(y)||p ≥ d(x,y) − (7D+2)(Rx,m + Ry,m)
Bourgain's embedding for small scales
• Metric space (X,d), any m
• Embedding ψ: X → ℓp with
  • ||ψ(x) − ψ(y)||p ≤ O(log m)·d(x,y)
  • ||ψ(x) − ψ(y)||p ≥ Ω(min(d(x,y), Rx,m + Ry,m))
• Recall the large-scale embedding Φ:
  • ||Φ(x) − Φ(y)||p ≤ D·O(log m)·d(x,y)
  • ||Φ(x) − Φ(y)||p ≥ d(x,y) − (7D+2)(Rx,m + Ry,m)
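A sketch of Bourgain-style coordinates restricted to the first roughly log m scales, using the standard random-subset sampling; the normalization and the number of repetitions per scale are generic choices for the illustration, not taken from the slide:

```python
# For each scale t = 1..log(m), sample random subsets A with density 2^(-t);
# each subset contributes one coordinate per point: the distance to A.
import math, random

def bourgain_small_scales(dist, m, reps=None):
    n = len(dist)
    T = max(1, int(math.ceil(math.log2(m))))                 # number of scales
    reps = reps or max(1, 4 * int(math.ceil(math.log(n))))   # repetitions per scale
    coords = []
    for t in range(1, T + 1):
        p = 2.0 ** (-t)
        for _ in range(reps):
            A = [y for y in range(n) if random.random() < p]
            coords.append([min((dist[x][y] for y in A), default=0.0)
                           for x in range(n)])
    scale = 1.0 / (T * reps)                                  # crude normalization
    return [[scale * row[x] for row in coords] for x in range(n)]
```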
Conclusion and Questions
• Almost tight connections between local and global distortion of finite metrics
• Every subset of size k isometrically embeddable into ℓ1: lower bound Ω(log n / (log k + log log n)) versus upper bound O(log(n/k))