
Spectrally Thin Trees

Nick Harvey, University of British Columbia. Joint work with Neil Olver (MIT / Vrije Universiteit).


Presentation Transcript


  1. Spectrally Thin Trees. Nick Harvey, University of British Columbia. Joint work with Neil Olver (MIT / Vrije Universiteit).

  2. Approximating Dense Objects by Sparse Objects. Floor joists: wood joists vs. engineered joists.

  3. Approximating Dense Objects by Sparse Objects. Bridges: masonry arch vs. truss arch.

  4. Approximating Dense Objects by Sparse Objects. Bones: human femur vs. robin bone.

  5. Approximating Dense Objects by Sparse Objects. Graphs: dense graph vs. sparse graph. How well can any graph be approximated by a sparse graph?

  6. First way to compare graphs. Do the graphs have nearly the same weight on corresponding cuts?

  7. Second way to compare graphs. Do their Laplacian matrices have nearly the same eigensystem?

  8. First way, more formally. Cut: δ(S) = { edges st : s ∈ S, t ∉ S }. Weight of the cut: u(δ(S)) under edge weights u, w(δ(S)) under edge weights w. α-cut sparsifier: u(δ(S)) ≤ w(δ(S)) ≤ α·u(δ(S)) for all S.
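As a sanity check of the definition, one can brute-force the best α over all cuts of a tiny graph. This helper is illustrative only (exponential in n by design) and is not from the talk:

```python
import itertools

def cut_weight(edges, S):
    """Weight of delta(S): sum of weights of edges with exactly one endpoint in S."""
    return sum(w for u, v, w in edges if (u in S) != (v in S))

def cut_approx_ratio(n, edges_u, edges_w):
    """Smallest alpha with u(delta(S)) <= w(delta(S)) <= alpha * u(delta(S))
    over all nontrivial S, or None if the lower inequality fails somewhere.
    Brute force over all 2^n - 2 cuts, so only for tiny connected graphs."""
    alpha = 1.0
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = set(S)
            cu, cw = cut_weight(edges_u, S), cut_weight(edges_w, S)
            if cw < cu - 1e-9:
                return None
            alpha = max(alpha, cw / cu)
    return alpha

# Doubling every weight is exactly a 2-cut sparsifier.
tri = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
doubled = [(u, v, 2 * w) for u, v, w in tri]
```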

  9. Second way, more formally. For a graph with edge weights u on nodes a, b, c, d, the Laplacian matrix is L_u = D − A: each diagonal entry is the weighted degree of a node, and each off-diagonal entry, e.g. entry (a, c), is the negative of the corresponding edge weight u(ac).
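The construction L_u = D − A is easy to write down directly; this small helper is illustrative and not from the talk:

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Weighted graph Laplacian L = D - A: diagonal entries are weighted
    degrees, off-diagonal entry (u, v) is minus the edge weight."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# Triangle with unit weights: every row sums to zero, and
# x^T L x = sum over edges of w_e * (x_u - x_v)^2 >= 0, so L is PSD.
L = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
```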

  10. Second way, more formally. Def: A ≼ B ⇔ B − A is PSD ⇔ xᵀAx ≤ xᵀBx for all x ∈ ℝⁿ. α-spectral sparsifier: L_u ≼ L_w ≼ α·L_u.
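The ordering L_u ≼ L_w ≼ α·L_u can be checked numerically via the relative spectrum. This is a sketch (not from the talk); projecting onto range(L_u) handles the all-ones kernel of a connected graph's Laplacian:

```python
import numpy as np

def relative_spectrum(Lu, Lw, tol=1e-9):
    """Eigenvalues of Lw relative to Lu on range(Lu).
    Lu <= Lw <= alpha*Lu in the PSD order iff the minimum is >= 1,
    and the best alpha is the maximum."""
    s, V = np.linalg.eigh(Lu)
    keep = s > tol
    W = V[:, keep] / np.sqrt(s[keep])   # Lu^{-1/2} restricted to range(Lu)
    M = W.T @ Lw @ W
    return np.linalg.eigvalsh(M)

# Doubling the Laplacian: every relative eigenvalue is exactly 2.
Lu = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])
ev = relative_spectrum(Lu, 2 * Lu)
```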

  11. Thin trees. Let w be supported on a spanning tree. α-thin tree: w(δ(S)) ≤ α·u(δ(S)) for all S. α-spectrally thin tree: L_w ≼ α·L_u.

  12. Connectivity and Conductance. Connectivity: k_st = min { u(δ(S)) : s ∈ S, t ∉ S }. Global connectivity: K = min { k_e : e ∈ E }. Effective resistance from s to t: the voltage difference when a 1-amp current source is placed between s and t. Effective conductance: c_st = 1 / (effective resistance from s to t). Global conductance: C = min { c_e : e ∈ E }. Fact: c_st ≤ k_st for all s, t. Example: c_st = 1/n but k_st = 1; long paths affect conductance but not connectivity.
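Effective resistance can be computed from the Laplacian pseudoinverse as R_st = (e_s − e_t)ᵀ L⁺ (e_s − e_t); the path example below illustrates the slide's point that long paths hurt conductance but not connectivity (an illustrative sketch, not from the talk):

```python
import numpy as np

def effective_resistance(L, s, t):
    """R_st = (e_s - e_t)^T L^+ (e_s - e_t): the s-t voltage difference
    when one amp is pushed from s to t. Conductance is c_st = 1 / R_st."""
    x = np.zeros(L.shape[0])
    x[s], x[t] = 1.0, -1.0
    return float(x @ np.linalg.pinv(L) @ x)

# Unweighted path on 4 nodes: endpoint connectivity k_st = 1, but the
# three unit edges in series give resistance 3, so conductance c_st = 1/3.
n = 4
L = np.zeros((n, n))
for u in range(n - 1):
    L[u, u] += 1; L[u + 1, u + 1] += 1
    L[u, u + 1] -= 1; L[u + 1, u] -= 1
R = effective_resistance(L, 0, n - 1)
```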

  13. Motivation for thin trees. Goddyn's Conjecture: every graph has an O(1/K)-thin tree (unweighted). Consequences: an O(1)-approximation for asymmetric TSP; Jaeger's conjecture on nowhere-zero 3-flows [solved]; the Goddyn–Seymour conjecture on nowhere-zero (2+ε)-flows. Spectrally thin trees may be a useful step towards thin trees.

  14. Intriguing Phenomenon. A cut-sparsifier result involving connectivities seems to hold if and only if the corresponding spectral-sparsifier result involving conductances holds.

  15. Uniform sampling. Assume the graph is unweighted; recall K = min { k_e : e ∈ E }. Karger (skeletons): define p = O(ε⁻² log(n)/K); sample every edge e with probability p; give every sampled edge weight 1/p. The resulting graph is a (1+ε)-cut sparsifier, and the number of edges shrinks by a factor of O(p), whp. Spectral version [unpublished]: replace K by C = min { c_e : e ∈ E } and "cut" by "spectral".
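Karger's sampling step is easy to sketch. The constant 3 in p = O(ε⁻² log n / K) below is illustrative, not the one from the actual proof:

```python
import math, random

def karger_sample(edges, n, K, eps, c=3.0, seed=0):
    """Keep each edge independently with p = c * log(n) / (eps^2 * K),
    reweighting kept edges by 1/p so every cut keeps its weight in
    expectation. The constant c is a placeholder, not Karger's."""
    rng = random.Random(seed)
    p = min(1.0, c * math.log(n) / (eps ** 2 * K))
    return [(u, v, 1.0 / p) for (u, v) in edges if rng.random() < p], p

# With small K the formula gives p = 1 and the graph is kept verbatim.
edges = [(0, 1), (1, 2), (2, 0)]
kept, p = karger_sample(edges, n=3, K=1, eps=0.5)
```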

  16. Summary table (column: uniform sampling).
  • Cut sparsifier, connectivity weights: Karger.
  • Spectral sparsifier, conductance weights: unpublished.

  17. Non-uniform sampling. Let k_e be the "strong connectivity" of edge e. Benczúr–Karger: define p_e = O(ε⁻² log(n)/k_e); sample every edge e with probability p_e; give every sampled edge weight 1/p_e. The resulting graph is a (1+ε)-cut sparsifier, and the number of sampled edges is O(n log(n) ε⁻²), whp. Fung–Hariharan–Harvey–Panigrahi: replace strong connectivity k_e by (standard) connectivity k_e and log(n) by log²(n). Open question: improve log²(n) to log(n).

  18. Non-uniform sampling. Same scheme as before: sample every edge e with probability p_e = O(ε⁻² log(n)/k_e) and give sampled edges weight 1/p_e. Spielman–Srivastava: replace k_e by the conductance c_e and "cut sparsifier" by "spectral sparsifier".

  19. Summary table (columns: uniform sampling; non-uniform sampling).
  • Cut sparsifier, connectivity weights: Karger; Benczúr–Karger and Fung–Hariharan–Harvey–Panigrahi.
  • Spectral sparsifier, conductance weights: unpublished; Spielman–Srivastava.

  20. Thin trees. Asadpour et al.: pick a special distribution on spanning trees such that every edge e has Pr[e ∈ tree] = Θ(1/K); give every tree edge weight K; the resulting tree is an O(log n / log log n)-cut-thin tree. A maximum-entropy distribution works. Chekuri et al.: pipage rounding also works. Harvey–Olver: replace K by c_e (so Pr[e ∈ tree] = Θ(1/c_e) and tree edges get weight c_e) and "cut-thin" by "spectrally thin".

  21. Summary table (columns: uniform sampling; non-uniform sampling; O(log n / log log n) thin trees).
  • Cut sparsifier, connectivity weights: Karger; Benczúr–Karger and Fung–Hariharan–Harvey–Panigrahi; Asadpour et al. and Chekuri–Vondrák–Zenklusen.
  • Spectral sparsifier, conductance weights: unpublished; Spielman–Srivastava; Harvey–Olver.

  22. Linear-size sparsifiers. Batson–Spielman–Srivastava: can efficiently construct a (1+ε)-spectral sparsifier with O(nε⁻²) edges such that "on average" the weight of each edge e is Θ(ε²c_e). Marcus–Spielman–Srivastava: remove "on average", but not efficient. Open question: replace c_e by k_e and "spectral" by "cut"?

  23. Summary table (columns: uniform sampling; non-uniform sampling; O(log n / log log n) thin trees; linear-size sparsifiers).
  • Cut sparsifier, connectivity weights: Karger; Benczúr–Karger and Fung–Hariharan–Harvey–Panigrahi; Asadpour et al. and Chekuri–Vondrák–Zenklusen; ?.
  • Spectral sparsifier, conductance weights: unpublished; Spielman–Srivastava; Harvey–Olver; Batson–Spielman–Srivastava and Marcus–Spielman–Srivastava.

  24. Optimal thin trees. Suppose we have a (1+ε)-spectral sparsifier (weights w) of a graph (weights u) in which every edge has weight w_e = Θ(ε²c_e). Then any spanning tree T with weights w is (1+ε)-spectrally thin; equivalently, the unweighted tree T is O(1/C)-spectrally thin. The same argument works with c_e replaced by k_e and "spectrally thin" by "cut-thin", giving an O(1/K)-cut-thin tree.

  25. Summary table (columns: uniform sampling; non-uniform sampling; O(log n / log log n) thin trees; linear-size sparsifiers; O(1) thin trees).
  • Cut sparsifier, connectivity weights: Karger; Benczúr–Karger and Fung–Hariharan–Harvey–Panigrahi; Asadpour et al. and Chekuri–Vondrák–Zenklusen; ?; ?.
  • Spectral sparsifier, conductance weights: unpublished; Spielman–Srivastava; Harvey–Olver; Batson–Spielman–Srivastava and Marcus–Spielman–Srivastava; corollary of MSS.

  26. Spectrally Thin Trees. Given a graph G with effective conductances ≥ C, find an unweighted spanning subtree T with L_T ≼ (α/C)·L_G. Easy lower bound: α ≥ 1.5. Easy upper bound: α = O(log n), algorithmic (even deterministic). Main Theorem: α = O(log n / log log n), algorithmic (even deterministic). Theorem [MSS]: α = O(1), existential result only.

  27. Spectrally Thin Trees. Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ≼ O(log n / log log n)·(1/C)·L_G. Proof overview:
  • Show that independent sampling gives spectral thinness, but not a tree. ► Sample every edge e independently with probability x_e = 1/c_e.
  • Show that dependent sampling gives a tree, and spectral thinness still works.

  28. Matrix Concentration. Given any random n×n symmetric matrices Y₁, …, Y_m, is there an analog of the Chernoff bound showing that Σᵢ Yᵢ is probably "close" to E[Σᵢ Yᵢ]? Theorem [Tropp '12]: let Y₁, …, Y_m be independent PSD matrices of size n×n. Let Y = Σᵢ Yᵢ and Z = E[Y]. Suppose Yᵢ ≼ R·Z a.s. Then the eigenvalues of Y concentrate around those of Z, with a tail bound depending on R.
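A quick numerical illustration of the phenomenon (not of the theorem's proof): a sum of many small independent PSD pieces lands near its expectation in spectral norm. The dimensions and sample count here are arbitrary choices for the demo.

```python
import numpy as np

def matrix_concentration_demo(n=5, m=2000, seed=0):
    """Y = sum of m independent rank-one PSD pieces v v^T / m, where
    v is standard Gaussian so E[Y] = I. Returns ||Y - E[Y]|| in
    spectral norm, which should be small for large m."""
    rng = np.random.default_rng(seed)
    Y = np.zeros((n, n))
    for _ in range(m):
        v = rng.standard_normal(n)
        Y += np.outer(v, v) / m
    return np.linalg.norm(Y - np.eye(n), 2)

dev = matrix_concentration_demo()
```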

  29. Independent sampling. Define sampling probabilities x_e = 1/c_e; it is known that Σ_e x_e = n − 1. Claim: independent sampling gives T ⊆ E with E[|T|] = n − 1 and L_T ≼ O(log n / log log n)·(1/C)·L_G whp. Theorem [Tropp '12]: let M₁, …, M_m be n×n PSD matrices, let D(x) be a product distribution on {0,1}^m with marginals x, and let Z = E[Σᵢ ξᵢMᵢ] for ξ ~ D(x). Suppose Mᵢ ≼ Z. Then Σᵢ ξᵢMᵢ ≼ α·Z whp for α = O(log n / log log n). To apply this, define M_e = c_e·L_e, where L_e is the Laplacian of the single edge e. Then Z = L_G, and M_e ≼ Z holds; these are the only properties of conductances used. Setting α = 6 log n / log log n, we get L_T ≼ α·(1/C)·L_G whp. But T is not a tree!
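The fact Σ_e x_e = n − 1 (with x_e = 1/c_e = R_e, the effective resistance, in an unweighted graph) is Foster's theorem, and it is easy to verify numerically. This helper is an illustrative sketch, not from the talk:

```python
import numpy as np

def edge_resistances(n, edges):
    """Effective resistance R_e of each edge of an unweighted connected
    graph. By Foster's theorem these sum to n - 1, which is why keeping
    each edge with probability x_e = R_e = 1/c_e gives E[|T|] = n - 1."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    K = np.linalg.pinv(L)
    res = []
    for u, v in edges:
        x = np.zeros(n); x[u], x[v] = 1.0, -1.0
        res.append(float(x @ K @ x))
    return res

# Triangle: each edge sees 1 ohm in parallel with 2 ohms, so R_e = 2/3.
res = edge_resistances(3, [(0, 1), (1, 2), (2, 0)])
```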

  30. Spectrally Thin Trees. Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ≼ O(log n / log log n)·(1/C)·L_G. Proof overview:
  • Show that independent sampling gives spectral thinness, but not a tree. ► Sample every edge e independently with probability x_e = 1/c_e.
  • Show that dependent sampling gives a tree, and spectral thinness still works. ► Run pipage rounding to get a tree T with Pr[e ∈ T] = x_e = 1/c_e.

  31. Pipage rounding [Ageev–Sviridenko '04, Srinivasan '01, Calinescu et al. '07, Chekuri et al. '09]. Let P be any matroid polytope, e.g., the convex hull of the characteristic vectors of spanning trees. Given a fractional point x:
  • find coordinates a and b such that the line z ↦ x + z(e_a − e_b) stays in the current face;
  • find the two points where the line leaves P;
  • randomly choose one of those points so that the expectation is x;
  • repeat until x = χ_T is integral.
  x is a martingale: the expectation of the final χ_T is the original fractional x.
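The steps above can be sketched on the simplest matroid polytope, {x ∈ [0,1]^m : Σx = k} (a uniform matroid). The spanning-tree polytope needs more care to stay in the current face, so this is an illustrative sketch only, not the talk's algorithm:

```python
import random

def pipage_round(x, seed=0):
    """Pipage rounding on {x in [0,1]^m : sum x = k}: pick two fractional
    coordinates a, b, slide along x + z(e_a - e_b) to the two points where
    a coordinate hits {0,1}, and pick one at random with probabilities
    chosen so that the expectation stays x (a martingale)."""
    rng = random.Random(seed)
    x = list(map(float, x))
    eps = 1e-9
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        a, b = frac[0], frac[1]
        z_up = min(1 - x[a], x[b])   # move x[a] up, x[b] down
        z_dn = min(x[a], 1 - x[b])   # move x[a] down, x[b] up
        # P(up) = z_dn / (z_up + z_dn) makes E[change in x[a]] = 0.
        if rng.random() < z_dn / (z_up + z_dn):
            x[a] += z_up; x[b] -= z_up
        else:
            x[a] -= z_dn; x[b] += z_dn
    return [round(v) for v in x]

# Start at the fractional point (1/2, 1/2, 1/2, 1/2), which has k = 2.
rounded = pipage_round([0.5, 0.5, 0.5, 0.5])
```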

  32. Pipage rounding and concavity. Say f : ℝ^m → ℝ is concave under swaps if z ↦ f(x + z(e_a − e_b)) is concave for all x ∈ P and all a, b ∈ [m]. Let X₀ be the initial point and χ_T the final point visited by pipage rounding. Claim: if f is concave under swaps, then E[f(χ_T)] ≤ f(X₀) [Jensen]. (E.g., f the multilinear extension of a supermodular function.) Let E ⊆ {0,1}^m be an event, and let g : [0,1]^m → ℝ be a pessimistic estimator for E, i.e., Pr_{D(x)}[E] ≤ g(x). Claim: if g is concave under swaps, then Pr[χ_T ∈ E] ≤ g(X₀).

  33. Chernoff Bound. Fix any w, x ∈ [0,1]^m and let μ = wᵀx. Define g_{t,θ}(x) = e^{−tθ}·Πᵢ(1 − xᵢ + xᵢe^{t wᵢ}); then Pr[wᵀξ ≥ θ] ≤ g_{t,θ}(x) for ξ ~ D(x). Claim: g_{t,θ} is concave under swaps [elementary calculus]. Let X₀ be the initial point and χ_T the final point visited by pipage rounding, and let μ = wᵀX₀. Then the same Chernoff tail bound holds for χ_T: the bound achieved by independent sampling is also achieved by pipage rounding.
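The scalar estimator g_{t,θ}(x) = e^{−tθ} Πᵢ(1 − xᵢ + xᵢe^{t wᵢ}) is just the standard moment-generating-function bound for the upper tail; a brute-force check on a tiny instance (illustrative, not from the talk):

```python
import itertools, math

def g(t, theta, w, x):
    """Pessimistic estimator for Pr[w.X >= theta] under the product
    distribution with marginals x: e^{-t*theta} * E[e^{t w.X}], t > 0."""
    val = math.exp(-t * theta)
    for wi, xi in zip(w, x):
        val *= 1 - xi + xi * math.exp(t * wi)
    return val

def exact_tail(theta, w, x):
    """Exact Pr[w.X >= theta] by enumerating all 2^m outcomes."""
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(x)):
        if sum(wi * b for wi, b in zip(w, bits)) >= theta:
            p = 1.0
            for xi, b in zip(x, bits):
                p *= xi if b else 1 - xi
            total += p
    return total

# Four fair coins, theta = 4: the true tail is 1/16, below the estimator.
w, x = [1.0] * 4, [0.5] * 4
bound, truth = g(1.0, 4.0, w, x), exact_tail(4.0, w, x)
```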

  34. Matrix Pessimistic Estimators. Theorem [Tropp '12]: let M₁, …, M_m be n×n PSD matrices, let D(x) be a product distribution on {0,1}^m with marginals x, and let Z = E[Σᵢ ξᵢMᵢ]. Suppose Mᵢ ≼ Z. Then the matrix analog g_{t,θ} of the Chernoff estimator is a pessimistic estimator for the event that Σᵢ ξᵢMᵢ ⋠ θ·Z. Main Theorem: g_{t,θ} is concave under swaps. Hence the bound achieved by independent sampling is also achieved by pipage rounding.

  35. Spectrally Thin Trees. Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ≼ O(log n / log log n)·(1/C)·L_G. Proof overview:
  • Show that independent sampling gives spectral thinness, but not a tree. ► Sample every edge e independently with probability x_e = 1/c_e.
  • Show that dependent sampling gives a tree, and spectral thinness still works. ► Run pipage rounding to get a tree T with Pr[e ∈ T] = x_e = 1/c_e.

  36. Matrix Analysis. Matrix concentration inequalities are usually proven via sophisticated inequalities in matrix analysis. Rudelson: the non-commutative Khinchine inequality. Ahlswede–Winter: the Golden–Thompson inequality: if A, B are symmetric, then tr(e^{A+B}) ≤ tr(e^A e^B). Tropp: Lieb's concavity inequality [1973]: if A, B are symmetric and C is PD, then z ↦ tr exp(A + log(C + zB)) is concave. Key technical result: a new variant of Lieb's theorem: if A is symmetric, B₁, B₂ are PSD, and C₁, C₂ are PD, then z ↦ tr exp(A + log(C₁ + zB₁) + log(C₂ − zB₂)) is concave.

  37. Questions.
  • O(1/C)-spectrally thin trees exist. Is there an algorithm?
  • Does sampling by edge connectivities give a cut sparsifier with O(n log n) edges?
  • Do O(1/K)-cut-thin trees exist? What if we consider only the min cuts?
  • Do cut sparsifiers with O(nε⁻²) edges exist in which every edge e has weight Θ(ε²k_e)?
