Sampling in Graphs Alexandr Andoni (Microsoft Research)
Graph compression Why smaller graphs? • use less storage space • faster algorithms • easier visualization
• Preserve some structure • Cuts (approximately) • Other properties: • Distances, (multi-commodity) flows, effective resistances…
Plan 1) Cut sparsifiers 2) More efficient cut sparsifiers 3) Node sparsifiers
Cut sparsifiers • Graph G has n nodes and m edges; unweighted • Goal: a sparser weighted graph G' such that, for any set S ⊆ V, cut_G'(S) ≈ cut_G(S) • up to a 1±ε approximation
Approach? [Karger'94,'96]: • Sample edges! • Each edge kept independently with probability p • Original: cut value C • New: expected cut value pC • Set new edge weight = 1/p • Cut value C => expected cut value C
How to set p? • # of edges after sampling ≈ pm • Small p => smaller sparsifier • can't hope to go below ~n edges (fewer would disconnect the graph!) • ideally p ≈ n/m • In fact, need p ≳ (log n)/(min degree) • otherwise, sampling can disconnect a vertex
Setting p • Can we get away with p ≈ (log n)/(min degree)? • Issue: small cuts • Settle for: p = ρ/c, where • c = value of the min-cut in the graph • ρ = oversampling constant (to be set to Θ((log n)/ε²)) • any cut will retain at least pc = ρ edges in expectation
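A minimal sketch of this uniform sampling step in Python (using networkx; the function names and the constant c0, standing in for the unspecified constant inside ρ = Θ((log n)/ε²), are illustrative choices, not the talk's):

```python
import math
import random
import networkx as nx

def uniform_sparsify(G, eps, c0=3.0):
    """Karger-style uniform sampling: keep each edge independently with the same
    probability p = rho / mincut and give kept edges weight 1/p, so every cut's
    expected value is unchanged."""
    n = G.number_of_nodes()
    mincut, _ = nx.stoer_wagner(G)              # global min-cut value c
    rho = c0 * math.log(n) / eps ** 2           # oversampling constant
    p = min(1.0, rho / mincut)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        if random.random() < p:
            H.add_edge(u, v, weight=1.0 / p)
    return H

def cut_value(H, S):
    """Weighted value of the cut (S, V \\ S)."""
    S = set(S)
    return sum(d.get("weight", 1.0)
               for u, v, d in H.edges(data=True) if (u in S) != (v in S))
```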
Concentration • Usually, will have: • expectation: ok (by "correct" rescaling) • hard part: concentration • up to a 1±ε factor • Chernoff bound: • given independent 0/1 r.v. X_1, …, X_k • expectation μ = E[Σ_i X_i] • then Pr[|Σ_i X_i − μ| > εμ] ≤ 2·exp(−ε²μ/3)
Applying Chernoff bound • Chernoff bound: given independent 0/1 r.v. X_1, …, X_k with expectation μ = E[Σ_i X_i], we have Pr[|Σ_i X_i − μ| > εμ] ≤ 2·exp(−ε²μ/3) • Take any cut, value = C ≥ c • Then: X_e = 1 if edge e is sampled, X_e = 0 otherwise • Intuition: expected # edges kept = pC ≥ ρ • enough to argue a "high probability bound" • Pr[cut estimate is not a 1±ε approximation] ≤ 2·exp(−ε²ρ/3) • Set ρ = 3(d+2)·(ln n)/ε² (for a constant d of our choice) to obtain probability ≤ 2·n^{-(d+2)}, since pC ≥ ρ
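Spelled out for one fixed cut (the constant 3(d+2) is one loose choice that makes the exponent work out; the talk's exact constants may differ):

```latex
\mu \;=\; \mathbb{E}\Big[\textstyle\sum_{e \in \mathrm{cut}} X_e\Big] \;=\; pC \;\ge\; \rho,
\qquad
\Pr\Big[\big|\textstyle\sum_{e \in \mathrm{cut}} X_e - \mu\big| > \varepsilon\mu\Big]
\;\le\; 2e^{-\varepsilon^2\mu/3}
\;\le\; 2e^{-\varepsilon^2\rho/3}
\;=\; 2n^{-(d+2)}
\quad\text{for } \rho = \tfrac{3(d+2)\ln n}{\varepsilon^2}.
```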
Enough? • We need to argue that all cuts are preserved… • there are 2^n cuts • Theorem [Karger]: the number of cuts of value ≤ α·c is at most n^{2α} • E.g., at most n² cuts of min-size • Union bound: • n² cuts of size c, each fails with probability ≤ 2·n^{-(d+2)} • Cuts of size α·c, α > 1? • tighter Chernoff: each fails with probability ≤ 2·n^{-α(d+2)}, while there are only ~n^{2α} of them • Overall failure probability is Σ_α n^{2α}·2·n^{-α(d+2)} = O(n^{-(d-2)})
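Summing over dyadic scales of cut values (again with loose, illustrative constants):

```latex
% Cuts of value in [\alpha c, 2\alpha c) number at most n^{4\alpha} (Karger's bound with
% parameter 2\alpha); each has \mu \ge \alpha\rho, so it fails with probability at most
% 2e^{-\varepsilon^2\alpha\rho/3} = 2n^{-\alpha(d+2)}.  Union-bounding over dyadic \alpha:
\Pr[\text{some cut off by more than } \varepsilon]
 \;\le\; \sum_{\alpha = 2^i,\ i\ge 0} n^{4\alpha}\cdot 2\,n^{-\alpha(d+2)}
 \;=\; \sum_{\alpha = 2^i,\ i\ge 0} 2\,n^{-\alpha(d-2)}
 \;=\; O\!\big(n^{-(d-2)}\big).
```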
Smaller size? • # edges after sampling ≈ pm = ρ·m/c, where ρ = Θ((log n)/ε²) • Useless if the min-cut c is small… • Sample different edges with different probabilities!
Non-uniform sampling [Benczur-Karger'96] • Theorem: • sample each edge e with probability p_e = min(1, ρ/λ_e) • re-weight each sampled edge as 1/p_e • Then: 1±ε cut sparsifier with • O(n·(log n)/ε²) edges in total, with 99% probability • construction time: O(m·polylog n) • Where λ_e = "strong connectivity" of edge e • (in the figure: λ_e ≈ clique size inside the cliques • λ_e = 1 for the bridge)
Strong connectivity • Connectivity of edge e = (u, v): value of the min-cut separating u from v • k-strongly connected component: maximal vertex set whose induced subgraph has min-cut ≥ k • Strong connectivity of e: the highest k such that e lies in a k-strong component • Unique partitioning into k-strong components • (figure example: an edge with connectivity 5 but strong connectivity 2)
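A naive reference implementation of these definitions and of the Benczur-Karger sampling rule (it follows the definitions above literally but is exponentially slower than the near-linear-time construction in the theorem; the function names and the constant c0 in ρ are illustrative choices):

```python
import math
import random
import networkx as nx

def k_strong_components(G, k):
    """Vertex sets whose induced subgraph has min cut >= k (the k-strong components)."""
    done, todo = [], [set(c) for c in nx.connected_components(G)]
    while todo:
        S = todo.pop()
        if len(S) == 1:
            done.append(S)
            continue
        H = G.subgraph(S)
        if not nx.is_connected(H):
            todo.extend(set(c) for c in nx.connected_components(H))
            continue
        value, (A, B) = nx.stoer_wagner(H)      # min cut of the induced subgraph
        if value >= k:
            done.append(S)
        else:                                   # split along a cut of value < k, recurse
            todo.append(set(A))
            todo.append(set(B))
    return done

def strong_connectivities(G):
    """lambda_e for every edge: largest k with both endpoints in one k-strong component."""
    lam = {frozenset(e): 1 for e in G.edges()}
    k = 2
    while True:
        comp_of = {}
        for i, C in enumerate(k_strong_components(G, k)):
            for v in C:
                comp_of[v] = i
        updated = False
        for u, v in G.edges():
            if comp_of[u] == comp_of[v]:
                lam[frozenset((u, v))] = k
                updated = True
        if not updated:
            return lam
        k += 1

def benczur_karger_sparsify(G, eps, c0=16.0):
    """Keep edge e with probability p_e = min(1, rho / lambda_e), weight 1/p_e."""
    lam = strong_connectivities(G)
    rho = c0 * math.log(G.number_of_nodes()) / eps ** 2
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        p = min(1.0, rho / lam[frozenset((u, v))])
        if random.random() < p:
            H.add_edge(u, v, weight=1.0 / p)
    return H
```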
Proof of theorem • i) Number of edges is small, in expectation • ii) Each cut approximately preserved i) Expected # edges is Σ_e min(1, ρ/λ_e) ≤ ρ·Σ_e 1/λ_e: • Fact: a graph with no (k+1)-strong component has at most k·(n−1) edges • count edges by repeatedly cutting along cuts of value ≤ k: each cut removes ≤ k edges, and ≤ n−1 cuts suffice • Take all edges in a k-strong component C, for the maximum k: each has λ_e = k, and by the Fact there are at most k·(|C|−1) of them • Sum of 1/λ_e for them is at most |C|−1 • Contract C to a node, and repeat • Total sum: Σ_e 1/λ_e ≤ n−1 • Hence: expected # edges ≤ ρ·(n−1) = O(n·(log n)/ε²)
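In one line:

```latex
\mathbb{E}\big[\#\text{edges of } G'\big]
 \;=\; \sum_{e} \min\!\Big(1,\ \frac{\rho}{\lambda_e}\Big)
 \;\le\; \rho \sum_{e} \frac{1}{\lambda_e}
 \;\le\; \rho\,(n-1)
 \;=\; O\!\Big(\frac{n\log n}{\varepsilon^2}\Big).
```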
ii) Cut values are approximated • Iteratively apply the "uniform sampling" argument • Partition edges by approximate strong-connectivity: • F_i are the edges with strong connectivity in [2^i, 2^{i+1}) • Sampling: as if done in rounds • first sample the edges in F_0, keeping the rest intact (Pr[sample] = 1) • then sample F_1 • …
Iterative sampling • In iteration i: • focus on cuts inside the 2^i-strong components • there, min-cut ≥ 2^i • OK to sample F_i with probability ≈ ρ/2^i • for ρ = Θ((log n)/ε²) => 1±ε error on the edges at this scale • Total error over all scales • = O(ε)·(cut value), so rescale ε by a constant • Use p_e = ρ/λ_e, within a factor 2 of ρ/2^i for e ∈ F_i • Total size: O(log n) scales, O(ρ·n) kept edges each => O(n·(log² n)/ε²) • More attentive counting: O(n·(log n)/ε²)
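The size accounting in the last two bullets, written out:

```latex
% Per scale: \#\{e : \lambda_e < 2^{i+1}\} \le 2^{i+1}(n-1), since \sum_e 1/\lambda_e \le n-1.
\mathbb{E}\big[\#F_i \text{ kept}\big]
 \;\le\; \frac{\rho}{2^i}\cdot 2^{i+1}(n-1) \;=\; 2\rho(n-1),
\qquad
\text{over } O(\log n) \text{ scales:}\quad O\!\Big(\frac{n\log^2 n}{\varepsilon^2}\Big),
% while charging each edge \rho/\lambda_e directly gives the sharper O(n\log n/\varepsilon^2).
```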
Comments • Works for weighted graphs too • Can compute strong connectivities in near-linear, O(m·polylog n) time • Can sample according to other measures: • connectivity, Nagamochi-Ibaraki index [FHHP'11] • random spanning trees [GRV'09, FHHP'11] • effective resistance [ST'04, SS'08] • distance spanners [KP'11] • All obtain O(n·(log n)/ε²) edges (like coupon collector) • Can we do better? • Yes! • [BSS'09]: O(n/ε²) edges (deterministically) • But: construction time is a large polynomial • OPEN: an O(n/ε²)-size sparsifier in near-linear time?
BREAK Improve dependence on ε?
Improve dependence on ε? • Sometimes 1/ε² can be a large factor • Generally NO • Need a graph of size Ω(n/ε²) [Alon-Boppana, Alon'97, AKW'14] • But: YES if we relax the desiderata a bit…
Smaller relaxed cut sparsifiers [A-Krauthgamer-Woodruff'14]: • Relaxations: • A small sketch instead of a small graph • Each cut preserved with high probability • instead of all cuts at the same time • Theorem: can construct a relaxed cut sparsifier of size Õ(n/ε), i.e. (n/ε)·polylog(n): dependence 1/ε, not 1/ε². • The sketch is also a sample of the graph, but "estimation" is different • a small data structure that can report any given cut value up to 1±ε whp.
Motivating example • Why does the same sampling not work? • consider the complete graph (or a random graph) • each edge sampled with probability p • degree of each node ≈ pn • vertex (singleton) cuts: value n−1, need pn ≳ 1/ε² for a 1±ε approximation, i.e. about n/ε² sampled edges in total • Alon-Boppana: essentially best possible (for regular graphs, for spectral approximation) • But, if interested in cut values only: • can store the exact degrees of the nodes => O(n) space • for much larger cuts, a far sparser sample is enough
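The back-of-the-envelope calculation behind the singleton-cut bullet:

```latex
% Singleton cut \{v\} in K_n, uniform sampling at rate p, kept edges re-weighted by 1/p:
\widehat{\mathrm{cut}}(\{v\}) \;=\; \tfrac1p\,\mathrm{Bin}(n-1,\ p),
\qquad
\frac{\sqrt{\operatorname{Var}}}{\mathbb{E}}
 \;\approx\; \frac{1}{\sqrt{p\,(n-1)}}
 \;\le\; \varepsilon
 \;\iff\; p \;\gtrsim\; \frac{1}{\varepsilon^2 n}.
% That is about p\binom{n}{2} \approx n/(2\varepsilon^2) sampled edges in total, whereas
% storing all n exact degrees costs only O(n) numbers.
```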
Proof of theorem • Three parts: • i) sketch description • ii) sketch size: Õ(n/ε) • iii) estimation algorithm, correctness
i) Sketch description • Guess the value Λ of the unknown cut, up to factor 2 • 1) for each guess Λ: • down-sample edges with probability p_Λ (chosen so that, after the sample, a cut of value ≈ Λ is estimated accurately enough) • one graph G_Λ for each possible guess • 2) for each G_Λ: decompose along sparse cuts: • in a connected component C, if there is a set U ⊂ C s.t. the cut (U, C∖U) is sparse • store the cross-cut edges, • delete these edges from the graph, and repeat • 3) store: • the degree of each node • a sample of edges out of each node
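A structural skeleton of this construction for a single guess, in Python. The slide does not pin down the down-sampling rate, the sparsity threshold, or the per-node sample size, so here they are caller-supplied parameters (p, is_sparse, s); the brute-force sparse-cut search and all names are illustrative and only meant for toy graphs:

```python
import random
from itertools import combinations
import networkx as nx

def find_sparse_cut(H, comp, is_sparse):
    """Brute-force search (toy graphs only) for a set U inside the connected
    component 'comp' whose cut (U, comp \\ U) passes the sparsity test."""
    nodes = list(comp)
    for r in range(1, len(nodes) // 2 + 1):
        for U in map(set, combinations(nodes, r)):
            crossing = [(u, v) for u, v in H.subgraph(comp).edges()
                        if (u in U) != (v in U)]
            if is_sparse(len(U), len(crossing)):
                return U, crossing
    return None, None

def build_sketch_for_guess(G, p, is_sparse, s):
    """Skeleton of the sketch for ONE guess of the cut value: down-sample at rate p,
    repeatedly peel off sparse cuts (storing their edges exactly), then keep every
    degree plus up to s sampled incident edges per node.  Choosing p, the sparsity
    test is_sparse(|U|, #crossing) and s is where the analysis lives; those choices
    are not reproduced here."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(e for e in G.edges() if random.random() < p)   # step 1

    sparse_cut_edges = []                                           # step 2
    while True:
        progress = False
        for comp in list(nx.connected_components(H)):
            U, crossing = find_sparse_cut(H, comp, is_sparse)
            if crossing:
                sparse_cut_edges.extend(crossing)   # stored exactly
                H.remove_edges_from(crossing)       # delete and repeat
                progress = True
        if not progress:
            break

    degrees = dict(H.degree())                                      # step 3
    edge_samples = {v: random.sample(list(H.edges(v)), min(s, degrees[v]))
                    for v in H.nodes()}
    return {"p": p,
            "sparse_cut_edges": sparse_cut_edges,
            "degrees": degrees,
            "edge_samples": edge_samples,
            "components": [set(c) for c in nx.connected_components(H)]}
```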
ii) Sketch size: Õ(n/ε) space • For each of the O(log n) graphs G_Λ, we store: • the edges across sparse cuts • the degrees of the vertices • a sample of edges out of all nodes • Claim: there are a total of Õ(n/ε) edges across sparse cuts in each G_Λ • for each sparse cut (U, C∖U) found, have to store its crossing edges • can assume |U| ≤ |C|/2 • "charge" the crossing edges to the nodes in U => only a few edges per node per charge (this is what sparsity buys) • if a node is charged, the size of its connected component drops by at least a factor 2 => it can be charged at most O(log n) times! • Overall: Õ(n/ε) space
iii) Estimation • Given a set S, need to estimate cut(S) • Suppose we guess its value Λ up to factor 2: • will try a different Λ if the guess turns out to be wrong • use G_Λ to estimate the cut value up to 1±ε • Estimate: 1/p_Λ times the sum of • # of sparse-cut edges crossing S • for each connected component C: the contribution of C (next slide)
Estimation illustration • Estimate: 1/p_Λ times the sum of • # sparse-cut edges crossing S • for each connected component C: let the contribution be (# edges incident to S∩C) − 2·(estimate of # edges inside S∩C) • the second term is the estimate of # edges inside the dense components, computed from the per-node edge samples
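Putting the last two slides together, with degrees and components taken in the down-sampled graph G_Λ after the sparse-cut edges have been removed, the estimator can be written as follows (Ê(S∩C) denotes the edge-sample-based estimate of the number of edges with both endpoints in S∩C; its exact normalization is not spelled out on the slides):

```latex
\widehat{\mathrm{cut}}(S)
 \;=\; \frac{1}{p_\Lambda}\Bigg(
    \#\{\text{stored sparse-cut edges crossing } S\}
    \;+\; \sum_{\text{components } C}
      \Big( \sum_{v \in S\cap C} \deg(v) \;-\; 2\,\widehat{E}(S\cap C) \Big)
  \Bigg).
```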
iii) Correctness of estimation • Each of the 3 steps preserves the cut value up to 1±ε • 1) down-sampling: • the variance is small relative to the (guessed) cut value, which gives a 1±ε approximation • (like in "uniform sampling") • 2) edges crossing sparse cuts are preserved exactly! • 3) edges inside dense components… ? • intuition: there are fewer edges inside S∩C than edges leaving it, hence smaller variance when estimating the latter!
Dense component • Estimate for component C: (sum of degrees over S∩C) − 2·(# edges inside S∩C) • Claim: S∩C has many edges leaving it relative to its size • otherwise (S∩C, C∖S) would be a sparse cut, and would have been cut off in step 2 • but it was not • since we "guessed" that cut(S) ≈ Λ => the # of edges leaving S∩C is at most ≈ Λ • hence S∩C itself is small • E.g.: then the # of edges inside S∩C is small relative to the cut value • it can still be a constant fraction of the cut value • but only if the average degree inside is large • then sampling a few edges per node suffices!
Variance • Estimate for component C: (sum of degrees over S∩C) − 2·Ê(S∩C), where Ê(S∩C) is the edge-sample estimate of the # of edges inside S∩C • Let's bound the variance of Ê(S∩C): it is controlled because, by the previous slide, the edges inside S∩C are few relative to the cut edges being estimated
Concluding remarks • Done with the proof! • Can get "high probability" by repeating a logarithmic number of times and taking the median • Construction time? • requires computation of sparse cuts… an NP-hard problem • OK to compute them approximately! • an α-approximation to sparsest cut => sketch size larger by roughly a factor α • E.g., using [Madry'10] • near-linear runtime, with a polylogarithmic-factor larger sketch
Open questions • Final result: a small data structure that, for a given cut, outputs the cut value up to 1±ε with high probability • 1) Can a graph achieve the same guarantee? • i.e., use estimate = "cut value in the sample graph" instead of estimate = "sum of degrees − # edges inside"? • 2) Applications where "w.h.p. per cut" is enough? • E.g., good for sketching/streaming • Can compute the min-cut from the sketch: there are only ≤ n⁴ 2-approximate min-cuts => can query all of them • 3) Same guarantee for spectral sparsification?