This article explores the concentration properties of matrix martingales and their application in solving Laplacian linear equations. It discusses Bernstein's and Freedman's inequalities, approximation between graphs, sparsification of Laplacian matrices, and Gaussian elimination on Laplacians.
Concentration of Scalar Random Variables. Random $X = \sum_i X_i$, where the $X_i$ are independent. Is $X \approx \mathbb{E}[X]$ with high probability?
Concentration of Scalar Random Variables. Bernstein's Inequality: for random $X = \sum_i X_i$ with the $X_i$ independent, $\mathbb{E}[X_i] = 0$, $|X_i| \le R$, and $\sigma^2 = \sum_i \operatorname{Var}[X_i]$, Bernstein's inequality gives
$$\Pr[|X| \ge t] \le 2\exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
E.g. if $R \le 1$ and $\sigma^2 \le 1$, then for $t \le 3$, $\Pr[|X| \ge t] \le 2\exp(-t^2/4)$.
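A quick numerical sanity check of the inequality (a minimal sketch; the uniform distribution, sample sizes, and three-sigma threshold are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 400, 20000

# X_i uniform on [-1, 1]: zero mean, |X_i| <= R = 1, Var[X_i] = 1/3.
samples = rng.uniform(-1.0, 1.0, size=(trials, n)).sum(axis=1)

R, sigma2 = 1.0, n / 3.0                      # sum of the variances
t = 3.0 * np.sqrt(sigma2)                     # a three-sigma deviation
bound = 2 * np.exp(-(t**2 / 2) / (sigma2 + R * t / 3))
empirical = np.mean(np.abs(samples) >= t)
print(f"empirical tail {empirical:.4f} <= Bernstein bound {bound:.4f}")
```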
Concentration of Scalar Martingales. Now drop independence: random $X = \sum_i X_i$, where each $X_i$ may depend on $X_1, \dots, X_{i-1}$, as long as $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$. The partial sums form a martingale.
Concentration of Scalar Martingales. Freedman's Inequality replaces the fixed variance in Bernstein's Inequality by the predictable quadratic variation $W = \sum_i \mathbb{E}[X_i^2 \mid X_1, \dots, X_{i-1}]$. For $|X_i| \le R$, Freedman's inequality gives
$$\Pr\left[|X| \ge t \text{ and } W \le \sigma^2\right] \le 2\exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
Concentration of Matrix Random Variables. Matrix Bernstein's Inequality (Tropp '11): random symmetric $d \times d$ matrices $X_i$ that are independent, with $\mathbb{E}[X_i] = 0$, $\|X_i\| \le R$, and $\sigma^2 = \left\|\sum_i \mathbb{E}[X_i^2]\right\|$, give
$$\Pr\left[\left\|\sum_i X_i\right\| \ge t\right] \le 2d\exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
Concentration of Matrix Martingales. Matrix Freedman's Inequality (Tropp '11): random symmetric $d \times d$ matrices $X_i$, no longer independent, with $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$ and $\|X_i\| \le R$, give
$$\Pr\left[\exists k: \left\|\sum_{i \le k} X_i\right\| \ge t \text{ and } \|W_k\| \le \sigma^2\right] \le 2d\exp\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right),$$
where $W_k = \sum_{i \le k} \mathbb{E}[X_i^2 \mid X_1, \dots, X_{i-1}]$ is the predictable quadratic variation. E.g. if $R \le 1$ and $\|W_k\| \le 1$ always, then $\left\|\sum_i X_i\right\| = O(\sqrt{\log(d/\delta)} + \log(d/\delta))$ with probability $1 - \delta$.
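To make the definitions concrete, here is a tiny simulated matrix martingale together with its predictable quadratic variation (a sketch; the history-dependent difference sequence is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
d, steps = 5, 200

A = rng.standard_normal((d, d)); A = (A + A.T) / 4   # a fixed symmetric matrix
S = np.zeros((d, d))                                  # running sum: the martingale
W = np.zeros((d, d))                                  # predictable quadratic variation

for i in range(steps):
    # Adapted (history-dependent) scaling: the next increment shrinks as the
    # current sum grows, so the X_i are NOT independent, yet E[X_i | past] = 0.
    Ai = A / (1.0 + np.linalg.norm(S, 2))
    xi = rng.choice([-1.0, 1.0])      # fair sign, independent of the past
    S += xi * Ai
    W += Ai @ Ai                      # E[(xi*Ai)^2 | past] = Ai^2 since xi^2 = 1

print("||sum X_i|| =", np.linalg.norm(S, 2))
print("||W_k||     =", np.linalg.norm(W, 2))   # the sigma^2 in Freedman's bound
```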
Laplacian Matrices. Graph $G = (V, E)$, edge weights $w : E \to \mathbb{R}_{>0}$, $n \times n$ matrix $L = D - A$, where $A$ is the weighted adjacency matrix of the graph and $D$ is the diagonal matrix of weighted degrees.
Laplacian Matrices. Symmetric matrix. All off-diagonals are non-positive and all row sums are zero: $L\mathbf{1} = 0$. Equivalently, $L = \sum_{(u,v) \in E} w_{uv}\, b_{uv} b_{uv}^\top$ with $b_{uv} = \mathbf{1}_u - \mathbf{1}_v$, so $x^\top L x = \sum_{(u,v) \in E} w_{uv}(x_u - x_v)^2 \ge 0$ and $L$ is PSD.
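A minimal sketch of building a Laplacian from an edge list and checking these properties (the 4-vertex example graph is an arbitrary choice):

```python
import numpy as np

def laplacian(n, edges):
    """edges: list of (u, v, w) with 0-indexed vertices and weight w > 0."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0   # b_e = 1_u - 1_v
        L += w * np.outer(b, b)                   # L = sum_e w_e b_e b_e^T
    return L

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 3.0)]
L = laplacian(4, edges)
print(L)
print("row sums:", L.sum(axis=1))                 # all zero: L @ 1 = 0
print("eigenvalues:", np.linalg.eigvalsh(L))      # all >= 0: L is PSD
```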
Approximation between graphs. PSD Order: $A \preceq B$ iff $x^\top A x \le x^\top B x$ for all $x$. Matrix Approximation: $A \approx_\epsilon B$ iff $(1-\epsilon)B \preceq A \preceq (1+\epsilon)B$. Graphs: $G \approx_\epsilon H$ iff $L_G \approx_\epsilon L_H$. Implies multiplicative approximation to cuts: plug in $x = \mathbf{1}_S$ for any $S \subseteq V$.
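One way to measure the approximation numerically is through the relative spectrum, sketched below (assumes both graphs are connected, so the two Laplacians share the all-ones nullspace; the perturbed complete graph is an illustrative example):

```python
import numpy as np

def pinv_sqrt(L, tol=1e-9):
    """Pseudoinverse square root L^{+/2} via an eigendecomposition."""
    vals, vecs = np.linalg.eigh(L)
    inv = np.where(vals > tol, 1.0 / np.sqrt(np.maximum(vals, tol)), 0.0)
    return (vecs * inv) @ vecs.T

def approx_eps(LG, LH, tol=1e-9):
    """Smallest eps with (1-eps) L_G <= L_H <= (1+eps) L_G on range(L_G)."""
    P = pinv_sqrt(LG)
    rel = np.linalg.eigvalsh(P @ LH @ P)
    rel = rel[rel > tol]                        # drop the shared nullspace
    return max(1.0 - rel.min(), rel.max() - 1.0)

def lap(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w
    return L

# H = complete graph G with every edge weight perturbed by up to 5%.
rng = np.random.default_rng(2)
n = 5
edges = [(i, j, 1.0) for i in range(n) for j in range(i + 1, n)]
LG = lap(n, edges)
LH = lap(n, [(u, v, w * rng.uniform(0.95, 1.05)) for u, v, w in edges])
print("eps =", approx_eps(LG, LH))              # roughly the 5% perturbation
```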
Sparsification of Laplacians. Every graph on $n$ vertices, for any $\epsilon$, has an $\epsilon$-sparsifier with $O(n/\epsilon^2)$ edges [BSS'09, LS'17]. Sampling edges independently by leverage scores w.h.p. gives an $\epsilon$-sparsifier with $O(n \log n/\epsilon^2)$ edges [Spielman-Srivastava'08]. Many applications!
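A sketch of Spielman-Srivastava style sampling, with exact leverage scores computed through a dense pseudoinverse for clarity (the oversampling constant `C = 4` and the helper name `sparsify` are assumptions of this sketch, not the paper's):

```python
import numpy as np

def sparsify(n, edges, eps, C=4.0, rng=None):
    """Sample edges independently by leverage score; reweight kept edges by 1/p."""
    rng = rng or np.random.default_rng()
    L = np.zeros((n, n))
    incidence = []
    for u, v, w in edges:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        incidence.append((b, w))
        L += w * np.outer(b, b)
    Lp = np.linalg.pinv(L)
    kept = []
    for b, w in incidence:
        tau = w * (b @ Lp @ b)                        # leverage score; they sum to n-1
        p = min(1.0, C * tau * np.log(n) / eps**2)    # sampling probability
        if rng.random() < p:
            kept.append((b, w / p))                   # unbiased reweighting
    return kept
```

Since the leverage scores sum to $n - 1$, the expected number of kept edges is $O(n \log n / \epsilon^2)$.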
Graph Semi-Streaming Sparsification. $m$ edges of a graph arriving as a stream. GOAL: maintain an $\epsilon$-sparsifier, minimize working space (& time). [Ahn-Guha '09]: $O(n\,\mathrm{polylog}(n)/\epsilon^2)$-space cut-sparsifier. Used scalar martingales.
Independent Edge Samples. Sample each edge $e$ independently:
$$Y_e = \frac{w_e}{p_e}\, b_e b_e^\top \text{ w. probability } p_e, \qquad 0 \text{ o.w.}$$
Expectation: $\mathbb{E}[Y_e] = w_e b_e b_e^\top$. Zero-mean: $X_e = Y_e - w_e b_e b_e^\top$ has $\mathbb{E}[X_e] = 0$.
Matrix Martingale? Zero mean? So what if we just sparsify again? It's a martingale! Every time we accumulate twice the target number of edges, sparsify down to the target size, and continue.
Repeated Edge Sampling. Resample the surviving edges: in round $j$, keep edge $e$ with weight $w_e^{(j-1)}/p_e^{(j)}$ w. prob. $p_e^{(j)}$, drop it ($0$) o.w. Each round equals the previous round in expectation, so the differences are zero-mean given the past: the partial sums form a matrix martingale.
Repeated Edge Sampling. It works – and it couldn't be simpler. [KPPS'17]: $O(n \log n/\epsilon^2)$-space, $O(n \log n/\epsilon^2)$-edge $\epsilon$-sparsifier. Shaves a log off many algorithms that do repeated matrix sampling…
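In code, the whole streaming algorithm is just this loop (a sketch; `one_shot_sparsify` stands in for any sparsification routine, such as the leverage-score sampler above, and the factor-2 trigger is an illustrative choice):

```python
def stream_sparsify(n, edge_stream, eps, one_shot_sparsify, target):
    """Maintain a sparsifier of all edges seen so far in O(target) working space."""
    current = []                                 # the current sparsifier's edges
    for edge in edge_stream:
        current.append(edge)
        if len(current) > 2 * target:            # accumulated twice the target:
            current = one_shot_sparsify(n, current, eps)   # resparsify, continue
    return current
```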
Approximation? Want $\tilde{L} \approx_\epsilon L$, i.e. $(1-\epsilon)L \preceq \tilde{L}$ and $\tilde{L} \preceq (1+\epsilon)L$?
Isotropic position. Normalize by $L^{+/2}$, the pseudoinverse square root: the goal is now $\left\|L^{+/2}(\tilde{L} - L)L^{+/2}\right\| \le \epsilon$.
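A sketch of measuring the error in isotropic position (the eigendecomposition-based pseudoinverse square root is for clarity, not efficiency):

```python
import numpy as np

def iso_error(L, Ltilde, tol=1e-9):
    """Return || L^{+/2} (Ltilde - L) L^{+/2} ||, the error in isotropic position."""
    vals, vecs = np.linalg.eigh(L)
    inv = np.where(vals > tol, 1.0 / np.sqrt(np.maximum(vals, tol)), 0.0)
    P = (vecs * inv) @ vecs.T                    # P = L^{+/2}
    return np.linalg.norm(P @ (Ltilde - L) @ P, 2)
```

If `iso_error(L, Ltilde) <= eps`, then $\tilde{L} \approx_\epsilon L$ on the range of $L$.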
Concentration of Matrix Martingales. Matrix Freedman's Inequality, applied to the zero-mean edge samples in isotropic position: random symmetric $X_i$ with $\mathbb{E}[X_i \mid \text{past}] = 0$ concentrate once we control the norms ($\|X_i\| \le R$) and the predictable quadratic variation ($\|W_k\| \le \sigma^2$).
Norm Control. Can there be many edges with large norm? In isotropic position, the sample $\frac{w_e}{p_e} b_e b_e^\top$ has norm $\tau_e/p_e$, where $\tau_e = w_e\, b_e^\top L^+ b_e$ is the leverage score, and $\sum_e \tau_e = n - 1$, so few edges have large leverage. Can afford $p_e = \min(1,\, C\tau_e \log n/\epsilon^2)$: then every sampled term has norm $O(\epsilon^2/\log n)$.
Norm Control. If we lose track of the original graph, we can't estimate the norms (the leverage scores) exactly – we only see the current sparsifier. A stopping time argument handles this: stop the martingale on entering the bad regions where the estimates are no longer valid, and show that the stopped process rarely stops early.
Variance Control. But then edges get resampled: every resparsification round adds variance. Fortunately, the per-round contributions form, roughly, a geometric sequence, so the total predictable quadratic variation stays bounded.
Resparsification. Every time we accumulate twice the target number of edges, sparsify down to the target size, and continue. [KPPS'17]: $O(n \log n/\epsilon^2)$-space, $O(n \log n/\epsilon^2)$-edge $\epsilon$-sparsifier. Shaves a log off many algorithms that do repeated matrix sampling…
Laplacian Linear Equations. Solve $Lx = b$. [ST04]: solving Laplacian linear equations in $O(m\,\mathrm{polylog}(n)\log(1/\epsilon))$ time. [KS16]: a simple $O(m \log^3 n \log(1/\epsilon))$ algorithm.
Solving a Laplacian Linear Equation. Gaussian Elimination: find $U$, an upper triangular matrix, s.t. $U^\top U = L$. Then $L^+ = U^+ (U^\top)^+$, so $x = U^+ (U^\top)^+ b$. Easy to apply $(U^\top)^+$ and $U^+$: forward and back substitution.
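A sketch of the full pipeline on a small dense example: since a connected Laplacian is singular, we "ground" the last vertex (delete its row and column, a standard trick assumed here), factor the remaining positive definite block with numpy's Cholesky, and apply the two substitutions by hand:

```python
import numpy as np

def solve_laplacian(L, b):
    """Solve L x = b for a connected Laplacian L; b must sum to zero."""
    n = L.shape[0]
    Lg = L[:n-1, :n-1]                  # grounded Laplacian: positive definite
    U = np.linalg.cholesky(Lg).T        # upper triangular, U^T U = Lg
    # forward substitution: solve U^T y = b[:-1]
    y = np.zeros(n - 1)
    for i in range(n - 1):
        y[i] = (b[i] - U[:i, i] @ y[:i]) / U[i, i]
    # back substitution: solve U x = y
    x = np.zeros(n - 1)
    for i in reversed(range(n - 1)):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return np.append(x, 0.0)            # the grounded vertex gets potential 0

# path graph on 4 vertices; b sums to zero, so the system is consistent
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
b = np.array([1., 0., 0., -1.])
x = solve_laplacian(L, b)
print(np.allclose(L @ x, b))            # True
```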
Solving a Laplacian Linear Equation. Approximate Gaussian Elimination: find $U$, an upper triangular matrix, s.t. $U^\top U \approx L$ (a constant-factor approximation) and $U$ is sparse. Then $O(\log(1/\epsilon))$ iterations of preconditioned refinement give an $\epsilon$-approximate solution $x$.
Approximate Gaussian Elimination. Theorem [KS]: when $L$ is an $n \times n$ Laplacian matrix with $m$ non-zeros, we can find in $O(m \log^3 n)$ time an upper triangular matrix $U$ with $O(m \log^3 n)$ non-zeros, s.t. $U^\top U \approx L$ w.h.p.
Additive View of Gaussian Elimination. Find $U$, an upper triangular matrix, s.t. $U^\top U = L$: write $L$ as a sum of rank-1 terms, one per row of $U$.
Additive View of Gaussian Elimination. Find the rank-1 matrix that agrees with $L$ on the first row and column: $\frac{1}{L(1,1)}\, L(:,1)\, L(1,:)$.
Additive View of Gaussian Elimination Subtract the rank 1 matrix. We have eliminated the first variable.
Additive View of Gaussian Elimination The remaining matrix is PSD.
Additive View of Gaussian Elimination Find rank-1 matrix that agrees with our matrix on the next row and column.
Additive View of Gaussian Elimination Subtract the rank 1 matrix. We have eliminated the second variable.
Additive View of Gaussian Elimination. Repeat until all parts are written as rank-1 terms.
Additive View of Gaussian Elimination. What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian: each elimination step produces a new Laplacian, of a graph on the remaining vertices.
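The whole additive elimination fits in a few lines. This sketch runs it on a small example graph and checks, at every step, that the remainder is still a Laplacian (zero row sums, non-positive off-diagonals):

```python
import numpy as np

# Laplacian of a connected 4-vertex graph (edges 01, 02, 12, 13, 23).
L = np.array([[ 2., -1., -1.,  0.],
              [-1.,  3., -1., -1.],
              [-1., -1.,  3., -1.],
              [ 0., -1., -1.,  2.]])

M = L.copy()
n = M.shape[0]
U = np.zeros((n, n))
for i in range(n - 1):                       # the last variable spans the nullspace
    c = M[i] / np.sqrt(M[i, i])              # row of U: the rank-1 term is c c^T
    U[i] = c
    M = M - np.outer(c, c)                   # eliminate variable i
    # the remainder is still a Laplacian: zero row sums, non-positive off-diags
    assert np.allclose(M[i+1:].sum(axis=1), 0.0)
    off = M[i+1:, i+1:] - np.diag(np.diag(M[i+1:, i+1:]))
    assert (off <= 1e-12).all()

print(np.allclose(U.T @ U, L))               # True: L = U^T U
```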
Why is Gaussian Elimination Slow? Solving $Lx = b$ by Gaussian Elimination can take $O(n^3)$ time. The main issue is fill: eliminating a variable connects all the neighbors of its vertex, so the remaining matrix gets denser.
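A concrete illustration of fill (a sketch; the star graph is the extreme case): eliminating the hub of a star on $n$ vertices connects all $n-1$ leaves into a clique, so a single elimination step turns a matrix with $O(n)$ non-zeros into one with $\Theta(n^2)$.

```python
import numpy as np

n = 8
L = np.zeros((n, n))
for v in range(1, n):                    # star: center 0 connected to 1..n-1
    L[0, 0] += 1; L[v, v] += 1
    L[0, v] = L[v, 0] = -1

M = L - np.outer(L[0], L[0]) / L[0, 0]   # eliminate the center vertex
before = np.count_nonzero(L[1:, 1:]) - (n - 1)   # off-diagonals among leaves
after = np.count_nonzero(M[1:, 1:]) - (n - 1)
print("off-diagonal nonzeros among leaves: before", before, "after", after)
```

Elimination order matters: choosing good orders (and sparsifying the fill, as in [KS16]) is what keeps approximate Gaussian elimination fast.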