Explore the concentration of scalar random variables using martingales, Bernstein's inequality, and Freedman's inequality. Also consider the concentration of matrix random variables and Laplacian matrices in randomized numerical linear algebra.
Concentration of Scalar Random Variables. Random $X = \sum_i X_i$, where the $X_i \in \mathbb{R}$ are independent. Is $X \approx \mathbb{E}[X]$ with high probability?
Concentration of Scalar Random Variables: Bernstein's Inequality. Random $X = \sum_i X_i$, where the $X_i$ are independent, $|X_i| \le R$, and $\sigma^2 = \sum_i \operatorname{Var}(X_i)$, gives
$$\Pr\big[\,|X - \mathbb{E}[X]| \ge t\,\big] \le 2\exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
E.g. if $R = 1$ and $\sigma^2 = 1$, then $X$ falls within $O(\log(1/\delta))$ of its mean with probability $1 - \delta$.
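As a sanity check, a short experiment (parameters illustrative, not from the slides) compares the empirical tail of a sum of bounded independent variables against the Bernstein bound:

```python
import numpy as np

# Empirical sanity check of Bernstein's inequality (illustrative numbers):
# X_i uniform on [-1, 1], so R = 1 and Var(X_i) = 1/3.
rng = np.random.default_rng(0)
n, t, trials = 100, 15.0, 100_000
X = rng.uniform(-1, 1, size=(trials, n)).sum(axis=1)  # E[X] = 0

sigma2 = n / 3
bound = 2 * np.exp(-(t ** 2 / 2) / (sigma2 + t / 3))
print((np.abs(X) >= t).mean(), "<=", bound)           # empirical tail vs. bound
```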
Concentration of Scalar Martingales. Random $X = \sum_i X_i$, where the $X_i$ are no longer independent: each $X_i$ may depend on the outcomes of $X_1, \dots, X_{i-1}$, but $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$, so the partial sums form a martingale.
Concentration of Scalar Martingales: Freedman's Inequality, replacing Bernstein's Inequality in the martingale setting. Random $X = \sum_i X_i$ with $\mathbb{E}[X_i \mid X_1, \dots, X_{i-1}] = 0$, $|X_i| \le R$, and predictable quadratic variation $W = \sum_i \mathbb{E}[X_i^2 \mid X_1, \dots, X_{i-1}]$, gives
$$\Pr\big[\,|X| \ge t \text{ and } W \le \sigma^2\,\big] \le 2\exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
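To see the difference from the independent case, here is a small hedged experiment (all parameters illustrative): a martingale whose step sizes depend on the history, stopped once the variance budget $\sigma^2$ is spent, compared against the Freedman bound.

```python
import numpy as np

# Martingale with dependent increments: each step is +/- s, where the step
# size s depends on the past trajectory. The predictable quadratic variation
# W accumulates the conditional variances s^2; we stop before W would exceed
# sigma^2, so the event {W <= sigma^2} always holds.
rng = np.random.default_rng(0)
n, R, t, sigma2, trials = 200, 1.0, 12.0, 50.0, 20_000
exceed = 0
for _ in range(trials):
    x, W = 0.0, 0.0
    for _ in range(n):
        s = 0.25 if abs(x) > 3 else 1.0      # step size depends on the past
        if W + s ** 2 > sigma2:              # variance budget exhausted
            break
        W += s ** 2
        x += s * rng.choice([-1.0, 1.0])
    exceed += abs(x) >= t
bound = 2 * np.exp(-(t ** 2 / 2) / (sigma2 + R * t / 3))
print(exceed / trials, "<=", bound)
```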
Concentration of Matrix Random Variables: Matrix Bernstein's Inequality (Tropp '11). Random symmetric $\mathbf{X}_i \in \mathbb{R}^{n \times n}$, where the $\mathbf{X}_i$ are independent, $\mathbb{E}[\mathbf{X}_i] = 0$, $\|\mathbf{X}_i\| \le R$, and $\big\|\sum_i \mathbb{E}[\mathbf{X}_i^2]\big\| \le \sigma^2$, gives
$$\Pr\Big[\,\big\|\textstyle\sum_i \mathbf{X}_i\big\| \ge t\,\Big] \le 2n\,\exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
Concentration of Matrix Martingales: Matrix Freedman's Inequality (Tropp '11). Random symmetric $\mathbf{X}_i \in \mathbb{R}^{n \times n}$ with $\mathbb{E}[\mathbf{X}_i \mid \mathbf{X}_1, \dots, \mathbf{X}_{i-1}] = 0$, $\|\mathbf{X}_i\| \le R$, and predictable quadratic variation $\mathbf{W} = \sum_i \mathbb{E}[\mathbf{X}_i^2 \mid \mathbf{X}_1, \dots, \mathbf{X}_{i-1}]$, gives
$$\Pr\Big[\,\big\|\textstyle\sum_i \mathbf{X}_i\big\| \ge t \text{ and } \|\mathbf{W}\| \le \sigma^2\,\Big] \le 2n\,\exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).$$
E.g. if $\|\mathbf{W}\| \le \sigma^2$ always holds, then taking $t = O\big(\sigma\sqrt{\log(n/\delta)} + R\log(n/\delta)\big)$ makes the failure probability at most $\delta$.
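A quick numerical illustration of the matrix bound (setup illustrative, not from the slides): a sum of independent random-sign rank-1 symmetric matrices, with $R = 1$ and $\sigma^2 = \big\|\sum_i \mathbb{E}[\mathbf{X}_i^2]\big\|$ computed explicitly.

```python
import numpy as np

# Sum of independent matrices X_i = s_i * a_i a_i^T with random signs s_i
# and fixed unit vectors a_i, so ||X_i|| = R = 1 and
# sum_i E[X_i^2] = sum_i a_i a_i^T = A^T A.
rng = np.random.default_rng(0)
n, k, t, trials = 10, 100, 15.0, 2_000
A = rng.standard_normal((k, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
sigma2 = np.linalg.norm(A.T @ A, 2)            # spectral norm of sum E[X_i^2]

exceed = 0
for _ in range(trials):
    signs = rng.choice([-1.0, 1.0], size=k)
    S = (A * signs[:, None]).T @ A             # = sum_i s_i a_i a_i^T
    exceed += np.linalg.norm(S, 2) >= t
bound = 2 * n * np.exp(-(t ** 2 / 2) / (sigma2 + t / 3))
print(exceed / trials, "<=", bound)
```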
Laplacian Matrices. Graph $G = (V, E)$, edge weights $w : E \to \mathbb{R}_{>0}$, matrix $\mathbf{L} = \mathbf{D} - \mathbf{A}$, where $\mathbf{A}$ is the weighted adjacency matrix of the graph and $\mathbf{D}$ is the diagonal matrix of weighted degrees, $\mathbf{D}(v, v) = \sum_{(u,v) \in E} w(u, v)$.
Laplacian Matrices. Symmetric matrix. All off-diagonals are non-positive, and all row sums are zero: $\mathbf{L}\mathbf{1} = 0$. Equivalently, $\mathbf{L} = \sum_{(u,v) \in E} w(u,v)\,(\mathbf{e}_u - \mathbf{e}_v)(\mathbf{e}_u - \mathbf{e}_v)^\top$, so $\mathbf{L}$ is positive semi-definite.
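The definitions translate directly into a few lines of code; this small check (edge list chosen arbitrarily) verifies each property on $\mathbf{L} = \mathbf{D} - \mathbf{A}$:

```python
import numpy as np

# Build L = D - A for a small weighted graph and verify the properties
# from the slides: symmetry, nonpositive off-diagonals, zero row sums, PSD.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 3.0)]  # (u, v, w)
n = 4
A = np.zeros((n, n))
for u, v, w in edges:
    A[u, v] = A[v, u] = w                       # weighted adjacency matrix
D = np.diag(A.sum(axis=1))                      # diagonal weighted degrees
L = D - A

assert np.allclose(L, L.T)                      # symmetric
assert (L[~np.eye(n, dtype=bool)] <= 0).all()   # off-diagonals non-positive
assert np.allclose(L @ np.ones(n), 0)           # row sums zero
assert np.linalg.eigvalsh(L).min() >= -1e-12    # positive semi-definite
```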
Laplacian Matrices. [ST04]: solving Laplacian linear equations $\mathbf{L}x = b$ in $O(m\,\mathrm{polylog}(n))$ time. [KS16]: a simple algorithm running in $O(m \log^3 n)$ time.
Solving a Laplacian Linear Equation: Gaussian Elimination. Find $\mathbf{U}$, an upper triangular matrix, s.t. $\mathbf{U}^\top\mathbf{U} = \mathbf{L}$ (a Cholesky factorization). Then $\mathbf{L}^{-1} = \mathbf{U}^{-1}\mathbf{U}^{-\top}$. Easy to apply $\mathbf{U}^{-1}$ and $\mathbf{U}^{-\top}$ by back- and forward-substitution.
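A minimal demonstration of this solve pattern (a Laplacian is only positive semi-definite, so the sketch grounds one vertex to get a strictly positive definite system, a standard trick; the example matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Solve via U^T U on a grounded Laplacian (drop row/column of vertex 0).
L = np.array([[ 2., -1., -1.,  0.],
              [-1.,  3., -1., -1.],
              [-1., -1.,  3., -1.],
              [ 0., -1., -1.,  2.]])
Lg = L[1:, 1:]                            # grounded Laplacian: SPD
b = np.array([1., 0., -1.])

U = cholesky(Lg)                          # upper triangular, U^T U = Lg
y = solve_triangular(U, b, trans='T')     # forward substitution: U^T y = b
x = solve_triangular(U, y)                # back substitution:    U x = y
assert np.allclose(Lg @ x, b)
```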
Solving a Laplacian Linear Equation: Approximate Gaussian Elimination. Find $\mathbf{U}$, an upper triangular matrix, s.t. $\mathbf{U}^\top\mathbf{U} \approx \mathbf{L}$ and $\mathbf{U}$ is sparse. Using $(\mathbf{U}^\top\mathbf{U})^{-1}$ as a preconditioner, $O(\log(1/\epsilon))$ iterations suffice to get an $\epsilon$-approximate solution $x$ (see the sketch below).
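A minimal sketch of the refinement loop, assuming `apply_precond` applies $(\mathbf{U}^\top\mathbf{U})^{-1}$ via two triangular solves (the function name is hypothetical):

```python
import numpy as np

def refine(L, b, apply_precond, iters=50):
    """Preconditioned (Richardson-style) iterative refinement: if
    apply_precond applies an operator close to L^+ up to a constant
    factor, the error contracts geometrically, so O(log(1/eps))
    iterations give an eps-approximate solution."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + apply_precond(b - L @ x)   # correct by the current residual
    return x
```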
Approximate Gaussian Elimination. Theorem [KS]: When $\mathbf{L}$ is an $n \times n$ Laplacian matrix with $m$ non-zeros, we can find in $O(m \log^3 n)$ time an upper triangular matrix $\mathbf{U}$ with $O(m \log^3 n)$ non-zeros, s.t. w.h.p. $\tfrac{1}{2}\mathbf{L} \preceq \mathbf{U}^\top\mathbf{U} \preceq \tfrac{3}{2}\mathbf{L}$.
Additive View of Gaussian Elimination. Find $\mathbf{U}$, an upper triangular matrix, s.t. $\mathbf{U}^\top\mathbf{U} = \mathbf{L}$, i.e. write $\mathbf{L}$ as a sum of rank-1 terms $c_i c_i^\top$, one per row of $\mathbf{U}$.
Additive View of Gaussian Elimination. Find the rank-1 matrix $c_1 c_1^\top$, with $c_1 = \mathbf{L}(:,1)/\sqrt{\mathbf{L}(1,1)}$, that agrees with $\mathbf{L}$ on the first row and column.
Additive View of Gaussian Elimination. Subtract the rank-1 matrix: $\mathbf{L} = c_1 c_1^\top + \mathbf{S}_1$, where $\mathbf{S}_1$ is zero in the first row and column. We have eliminated the first variable.
Additive View of Gaussian Elimination. The remaining matrix $\mathbf{S}_1$ (the Schur complement) is PSD.
Additive View of Gaussian Elimination. Find the rank-1 matrix that agrees with our matrix on the next row and column.
Additive View of Gaussian Elimination. Subtract the rank-1 matrix. We have eliminated the second variable.
Additive View of Gaussian Elimination. Repeat until all parts are written as rank-1 terms: $\mathbf{L} = \sum_{i=1}^{n} c_i c_i^\top = \mathbf{U}^\top\mathbf{U}$, where $c_i^\top$ is the $i$-th row of $\mathbf{U}$.
Additive View of Gaussian Elimination. What is special about Gaussian Elimination on Laplacians? The remaining matrix is always Laplacian: subtracting the rank-1 term of a vertex leaves the Laplacian of a graph on the remaining vertices. A new Laplacian!
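The whole additive process fits in a short loop. This worked check on a 4-vertex Laplacian confirms that each Schur complement keeps zero row sums and that the collected rank-1 terms reassemble $\mathbf{L}$ exactly:

```python
import numpy as np

# Additive elimination: peel off one rank-1 term per pivot and check that
# each remainder is again a Laplacian (zero row sums) and that the rank-1
# terms reassemble L exactly.
L = np.array([[ 2., -1., -1.,  0.],
              [-1.,  3., -1., -1.],
              [-1., -1.,  3., -1.],
              [ 0., -1., -1.,  2.]])
n = 4
S = L.copy()
rows = []
for i in range(n - 1):                   # after n-1 pivots, S is zero
    c = S[:, i] / np.sqrt(S[i, i])       # agrees with row/column i of S
    rows.append(c)
    S = S - np.outer(c, c)               # eliminate variable i
    assert np.allclose(S.sum(axis=1), 0) # still zero row sums
    assert np.allclose(S[i, :], 0)       # row i eliminated

U = np.vstack(rows)                      # upper triangular, row i = c_i^T
assert np.allclose(U.T @ U, L)
```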
Why is Gaussian Elimination Slow? Solving $\mathbf{L}x = b$ by Gaussian Elimination can take $O(n^3)$ time. The main issue is fill: the new Laplacian after elimination contains a clique on the neighbors of the eliminated vertex, so the matrix gets denser as elimination proceeds (e.g. eliminating the center of a star immediately creates a clique on all remaining vertices). But Laplacian cliques can be sparsified!
Gaussian Elimination. Pick a vertex $v$ to eliminate. Add the clique created by eliminating $v$. Repeat until done.
Approximate Gaussian Elimination. Pick a random vertex $v$ to eliminate. Sample the clique created by eliminating $v$, instead of adding all of it. Repeat until done. Resembles randomized Incomplete Cholesky (see the sketch below).
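Here is a compact sketch of the loop. To keep it simple it eliminates vertices in index order and keeps each clique edge independently with probability $p$ (reweighted by $1/p$); the actual [KS16] sampler picks random vertices and uses a cheaper per-neighbor sampling scheme, so treat this purely as an illustration of "eliminate, then sample the clique":

```python
import numpy as np

def eliminate_and_sample(S, i, p, rng):
    """Eliminate vertex i: remove its star exactly, then add an unbiased
    sample of the clique on its neighbors (each clique edge kept w.p. p,
    reweighted by 1/p). Simplified sketch, not the exact KS16 sampler."""
    d = S[i, i]
    c = S[:, i] / np.sqrt(d)                   # row i of U
    nbrs = [u for u in range(len(S)) if u != i and S[u, i] != 0.0]
    w = {u: -S[u, i] for u in nbrs}            # star edge weights
    for u in nbrs:                             # remove the star around i
        S[u, u] -= w[u]
    S[i, :] = 0.0
    S[:, i] = 0.0
    for a in range(len(nbrs)):                 # sample the clique
        for b in range(a + 1, len(nbrs)):
            u, v = nbrs[a], nbrs[b]
            if rng.random() < p:
                w_uv = w[u] * w[v] / (d * p)   # E[added edge] = w_u w_v / d
                S[u, v] -= w_uv
                S[v, u] -= w_uv
                S[u, u] += w_uv
                S[v, v] += w_uv
    return c

def approx_elimination(L, p=0.5, seed=0):
    rng = np.random.default_rng(seed)
    S = L.astype(float).copy()
    rows = []
    for i in range(len(L) - 1):
        if S[i, i] == 0.0:                     # vertex already isolated
            rows.append(np.zeros(len(L)))
            continue
        rows.append(eliminate_and_sample(S, i, p, rng))
    return np.vstack(rows)                     # U with U^T U ~ L (w.h.p.)
```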
Approximating Matrices by Sampling. Goal: $\mathbf{U}^\top\mathbf{U} \approx \mathbf{L}$. Approach: show that the sampled matrix is concentrated around its expectation, which gives $\mathbf{U}^\top\mathbf{U} \approx \mathbf{L}$ w. high probability.
Approximating Matrices in Expectation. Consider eliminating the first variable: the original Laplacian splits as rank-1 term + (remaining graph + clique), $\mathbf{L} = c_1 c_1^\top + (\mathbf{S} + \mathbf{C})$.
Approximating Matrices in Expectation. Replacing the clique by a sparsified clique $\tilde{\mathbf{C}}$ gives rank-1 term + (remaining graph + sparsified clique), $c_1 c_1^\top + (\mathbf{S} + \tilde{\mathbf{C}})$, after one elimination.
Approximating Matrices in Expectation. Suppose $\mathbb{E}\big[\tilde{\mathbf{C}}\big] = \mathbf{C}$. Then the approximation after one elimination equals $\mathbf{L}$ in expectation.
Approximating Matrices in Expectation. Let $\mathbf{L}_i$ be our approximation after $i$ eliminations, with $\mathbf{L}_0 = \mathbf{L}$. If we ensure at each step that $\mathbb{E}[\text{sparsified clique}] = \text{clique}$, then $\mathbb{E}[\mathbf{L}_i \mid \mathbf{L}_{i-1}] = \mathbf{L}_{i-1}$: the approximations form a matrix martingale, and $\mathbb{E}[\mathbf{U}^\top\mathbf{U}] = \mathbf{L}$.
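A quick empirical confirmation of the unbiasedness condition, using the same simplified per-edge sampler as in the sketch above (weights chosen arbitrarily):

```python
import numpy as np

# Average many samples of the sparsified clique and compare with the exact
# clique Laplacian created by eliminating a degree-3 vertex.
rng = np.random.default_rng(1)
w = np.array([1.0, 2.0, 3.0])                # star weights at the pivot
d = w.sum()
pairs = [(0, 1), (0, 2), (1, 2)]

def clique(sample=False, p=0.5):
    C = np.zeros((3, 3))
    for u, v in pairs:
        w_uv = w[u] * w[v] / d               # exact clique edge weight
        if sample:                           # keep w.p. p, reweight by 1/p
            w_uv = w_uv / p if rng.random() < p else 0.0
        C[u, v] -= w_uv
        C[v, u] -= w_uv
        C[u, u] += w_uv
        C[v, v] += w_uv
    return C

avg = sum(clique(sample=True) for _ in range(20_000)) / 20_000
print(np.abs(avg - clique()).max())          # ~0 up to sampling noise
```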
Approximation? Approximate Gaussian Elimination. Find $\mathbf{U}$, an upper triangular matrix, s.t. $\mathbf{U}^\top\mathbf{U} \approx_\epsilon \mathbf{L}$, meaning $(1-\epsilon)\mathbf{L} \preceq \mathbf{U}^\top\mathbf{U} \preceq (1+\epsilon)\mathbf{L}$.
Essential Tools. PSD order: $\mathbf{A} \preceq \mathbf{B}$ iff $x^\top\mathbf{A}x \le x^\top\mathbf{B}x$ for all $x$. Isotropic position: conjugate by $\mathbf{L}^{+/2}$, i.e. work with $\Phi(\mathbf{A}) = \mathbf{L}^{+/2}\mathbf{A}\,\mathbf{L}^{+/2}$. The goal is now $\big\|\Phi(\mathbf{U}^\top\mathbf{U} - \mathbf{L})\big\| \le \epsilon$.
Matrix Concentration: Edge Variables. Each sampled clique edge $e = (u, v)$ contributes $\mathbf{Y}_e = \frac{w_e}{p_e}(\mathbf{e}_u - \mathbf{e}_v)(\mathbf{e}_u - \mathbf{e}_v)^\top$ w. probability $p_e$, and $\mathbf{Y}_e = 0$ o.w. Subtracting the expectation and passing to isotropic position gives the zero-mean variables $\mathbf{X}_e = \Phi\big(\mathbf{Y}_e - \mathbb{E}[\mathbf{Y}_e]\big)$, to which Matrix Freedman applies.
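Finally, a small check of the two facts this setup needs (path graph and weights are illustrative): the isotropic edge variable has zero mean, and its norm is governed by $\frac{w_e}{p_e}\,(\mathbf{e}_u - \mathbf{e}_v)^\top \mathbf{L}^{+} (\mathbf{e}_u - \mathbf{e}_v)$.

```python
import numpy as np

# Isotropic edge variables on a path graph: Y_e = (w/p) b b^T w.p. p, else 0,
# with b = e_u - e_v; X_e = Phi(Y_e - E[Y_e]) should be zero-mean, with norm
# controlled by the leverage-like quantity (w/p) * b^T L^+ b.
n = 4
L = np.diag([1., 2., 2., 1.])
for i in range(n - 1):                        # path edges (i, i+1), weight 1
    L[i, i + 1] = L[i + 1, i] = -1.0

vals, vecs = np.linalg.eigh(L)                # pseudo-inverse square root
inv_sqrt = np.array([1 / np.sqrt(x) if x > 1e-9 else 0.0 for x in vals])
Phi = vecs @ np.diag(inv_sqrt) @ vecs.T       # L^{+/2}

u, v, w, p = 1, 2, 1.0, 0.5
b = np.zeros(n); b[u] = 1.0; b[v] = -1.0
Y_kept = (w / p) * np.outer(b, b)             # value w.p. p, else 0
EY = p * Y_kept                               # = w * b b^T
X_kept = Phi @ (Y_kept - EY) @ Phi            # isotropic, edge kept
X_drop = Phi @ (-EY) @ Phi                    # isotropic, edge dropped

print(np.abs(p * X_kept + (1 - p) * X_drop).max())   # zero mean: ~0
print(np.linalg.norm(X_kept, 2), (w / p) * b @ np.linalg.pinv(L) @ b)
```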