Distributed Computation of a Sparse Matrix Vector Product


Presentation Transcript


  1. Distributed Computation of a Sparse Matrix Vector Product Lecture 18 MA/CS 471 Fall 2003

  2. So Far • We have so far discussed stationary iterative methods (Jacobi and Gauss-Seidel). • We discovered (by constructing the iteration for the error vector) that the convergence rate is determined by the spectral radius of the iteration matrix (see the formulas below). • The smaller the spectral radius of this matrix, the faster the convergence.
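The equation image from the original slide is not in this transcript; a standard statement of the result, written in terms of a splitting A = M − N (notation assumed here), is:

```latex
% Stationary iteration built from the splitting A = M - N:
%   M x^{(k+1)} = N x^{(k)} + b .
% Subtracting M x = N x + b for the exact solution x gives the error iteration
\[
  e^{(k+1)} \;=\; \left(M^{-1}N\right) e^{(k)} \;=\; \left(I - M^{-1}A\right) e^{(k)},
  \qquad e^{(k)} = x^{(k)} - x ,
\]
% so the iteration converges for every starting guess exactly when
\[
  \rho\!\left(M^{-1}N\right) \;<\; 1 .
\]
```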

  3. Parallel Implementation • A further consideration in choosing between schemes is their relative parallel efficiency. • We noted by demonstration that Gauss-Seidel appears to be much less “parallel” than Jacobi iteration (see the sketch below).
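A minimal sketch (not from the original slides; function names are illustrative) contrasting the two update rules. Jacobi reads only values from the previous iterate, so every component can be updated independently and in parallel, while Gauss-Seidel consumes freshly updated components, which serializes the sweep:

```python
import numpy as np

def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every component is updated from the *old* iterate x,
    so all n component updates are independent and could run in parallel."""
    x = np.asarray(x, dtype=float)
    n = len(b)
    x_new = np.empty_like(x)
    for i in range(n):
        s = A[i, :] @ x - A[i, i] * x[i]              # sum_{j != i} a_ij * x_j (old values only)
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: component i uses the freshly updated components
    0..i-1, so the loop carries a sequential dependence across components."""
    x = np.asarray(x, dtype=float).copy()
    n = len(b)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]   # mixes new (left) and old (right) values
        x[i] = (b[i] - s) / A[i, i]
    return x
```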

  4. Non-Stationary Methods • There are many alternative approaches to solving Ax=b. • We will consider a set of Krylov subspace methods (defined below). • Specifically, the popular conjugate gradient method for symmetric, positive-definite A.
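For reference (standard definition, not reproduced from the slide images): the k-th Krylov subspace generated by A and the initial residual r_0 = b − A x_0 is

```latex
\[
  \mathcal{K}_k(A, r_0) \;=\; \operatorname{span}\{\, r_0,\; A r_0,\; A^{2} r_0,\; \dots,\; A^{k-1} r_0 \,\},
\]
% Krylov methods choose the k-th iterate from x_0 + K_k(A, r_0);
% the optimality criterion used for that choice distinguishes the individual methods.
```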

  5. Basic Idea • One can view the solution of Ax=b as a minimization problem. • We define the quadratic form f (written out below). • We will show that if A is positive-definite and symmetric then the solution of Ax=b minimizes f. See: http://www-2.cs.cmu.edu/~jrs/jrspapers.html#cg
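The quadratic form referred to on the slide (its equation image is missing from the transcript) is the standard one, as in the Shewchuk notes linked above:

```latex
\[
  f(x) \;=\; \tfrac{1}{2}\, x^{T} A x \;-\; b^{T} x \;+\; c ,
\]
% where c is an arbitrary scalar constant that does not affect the location of the minimizer.
```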

  6. Minimizer of f • In order to find minima of the quadratic form f we take its gradient with respect to x. • If A is symmetric then the gradient reduces to Ax − b (derivation below; demonstration on board).
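A standard statement of the board derivation, starting from the quadratic form above:

```latex
\[
  \nabla f(x) \;=\; \tfrac{1}{2}\, A^{T} x \;+\; \tfrac{1}{2}\, A x \;-\; b
  \;\;\xrightarrow{\;A^{T} = A\;}\;\; A x \;-\; b .
\]
```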

  7. cont • So if A is symmetric then at a minimum of f the gradient vanishes, giving Ax=b, i.e. the system we wish to solve. • Thus the solution to Ax=b is a critical point of f. • Next we show that if A is positive-definite then this critical point is a global minimum.

  8. Global Minimum • We consider the difference between f at the solution x and any other vector p (worked out below):
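The slide's equation image is missing; for symmetric A with Ax = b the difference works out to:

```latex
\[
  f(p) - f(x)
  \;=\; \tfrac{1}{2}\, p^{T} A p - b^{T} p \;-\; \tfrac{1}{2}\, x^{T} A x + b^{T} x
  \;=\; \tfrac{1}{2}\,(p - x)^{T} A \,(p - x) \;\ge\; 0 ,
\]
% with equality only at p = x when A is positive definite.
```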

  9. i.e. f has a global minimum at the solution to Ax=b

  10. Summary So Far • If A is symmetric, positive definite then the quadratic form f has a global minimum at the solution to Ax=b. • Approach: use a minimization algorithm to iterate towards this global minimizer.

  11. Conjugate-Gradient • This algorithm is suitable for solving Ax=b where A is symmetric and positive-definite.

  12. Conjugate-Gradient Algorithm • The iteration is defined by the standard sequence of CG updates (sketched below). See: http://www-2.cs.cmu.edu/~jrs/jrspapers.html#cg
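The algorithm images on the slide are not reproduced in the transcript; the following is a minimal sketch of the textbook conjugate gradient iteration for symmetric positive-definite A, in the spirit of Shewchuk's notes (function and variable names are mine):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Standard CG for symmetric positive-definite A.
    A may be any object supporting A @ x (dense, sparse, or matrix-free operator)."""
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along the search direction
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop on the residual norm
            break
        p = r + (rs_new / rs_old) * p  # new A-conjugate search direction
        rs_old = rs_new
    return x
```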

  13. Parallel Implementation • There are a few bottlenecks: • Any dot product will require a global all-reduce. • The norm of the residual requires a global all-reduce (see the sketch below).
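A minimal sketch (assuming mpi4py and a row-wise partition of the vectors across ranks; names are illustrative) of why each dot product and norm in the distributed CG iteration needs a global all-reduce:

```python
import numpy as np
from mpi4py import MPI

def parallel_dot(x_local, y_local, comm=MPI.COMM_WORLD):
    """Each rank owns a contiguous block of x and y; the global dot product is
    the sum of the local partial dot products over all ranks."""
    local = float(np.dot(x_local, y_local))
    return comm.allreduce(local, op=MPI.SUM)   # one global all-reduce per dot product

def parallel_norm(r_local, comm=MPI.COMM_WORLD):
    """The residual norm requires the same collective communication."""
    return np.sqrt(parallel_dot(r_local, r_local, comm))
```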

  14. Rate of Convergence • Definition: the spectral condition number κ of A (given below). • The error equation (after a lot of work) now satisfies the bound given below. • i.e. in the problem-induced A-norm, the error decays at each iteration by a factor determined by the spectral condition number κ.
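The definition and error bound the slide refers to (standard results; the slide's equation images are missing) read:

```latex
% Spectral condition number of the SPD matrix A:
\[
  \kappa \;=\; \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)} ,
\]
% and in the A-norm  ||e||_A = sqrt(e^T A e)  the CG error satisfies
\[
  \left\| e^{(k)} \right\|_{A}
  \;\le\;
  2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k}
  \left\| e^{(0)} \right\|_{A} .
\]
```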

  15. cont • So we note that the larger the spectral condition number of the matrix A, the more iterations the method can take to converge (an estimate follows below).
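Quantitatively (a standard consequence of the bound above, not taken from the slide): to reduce the A-norm of the error by a factor ε one needs roughly

```latex
\[
  k \;\gtrsim\; \tfrac{1}{2}\,\sqrt{\kappa}\;\ln\!\frac{2}{\varepsilon}
\]
% iterations, i.e. the iteration count grows like sqrt(kappa) for large condition numbers.
```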
