Parallel Solution of the Poisson Problem Using MPI Jason W. DeGraw Dept. of Mechanical and Nuclear Engineering The Pennsylvania State University
Introduction • The Poisson equation arises frequently in many areas of mechanical engineering: • Heat transfer (heat conduction with internal heat generation) • Solid mechanics (beam torsion) • Fluid mechanics (potential flow) • There are several reasons to use the Poisson equation as a test problem: • It is (relatively) easy to solve • Many different exact solutions are available for comparison
A Basic Poisson Problem • Our first case is the simple two-dimensional Poisson equation on the square with corners (-1,-1), (1,-1), (1,1), and (-1,1).
Finite Difference Approach • The standard second-order difference equation is the five-point stencil: ( u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1) - 4 u(i,j) ) / h² = f(i,j) • This formulation leads to a system of equations that is • symmetric • banded • sparse
Solving the System • The system may be solved in many ways: • Direct methods (Gaussian elimination) • Simple iterative methods (Jacobi, Gauss-Seidel, etc.) • Advanced iterative methods (CG, GMRES) • We will only consider iterative methods here, as we would like to test ways to solve big problems. We will use • Jacobi • Gauss-Seidel • Conjugate gradient
Jacobi and Gauss-Seidel • The Jacobi and Gauss-Seidel iterative methods are easy to understand and easy to implement. • Some advantages: • No explicit storage of the matrix is required • The methods are fairly robust and reliable • Some disadvantages: • Really slow (Gauss-Seidel) • REALLY slow (Jacobi) • Relaxation methods (which are related) are usually better, but can be less reliable.
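The sweep at the heart of both methods can be sketched in a few lines. Below is a minimal C sketch of one Jacobi sweep for the five-point Poisson stencil; the grid size `N`, the array layout, and the function name are illustrative assumptions, not code from the original deck.

```c
#include <math.h>

#define N 32  /* interior grid points per direction (illustrative) */

/* One Jacobi sweep for the five-point stencil discretization of
   u_xx + u_yy = f on a uniform grid with spacing h.  u and unew are
   (N+2)x(N+2) arrays whose outer ring holds the boundary values.
   Returns the maximum pointwise change, usable as a stopping test. */
double jacobi_sweep(double u[N+2][N+2], double unew[N+2][N+2],
                    double f[N+2][N+2], double h)
{
    double diff = 0.0;
    for (int i = 1; i <= N; i++) {
        for (int j = 1; j <= N; j++) {
            unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                               + u[i][j-1] + u[i][j+1]
                               - h * h * f[i][j]);
            double d = fabs(unew[i][j] - u[i][j]);
            if (d > diff) diff = d;
        }
    }
    return diff;
}
```

A Gauss-Seidel sweep would read and write the same array, so updated neighbor values are used immediately; that is what makes it somewhat faster than Jacobi, and also order-dependent.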
The Conjugate Gradient Method • CG is a much more powerful way to solve the problem. • Some advantages: • Easy to program (compared to other advanced methods) • Fast (theoretical convergence in N steps for an N by N system) • Some disadvantages: • Explicit representation of the matrix is probably necessary • Applies only to SPD matrices (not an issue here) • Note: In this particular case, preconditioning is not an issue. In more general problems, preconditioning must be addressed.
CG (cont’d) • As discussed in class, the CG algorithm requires two types of matrix-vector operations: • Vector dot product • Matrix-vector product • The matrix-vector product is more expensive, so we must be careful how we implement it. • Also keep in mind that both operations will require communication.
Matrix Storage • We could try to take advantage of the banded nature of the system, but a more general solution is to adopt a sparse matrix storage strategy. • Complicated structures are possible, but we will use a very simple row-wise scheme built from three arrays: the stored nonzero values A(i,k), their column indices j(i,k), and the number of nonzeros in each row n(i).
Matrix Storage (Cont’d) • The previous example is pretty trivial, but consider the following: • An N by N grid leads to an N² by N² system • Each row in the system has no more than 5 nonzero entries • Example: N=64 • Full storage: 64⁴ = 16,777,216 • Sparse storage: 64² × 5 × 3 = 61,440 • “Sparseness” ratio (sparse / full): 0.00366 • For large N, storing the full matrix is incredibly wasteful. • This form is easy to use in a matrix-vector multiply routine.
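The storage counts above can be checked in a couple of lines of C (the function names are ours, for illustration only):

```c
/* Storage for the N-by-N grid Poisson system, in counts of stored reals.
   Full: the (N^2)-by-(N^2) matrix has N^4 entries.  Sparse: at most 5
   nonzeros per row, times three arrays (values, column indices, row
   counts), gives roughly N^2 * 5 * 3, as on the slide. */
long full_storage(long n)   { return n * n * n * n; }
long sparse_storage(long n) { return n * n * 5 * 3; }
```

For N = 64 these reproduce the slide's 16,777,216 versus 61,440, a ratio of about 0.00366.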
Sparse Matrix Multiplication • The obligatory chunk of FORTRAN 77 code:

c     Compute y = A*x, where n(i) is the number of nonzeros in row i,
c     A(i,k) is the k-th stored value in row i, and j(i,k) is its column.
      do i = 1, M
         y(i) = 0.0
         do k = 1, n(i)
            y(i) = y(i) + A(i,k)*x(j(i,k))
         enddo
      enddo
Limitations of Finite Differences • Unfortunately, it is not easy to use finite differences in complex geometries. • While it is possible to formulate curvilinear finite difference methods, the resulting equations are usually pretty nasty. • The simplicity that makes finite differences attractive in simple geometries disappears in more complicated geometries.
Finite Element Methods • The finite element method, while more complicated than finite difference methods, easily extends to complex geometries. • A simple (and short) description of the finite element method is not easy to give. Here goes anyway: PDE → Weak Form → Matrix System
Finite Element Methods (cont’d) • The following steps usually take place: 1. Load mesh and boundary conditions 2. Compute element stiffness matrices 3. Assemble global stiffness matrix and right-hand side 4. Solve system of equations 5. Output results • Where can we improve things? • Steps 1 and 5 are somewhat fixed • Step 4 – use parallel solution techniques • Avoid step 3 entirely – use an element-by-element solution technique
An Element by Element Algorithm • The assembly process can be considered as a summation over elements: A = Σe (Pe)ᵀ Ae Pe, where Ae is the element stiffness matrix and Pe gathers an element's nodal values from the global vector. • Then the matrix-vector product is Ax = Σe (Pe)ᵀ Ae (Pe x). • We can compute each element stiffness matrix and store it. Computing the matrix-vector product is then done via the above element-by-element (EBE) formula.
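A minimal C sketch of the EBE matrix-vector product, on a toy two-element 1-D mesh (the mesh, element matrices, and all names here are illustrative assumptions, not the deck's code): gather the element's nodal values, apply the stored element matrix, and scatter-add the result, never forming the global matrix.

```c
#define NELEM 2
#define NNODE 3  /* tiny 1-D mesh: two linear elements, three nodes */

/* Element stiffness for a unit-length 1-D linear element: [1 -1; -1 1].
   conn[e][a] gives the global node of local node a of element e. */
static const double Ae[2][2]        = { { 1.0, -1.0 }, { -1.0, 1.0 } };
static const int    conn[NELEM][2]  = { { 0, 1 }, { 1, 2 } };

/* Element-by-element product y = (sum_e Pe^T Ae Pe) x. */
void ebe_matvec(const double *x, double *y)
{
    for (int i = 0; i < NNODE; i++) y[i] = 0.0;
    for (int e = 0; e < NELEM; e++) {
        double xe[2], ye[2];
        for (int a = 0; a < 2; a++)                     /* gather      */
            xe[a] = x[conn[e][a]];
        for (int a = 0; a < 2; a++)                     /* local apply */
            ye[a] = Ae[a][0] * xe[0] + Ae[a][1] * xe[1];
        for (int a = 0; a < 2; a++)                     /* scatter-add */
            y[conn[e][a]] += ye[a];
    }
}
```

On this mesh the result matches multiplying by the assembled global matrix [[1,-1,0],[-1,2,-1],[0,-1,1]], which is exactly the equivalence the EBE formula asserts.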
A Potential Flow Problem • A simple test problem for which there is an “exact” solution is potential flow over a wavy wall.
A Potential Flow Problem (cont’d) • The exact solution is available in closed form for comparison • This problem is easily solved with finite element methods • The computed solution is accurate to within 1%
Conclusions • CG is a “better” iterative method than Jacobi or Gauss-Seidel. • Finite element methods, while more complicated than finite difference methods, are more geometrically flexible. • Finite element methods are generally more difficult to implement and the setup process is particularly expensive. • The EBE technique reduced the communication ratio, but took longer to actually produce results.
Conclusions (cont’d) • Clearly, communication issues are important when using CG-type methods. • Implementing a fully parallel finite element scheme is quite difficult. Some issues that were not addressed here: • Distributed assembly • Better mesh distribution schemes (i.e., fewer broadcasts) • Good results were obtained using both FEM and finite difference methods.