Parallel Solution of Navier-Stokes Equations
Xing Cai, Dept. of Informatics, University of Oslo
Outline of the Talk • Two parallelization strategies • based on domain decomposition • at the linear algebra level • Parallelization of Navier-Stokes • Numerical experiments
Diffpack • Object-oriented (O-O) software environment for scientific computation • Rich collection of PDE solution components - portable, flexible, extensible • www.diffpack.com • H. P. Langtangen: Computational Partial Differential Equations, Springer, 1999
The Question Starting point: sequential PDE solvers. How do we do the parallelization? We need • a good parallelization strategy • a good and simple implementation of the strategy The resulting parallel solvers should have • good parallel efficiency • good overall numerical performance
Domain Decomposition • Solution of the original large problem through iteratively solving many smaller subproblems • Can be used as a solution method or as a preconditioner • Flexibility -- localized treatment of irregular geometries, singularities, etc. • Very efficient numerical methods -- even on sequential computers • Suitable for coarse-grained parallelization
Additive Schwarz Method • Subproblems can be solved in parallel • Subproblems are of the same form as the original large problem, with possibly different boundary conditions on artificial boundaries
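For reference, in standard additive Schwarz notation (not reproduced from the slides), one iteration for the global system A u = f over M overlapping subdomains can be written as

    u^{k+1} = u^{k} + \sum_{i=1}^{M} R_i^{T} A_i^{-1} R_i \, (f - A u^{k})

where R_i restricts a global vector to subdomain i and A_i = R_i A R_i^T is the local subdomain matrix. The M local solves with A_i^{-1} are independent of each other, which is exactly why the subproblems can be solved in parallel.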
Convergence of the Solution Single-phase groundwater flow
Observations • DD is a good parallelization strategy • The approach is not PDE-specific • A program for the original global problem can be reused (modulo B.C.) for each subdomain • Must communicate overlapping point values • No need for global data • Explicit temporal schemes are a special case where no iteration is needed (“exact DD”)
Goals for the Implementation • Reuse sequential solver as subdomain solver • Add DD management and communication as separate modules • Collect common operations in generic library modules • Flexibility and portability • Simplified parallelization process for the end-user
Generic Subdomain Simulators (class hierarchy on the slide: SubdomainSimulator, SubdomainFEMSolver) • SubdomainSimulator: abstract interface to all subdomain simulators, as seen by the Administrator • SubdomainFEMSolver: special case of SubdomainSimulator for finite element-based simulators • These are generic classes, not restricted to specific application areas
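A minimal C++ sketch of how such a hierarchy might look. This is an illustration only, not Diffpack's actual class declarations: only the class names and the createLocalMatrix hook appear in the talk, the other member names are assumed.

    // Illustration only: a stripped-down generic subdomain-simulator hierarchy.
    // Names other than SubdomainSimulator, SubdomainFEMSolver and
    // createLocalMatrix are assumptions made for this sketch.
    class SubdomainSimulator {                // abstract interface seen by the Administrator
    public:
      virtual ~SubdomainSimulator () {}
      virtual void initLocalProblem () = 0;   // set up the local subdomain problem
      virtual void solveLocalProblem () = 0;  // one subdomain solve per DD iteration
    };

    class SubdomainFEMSolver : public SubdomainSimulator {  // FEM-based special case
    public:
      virtual void createLocalMatrix () = 0;  // hook implemented by the user's simulator
    };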
Making the Simulator Parallel (class diagram on the slide: SubdomainSimulator, SubdomainFEMSolver, Simulator, SimulatorP, Administrator)

    class SimulatorP : public SubdomainFEMSolver, public Simulator
    {
      // ... just a small amount of code
      virtual void createLocalMatrix ()
        { Simulator::makeSystem (); }   // reuse the sequential assembly code
    };
Summary So Far • A generic approach • Works if the DD algorithm works • Make use of class hierarchies • The new parallel-specific code, SimulatorP, is very small and simple to write
Application • Single-phase groundwater flow • DD as the global solution method • Subdomain solvers use CG+FFT • Fixed number of subdomains M=32 (independent of P) • Straightforward parallelization of an existing simulator P: number of processors
Linear-algebra-level Approach • Parallelize matrix/vector operations • inner product of two vectors • matrix-vector product • preconditioning - block contribution from subgrids • Easy to use • access to all Diffpack v3.0 iterative methods, preconditioners and convergence monitors • "hidden" parallelization • need only to add a few lines of new code • arbitrary choice of the number of processors at run-time • less flexibility than DD
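To make the idea concrete, here is a minimal sketch of a distributed inner product using plain MPI. It is an assumption-laden illustration of the linear-algebra-level strategy, not Diffpack's internal implementation; in Diffpack this is hidden behind library classes such as GridPartAdm (shown on a later slide). The 0/1 ownership mask for overlap points is an assumed convention for this sketch.

    #include <mpi.h>
    #include <cstddef>
    #include <vector>

    // Sketch: inner product of two distributed vectors. Each process owns the
    // entries of its subgrid; points duplicated in overlap regions must be
    // counted only once, here handled via a 0/1 ownership mask.
    double distInnerProd (const std::vector<double>& x,
                          const std::vector<double>& y,
                          const std::vector<char>& owned)
    {
      double local = 0.0;
      for (std::size_t i = 0; i < x.size (); ++i)
        if (owned[i])                       // skip duplicated overlap points
          local += x[i] * y[i];

      double global = 0.0;                  // sum the local contributions
      MPI_Allreduce (&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
      return global;
    }

The matrix-vector product is parallelized in the same spirit: each process multiplies with its local block and then exchanges the overlapping point values with its neighbours.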
Straightforward Parallelization • Develop a sequential simulator, without paying attention to parallelism • Follow the Diffpack coding standards • Need Diffpack add-on libraries for parallel computing • Add a few new statements for transformation to a parallel simulator
Library Tool • class GridPartAdm • Generate overlapping or non-overlapping subgrids • Prepare communication patterns • Update global values • matvec, innerProd, norm
A Simple Coding Example

    GridPartAdm* adm;    // access to parallelization functionality
    LinEqAdm* lineq;     // administrator for linear system & solver
    // ...
    #ifdef PARALLEL_CODE
      adm->scan (menu);                // read partitioning parameters from the menu
      adm->prepareSubgrids ();
      adm->prepareCommunication ();
      lineq->attachCommAdm (*adm);     // let the linear solver use the communication administrator
    #endif
    // ...
    lineq->solve ();

Corresponding run-time menu input:

    set subdomain list = DEFAULT
    set global grid = grid1.file
    set partition-algorithm = METIS
    set number of overlaps = 0
Single-phase Groundwater Flow • Highly unstructured grid • Discontinuity in the coefficient K (0.1 & 1)
Measurements • 130,561 degrees of freedom • Overlapping subgrids • Global BiCGStab using (block) ILU preconditioning
Simulation Snapshots Pressure
Some CPU Measurements The pressure equation is solved by the CG method
Combined Approach • Use a CG-like method as basic solver (i.e. use a parallelized Diffpack linear solver) • Use DD as preconditioner (i.e. SimulatorP is invoked as a preconditioner solve) • Combine with coarse grid correction • CG-like method + DD prec. is normally faster than DD as a basic solver
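In the same notation as before (an illustration, not taken from the slides), the DD preconditioner with coarse grid correction used inside the CG-like iteration is

    M^{-1} = R_0^{T} A_0^{-1} R_0 + \sum_{i=1}^{M} R_i^{T} A_i^{-1} R_i

where A_0 is the coarse grid operator and R_0 the restriction to the coarse grid. The Krylov method is then applied to M^{-1} A u = M^{-1} f, which normally needs fewer iterations than running DD alone as the basic solver.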
Two-phase Porous Media Flow • Governing equations shown on the slide: SEQ (saturation eq.) and PEQ (pressure eq.) • BiCGStab + DD prec. for the global pressure eq. • Multigrid V-cycle in subdomain solves
Two-phase Porous Media Flow History of saturation for water and oil
Nonlinear Water Waves • Fully nonlinear 3D water waves • Primary unknowns: (listed on the original slide) • Parallelization based on an existing sequential Diffpack simulator
Nonlinear Water Waves • CG + DD prec. for global solver • Multigrid V-cycle as subdomain solver • Fixed number of subdomains M=16 (independent of P) • Subgrids from partition of a global 41x41x41 grid
Nonlinear Water Waves 3D Poisson equation in water wave simulation
Summary • Goal: provide software and programming rules for easy parallelization of sequential simulators • Two parallelization strategies: • domain decomposition: very flexible, with compact and visible code/algorithms • parallelization at the linear algebra level: "automatic", hidden parallelization • Performance: satisfactory speed-up