Computational Support for Parallel/Distributed AMR Manish Parashar The Applied Software Systems Laboratory ECE/CAIP, Rutgers University www.caip.rutgers.edu/~parashar/TASSL
Roadmap • Introduction to Berger-Oliger AMR • Hierarchical Linked Lists (L. Wild) • Overview of the GrACE Infrastructure • GrACE Programming Model and API • GrACE Design & Implementation • Current Research & Future Directions
Cactus and GrACE • Cactus + GrACE • Transparent access to AMR via Cactus • GrACE Infrastructure Thorn • AMR Driver Thorn • Status • Unigrid driver in place • AMR driver under development
The AMR Concept • Problem: How to maximize solution accuracy for a given problem size with limited computational resources? • Solution: Use dynamically adaptive grids (instead of uniform grids), where the grid resolution is defined locally based on application features and solution quality. • Method: Adaptive Mesh Refinement (AMR)
Adaptively Gridding the Application Domain Marsha Berger et al. (http://cs.nyu.edu/faculty/berger/)
Adaptive Grid Structure
Berger-Oliger AMR: Algorithm • Define adaptive grid structure • Define grid functions • Initialize grid functions • Repeat NumTimeSteps • if (RegridTime) Regrid at Level • Integrate at Level • if (Level+1 exists) Integrate at Level+1; Update Level from Level+1 • End Repeat
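The recursive structure of the loop above can be sketched in a few lines of C++. This is a minimal illustration of Berger-Oliger time stepping only; all names (Hierarchy, advance, steps_taken) are hypothetical and not the GrACE API, and the actual integration and restriction are reduced to step counting and comments.

```cpp
#include <vector>

// Minimal sketch of Berger-Oliger recursive time stepping.
// Illustrative names only; not the GrACE API.
struct Hierarchy {
    int num_levels;                // levels 0 .. num_levels-1
    int refine_factor;             // temporal refinement ratio between levels
    std::vector<int> steps_taken;  // per-level step counters (stands in for integration)
};

// Advance `level` by one of its own time steps; each finer level takes
// refine_factor sub-steps, after which the coarse solution is corrected.
void advance(Hierarchy& h, int level) {
    h.steps_taken[level] += 1;               // "Integrate at Level"
    if (level + 1 < h.num_levels) {
        for (int s = 0; s < h.refine_factor; ++s)
            advance(h, level + 1);           // "Integrate at Level+1"
        // "Update Level from Level+1": restrict fine solution onto coarse grid
    }
}
```

One coarse step with two levels of factor-2 refinement therefore takes 1, 2, and 4 steps on levels 0, 1, and 2 respectively, which is the nested sub-cycling the algorithm bullet describes.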
Berger-Oliger AMR: Grid Hierarchy
HLL (Hierarchical Linked Lists) • AMR system devised by Lee Wild in 1996 • Grid points are split into nodes of size refinement-factor in each direction • Refinement is done on nodes • Avoids the clustering step required by box-based AMR schemes
Status of HLL • Lee Wild wrote a shared-memory version that was tested on various problems and showed excellent scaling properties. • It is currently being re-implemented as a standalone library with shared-memory and MPI parallelism. This library will be used by a Cactus thorn to provide an AMR driver layer.
GrACE: An Overview
Programming Interface • Coarse-grained SPMD data parallelism • C++ driver • declares and defines the computational domain and application variables in terms of GrACE programming abstractions • defines the overall structure of the AMR algorithm • FORTRAN/FORTRAN 90/C computational kernels • defined on regular arrays
Programming Abstractions • Grid Hierarchy Abstraction • Template for the distributed adaptive grid hierarchy • Grid Function Abstraction • Application fields defined on the adaptive grid hierarchy • Grid Geometry Abstraction • High-level tools for addressing regions in the computational domain
Grid Geometry Abstractions [figure: a bounding box with lower bound (lbx, lby), upper bound (ubx, uby), and spacings dx, dy] • Coords • rank, x, y, z, … • BBox • lb, ub, stride • BBoxList • Operations • union, intersection, cluster, refine/coarsen, difference, …
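A minimal 2-D sketch of the BBox abstraction makes the geometry operations concrete. The field names mirror the slide (lower/upper bounds), but the struct, its methods, and the omission of the stride are illustrative simplifications, not the GrACE signatures.

```cpp
#include <algorithm>

// Sketch of a 2-D BBox: a rectangular index-space region with
// inclusive lower and upper bounds. Illustrative, not the GrACE API.
struct BBox {
    int lbx, lby;   // lower bound
    int ubx, uby;   // upper bound (inclusive)

    bool empty() const { return lbx > ubx || lby > uby; }

    // Intersection of two boxes (an empty box if they do not overlap).
    BBox intersect(const BBox& o) const {
        return { std::max(lbx, o.lbx), std::max(lby, o.lby),
                 std::min(ubx, o.ubx), std::min(uby, o.uby) };
    }

    // Refine by an integer factor r: each coarse cell maps to r*r fine cells.
    BBox refine(int r) const {
        return { lbx * r, lby * r, (ubx + 1) * r - 1, (uby + 1) * r - 1 };
    }
};
```

Union, clustering, and difference on BBoxList follow the same pattern of pure index-space arithmetic, which is what lets the geometry layer stay independent of storage and distribution.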
GridHierarchy Abstraction • Attributes: • number of dimensions • maximum number of levels • specification of the computational domain • distribution type • refinement factor • boundary type/width GridHierarchy GH(Dim,GridType,MaxLevs)
GridFunction Abstraction • Attributes: • dimension and type • vector? • spatial/temporal stencils • associated GridHierarchy • prolongation/restriction functions • “shadow” specification • alignments • ghost cells • boundary types/updates • interaction types • flux registers? • parent storage? GridFunction(DIM)<T> GF(“gf”, Stencils, GH, …)
GridFunction Operations • GridFunction storage for a particular time, level, and component (and hierarchy) is managed as a Fortran 90 array object. GF(t, l, c, Main/Shadow) <op> Scalar GF(t, l, c, Main/Shadow) <op> GF2(…) RedOp(GF, t, l, Main/Shadow) • <op> : =, +=, -=, /=, *=, … • RedOp: Max, Min, Sum, Product, Norm, …
Ghost Communications Sync(GF, Time, Level, Main/Shadow) Sync(GF, Time, Level, Axis, Dir, Main/Shadow) Sync(GH, Time, Level, Main/Shadow) • Ghost region communications based on the GridFunction stencil attribute at the specified grid level
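The data movement that Sync performs transparently can be illustrated with two adjacent 1-D patches and a stencil width of 1. This sketch keeps both patches in one address space; in GrACE the same exchange happens across processors, driven by the GridFunction's stencil attribute. The layout and function name here are assumptions for illustration only.

```cpp
#include <cstddef>
#include <vector>

// Illustrative ghost-cell exchange between two adjacent 1-D patches
// with one ghost cell on each side. Layout per patch:
//   [ghost | interior ... interior | ghost]
// Not the GrACE Sync API; a single-address-space stand-in.
void sync_ghosts(std::vector<double>& left, std::vector<double>& right) {
    const std::size_t nl = left.size();
    left[nl - 1] = right[1];      // left's right ghost <- right's first interior cell
    right[0]     = left[nl - 2];  // right's left ghost <- left's last interior cell
}
```

After the exchange, a stencil of width 1 can be applied to every interior cell of either patch without touching the neighbor's memory, which is exactly what the computational kernels rely on between Sync calls.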
Region-based Communications Copy(GF, Time, Level, Reg1, Reg2, Main/Shadow) • Arbitrary copy (add, subtract) from Region 1 to Region 2 at the specified grid level.
Data-parallel forall operator forall (gf, time, level, component) Call FORTRAN Subroutine… end_forall • Parallel operation for all grid components at a particular time step and level.
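The essence of the forall construct is that each component (patch) at a level is processed independently, so the iterations can run in parallel. A hedged C++ stand-in, with a callable kernel in place of the FORTRAN subroutine (the names here are illustrative, not GrACE's):

```cpp
#include <functional>
#include <vector>

// Sketch of the forall idea: apply a kernel to every component (patch)
// at a given level. Each iteration is independent, which is what makes
// the construct data-parallel. Illustrative; not the GrACE forall.
void forall_components(std::vector<std::vector<double>>& components,
                       const std::function<void(std::vector<double>&)>& kernel) {
    for (auto& patch : components)   // independent iterations -> parallelizable
        kernel(patch);
}
```

In GrACE the patches live on different processors and the kernel is a regular-array FORTRAN/C routine, but the contract is the same: the kernel sees one patch at a time and never reaches across patches.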
Refinement & Regridding • Encapsulates: • Generation of refined grids • Redistribution • Load-balancing • Data-transfers • Interaction schedules Refine(GH, Level, BBoxList) RecomposeHierarchy(GH)
Prolongation/Restriction Functions • Set prolong/restrict functions for each GridFunction foreachGF(GH, GF, DIM, GFType) SetProlongFunction(GF, Pfunc); SetRestrictFunction(GF, Rfunc); end_forallGF • Prolong/Restrict Prolong(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …, Main/Shadow); Restrict(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …, Main/Shadow);
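A pair of minimal 1-D transfer operators shows what the per-GridFunction Pfunc/Rfunc callbacks typically do: linear interpolation for prolongation and injection for restriction, here with a fixed refinement factor of 2 on node-centered data. These are common textbook choices standing in for the user-supplied functions; the names and factor are assumptions, not GrACE's.

```cpp
#include <cstddef>
#include <vector>

// Prolongation: coarse -> fine by linear interpolation (node-centered,
// refinement factor 2). Illustrative stand-in for a user Pfunc.
std::vector<double> prolong(const std::vector<double>& coarse) {
    std::vector<double> fine(2 * coarse.size() - 1);
    for (std::size_t i = 0; i < coarse.size(); ++i)
        fine[2 * i] = coarse[i];                              // coincident points
    for (std::size_t i = 0; i + 1 < coarse.size(); ++i)
        fine[2 * i + 1] = 0.5 * (coarse[i] + coarse[i + 1]);  // interpolated points
    return fine;
}

// Restriction: fine -> coarse by injection (take coincident points).
// Illustrative stand-in for a user Rfunc.
std::vector<double> restrict_inject(const std::vector<double>& fine) {
    std::vector<double> coarse((fine.size() + 1) / 2);
    for (std::size_t i = 0; i < coarse.size(); ++i)
        coarse[i] = fine[2 * i];
    return coarse;
}
```

Injection after linear prolongation recovers the original coarse data exactly, which is a convenient sanity check when wiring up a new GridFunction's transfer operators.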
Checkpoint/Restart/Rollback • Checkpoint Checkpoint(GH,ChkPtFile); • Each GridFunction can be individually selected or deselected for checkpointing • Checkpoint files independent of # of processors • Restart ComposeHierarchy(GH,ChkPtFile); • Rollback RecomposeHierarchy(GH,ChkPtFile);
IO Interface • Initialize IO ACEIOInit(); • Select IO Type ACEIOType(GH, IOType); • IOType := ACEIO_HDF, ACEIO_IEEEIO, … • BEGIN_COMPUTE/END_COMPUTE mark a region that is not executed by the dedicated IO node • Do IO Write(GF, Time, Level, Main, Double); • End IO ACEIOEnd(GH);
Multigrid Interface • Determine the number of multigrid levels available MultiGridLevels(GH, Level, Main/Shadow); • Set up the multigrid hierarchy for a GridFunction SetUpMultiGrid(GF, Time, Level, MGlf, MGlc, Main/Shadow); SetUpMultiGrid(GF, Time, Level, Axis, MGlf, MGlc, Main/Shadow); • Do Multigrid GF(Time, Level, Comp, MGl, Main/Shadow)…; • Release the multigrid hierarchy ReleaseMultiGrid(GF, Time, Level, Main/Shadow);
Software Engineering in the Small: Design Principles • Separation of Concerns • policy from mechanisms • data management from solution methods • storage semantics from addressing and access • computer science from computational science from engineering • Hierarchical Abstractions • application specific programming abstractions • semantically specialized DSM • distributed shared objects • hierarchical, extendible index space + distributed dynamic storage
[Diagram: layered architecture, top to bottom — Application → Application Components (App. Objects: Solver, Interpolator, Error Estimator, Clusterer) → Programming Abstractions (Grid Function, Grid Structure, Grid Geometry; Main/Shadow/Multigrid Hierarchy; Cell/Vertex/Face Centered; Region, Mesh, Point) → Dynamic Data-Management (HDDA Modules: Grid Index Space, Storage, Tree Access, Kernels); layers labeled Application Specific, Method Specific, and Adaptive Data-Mgmt] Separation of Concerns => Hierarchical Abstractions
Hierarchical Distributed Dynamic Array (HDDA) • Distributed Array • Preserves array semantics over distribution • Reuse of FORTRAN/C computational components • Communications are transparent • Automatic partitioning & load-balancing • Hierarchical Array • Each element can itself be an HDDA • Dynamic Array • The HDDA can grow and shrink dynamically • Efficient data management for adaptivity
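One way to realize a hierarchical, locality-preserving index space of the kind the HDDA is built on is a space-filling-curve key, e.g. Morton (Z-order) encoding, which interleaves the bits of the coordinates so that points close in space tend to stay close in the 1-D index. This is a sketch of the general technique, not the exact HDDA encoding:

```cpp
#include <cstdint>

// Morton (Z-order) key for a 2-D grid point: interleave the bits of
// x and y. Nearby points map to nearby keys, so contiguous key ranges
// make good locality-preserving partitions. Illustrative only; not
// the exact HDDA index encoding.
uint32_t morton2d(uint16_t x, uint16_t y) {
    uint32_t key = 0;
    for (int b = 0; b < 16; ++b) {
        key |= ((uint32_t)((x >> b) & 1u)) << (2 * b);      // x bit -> even position
        key |= ((uint32_t)((y >> b) & 1u)) << (2 * b + 1);  // y bit -> odd position
    }
    return key;
}
```

Partitioning then reduces to splitting the sorted key range into contiguous blocks, which keeps each processor's portion spatially compact and makes repartitioning after refinement cheap.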
[Diagram: HDDA internals — Index Space (partitioning, name resolution), Storage (expansion & contraction, consistency), Access (communication); built over Interaction Objects, Data Objects, and Display Objects] Separation of Concerns => Hierarchical Abstractions
[Diagram: Distributed Dynamic Storage — application locality is preserved as index locality, which in turn is preserved as storage locality]
Partitioning Issues • Locality • Parallelism • Load-balance • Cost
Composite Distribution • Inter-grid communications are local • Data and task parallelism exploited • Efficient load redistribution and clustering • Overhead of generating & maintaining composite structure
IO & Visualization
Integrated Visualization & IO • Grid Hierarchy • Views: Multi-level, multi-resolution grid structure and connectivity, hierarchical and composite grid/mesh views, … • Commands: Refine, coarsen, re-distribute, read, write, checkpoint, rollback, … • Grid Function • Views: Multi/single-resolution plots, feature extraction and reduced models, isosurfaces, streamlines, … • Commands: Read, write, interpolate, checkpoint, rollback, … • Grid Geometry • Views: Wire-frames with resolution and ownership information • Commands: Read, write, refine, coarsen, merge, …