This presentation discusses the HPC-MW infrastructure for scientific applications on HPC environments, including the development of an XML-based system for user-defined data structures, and future work.
HPC Middleware (HPC-MW): Infrastructure for Scientific Applications on HPC Environments. Overview and Recent Progress. Kengo Nakajima, RIST. 3rd ACES WG Meeting, June 5th & 6th, 2003, Brisbane, QLD, Australia.
My Talk • HPC-MW (35 min.) • Hashimoto/Matsu’ura Code on ES (5 min.)
Table of Contents • Background • Basic Strategy • Development • On-Going Works/Collaborations • XML-based System for User-Defined Data Structure • Future Works • Examples
Frontier Simulation Software for Industrial Science (FSIS) http://www.fsis.iis.u-tokyo.ac.jp • Part of the "IT Project" in MEXT (Ministry of Education, Culture, Sports, Science & Technology) • HQ at the Institute of Industrial Science, Univ. Tokyo • 5 years, 12M USD/yr. (decreasing ...) • 7 internal projects, > 100 people involved • Focused on industry/public use and commercialization
Frontier Simulation Software for Industrial Science (FSIS) (cont.) http://www.fsis.iis.u-tokyo.ac.jp • Quantum Chemistry • Quantum Molecular Interaction System • Nano-Scale Device Simulation • Fluid Dynamics Simulation • Structural Analysis • Problem Solving Environment (PSE) • High-Performance Computing Middleware (HPC-MW)
Background • Various types of HPC platforms • Parallel computers: PC clusters, MPP with distributed memory, SMP clusters (8-way, 16-way, 256-way: ASCI, Earth Simulator) • Power, HP-RISC, Alpha/Itanium, Pentium, vector PEs • GRID environment -> various resources • Parallel/single-PE optimization is important!! • Portability under GRID environments • Machine-dependent optimization/tuning: everyone knows that ... but it is a big task, especially for application experts and scientists.
Reordering for SMP Cluster with Vector PEs: ILU Factorization
3D Elastic Simulation: Problem Size vs. GFLOPS, Earth Simulator/SMP node (8 PEs). ●: PDJDS/CM-RCM, ■: PCRS/CM-RCM, ▲: Natural Ordering
3D Elastic Simulation: Problem Size vs. GFLOPS, Intel Xeon 2.8 GHz, 8 PEs. ●: PDJDS/CM-RCM, ■: PCRS/CM-RCM, ▲: Natural Ordering
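As a hedged illustration of why reordering matters here: CM-RCM-style multicoloring groups matrix rows so that rows of the same color have no direct coupling, letting the forward/backward substitution of the ILU factorization proceed color by color in long, independent loops that vectorize and parallelize well. The sketch below is a generic greedy coloring, not the GeoFEM/HPC-MW implementation; the function name and CSR arrays ptr/idx are assumptions.

```c
/*
 * Illustrative sketch only: greedy multicoloring of the matrix graph.
 * Rows sharing a color have no direct coupling, so they can be updated
 * concurrently during ILU forward/backward substitution.
 */
#include <stdio.h>

#define MAXCOLOR 256

static int multicolor(int n, const int *ptr, const int *idx, int *color)
{
    int ncolor = 0;
    for (int i = 0; i < n; i++) color[i] = -1;          /* -1 = uncolored */

    for (int i = 0; i < n; i++) {
        char used[MAXCOLOR] = {0};                      /* colors of neighbors */
        for (int k = ptr[i]; k < ptr[i + 1]; k++) {
            int j = idx[k];
            if (color[j] >= 0) used[color[j]] = 1;
        }
        int c = 0;                                      /* smallest free color */
        while (c < MAXCOLOR - 1 && used[c]) c++;
        color[i] = c;
        if (c + 1 > ncolor) ncolor = c + 1;
    }
    return ncolor;
}

int main(void)
{
    /* 1D chain of 5 nodes: 0-1-2-3-4 (off-diagonal pattern only) */
    int ptr[] = {0, 1, 3, 5, 7, 8};
    int idx[] = {1, 0, 2, 1, 3, 2, 4, 3};
    int color[5];

    int nc = multicolor(5, ptr, idx, color);
    printf("%d colors:", nc);
    for (int i = 0; i < 5; i++) printf(" %d", color[i]);
    printf("\n");                                       /* prints: 2 colors: 0 1 0 1 0 */
    return 0;
}
```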
Parallel Volume Rendering: MHD Simulation of the Outer Core
Volume Rendering Module using Voxels • On PC Cluster: hierarchical background voxels, linked list • On Earth Simulator: globally fine voxels, static array
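A minimal sketch of the two voxel layouts named on the slide, using simple C types chosen for illustration (not the actual HPC-MW data structures): a pointer-linked hierarchical background voxel, natural on a PC cluster, versus a flat static array of globally fine voxels, which gives long contiguous loops suited to the Earth Simulator's vector pipes.

```c
#include <stdlib.h>

/* hierarchical background voxel: children allocated only where refined */
typedef struct Voxel {
    float value;              /* sampled field value                  */
    struct Voxel *child[8];   /* NULL where the octant is not refined */
} Voxel;

/* globally fine voxels: one flat array, index computed from (i,j,k) */
typedef struct {
    int   nx, ny, nz;
    float *value;             /* nx*ny*nz entries, statically shaped  */
} FineGrid;

FineGrid fine_grid_alloc(int nx, int ny, int nz)
{
    FineGrid g = { nx, ny, nz,
                   malloc((size_t)nx * ny * nz * sizeof(float)) };
    return g;
}

float fine_grid_get(const FineGrid *g, int i, int j, int k)
{
    return g->value[((size_t)k * g->ny + j) * g->nx + i];
}
```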
Background (cont.) • Simulation methods such as FEM, FDM, etc. have several typical processes for computation.
"Parallel" FEM Procedure (diagram): Pre-Processing (initial grid data, partitioning) -> Main (data input/output, matrix assembling, linear solvers, domain-specific algorithms/models) -> Post-Processing (post proc., visualization)
Background (cont.) (Diagram: applications sit directly on compilers, MPI, etc., with a big gap; HPC-MW fills that gap as a middleware layer.) • Simulation methods such as FEM, FDM, etc. have several typical processes for computation. • How about "hiding" these processes from users with middleware between applications and compilers? • Development becomes efficient, reliable, portable, and easy to maintain • This accelerates advancement of the applications (= physics) • HPC-MW = middleware close to the "Application Layer"
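To make the layering concrete, here is a conceptual sketch of what a user's FEM code could look like on top of a library-type HPC-MW. Every hpcmw_* name below is a hypothetical placeholder for the I/O, matrix-assembling, solver, and visualization interfaces in the figures, not the real API; the point is only that the same application source links against a platform-specific implementation.

```c
#include <stdio.h>
#include <stdlib.h>

/* ---- hypothetical HPC-MW interface layer (all names are placeholders) ---- */
typedef struct { int n_node, n_elem; } hpcmw_mesh;      /* distributed local mesh */

static hpcmw_mesh *hpcmw_input(const char *ctrl)        /* parallel mesh I/O      */
{
    printf("reading control file %s\n", ctrl);
    hpcmw_mesh *m = malloc(sizeof *m);
    m->n_node = 0; m->n_elem = 0;
    return m;
}
static void hpcmw_solve(int n, double *x) { (void)n; (void)x; }  /* solver stub   */
static void hpcmw_visualize(const hpcmw_mesh *m, const double *x) { (void)m; (void)x; }
static void hpcmw_finalize(hpcmw_mesh *m) { free(m); }

/* ---- user application: the same source would run on a PC cluster, the ES,
 *      or the SR8000, as long as the matching HPC-MW library is linked ---- */
int main(void)
{
    hpcmw_mesh *mesh = hpcmw_input("fem_ctrl.dat");
    double x[1] = {0.0};          /* solution vector (placeholder size)   */

    /* the user-written, physics-specific part would assemble matrices here */
    hpcmw_solve(1, x);            /* linear solver hidden behind the I/F   */
    hpcmw_visualize(mesh, x);     /* parallel visualization                */

    hpcmw_finalize(mesh);
    return 0;
}
```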
Example of HPC Middleware: simulation methods include some typical processes (FEM: I/O, matrix assembling, linear solver, visualization).
Example of HPC Middleware: each individual process (I/O, matrix assembling, linear solver, visualization) can be optimized for various types of MPP architectures (MPP-A, MPP-B, MPP-C).
Example of HPC Middleware: library-type HPC-MW for existing hardware. FEM code developed on a PC.
Example of HPC Middleware: library-type HPC-MW for existing hardware. The FEM code developed on a PC uses I/O, matrix assembling, linear solver, and visualization provided by HPC-MW for the Xeon cluster, the Earth Simulator, and the Hitachi SR8000.
Example of HPC Middleware: library-type HPC-MW for existing hardware. The FEM code developed on a PC calls interfaces (I/F) for I/O, matrix assembling, solvers, and visualization, implemented by HPC-MW for the Earth Simulator, the Hitachi SR8000, and the Xeon cluster.
Example of HPC Middleware: parallel FEM code optimized for the ES. Through the I/F for I/O, matrix assembling, solvers, and visualization, the FEM code developed on a PC links against HPC-MW for the Earth Simulator.
Example of HPC Middleware: parallel FEM code optimized for Intel Xeon. Through the same interfaces, the FEM code links against HPC-MW for the Xeon cluster.
Example of HPC Middleware: parallel FEM code optimized for the SR8000. Through the same interfaces, the FEM code links against HPC-MW for the Hitachi SR8000.
• Background • Basic Strategy • Development • On-Going Works/Collaborations • XML-based System for User-Defined Data Structure • Future Works • Examples
HPC Middleware (HPC-MW)? • Based on the idea of "plug-in" in GeoFEM.
System Config. of GeoFEM: http://geofem.tokyo.rist.or.jp/
Local Data Structure: Node-based Partitioning (internal nodes - elements - external nodes)
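As a rough illustration of the node-based local data structure, the C struct below collects the pieces named on the slide (internal nodes, external nodes, elements) together with per-neighbor import/export tables used for communication during matrix-vector products. The field names are assumptions for illustration, not the actual GeoFEM/HPC-MW layout.

```c
/* Illustrative node-based partitioned local data set (one per PE). */
typedef struct {
    int  n_internal;        /* nodes owned (updated) by this PE              */
    int  n_external;        /* copies of nodes owned by neighboring PEs      */
    int  n_elem;            /* local elements (refer to both kinds of nodes) */

    int  n_neighbor;        /* number of neighboring PEs                     */
    int *neighbor_pe;       /* ranks of the neighbors                        */

    /* communication tables: which local nodes to send / receive             */
    int *export_index;      /* size n_neighbor+1, CSR-style offsets          */
    int *export_item;       /* internal node ids sent to each neighbor       */
    int *import_index;      /* size n_neighbor+1                             */
    int *import_item;       /* external node ids received from each neighbor */
} LocalMesh;
```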
What can we do with HPC-MW? • We can easily develop optimized, parallel code on HPC-MW from user code developed on a PC. • Library-Type: the most fundamental approach; an optimized library for each individual architecture • Compiler-Type: next-generation architectures; irregular data • Network-Type: GRID environment (heterogeneous); large-scale computing (virtual petaflops), coupling
HPC-MW Procedure (diagram): the initial entire mesh data and control data pass through utility software for parallel mesh generation and partitioning into distributed local mesh data; the user's FEM source code is linked with the library-type HPC-MW (built from hardware data) into a parallel FEM executable, which reads FEM control data and distributed local mesh data and produces distributed local results, patch files, and image files for visualization; the compiler-type HPC-MW generator and the network-type HPC-MW generator generate source code (e.g. linear solver source code) from HPC-MW prototype source files and hardware data, the network type also using MPICH/MPICH-G and network hardware data.
Library-Type HPC-MW: parallel FEM code optimized for the ES. Through the I/F for I/O, matrix assembling, solvers, and visualization, the FEM code developed on a PC links against HPC-MW for the Earth Simulator (likewise for the Hitachi SR8000 and the Xeon cluster).
What can we do with HPC-MW? • We can easily develop optimized, parallel code on HPC-MW from user code developed on a PC. • Library-Type: the most fundamental approach; an optimized library for each individual architecture • Compiler-Type: next-generation architectures; irregular data • Network-Type: GRID environment; large-scale computing (virtual petaflops), coupling
Compiler-Type HPC-MW • Optimized code is generated by a special language/compiler based on data for the analysis model (cache blocking etc.) and hardware information. (Diagram: FEM processes (I/O, matrix assembling, linear solver, visualization) generated for MPP-A, MPP-B, and MPP-C from analysis-model data and hardware parameters by the special compiler.)
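Cache blocking is one concrete example of what "code generated from hardware information" can mean: the tile size below would be derived from the cache description rather than hand-tuned by the user. This is a generic illustration in C, not output of the actual compiler-type generator.

```c
#include <stddef.h>

/* Generic illustration of cache blocking: a dense matrix-vector product
 * tiled with a block size BS that a compiler-type generator could pick
 * from the hardware description (cache size) instead of the user.       */
#define BS 64   /* tile size; would be derived from cache parameters */

void matvec_blocked(int n, const double *a, const double *x, double *y)
{
    for (int i = 0; i < n; i++) y[i] = 0.0;

    for (int jb = 0; jb < n; jb += BS)            /* keep x[jb..jb+BS) hot */
        for (int ib = 0; ib < n; ib += BS)        /* keep y[ib..ib+BS) hot */
            for (int i = ib; i < n && i < ib + BS; i++)
                for (int j = jb; j < n && j < jb + BS; j++)
                    y[i] += a[(size_t)i * n + j] * x[j];
}
```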
What can we do with HPC-MW? • We can easily develop optimized, parallel code on HPC-MW from user code developed on a PC. • Library-Type: the most fundamental approach; an optimized library for each individual architecture • Compiler-Type: next-generation architectures; irregular data • Network-Type: GRID environment (heterogeneous); large-scale computing (virtual petaflops), coupling
Network-Type HPC-MW: Heterogeneous Environment, "Virtual" Supercomputer. (Diagram: the FEM analysis model space is distributed across machines on the network.)
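One way to picture the "virtual supercomputer" is to size each machine's share of the analysis model space in proportion to its measured performance. The host names and GFLOPS figures below are invented for illustration; nothing here is HPC-MW code.

```c
/* Illustrative only: distribute N elements of the analysis model across
 * heterogeneous machines in proportion to their assumed speed, so the
 * combined system behaves like one large (virtual) computer.            */
#include <stdio.h>

int main(void)
{
    const char  *host[]   = { "pc-cluster", "sr8000", "earth-simulator" };
    const double gflops[] = { 20.0, 80.0, 400.0 };        /* assumed speeds */
    const int    n_host   = 3;
    const long   n_elem   = 10000000L;                    /* total elements */

    double total = 0.0;
    for (int i = 0; i < n_host; i++) total += gflops[i];

    for (int i = 0; i < n_host; i++) {
        long share = (long)(n_elem * (gflops[i] / total) + 0.5);
        printf("%-16s gets %9ld elements\n", host[i], share);
    }
    return 0;
}
```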
What is new, what is nice? • Application-oriented (limited to FEM at this stage) • Various types of capabilities for parallel FEM are supported: NOT just a library • Optimized for individual hardware: single-PE performance, parallel performance • Similar projects: Cactus, GridLab (on Cactus)
• Background • Basic Strategy • Development • On-Going Works/Collaborations • XML-based System for User-Defined Data Structure • Future Works • Examples
Schedule (FY.2002 - FY.2006): Basic Design and Prototype; Library-type HPC-MW (scalar, then vector); Compiler-type and Network-type HPC-MW (compiler, network); FEM Codes on HPC-MW; Public Release.
System for Development (diagram): HW vendors provide HW info.; the HPC-MW infrastructure (Library-Type, Compiler-Type, Network-Type, with I/O, visualization, solvers, coupler, AMR, DLB, and matrix assembling) is developed together with FEM codes on HPC-MW (solid, fluid, thermal), which feed back application info.; public releases go to public users, who return feedback and comments.
FY. 2003 • Library-Type HPC-MW: FORTRAN90, C, PC-cluster version • Public Release • Sept. 2003: Prototype released • March 2004: Full version for PC clusters • Demonstration of Network-Type HPC-MW at SC2003, Phoenix, AZ, Nov. 2003 • Evaluation by FEM code
Library-Type HPC-MW (mesh generation is not considered) • Parallel I/O: I/F for commercial codes (NASTRAN etc.) • Adaptive Mesh Refinement (AMR) • Dynamic Load-Balancing (DLB) using pMETIS • Parallel Visualization • Linear Solvers (GeoFEM + AMG, SAI) • FEM Operations (connectivity, matrix assembling) • Coupling I/F • Utility for Mesh Partitioning • On-line Tutorial
AMR + DLB
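The pairing of AMR with dynamic load balancing follows a simple logic: after local refinement the element counts per PE diverge, and repartitioning (e.g. with pMETIS, as listed above) is triggered when the imbalance grows too large. The sketch below is illustrative only; the function needs_rebalance and its threshold parameter are assumptions, not part of HPC-MW.

```c
/* Sketch of the decision behind AMR + DLB: after local refinement,
 * measure the element-count imbalance across PEs and report whether
 * repartitioning is worthwhile.                                       */
#include <mpi.h>

int needs_rebalance(int n_local_elem, double threshold, MPI_Comm comm)
{
    int  nprocs;
    long local = n_local_elem, max = 0, sum = 0;

    MPI_Comm_size(comm, &nprocs);
    MPI_Allreduce(&local, &max, 1, MPI_LONG, MPI_MAX, comm);
    MPI_Allreduce(&local, &sum, 1, MPI_LONG, MPI_SUM, comm);

    double imbalance = (double)max * nprocs / (double)sum;  /* 1.0 = perfect */
    return imbalance > threshold;    /* caller repartitions if nonzero       */
}
```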
Parallel Visualization • Scalar field: surface rendering, interval volume-fitting, volume rendering, topological map • Vector field: streamlines, particle tracking, LIC, volume rendering • Tensor field: hyperstreamlines • Extension of functions, extension of dimensions, extension of data types
PMR (Parallel Mesh Relocator) • Data size is potentially very large in parallel computing: handling the entire mesh is impossible. • Parallel mesh generation and visualization are difficult due to the requirement for global information. • Adaptive Mesh Refinement (AMR) and grid hierarchy.
Parallel Mesh Generation using AMR • Prepare the initial mesh, with a size as large as a single PE can handle. (Diagram: Initial Mesh)
Parallel Mesh Generation using AMR • Partition the initial mesh into local data (potentially very coarse). (Diagram: Initial Mesh -> Partition -> Local Data on each PE)
Parallel Mesh Generation using AMR • Parallel Mesh Relocation (PMR) by local refinement on each PE. (Diagram: Initial Mesh -> Partition -> Local Data -> PMR)
Parallel Mesh Generation using AMR • Parallel Mesh Relocation (PMR) by local refinement on each PE. (Diagram: each PE refines its local data.)
Parallel Mesh Generation using AMR • Parallel Mesh Relocation (PMR) by local refinement on each PE. (Diagram: each PE refines its local data repeatedly.)
Parallel Mesh Generation using AMR • The hierarchical refinement history can be utilized for visualization & multigrid. (Diagram: Initial Mesh -> Partition -> PMR with repeated refinement on each PE.)
Parallel Mesh Generation using AMR • The hierarchical refinement history can be utilized for visualization & multigrid: results are mapped reversely from the local fine mesh onto the initial coarse mesh.
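A hedged sketch of how the hierarchical refinement history could be recorded and used to map results reversely from the local fine mesh onto the initial coarse mesh: each refined element keeps a reference to its parent on the coarser level. The RefineLevel structure and map_to_initial_mesh function are illustrative assumptions, not HPC-MW code.

```c
#include <stdlib.h>

typedef struct {
    int  n_elem;      /* elements on this refinement level              */
    int *parent;      /* parent[i] = element index on the coarser level */
} RefineLevel;

/* Average finest-level element values back onto the level-0 (initial)
 * coarse mesh by following each element's parent chain.                */
void map_to_initial_mesh(const RefineLevel *level, int n_level,
                         const double *fine_value,
                         double *coarse_value, int n_coarse)
{
    int  n_fine = level[n_level - 1].n_elem;
    int *count  = calloc((size_t)n_coarse, sizeof(int));

    for (int i = 0; i < n_coarse; i++) coarse_value[i] = 0.0;

    for (int i = 0; i < n_fine; i++) {
        int e = i;                                 /* finest-level element */
        for (int l = n_level - 1; l > 0; l--)      /* climb down to level 0 */
            e = level[l].parent[e];
        coarse_value[e] += fine_value[i];
        count[e]++;
    }
    for (int i = 0; i < n_coarse; i++)
        if (count[i] > 0) coarse_value[i] /= count[i];   /* simple average */

    free(count);
}
```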