
HPC Middleware on GRID … as a material for discussion of WG5


Presentation Transcript


  1. HPC Middleware on GRID… as a material for discussion of WG5 GeoFEM/RIST August 2nd, 2001, ACES/GEM at MHPCC Kihei, Maui, Hawaii

  2. Background
  • Various types of HPC platforms: MPP, VPP; PC clusters, distributed parallel MPPs, SMP clusters (8-way, 16-way, 256-way SMP); Power, PA-RISC, Alpha/Itanium, Pentium, and vector PEs.
  • Parallel/single-PE optimization is an important issue for efficiency.
  • Everyone knows that, but it is a big task, especially for application experts such as the geophysics people in the ACES community.
  • Machine-dependent optimization/tuning is required.
  • Simulation methods such as FEM/FDM/BEM/LSM/DEM have typical processes for computation.
  • How about "hiding" these processes from users?
  • Code development becomes efficient, reliable, portable, and maintenance-free.
  • The number of lines in the source codes will be reduced.
  • This accelerates advancement of the applications (= the physics).
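The "hiding" idea above could, for instance, look like a thin dispatch layer that selects a machine-tuned kernel behind one application-facing call. This is a minimal illustrative sketch, not GeoFEM code; all names (`matvec`, `KERNELS`, the platform labels) are hypothetical:

```python
# Minimal sketch of "hiding" machine-dependent kernels behind one call.
# All names (matvec, KERNELS, platform labels) are hypothetical.

def matvec_vector_pe(A, x):
    """Column-sweep variant: long stride-1 inner loops for vector PEs."""
    n = len(A)
    y = [0.0] * n
    for j in range(len(x)):
        for i in range(n):
            y[i] += A[i][j] * x[j]
    return y

def matvec_cache_blocked(A, x):
    """Row-wise variant: reuses x from cache on RISC/SMP nodes."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# The middleware maps a platform type to its tuned kernel ...
KERNELS = {"VPP": matvec_vector_pe, "SMP": matvec_cache_blocked}

def matvec(A, x, platform="SMP"):
    """... while application code only ever calls this one interface."""
    return KERNELS[platform](A, x)

A = [[2.0, 0.0], [0.0, 3.0]]
print(matvec(A, [1.0, 1.0], platform="VPP"))  # [2.0, 3.0]
```

The application source never mentions the architecture, which is the maintenance-free property the slide argues for.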

  3. Background (cont.)
  • Current GeoFEM provides this environment, but it is limited to FEM and not necessarily perfect.
  • GRID as next-generation HPC infrastructure: middleware and protocols are currently being developed to provide a unified interface to various operating systems, computers, ultra-high-speed networks, and databases.
  • What is expected of GRID?
  • Meta-computing: simultaneous use of supercomputers around the world.
  • Volunteer computing: efficient use of idle computers.
  • Access Grid: research collaboration environment.
  • Data-intensive computing: computation with large-scale data.
  • Grid ASP: application services on the Web.

  4. Similar Research Groups
  • ALICE (ANL)
  • CCAforum (Common Component Architecture, DOE)
  • DOE/ASCI Distributed Computing Research Team
  • ESI (Equation Solver Interface Standards)
  • FEI (The Finite Element/Equation Solver Interface Specification)
  • ADR (Active Data Repository, NPACI)

  5. Are they successful? It seems no.
  • Very limited targets and processes, mainly the optimization of linear solvers.
  • Where are the interfaces between applications and libraries?
  • The approach comes from computer/computational science people and is not really easy for application people to use.
  • Computer/computational science side: linear solvers, numerical algorithms, parallel programming, optimization.
  • Applications side: FEM, FDM, spectral methods, MD, MC, BEM.

  6. Example of HPC Middleware (1)
  • Simulation methods include some typical processes.
  • O(N) ab initio MD, for example, consists of sparse matrix multiplication, nonlinear procedures, FFT, and Ewald terms.
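One of those typical processes, sparse matrix multiplication, could be sketched as a plain CSR (compressed sparse row) matrix-vector product. This is illustrative only, not taken from any of the codes discussed here:

```python
# Sketch of one "typical process": sparse matrix-vector multiply in CSR form.

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix given as (values, col_idx, row_ptr)."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # Nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 2x2 matrix [[4, 1], [0, 3]] in CSR form:
values, col_idx, row_ptr = [4.0, 1.0, 3.0], [0, 1, 1], [0, 2, 3]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 2.0]))  # [6.0, 6.0]
```

It is exactly this kind of inner loop that gets rewritten per architecture in the next slides.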

  7. Example of HPC Middleware (2)
  • Each individual process of O(N) ab initio MD (sparse matrix multiplication, nonlinear procedures, FFT, Ewald terms) could be optimized for various types of MPP architectures (MPP-A, MPP-B, MPP-C).

  8. Example of HPC Middleware (3)
  • Use optimized libraries: each typical process of O(N) ab initio MD (sparse matrix multiplication, nonlinear procedures, FFT, Ewald terms) calls the library version optimized for the target architecture.

  9. Example of HPC Middleware (4)
  • Optimized code for O(N) ab initio MD is generated by a special language/compiler, based on the analysis-model data and the hardware parameters of each target (MPP-A, MPP-B, MPP-C).
  • The optimum algorithm can be adopted for each architecture.

  10. Example of HPC Middleware (5)
  • The analysis model space of O(N) ab initio MD is distributed over network-connected hardware (meta-computing).
  • Each part is optimized for its individual architecture, with optimum load balancing.

  11. Example of HPC Middleware (6)
  • Multi-module coupling through the platform: modules such as ab initio MD, classical MD, and FEM are coupled through the HPC platform/middleware.
  • The platform provides modeling, data assimilation, visualization, optimization, load balancing, and resource management.
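Such coupling through the platform could be pictured as modules exchanging named fields via a middleware layer. The sketch below is purely illustrative; the `Platform` class, its methods, and the field names are assumptions, not an actual GeoFEM or HPC-MW interface:

```python
# Sketch: coupling two simulation modules through a shared platform layer
# that exchanges named fields each step. All names are hypothetical.

class Platform:
    """Toy middleware: a named-field exchange board between modules."""
    def __init__(self):
        self.fields = {}
    def publish(self, name, data):
        self.fields[name] = data
    def fetch(self, name):
        return self.fields[name]

def md_step(pf):
    # The classical-MD module publishes a stress field for the FEM module.
    pf.publish("stress", [1.0, 2.0])

def fem_step(pf):
    # The FEM module reads the MD stress and publishes displacement back.
    stress = pf.fetch("stress")
    pf.publish("displacement", [s * 0.1 for s in stress])

pf = Platform()
md_step(pf)
fem_step(pf)
print(pf.fetch("displacement"))
```

Neither module ever calls the other directly; only the platform's publish/fetch interface appears in module code, which is the coupling pattern the slide illustrates.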

  12. PETAFLOPS on GRID, from GeoFEM's point of view
  • Why? When?
  • Datasets (mesh, observation, results) could be distributed across systems (MPP-A, MPP-B, MPP-C).
  • The problem size could be too large for a single MPP system; according to G. C. Fox, S(TOP500) is about 100 TFLOPS now ...
  • Legion: Prof. Grimshaw (U. Virginia); a Grid OS / global OS that can handle MPPs connected through a network as one huge MPP (= super MPP).
  • Optimization on each individual architecture (H/W).
  • Load balancing according to machine performance and resource availability.
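Load balancing according to machine performance might be sketched as a proportional split of mesh elements across the connected MPPs. The machine names and performance ratings below are invented for illustration:

```python
# Sketch: splitting N mesh elements across MPPs in proportion to their
# nominal performance. Machine names and ratings are hypothetical.

def balance(n_elements, ratings):
    """Return an element count per machine, proportional to its rating."""
    total = sum(ratings.values())
    shares = {m: int(n_elements * r / total) for m, r in ratings.items()}
    # Give any remainder left by rounding down to the fastest machine.
    fastest = max(ratings, key=ratings.get)
    shares[fastest] += n_elements - sum(shares.values())
    return shares

ratings = {"MPP-A": 400.0, "MPP-B": 100.0, "MPP-C": 500.0}  # assumed GFLOPS
print(balance(10000, ratings))  # {'MPP-A': 4000, 'MPP-B': 1000, 'MPP-C': 5000}
```

A real scheduler would also weigh current resource availability, as the slide notes, but the proportional split is the core idea.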

  13. PETAFLOPS on GRID (cont.)
  • GRID + (OS) + HPC middleware/platform
  • Environment for "electronic collaboration"

  14. "Parallel" FEM Procedure
  • Pre-processing: initial mesh data, partitioning.
  • Main: data input/output, matrix assembly, linear solvers, domain-specific algorithms/models.
  • Post-processing: visualization.
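The "matrix assembly" and "linear solvers" steps above can be illustrated on the smallest possible case, a 1D Poisson problem with linear elements. This is illustrative only; GeoFEM itself targets parallel 3D FEM with far more capable solvers:

```python
# Sketch of matrix assembly + linear solve for -u'' = 1, u(0) = u(1) = 0,
# discretized with n equal linear elements. Illustrative only.

def assemble_1d_poisson(n):
    """Assemble the (n-1)x(n-1) interior stiffness matrix and load vector."""
    h = 1.0 / n
    A = [[0.0] * (n - 1) for _ in range(n - 1)]
    b = [h] * (n - 1)                       # load from f = 1
    for i in range(n - 1):
        A[i][i] = 2.0 / h                   # summed element contributions
        if i > 0:
            A[i][i - 1] = -1.0 / h
        if i < n - 2:
            A[i][i + 1] = -1.0 / h
    return A, b

def gauss_solve(A, b):
    """Small dense Gaussian elimination, a stand-in for a parallel solver."""
    n = len(b)
    for k in range(n):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

A, b = assemble_1d_poisson(4)
u = gauss_solve(A, b)
print(u)  # matches the exact u(x) = x(1-x)/2 at x = 0.25, 0.5, 0.75
```

In a parallel FEM code each subdomain assembles its own rows after partitioning, and a distributed iterative solver replaces the dense elimination used here.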
