What Programming Paradigms and algorithms for Petascale Scientific Computing, a Hierarchical Programming Methodology Tentative Serge G. Petiton June 23rd, 2008 Japan-French Informatic Laboratory (JFIL)
Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experimentations, toward models and extrapolations • Conclusion PAAP
Introduction • The Petaflop frontier was crossed (during the night of May 25-26) – Top500 • Sustained petaflop performance will soon be available on a large number of computers • As scheduled since the 90s, we did not really have large technological gaps to cross to reach petaflop computers • Languages and tools are not so different from those of the first SMPs • What about languages, tools, and methods for a sustained 10 Petaflops? • Exaflops will probably require new technological advances and new ecosystems • On the road toward Exaflops, we will soon face difficult challenges, and we have to anticipate the new problems around the 10-Petaflop frontier PAAP
Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experimentations, toward models and extrapolations • Conclusion PAAP
Hyper Large Scale Hierarchical Distributed Parallel Architectures • Many-core processors call for new programming paradigms, such as data parallelism, • Message passing would be efficient for a gang of clusters, • Workflow and Grid-like programming may be a solution for the highest programming level, • Accelerators, vector computing, • Energy consumption optimization, • Optical networks, • "Inter" and "intra" (chip, cluster, gang, ...) communications, • Distributed/shared-memory computers on a chip. PAAP
On the road from Petaflop toward Exaflop • Multiple programming and execution paradigms, • Technological and software challenges: compilers, systems, middleware, schedulers, fault tolerance, ... • New applications and numerical methods, • Arithmetic and elementary functions (multiple-precision and hybrid), • Data distributed on networks and grids, • Education challenges: we have to educate scientists PAAP
and the road would be difficult... • Multi-level programming paradigms, • Component technologies, • Mixed data migration and computing, with large instrument control, • We have to use end-users' expertise, • Non-deterministic distributed computing, component dependence graphs, • Middleware and platform independence, • "Time to solution" minimization, new metrics, • We have to allow end-users to propose scheduler assistance and to give advice to anticipate data migration PAAP
Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experimentations, toward models and extrapolations • Conclusion PAAP
YML Language • Front end: depends only on the application • Back end: depends on the middleware, e.g. XtremWeb (F), OmniRPC (Jp), and Condor (USA) PAAP http://yml.prism.uvsq.fr/
Components/Tasks Dependency Graph

par
  compute tache1(..); signal(e1);
//
  compute tache2(..); migrate matrix(..); signal(e2);
//
  wait(e1 and e2);
  par
    compute tache3(..); signal(e3);
  //
    compute tache4(..); signal(e4);
  //
    compute tache5(..); control robot(..); signal(e5); visualize mesh(..);
  end par
//
  wait(e3 and e4 and e5);
  compute tache6(..);
  compute tache7(..);
end par

(Figure legend: generic component node, begin node, end node, graph node; dependence; result) PAAP
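The par/signal/wait structure of the slide can be mirrored with ordinary futures. This is a minimal Python sketch, not YML: the task names tache1..tache7 come from the slide, while the `compute` stub and the thread pool are placeholders standing in for real components and middleware.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a YML component; real components would run computations,
# data migrations, instrument control, visualization, etc.
def compute(name):
    return name

with ThreadPoolExecutor() as pool:
    # par: tache1 and tache2 run concurrently; futures play the role
    # of the signal/wait events e1, e2, ...
    e1 = pool.submit(compute, "tache1")
    e2 = pool.submit(compute, "tache2")
    e1.result(); e2.result()                   # wait(e1 and e2)
    e3 = pool.submit(compute, "tache3")
    e4 = pool.submit(compute, "tache4")
    e5 = pool.submit(compute, "tache5")
    done = [f.result() for f in (e3, e4, e5)]  # wait(e3 and e4 and e5)
    e6 = pool.submit(compute, "tache6")
    e7 = pool.submit(compute, "tache7")
    result = [e6.result(), e7.result()]
```

Futures make the dependence graph explicit in the code itself, which is exactly what the YML front end extracts from the par/signal/wait notation.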
YML/LAKe PAAP
Block Gauss-Jordan, 101-processor cluster, Grid 5000; YML versus YML/OmniRPC (with Maxime Hugues (TOTAL and LIFL)) • Block size = 1500 • We optimize the "time to solution" • Several middleware may be chosen (figure: time versus number of blocks) PAAP
GRID 5000, BGJ, 10, 101 nodes, YML versus YML/OmniRPC • Block size = 1500 PAAP
BGJ, YML/OmniRPC versus YML Block Size = 1500 PAAP
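For reference, the sequential core of a block Gauss-Jordan (BGJ) inversion can be sketched in NumPy. This is a generic textbook sketch, not the distributed YML code used in the experiments above; it does no pivoting, so it assumes the diagonal blocks stay well conditioned.

```python
import numpy as np

def block_gauss_jordan_inverse(A, b):
    """Invert A via block Gauss-Jordan elimination on the augmented
    system [A | I]; b is the block size (must divide the matrix order).
    No pivoting: diagonal blocks are assumed well conditioned."""
    n = A.shape[0]
    assert n % b == 0
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n // b):
        r = slice(k * b, (k + 1) * b)
        # Normalize the pivot block row by the inverse of the diagonal block.
        M[r, :] = np.linalg.inv(M[r, r]) @ M[r, :]
        # Eliminate the pivot block column from every other block row.
        for i in range(n // b):
            if i != k:
                s = slice(i * b, (i + 1) * b)
                M[s, :] -= M[s, r] @ M[r, :]
    return M[:, n:]   # right half of [I | A^-1]
```

Each block-level operation (invert, multiply, update) is an independent coarse-grained task, which is why BGJ maps well onto a YML component graph.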
Asynchronous Restarted Iterative Methods on multi-node computers With Guy Bergère, Zifan Li, and Ye Zhang (LIFL) PAAP
Convergence on GRID 5000 (figure: residual norm versus time in seconds) PAAP
One or two distributed sites, same number of processors, communication overlap (figures: one site, two sites) PAAP
Cell/GPU, CEA/DEN: with Christophe Calvin and Jérôme Dubois (CEA/DEN Saclay) • MINOS/APOLLO3 solver • Neutronic transport problem • Power method to compute the dominant eigenvalue • Slow convergence • Large number of floating-point operations • Experiments on: • Bi-Xeon quad-core, 2.83 GHz (45 GFlops) • Cell blade (CINES, Montpellier) (400 GFlops) • GPU Quadro FX 4600 (240 GFlops) PAAP
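The power method named on this slide is, in outline, the following; this is a generic NumPy sketch, not the MINOS/APOLLO3 implementation.

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10000):
    """Dominant eigenvalue by repeated matrix-vector products.
    Convergence is slow when |lambda_2 / lambda_1| is close to 1,
    which is exactly the situation described on the slide."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        lam_new = v @ w                      # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) <= tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, v
```

The kernel is a single dense matrix-vector product per iteration, which is why the method is a natural fit for accelerators with high floating-point throughput.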
Power Method: Performance (figure: results versus matrix size) PAAP
Power Method: Arithmetic Accuracy (figure: difference) PAAP
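The accuracy differences largely reflect single-precision arithmetic on the accelerators of that generation. The effect can be reproduced on any machine by running the same fixed number of power iterations in float32 and float64; the 200 x 200 symmetric test matrix below is an illustrative stand-in, not the neutronics data.

```python
import numpy as np

def power_eig(A, iters=300):
    """Fixed-iteration power method, run in whatever precision A carries."""
    v = np.ones(A.shape[0], dtype=A.dtype)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return float(v @ (A @ v))

rng = np.random.default_rng(0)
A = rng.random((200, 200))
A = (A + A.T) / 2                        # symmetric, dominant eigenvalue ~ n/2
lam64 = power_eig(A)                     # double precision
lam32 = power_eig(A.astype(np.float32))  # single precision, as on the accelerators
gap = abs(lam64 - lam32)                 # small but visible float32 rounding gap
```

The gap is tiny relative to the eigenvalue itself, but for slowly converging neutronics problems an accumulated single-precision error of this kind is exactly what the measured curves show.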
Arnoldi Projection: Performance (figure: results versus matrix size) PAAP
Arnoldi Projection: Arithmetic Accuracy (figure: orthogonalization degradation) PAAP
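The orthogonalization degradation shown here is the classic weak point of Gram-Schmidt-based Arnoldi. A generic NumPy sketch using modified Gram-Schmidt (not the CEA code) makes the projection explicit; in low precision or for hard problems, a reorthogonalization pass is often added on top of this loop.

```python
import numpy as np

def arnoldi(A, v0, m):
    """m-step Arnoldi projection with modified Gram-Schmidt; returns the
    orthonormal basis V (n x (m+1)) and the Hessenberg matrix H ((m+1) x m)
    satisfying A V[:, :m] = V H."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # happy breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Measuring how far V.T @ V drifts from the identity is precisely the "orthogonalization degradation" plotted on the slide.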
Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experimentations, toward models and extrapolations • Conclusion PAAP
Conclusion • We plan to extrapolate, from Grid 5000 and our multi-core experiments, some behaviors of the future hierarchical large petascale computers, using YML for the highest level, • We need to propose new high-level languages to program large petaflop computers, able to minimize "time to solution" and energy consumption, with system and middleware independence; MPI will probably be very difficult to dethrone, • Other important codes would still be carefully "hand-optimized", • Several programming paradigms, with respect to the different levels, have to be mixed; their interfaces have to be well specified, • End-users have to be able to provide expertise to help middleware management, such as scheduling, and to choose libraries, • New asynchronous hybrid methods have to be introduced PAAP