Grid Computing in Multidisciplinary CFD Optimization Problems
The Challenge of Multi-physics Industrial Applications
Parallel CFD Conference, Moscow (RU)
Toan NGUYEN, Project OPALE
May 13-15th, 2003
OUTLINE • INRIA • STATE OF THE ART • PARALLEL CFD OPTIMIZATION • MULTIDISCIPLINARY APPLICATIONS • CURRENT ISSUES • FUTURE TRENDS & CONCLUSION
PART 1 http://www.inria.fr
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
National Research Institute for Computer Science and Automatic Control
Created in 1967
French scientific and technological public institute under the Ministry of Research and the Ministry of Industry
INRIA MISSIONS • Fundamental and applied research • Design experimental systems • Technology transfer to industry • Knowledge transfer to academia • International scientific collaborations • Contribute to international programs • Technological assessment • Contribute to standards organizations
PERSONNEL
2,500 staff in six Research Centers: Rocquencourt, Lorraine, Rennes, Rhône-Alpes, Futurs, Sophia Antipolis
• 900 permanent staff: 400 researchers; 500 engineers, technicians and administrative personnel
• 500 researchers from other organizations
• 600 trainees, PhD students and post-doctoral students
• 100 external collaborators
• 400 visiting researchers from abroad
Budget: 120 MEuros (tax not incl.), 25% self-funding through 600 contracts
CHALLENGES • Expertise to program, compute and communicate using the Internet and heterogeneous networks • Design new applications using the Web and multimedia databases • Expertise to develop robust software • Design and master automatic control for complex systems • Combine simulation and virtual reality
APPLICATIONS • Telecommunications and multimedia • Healthcare and biology • Engineering • Transportation • Environment
RESEARCH PROJECTS
• Teams of approx. 20 researchers
• Medium-term objectives and work program (4 years)
• Scientific and financial independence
• Links and partnerships with scientific and industrial partners on a national and international basis
• Regular assessment of results over the given time-scale
PROJECTS
99 projects in four themes:
1. Networks and Systems
2. Software Engineering and Symbolic Computing
3. Human-Computer Interaction, Image Processing, Data Management, Knowledge Systems
4. Simulation and Optimization of Complex Systems
INTERNATIONAL COOPERATION
• Develop collaborations with European research centres and industries & strengthen the European scientific community in Information & Communication Technologies
• Increase international collaborations and enhance exchanges
• Cooperation with the United States, Japan, Russia
• Relations with China, India, Brazil, etc.
• Partnerships with developing countries
• World Wide Web Consortium (W3C)
• Work with the best industrial partners worldwide
OPALE
• INRIA project (created January 2002)
• Follow-up to the SINUS project
• Located in Sophia Antipolis & Grenoble
• Areas:
NUMERICAL OPTIMISATION (genetic, hybrid, …)
MODEL REDUCTION (hierarchic, multi-grid, …)
INTEGRATION PLATFORMS: coupling, distribution, parallelism, grids, clusters, ...
APPLICATIONS: aerospace, electromagnetics, …
PART 2 STATE OF THE ART
GRID COMPUTING • THE GRIDBUS PROJECT (Univ. Melbourne, Australia)
GRID COMPUTING • RESOURCE MANAGEMENT • INFORMATION SERVICES • DATA MANAGEMENT
APPLICATIONS National Partnership for Advanced Computational Infrastructure
GRID COMPUTING • HIGH PERFORMANCE COMPUTING • HIGH THROUGHPUT COMPUTING • PETA-DATA MANAGEMENT • LONG DURATION APPLICATIONS
GRID COMPUTING • HIGH-PERFORMANCE PROBLEM SOLVING ENVIRONMENTS • BUSINESS TO BUSINESS & E-COMMERCE • LARGE SCALE SCIENTIFIC APPLICATIONS • ENGINEERING, BIO-SCIENCES, EARTH & CLIMATE MODEL. • AFFORDABLE HIGH-PERFORMANCE COMPUTING • IRREGULAR AND DYNAMIC BEHAVIOR APPLICATIONS
GRID COMPUTING • OPTIMALGRID PROJECT (IBM Almaden Research Center)
GRID COMPUTING PERFORMANCE-DIRECTED MANAGEMENT
• DISCOVERY, SHARING, COORDINATED USE, MONITORING
• DISTRIBUTED, HETEROGENEOUS, DYNAMIC RESOURCES & SERVICES
• PERFORMANCE, SECURITY, SCALABILITY, ROBUSTNESS
• DYNAMIC MONITORING
• ADAPTIVE RESOURCE CONTROL
• ERROR AMPLIFIER SYNDROME
GRID COMPUTING
• PLANNING & ADAPTING DISTRIBUTED APPLICATIONS: LOCATION TRANSPARENCY, MULTIPLE PROTOCOL BINDINGS; CREATE & COMPOSE DISTRIBUTED SYSTEMS
• NEED ENQUIRY & REGISTRATION PROTOCOLS: GRID SERVICES (OGSA)
• BROKERING, FAULT DETECTION & TROUBLESHOOTING: COMPATIBLE UNDERLYING PLATFORMS
• CACHING, MIGRATING, REPLICATING DATA
APPLICATIONS: HIGH ENERGY PHYSICS (DataGrid, PPDG, GriPhyN)
GRID COMPUTING GRID Research, Integration, Deployment & Support center • NSF Middleware Initiative : Globus, Condor-G, NWS, KX509, GSI-SSH, GPT • ISI, Univ. Chicago, NCSA, SDSC, Univ. Wisconsin Madison • NSF, Dept Energy, DARPA, NASA GOAL : « national middleware infrastructure to permit seamless resource sharing across virtual organizations » PHILOSOPHY : « the whole is greater than the sum of its parts » APPLICATIONS : NEES, GriPhyN, Intl Virtual Data Grid Lab (ATLAS)
GRID COMPUTING Incentives
• SOFTWARE DEV.: FREE OPEN SOURCE (Linux, FreeBSD)
• PARALLEL & DISTRIBUTED PROGRAMMING
• BEOWULF CLUSTERS
• HIGH-SPEED GIGABIT/SEC NETWORKS
• COMPONENT PROGRAMMING
• DEVELOPMENT OF LARGE DISTRIBUTED DATA FILE SYSTEMS
BEOWULF CLUSTER PC-cluster at INRIA Rhône-Alpes (216 Pentium III procs.)
GRIDS vs. CLUSTERS «Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across multiple administrative domains, based on their (resources) availability, capability, performance, cost and users' quality-of-service requirements. If distributed resources happen to be managed by a single, global centralised scheduling system, then it is a cluster. In cluster, all nodes work cooperatively with common goal and objective as the resource allocation is performed by a centralised, global resource manager. In Grids, each node has its own resource manager and allocation policy. » Rajkumar Buyya (Grid Infoware)
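The distinction can be made concrete with a toy sketch (all class and variable names are invented here, not from the talk): a cluster funnels every job through one centralised scheduler with a global view, while each grid node applies its own local acceptance policy.

```python
# Toy sketch of the quoted distinction: one global scheduler (cluster)
# versus per-node local allocation policies (grid). Names are illustrative.

class ClusterScheduler:
    """Single, centralised resource manager: one placement policy for all nodes."""
    def __init__(self, nodes):
        self.nodes, self.next_slot = nodes, 0

    def place(self, job):
        node = self.nodes[self.next_slot % len(self.nodes)]  # global round-robin
        self.next_slot += 1
        return node

class GridNode:
    """A grid node decides locally, with no global view, whether to accept a job."""
    def __init__(self, name, max_load):
        self.name, self.load, self.max_load = name, 0, max_load

    def offer(self, job):
        if self.load < self.max_load:   # purely local policy
            self.load += 1
            return True
        return False

# Cluster: the scheduler decides. Grid: we ask nodes until one accepts.
cluster = ClusterScheduler(["n1", "n2", "n3"])
print(cluster.place("job-a"))

grid = [GridNode("siteA", 1), GridNode("siteB", 2)]
print(next(n.name for n in grid if n.offer("job-a")))
```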
DISTRIBUTION vs. PARALLELISM
• PARALLELISM IS NOT DISTRIBUTION
YOU CAN RUN PARALLEL CODES SEQUENTIALLY
• DISTRIBUTION SUPPORTS A LIMITED FORM OF PARALLELISM
YOU CAN DISTRIBUTE SEQUENTIAL CODES
YOU CAN RUN SEQUENTIAL CODES IN « PARALLEL »
• PARALLELISM ALLOWS DISTRIBUTION
YOU CAN DISTRIBUTE PARALLEL CODES
• GLOBUS WILL NOT PARALLELIZE YOUR CODE
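A minimal sketch of the middle point (illustrative names, not from the talk): an unmodified sequential function is farmed out to worker processes, so the codes run "in parallel" through distribution even though no line of the function itself is parallel.

```python
# Distributing a sequential code: the function is untouched; only its
# independent invocations are spread across worker processes.
from multiprocessing import Pool

def sequential_solver(case_id):
    # Stand-in for an unmodified sequential legacy code.
    return case_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:             # four independent workers
        for case_id, value in pool.map(sequential_solver, range(8)):
            print(f"case {case_id}: {value}")
```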
WHERE WE ARE TODAY
Bits and pieces…
• 1980: one year CPU time
• 1992: one month CPU time
• 1997: four days CPU time
• 2002: one hour CPU time
Moore's law results…
• Earth Simulator (Japan): 5,120 NEC procs
• ASCI Q (LANL): 11,968 HP Alpha procs
• ASCI White (LLNL): 8,192 IBM SP Power 3 procs
• MCR Linux (LLNL): 2,304 Intel 2.4 GHz Xeon procs
DISTRIBUTED SIMULATION PLATFORM What is required...
• MULTI-DISCIPLINE PROBLEM SOLVING ENVIRONMENTS
• HIGH-PERFORMANCE & TRANSPARENT DISTRIBUTION
• USING CURRENT COMMUNICATION STANDARDS
• USING CURRENT PROGRAMMING STANDARDS
• WEB-LEVEL USER INTERFACES
• OPTIMIZED LOAD BALANCING & COMMUNICATION FLOW
INTEGRATION PLATFORMS What they are...
• COMMON DEFINITION, CONFIGURATION, DEPLOYMENT, EXECUTION & MONITORING ENVIRONMENT
• COLLABORATIVE APPLICATIONS: distributed tasks interacting dynamically in a controlled and formally provable way
• CODE COUPLING FOR HETEROGENEOUS SOFTWARE
• DISTRIBUTED: LAN, WAN, HSN...
• TARGET HARDWARE: NOW, COW, PC clusters, ...
• TARGET APPLICATIONS: multidiscipline engineering, ...
DISTRIBUTED OBJECTS ARCHITECTURE SOFTWARE COMPONENTS • COMPONENTS ENCAPSULATE CODES • COMPONENTS ARE DISTRIBUTED OBJECTS • WRAPPERS AUTOMATICALLY (?) GENERATED • DISTRIBUTED PLUG & PLAY
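A minimal sketch of what such a wrapper might look like, assuming a legacy solver binary named "flow_solver" with file-based I/O (both the binary name and the file format are illustrative assumptions, not the platform's actual generated wrappers):

```python
# Component wrapper sketch: a legacy CFD executable is encapsulated
# behind a uniform object interface so the platform can plug it in.
import json
import subprocess

class SolverComponent:
    """Encapsulates a legacy CFD executable behind an object interface."""

    def __init__(self, executable="./flow_solver"):
        self.executable = executable

    def run(self, mesh_file, mach, aoa):
        # Legacy codes typically read a case file and write results to disk.
        with open("case.json", "w") as f:
            json.dump({"mesh": mesh_file, "mach": mach, "aoa": aoa}, f)
        subprocess.run([self.executable, "case.json"], check=True)
        with open("result.json") as f:
            return json.load(f)      # e.g. {"drag": ..., "lift": ...}

# Hypothetical usage: SolverComponent().run("rae2822.msh", mach=0.84, aoa=2.0)
```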
« CAST » INTEGRATION PLATFORM
[Architecture diagram: optimizer and solver modules attached through wrappers to the CAST server over CORBA]
SOFTWARE COMPONENTS
• BUSINESS COMPONENTS: LEGACY SOFTWARE
• OBJECT-ORIENTED COMPONENTS: C++, PACKAGES, ...
• DISTRIBUTED OBJECT COMPONENTS: Java RMI, EJB, CCM, ...
• CASUAL METACOMPUTING COMPONENTS ?
DISTRIBUTED OBJECTS ARCHITECTURE SOFTWARE CONNECTORS • COMPONENTS COMMUNICATE THROUGH SOFTWARE CONNECTORS • CONNECTORS ARE SYNCHRONISATION CHANNELS • SEVERAL PROTOCOLS - SYNCHRONOUS METHOD INVOCATION - ASYNCHRONOUS EVENT BROADCAST • CONNECTORS = DATA COMMUNICATION CHANNELS
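A minimal sketch of the two connector protocols listed above (the class and method names are illustrative, not the platform's actual API): synchronous method invocation blocks the caller until the callee returns, while asynchronous event broadcast lets producers continue and consumers pick events up later.

```python
# Connector sketch: one synchronous and one asynchronous channel.
import queue

class EventConnector:
    """Asynchronous broadcast channel between components."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def broadcast(self, event):
        for q in self.subscribers:   # producer never blocks on consumers
            q.put(event)

# Synchronous method invocation: the caller blocks until the callee returns.
def sync_invoke(component, method, *args):
    return getattr(component, method)(*args)

# Asynchronous event broadcast: the consumer drains its queue when ready.
bus = EventConnector()
inbox = bus.subscribe()
bus.broadcast("iteration 42 converged")
print(inbox.get())
```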
PARALLEL APPLICATIONS The good news….
• PARALLEL and/or DISTRIBUTED HARDWARE
• // SOFTWARE LIBRARIES: MPI, PVM, SciLab //, ...
• NEW APPLICATION METHODOLOGIES: DOMAIN DECOMPOSITION, GENETIC ALGORITHMS, GAME THEORY, HIERARCHIC MULTI-GRIDS
• NESTING SEVERAL DEGREES OF PARALLELISM
NESTING PARALLELISM LEVERAGES OPTIMISATION STRATEGIES
• COMBINE SEVERAL APPROACHES: DOMAIN DECOMPOSITION, GENETIC ALGORITHMS, … (see the sketch below)
• // SOFTWARE LIBRARIES: MPI, ...
• GRIDS & PC-CLUSTERS
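As promised above, a minimal sketch of two nested levels of parallelism using mpi4py (using this library is an assumption; the talk does not prescribe it). The outer level evaluates several GA candidates concurrently; the inner level splits each evaluation across the subdomains of a decomposed mesh.

```python
# Two-level parallelism: split the world communicator into groups, one
# per GA candidate; within each group, ranks own mesh subdomains.
from mpi4py import MPI

world = MPI.COMM_WORLD
N_GROUPS = 4                              # concurrent candidate evaluations
color = world.Get_rank() % N_GROUPS
group = world.Split(color, world.Get_rank())

candidate = color                         # outer level: one design per group
subdomain = group.Get_rank()              # inner level: one subdomain per rank

local_cost = float(candidate + subdomain) # stand-in for a local flow solve

# Sum the subdomain contributions into the candidate's fitness.
fitness = group.reduce(local_cost, op=MPI.SUM, root=0)
if group.Get_rank() == 0:
    print(f"candidate {candidate}: fitness = {fitness}")
```

Run with, e.g., `mpiexec -n 8 python nested.py`: eight ranks form four groups of two, so four candidates are evaluated at once, each on two subdomains.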
ADVANCES IN HARDWARE The best news….
• HIGH-SPEED NETWORKS: ATM, FIBER OPTICS... Gigabit/sec networks available (2.5, 10, …)
• PC & MULTIPROC CLUSTERS: thousands of GHz procs...
• LAYS THE GROUND FOR GRIDS AND METACOMPUTING: GLOBUS, LEGION, CONDOR, NETSOLVE
CLUSTER COMPUTING PC-cluster at INRIA Rhône-Alpes (216 Pentium III + 200 Itanium procs., Linux)
PART 3 PARALLEL CFD OPTIMIZATION
« CAST » INTEGRATION PLATFORM
CAST: Collaborative Applications Specification Tool
GOALS
• "DECISION" CORBA INTEGRATION PLATFORM FOR COLLABORATIVE MULTI-DISCIPLINE OPTIMISATION
• DESIGN FUTURE HPCN OPTIMISATION PLATFORMS
• TESTBED FOR GENETIC & PARALLEL OPTIMISATION ALGORITHMS
• CODE COUPLING FOR CFD, CSM SOLVERS & OPTIMISERS
TEST CASE
• SHOCK-WAVE INDUCED DRAG REDUCTION
• WING PROFILE OPTIMISATION (RAE2822)
• Euler eqns (Mach 0.84, angle of attack = 2°) + BCGA (binary-coded genetic algorithm, 100 generations)
• 2D MESH: 14,747 nodes, 29,054 triangles
• 4.5 hours CPU time (SUN MicroSPARC 5, Solaris 2.5)
• 2.5 minutes CPU time (PC cluster, 40 bi-procs, Linux)
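For readers unfamiliar with BCGA, a minimal binary-coded GA loop in the spirit of the 100-generation run above; decode() and drag() are stand-in assumptions, not the actual RAE2822 parameterization or the Euler solver.

```python
# Minimal binary-coded genetic algorithm: one-point crossover,
# one-bit mutation, truncation selection over 100 generations.
import random

BITS, POP, GENS = 16, 20, 100

def decode(bits):
    # Map a bitstring to a single shape parameter in [0, 1].
    return int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

def drag(x):
    # Placeholder cost function standing in for the CFD evaluation.
    return (x - 0.3) ** 2

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    parents = sorted(pop, key=lambda b: drag(decode(b)))[: POP // 2]
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, BITS)       # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(BITS)] ^= 1    # one-bit mutation
        children.append(child)
    pop = children

best = min(pop, key=lambda b: drag(decode(b)))
print("best parameter:", decode(best))
```

In the real test case, each call to the cost function is a full Euler flow solve, which is why the cluster speedup reported above (4.5 hours down to 2.5 minutes) matters.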
TEST CASE WING PROFILE OPTIMISATION
CAST DISTRIBUTED INTEGRATION PLATFORM
[Deployment diagram: the CAST software with a GA optimiser and n CFD solvers running on PC clusters at Grenoble, Rennes and Nice, linked by the VTHD Gbit/s network]
APPLICATION EXAMPLE MULTI-ELEMENT WING PROFILE OPTIMISATION
APPLICATION EXAMPLE WING GEOMETRY
APPLICATION EXAMPLE OPTIMISATION STRATEGY
APPLICATION EXAMPLE PERFORMANCE DATA
[Chart comparing elapsed times: 1 h 35 mn vs. 6 mn]