This presentation discusses the Jazz Linux cluster at ANL and the porting and performance of climate codes, such as CCSM, CAM, and MM5v3, on the cluster. It also covers regional climate modeling studies and tools for regional climate modeling.
Climate Modeling on the Jazz Linux Cluster at ANL
John Taylor
Mathematics and Computer Science & Environmental Research Divisions, Argonne National Laboratory
and Computation Institute, University of Chicago
Outline
• A description of the Jazz Linux cluster at Argonne National Laboratory
• Porting and performance of climate codes on the Jazz Linux cluster
  • Community Climate System Model, CCSM 2.0.1
  • Community Atmosphere Model, CAM 2.0.2
  • Mesoscale Meteorological Model, MM5v3.4
• Regional climate modeling studies at ANL
  • Long climate simulations on Jazz using MM5v3.6
  • Tools for regional climate modeling
ANL Jazz Linux Cluster
• Compute - 350 nodes, each with a 2.4 GHz Pentium Xeon
• Memory - 175 nodes with 2 GB of RAM, 175 nodes with 1 GB of RAM
• Storage - 20 TB of clusterwide disk: 10 TB GFS and 10 TB PVFS
• Network - Myrinet 2000
• Linpack benchmark - ~1 TFLOPS
Community Climate System Model - CCSM 2.0.1
• Downloaded the standard release of CCSM 2.0.1 from the NCAR web site
• The current MPICH release supports the multiple-executable, multiple-data (MPMD) model used by CCSM (see the sketch after this slide)
• The build process needs modification, e.g. to use the mpif90 and mpicc compiler wrappers
• The required environment variables must be set in the shell
• Built with the pgf90 compiler
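To make the MPMD model concrete, here is a minimal sketch of how each component executable in a coupled system can carve its own communicator out of MPI_COMM_WORLD. This is an illustration only, not CCSM's actual coupling code; the component id is a hypothetical placeholder.

```c
/* Minimal MPMD sketch: every executable launched together (e.g. with
 * "mpirun -np 8 ./atm : -np 4 ./ocn") shares MPI_COMM_WORLD and splits
 * off a per-component communicator. NOT CCSM's actual code; the
 * component id below is a hypothetical placeholder. */
#include <mpi.h>
#include <stdio.h>

#define MY_COMPONENT_ID 1   /* e.g. 0 = coupler, 1 = atmosphere, ... */

int main(int argc, char **argv)
{
    MPI_Comm comp_comm;
    int world_rank, comp_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Ranks with the same color end up in the same communicator. */
    MPI_Comm_split(MPI_COMM_WORLD, MY_COMPONENT_ID, world_rank, &comp_comm);
    MPI_Comm_rank(comp_comm, &comp_rank);

    printf("world rank %d is rank %d within its component\n",
           world_rank, comp_rank);

    MPI_Comm_free(&comp_comm);
    MPI_Finalize();
    return 0;
}
```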
Community Climate System Model - CCSM 2.0.1
• CCSM 2.0.1 runs well on Jazz - now at 3 simulated years per wallclock day
• Load balance could be further optimized on Jazz
• CCSM 2.1 will include the build modifications used to run CCSM on Jazz
Community Atmosphere Model - CAM 2.0.2
• Downloaded the standard release of CAM 2.0.2 from the NCAR web site
• The Makefile needs modification to use the mpif90 and mpicc compiler wrappers
• Switched on 2-D finite volume dynamics
• Assessed performance using 64, 92, 128, and 184 processors (see the timing sketch after this slide)
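The scaling numbers come from timing a fixed workload at each processor count. Below is a minimal sketch of that kind of measurement using MPI_Wtime; it is a generic harness, not the actual CAM benchmark driver.

```c
/* Generic scaling-measurement sketch (not the CAM benchmark driver):
 * run the same fixed workload with mpirun -np 64, 92, 128, 184 and
 * compare the reported wall-clock times. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    double t0, local, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize before timing */
    t0 = MPI_Wtime();

    /* ... the model time steps being benchmarked would run here ... */

    local = MPI_Wtime() - t0;

    /* The slowest rank determines the wall-clock time. */
    MPI_Reduce(&local, &elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d processors: %.3f s\n", nprocs, elapsed);

    MPI_Finalize();
    return 0;
}
```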
CAM 2.0.2 with 2-D FV Dynamics
[Performance figure]
Acknowledgement: IBM Pwr3 data from Art Mirin, LLNL
Performance and Scaling
• Performed the standard MM5 benchmark on the Jazz Linux cluster at ANL
• Ported MM5 to the Intel compilers on IA-32 and IA-64
• Added MPE calls to facilitate profiling on Jazz, the TeraGrid, etc. (see the sketch after this slide)
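The MPE library distributed with MPICH supports this kind of instrumentation. Below is a minimal sketch of manual MPE logging around a region of interest; the state name "solver" and the log file name "mm5_profile" are illustrative placeholders, not taken from the actual MM5 port.

```c
/* Minimal MPE instrumentation sketch (compile with mpicc, link -lmpe).
 * The "solver" state and "mm5_profile" log name are placeholders. */
#include <mpi.h>
#include <mpe.h>

int main(int argc, char **argv)
{
    int ev_start, ev_end;

    MPI_Init(&argc, &argv);
    MPE_Init_log();

    /* Allocate a pair of event ids and name the state they bracket. */
    ev_start = MPE_Log_get_event_number();
    ev_end   = MPE_Log_get_event_number();
    MPE_Describe_state(ev_start, ev_end, "solver", "red");

    MPE_Log_event(ev_start, 0, "begin solver");
    /* ... the code region being profiled ... */
    MPE_Log_event(ev_end, 0, "end solver");

    /* Writes a log file viewable with Jumpshot. */
    MPE_Finish_log("mm5_profile");
    MPI_Finalize();
    return 0;
}
```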
MM5 Benchmark on Jazz at ANL
[Benchmark figure]
Source: John Michalakes, NCAR
Regional Climate Modeling
• Parallel regional climate model development and testing based on MM5v3.6 and WRF
• Contributing to PIRCS experiments
  • PIRCS 1b and, currently, the 15-year PIRCS 1c run
• Downscaling using boundary and initial conditions derived from high-resolution CCM runs made at LLNL
Regional Climate Modeling
• Testbed for a regional climate simulation laboratory: the Espresso interface
• Delivering regional climate data using interactive web-based tools
• Performance testing and porting to the NSF TeraGrid
PIRCS 1b Experiment
• We are using Version 3 of the Penn State / NCAR MM5 with the OSU land surface model
• Total precipitation results for the period June 1 - July 31, 1993 are shown in the center panel
• Note the agreement with both the NCEP reanalysis forcing data (left panel) and the NCDC half-degree Cressman analysis of observations (right panel)
• We plan to use this experiment and PIRCS 1a (the 1988 US drought) as primary test beds for further enhancements of model physics
PIRCS 1c: June 1988 Temperatures
• We are using Version 3 of the Penn State / NCAR MM5 at 52 km grid resolution, with the OSU land surface model and NCEP I boundary and initial condition data
Espresso Motivation
• Large modeling systems are difficult to configure and run
• Running complex scientific models can require substantial computing skills
• Managing the computer science reduces the time available for doing science and limits what is possible; e.g. MM5 requires many jobs to be submitted to set up and perform a one-year run
• Current approaches are prone to error (especially where the build process is complex)
Motivation (cont.)
• Contemporary software tools are not being exploited, e.g. Java, XML, the Globus Toolkit for distributed computing, etc.
• Provide secure access to remote supercomputing resources
Approach
• Develop a flexible graphical user interface (GUI) with low maintenance and development costs
• Incorporate modern software tools to dramatically increase flexibility and efficiency while reducing the chance of operator error
• = Espresso!
Conclusions
• Key climate modeling codes (CCSM, CAM, and MM5v3) are performing well on the Jazz Linux cluster
• Multi-year regional climate simulations can be achieved on existing IA-32 Linux supercomputers
• Future work:
  • NSF TeraGrid (IA-64)
  • WACCM model with atmospheric chemistry code
  • Downscaling using high-resolution global GCM data
Argonne Climate Modeling Group http://www-climate.mcs.anl.gov