
A High-Performance Framework for Earth Science Modeling and Data Assimilation


Presentation Transcript


  1. ESMF: A High-Performance Framework for Earth Science Modeling and Data Assimilation. V. Balaji (vb@gfdl.gov), SGI/GFDL. First ESMF Community Meeting, Washington, 30 May 2002. NASA/GSFC

  2. Outline • Background • ESMF Objectives and Scientific Benefits • ESMF Overview • Final Milestones • Development Plan • Beyond 2004: ESMF Evolution

  3. Technological Trends • In climate research and NWP: increased emphasis on detailed representation of individual physical processes, which requires many teams of specialists to contribute components to an overall coupled system. • In computing technology: increasing hardware and software complexity in high-performance computing as we shift toward the use of scalable computing architectures.

  4. Technological Trends • In software design for broad communities: the open source community has provided a viable approach to constructing software that meets diverse requirements through “open standards”. The standards evolve through consultation and prototyping across the user community. “Rough consensus and working code.” (IETF)

  5. Community Response • Modernization of modeling software: abstraction of the underlying hardware to provide a uniform programming model across vector, uniprocessor and scalable architectures; a distributed development model characterized by many contributing authors; use of high-level language features for abstraction to facilitate the development process; modular design for interchangeable dynamical cores and physical parameterizations; development of community-wide standards for components. • Development of prototype frameworks: GFDL (FMS), NASA/GSFC (GEMS). Other framework-ready packages: NCAR/NCEP (WRF), NCAR/DOE (MCT). • The ESMF aims to unify and extend these efforts.

  6. Framework examples • Atmosphere and ocean model grids: previously the two components shared the same model grid, with fluxes passed by common block. Under a framework, independent model grids are connected by a coupler. • Parallel implementation of legacy code: previously one took the old out-of-core solver and replaced write-to-disk, read-from-disk with shmem_put, shmem_get. Under a framework, there is a uniform way to specify and fulfil data dependencies through high-level calls.

  7. The ESMF Project • The need to unify and extend current frameworks into a community standard achieves wide currency. • NASA offers to underwrite the development of a community framework. • A broad cross-section of the community meets and agrees to develop a concerted response to NASA, uniting coupled models and data assimilation in a common framework. • Funding began February 2002: $10 million over three years.

  8. Project Organization (NASA ESTO/CT) • Part I: Core Framework Development, NSF NCAR PI, with Part I proposal-specific milestones. • Parts II and III: Data Assimilation Deployment (NASA DAO PI) and Prognostic Model Deployment (MIT PI), each with its own proposal-specific milestones. • Joint milestones are shared across all three parts. • A Joint Specification Team covers requirements analysis, system architecture, and API specification.

  9. Design Principles • Modularity: data-hiding, encapsulation, self-sufficiency. • Portability: adhere to official language standards, use community-standard software packages, comply with internal standards. • Performance: minimize the abstraction penalties of using a framework. • Flexibility: address a wide variety of climate issues by configuring particular models out of a wide choice of available components and modules. • Extensibility: design to anticipate and accommodate future needs. • Community: encourage users to contribute components; develop in an open source environment.

  10. Application Architecture (layers, from top to bottom) • Coupling Layer: ESMF Superstructure. • Model Layer: User Code. • Fields and Grids Layer: ESMF Infrastructure. • Low-Level Utilities: ESMF Infrastructure. • External Libraries: BLAS, MPI, netCDF, …

  11. Sample call structure • User code: initialize ESMF; initialize atmos, ocean; get grids for atmos, ocean; initialize regrid; time loop (call ESMF halo update, I/O, etc.). • The atmos and ocean models must provide a return_grid call. • ESMF Infrastructure provides the halo update, I/O, and related services. A minimal sketch of this structure follows below.
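
  To make the call structure above concrete, here is a minimal, self-contained Fortran 90 sketch of a coupled driver. All routine names (esmf_initialize, atmos_init, ocean_init, regrid_init, regrid_ocean_to_atmos) and the toy one-dimensional "grids" are hypothetical stand-ins for what the framework and the component models would actually provide; the stubs at the bottom exist only so the sketch compiles and runs.

     ! Sketch of the slide's call structure; all names are placeholders, not ESMF API.
     program coupled_driver
       implicit none
       integer :: step
       integer, parameter :: nsteps = 4
       real :: atmos_sst(8), ocean_sst(16)     ! toy 1-D "grids" of different sizes

       call esmf_initialize()                  ! initialize ESMF
       call atmos_init(atmos_sst)              ! initialize atmos
       call ocean_init(ocean_sst)              ! initialize ocean
       call regrid_init()                      ! initialize regrid (precompute overlaps)

       do step = 1, nsteps                     ! time loop
          call regrid_ocean_to_atmos(ocean_sst, atmos_sst)   ! framework-mediated exchange
          ! ... atmos and ocean take their timesteps, call halo update, I/O, etc. ...
       end do

       call esmf_finalize()

     contains
       ! Stubs so the sketch is self-contained.
       subroutine esmf_initialize()
       end subroutine esmf_initialize
       subroutine esmf_finalize()
       end subroutine esmf_finalize
       subroutine atmos_init(sst)
         real, intent(out) :: sst(:)
         sst = 0.0
       end subroutine atmos_init
       subroutine ocean_init(sst)
         real, intent(out) :: sst(:)
         integer :: i
         sst = (/ (real(i), i = 1, size(sst)) /)
       end subroutine ocean_init
       subroutine regrid_init()
       end subroutine regrid_init
       subroutine regrid_ocean_to_atmos(src, dst)
         real, intent(in)  :: src(:)
         real, intent(out) :: dst(:)
         integer :: i
         do i = 1, size(dst)                   ! crude 2:1 averaging stands in for regridding
            dst(i) = 0.5 * (src(2*i-1) + src(2*i))
         end do
       end subroutine regrid_ocean_to_atmos
     end program coupled_driver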

  12. Framework Architecture

  13. Superstructure: CONTROL • An MPMD ocean and atmosphere register their presence (“call ESMF_Init”). A separate coupler queries them for inputs and outputs, and verifies the match. • “In an 80p coupled model, assign 32p to the atmosphere component, running concurrently with the ocean on the other 48. Run the land model on the atmos PEs, parallelizing river-routing across basins, and dynamic vegetation on whatever’s left.” • Possible extension of functionality: “Get me CO2 from the ocean. If the ocean component doesn’t provide it, read this file instead.”
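
  The processor-assignment example in the second bullet can be sketched directly in MPI, which is one way such an assignment is realized underneath a framework. The split below is purely illustrative and is not the ESMF control interface: assuming an SPMD launch, the global processor set is divided so the atmosphere (and land) component runs concurrently with the ocean on disjoint ranks, each group with its own communicator.

     ! Illustrative MPI sketch of concurrent components on disjoint processor sets.
     program control_sketch
       implicit none
       include 'mpif.h'
       integer :: rank, nproc, color, comp_comm, comp_rank, atmos_pes, ierr

       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

       atmos_pes = max(1, (2 * nproc) / 5)     ! e.g. 32 of 80 PEs go to the atmosphere
       color = merge(0, 1, rank < atmos_pes)   ! 0 = atmosphere (+ land), 1 = ocean
       call MPI_Comm_split(MPI_COMM_WORLD, color, rank, comp_comm, ierr)
       call MPI_Comm_rank(comp_comm, comp_rank, ierr)

       if (color == 0) then
          ! ... run the atmosphere and land components on comp_comm ...
       else
          ! ... run the ocean component on comp_comm ...
       end if

       call MPI_Comm_free(comp_comm, ierr)
       call MPI_Finalize(ierr)
     end program control_sketch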

  14. Superstructure: COUPLER • “How many processors are you running on?” • “What surface fields are you willing to provide?” • “Here are the inputs you requested. Integrate forward for 3 hours and send me back your surface state, accumulated at the resolution of your timestep.” • Possible extension of functionality: define a standard “ESMF_atmos” datatype.
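
  As a purely hypothetical illustration of the "standard ESMF_atmos datatype" extension mentioned in the last bullet, the derived type below bundles a component's surface state with the bookkeeping a coupler asks about (processor count, accumulation interval). Every name and field here is invented for this sketch and is not part of any ESMF specification.

     ! Hypothetical exchange-packet type; names and fields are illustrative only.
     program coupler_sketch
       implicit none

       type :: surface_state
          integer :: npes                       ! processors the component runs on
          double precision :: t_surf(4, 4)      ! surface temperature (toy size)
          double precision :: u_flux(4, 4)      ! accumulated momentum flux
          double precision :: accum_seconds     ! accumulation interval
       end type surface_state

       type(surface_state) :: atmos

       atmos%npes = 32
       atmos%t_surf = 288.d0
       atmos%u_flux = 0.d0
       atmos%accum_seconds = 3.d0 * 3600.d0     ! "integrate forward for 3 hours"
       print *, 'surface state ready for the coupler, accumulated over', &
                atmos%accum_seconds, 'seconds'
     end program coupler_sketch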

  15. Infrastructure: GRID • Non-blocking halo update: call halo_update() … call halo_wait(). • Bundle data arrays for aggregate data transfer. • Redistribute data arrays on a different decomposition. • Possible extension of functionality: 3D data decomposition.
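
  The non-blocking halo-update pattern in the first bullet (start the exchange, do interior work, wait before using the halo points) looks roughly like the following when written directly in MPI. This is a sketch of the pattern, not the ESMF GRID API: it uses a 1-D ring decomposition with one halo cell on each side of the local segment.

     ! Non-blocking halo exchange: "halo_update" = post sends/receives, "halo_wait" = wait.
     program halo_sketch
       implicit none
       include 'mpif.h'
       integer, parameter :: n = 8                 ! interior points per rank
       double precision :: f(0:n+1)                ! field with one halo cell per side
       integer :: rank, nproc, left, right, ierr
       integer :: req(4), stat(MPI_STATUS_SIZE, 4)

       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
       left  = mod(rank - 1 + nproc, nproc)
       right = mod(rank + 1, nproc)
       f = dble(rank)

       ! "call halo_update()": post receives and sends for the halo cells.
       call MPI_Irecv(f(0),   1, MPI_DOUBLE_PRECISION, left,  1, MPI_COMM_WORLD, req(1), ierr)
       call MPI_Irecv(f(n+1), 1, MPI_DOUBLE_PRECISION, right, 2, MPI_COMM_WORLD, req(2), ierr)
       call MPI_Isend(f(n),   1, MPI_DOUBLE_PRECISION, right, 1, MPI_COMM_WORLD, req(3), ierr)
       call MPI_Isend(f(1),   1, MPI_DOUBLE_PRECISION, left,  2, MPI_COMM_WORLD, req(4), ierr)

       ! ... interior work that does not touch f(0) or f(n+1) goes here ...

       ! "call halo_wait()": block until the halo cells have arrived.
       call MPI_Waitall(4, req, stat, ierr)

       if (rank == 0) print *, 'halo cells on rank 0:', f(0), f(n+1)
       call MPI_Finalize(ierr)
     end program halo_sketch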

  16. Infrastructure: GRID • Given this function for the grid curvature, generate all the metrics (dx, dy, etc.) needed for dynamics on a C-grid. • Refine an existing grid. • Support for various grid types (cubed-sphere, tripolar, …). • Possible extension of functionality: standard differential operators (e.g. grad(phi), div(u)).
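
  As a toy illustration of "generate all the metrics from the grid", the sketch below builds a regular latitude-longitude grid and computes the physical cell widths dx and dy that a C-grid dynamical core would need. The grid dimensions and the spherical-Earth radius are arbitrary choices for the example; a real implementation would also handle curvilinear grids (cubed-sphere, tripolar) and staggered variable locations.

     ! Metric terms for a regular lat-lon grid on a sphere (toy example).
     program metrics_sketch
       implicit none
       integer, parameter :: ni = 36, nj = 19
       double precision, parameter :: pi = 3.141592653589793d0, radius = 6.371d6
       double precision :: lon(ni), lat(nj), dx(ni, nj), dy
       integer :: i, j

       do i = 1, ni
          lon(i) = (i - 1) * 360.d0 / ni                 ! degrees east
       end do
       do j = 1, nj
          lat(j) = -90.d0 + (j - 1) * 180.d0 / (nj - 1)  ! degrees north
       end do

       ! dy is constant on a regular grid; dx shrinks toward the poles as cos(lat).
       dy = radius * (pi / 180.d0) * (180.d0 / (nj - 1))
       do j = 1, nj
          do i = 1, ni
             dx(i, j) = radius * cos(lat(j) * pi / 180.d0) * (pi / 180.d0) * (360.d0 / ni)
          end do
       end do

       print *, 'dx at the equator (m):', dx(1, (nj + 1) / 2)
       print *, 'dy (m):               ', dy
     end program metrics_sketch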

  17. Infrastructure: REGRID • ESMF will set a standard for writing grid-overlap information between generalized curvilinear grids. • Clip cells as needed to align boundaries on non-aligned grids (e.g. coastlines). • Support for various grid types (cubed-sphere, tripolar, …). • Possible extension of functionality: efficient runtime overlap generation for adaptive meshes.
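
  Conservative regridding between grids is usually organized as "compute the overlap weights once, then apply them as a sparse matrix every coupling step". The 1-D sketch below shows that structure in miniature: first-order conservative remapping between two line grids, with the overlap fraction of each source cell in each destination cell serving as the weight. It illustrates the idea only and is not the ESMF REGRID interface.

     ! 1-D first-order conservative remap: weights from cell overlaps, then apply.
     program regrid_sketch
       implicit none
       integer, parameter :: ns = 6, nd = 4      ! source / destination cell counts
       double precision :: sb(ns+1), db(nd+1)    ! cell boundaries on [0,1]
       double precision :: src(ns), dst(nd), w, lo, hi
       integer :: i, j

       do i = 1, ns + 1
          sb(i) = dble(i - 1) / ns
       end do
       do j = 1, nd + 1
          db(j) = dble(j - 1) / nd
       end do
       src = (/ 1.d0, 2.d0, 3.d0, 4.d0, 5.d0, 6.d0 /)

       dst = 0.d0
       do j = 1, nd
          do i = 1, ns
             lo = max(sb(i), db(j))
             hi = min(sb(i+1), db(j+1))
             if (hi > lo) then
                w = (hi - lo) / (db(j+1) - db(j))   ! overlap / destination width
                dst(j) = dst(j) + w * src(i)
             end if
          end do
       end do

       ! The area integral is conserved: both sums below are 3.5.
       print *, 'source integral     :', sum(src * (sb(2:ns+1) - sb(1:ns)))
       print *, 'destination integral:', sum(dst * (db(2:nd+1) - db(1:nd)))
     end program regrid_sketch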

  18. General features • ESMF will be usable by models written in F90/C/C++. • ESMF will be usable by models requiring adjoint capability. • ESMF will be usable by models requiring shared or distributed memory parallelism semantics. • ESMF will support SPMD and MPMD coupling. • ESMF will support several I/O formats (including GRIB/BUFR, netCDF, HDF). • ESMF will have uniform syntax across platforms.

  19. Building blocks • ESMF Infrastructure: distributed grid operations (transpose, halo, etc.); physical grid specification and metric operations; regridding (interpolation of data between grids, ungridded data); fields (association of metadata with data arrays; loose and packed field bundles); I/O on distributed data; management of distributed memory and data-sharing for shared and distributed memory; time management, alarms, time and calendar utilities; performance profiling and logging, adaptive load-balancing; error handling. • ESMF Superstructure: control (assignment of components to processor sets, scheduling of components and inter-component exchange; inter-component signals, including checkpointing of complete model configurations); couplers and gridded components (validation of exchange packets; blocking and non-blocking transfer of boundary data between component models; conservation verification; specification of required interfaces for components).
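
  Of the infrastructure utilities listed above, time management and alarms are the simplest to illustrate. The sketch below keeps model time as integer seconds and rings an "alarm" at a fixed interval, which is roughly the shape of the utility a component would use to trigger coupling or I/O; the type and all names are invented for this sketch rather than taken from the ESMF time-management design.

     ! Toy alarm: fires every io_alarm%interval seconds of model time.
     program alarm_sketch
       implicit none

       type :: alarm_type
          integer :: interval        ! seconds between rings
          integer :: next_ring       ! model time (seconds) of the next ring
       end type alarm_type

       type(alarm_type) :: io_alarm
       integer :: time, dt, end_time

       dt = 1800                     ! model timestep: 30 minutes
       end_time = 6 * 3600           ! run for 6 hours
       io_alarm = alarm_type(3 * 3600, 3 * 3600)   ! ring every 3 hours

       time = 0
       do while (time < end_time)
          time = time + dt
          if (time >= io_alarm%next_ring) then
             print *, 'alarm rings at hour', time / 3600
             io_alarm%next_ring = io_alarm%next_ring + io_alarm%interval
          end if
       end do
     end program alarm_sketch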

  20. Target Platforms • ESMF will target a broad range of platforms. • Major center hardware, e.g. SP, SGI O3K, Alpha, at 1000+ processors. • Commodity hardware, e.g. Linux clusters and desktops.

  21. Joint Milestone Codeset I

  22. Joint Milestone Codeset II

  23. Joint Milestone Codeset III

  24. Interoperability Demo • 3 interoperability experiments to be completed in 2004, 5 by 2005.

  25. Beyond 2004: ESMF Evolution • Maintenance, support and management: NCAR commitment to maintain and support the core ESMF software; seeking an inter-agency commitment to develop ESMF. • Technical evolution and functional extension: support for advanced data assimilation algorithms (error covariance operators, infrastructure for generic variational algorithms, etc.); additional grids and new domains. • An Earth System Modeling Environment, including a web/GUI interface, databases of components and experiments, standard diagnostic fields, and standard component interfaces.
