
NSF High Performance Computing (HPC) Activities

Presentation Transcript


1. NSF High Performance Computing (HPC) Activities
• Established an HPC SWOT Team as part of the Cyber Planning Process (August 2005)
• Held Meeting to Request Information on HPC Acquisition Models (September 9, 2005)
• Held Meeting to Request Input on HPC Performance Requirements (October 18, 2005)
• Released Program Solicitation for 1 or 2 HPC Machines (November 10, 2005)
• Scheduled Future Releases of HPC Acquisition Solicitations (November 2007, 2008, and 2009)

2. NSF High Performance Computing (HPC) Background
• NSF planned to release one or more solicitations for the acquisition of high-performance computing (HPC) systems and support of subsequent HPC services.
• Prior to the release of the solicitation(s), NSF invited input on:
  1. Processes for machine acquisition and service provision
  2. Machine performance requirements of the S&E research community

3. HPC System Acquisition and Service Provision Meeting
• Goal: Receive feedback from machine vendors and resource providers (RPs) on the pros/cons of three possible acquisition models:
  • Model 1: Solicitation for an RP(s), who then selects the machine
  • Model 2: Solicitation for RP-Vendor Team(s)
  • Model 3: Separate solicitations for machine(s) and RP(s)

4. Other Topics for Discussion
• Metrics that could be used to define machine performance and reliability requirements
• Selection criteria that might be used as a basis for proposal evaluation
• Pros/cons of acquiring an HPC system that meets a specified performance curve as a one-time purchase versus in phases
• Strengths and weaknesses of alternatives such as leasing HPC systems

5. Participants
• Vendors: Cray, DELL, Hewlett Packard, IBM, Intel, Linux Networx, Rackable Systems, SGI, Sun Microsystems
• Other: Argonne National Lab, CASC, DOD HPCMP, Hayes Consulting, NCAR, ORNL, Raytheon
• Universities: Case Western Reserve U., Cornell Univ., Georgia Inst. of Technology, Indiana University, Louisiana State Univ., NCSA, Ohio Supercomputer Center, Purdue, PSC, SDSC, TACC, Univ. of NC, Univ. of Utah, USC

6. HPC System Acquisition and Service Provision Meeting
• Outcome:
  • Vendors said any of the models would work for them, but the RP-Vendor Team model was least favored
  • RPs said all three would work but favored Model 1 (solicitation for RPs)
  • The acquisition solicitation used Model 1

7. HPC System Performance Requirements Meeting
• Goal: Obtain input from the S&E research community on:
  • Performance metrics appropriate for use in HPC system acquisition
  • Potential benchmark codes representative of classes of S&E applications (see the sketch after this slide)
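As background for the benchmark discussion, the sketch below shows a STREAM-style triad kernel, a simple and widely used proxy for the memory-bandwidth-bound class of S&E applications. It is a minimal illustration only, not one of the benchmark codes considered at the meeting, and the array size is an arbitrary assumption.

/* Minimal STREAM-style triad sketch (C). Illustrative only: not one of
 * the solicitation's benchmark codes; array size chosen arbitrarily. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 20 * 1000 * 1000;   /* ~160 MB per array */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;
    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++)       /* triad: a[i] = b[i] + s*c[i] */
        a[i] = b[i] + 3.0 * c[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    /* The triad streams three 8-byte arrays: two reads and one write. */
    printf("triad: %.2f GB/s (check %.1f)\n",
           3.0 * n * 8 / sec / 1e9, a[n / 2]);
    free(a); free(b); free(c);
    return 0;
}

Compiled with optimization (e.g., cc -O2), the reported figure approximates sustained memory bandwidth, which for many S&E codes predicts delivered performance better than peak FLOPS does.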

8. BIO Participants
• David Bader - Georgia Institute of Technology
• James Beach - University of Kansas
• James Clark - Duke University
• William Hargrove - Oak Ridge National Lab
• Gwen Jacobs - University of Montana
• Phil LoCascio - Oak Ridge National Lab
• B.S. Manjunath - UC Santa Barbara
• Neo Martinez - Rocky Mountain Biological Lab
• Dan Reed - University of North Carolina
• Bruce Shapiro - JPL-NASA-Caltech
• Mark Schildhauer - UCSB-NCEAS

9. HPC System Performance Meeting
• S&E Community Comments:
  • Many S&E codes are “boutique” and therefore poor candidates for benchmarking
  • Machines that can run coupled codes are needed
  • Speed is not the problem; latency is (see the sketch after this slide)
  • Usability, i.e., staying up and running, is a priority
  • HPC needs are not uniform (e.g., faster vs. more)
  • The flexibility and COTS cost of clusters/desktop systems make them the systems of choice
  • Software is a big bottleneck
  • Benchmarks should include 20-30 factors
  • COTS systems are more flexible and have replaced specialty systems
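To make the latency comment concrete, below is a minimal MPI ping-pong sketch of the kind commonly used to measure point-to-point message latency. It is an illustrative assumption, not a code discussed at the meeting; the 1-byte message size and 10,000 round trips are arbitrary choices.

/* Minimal MPI ping-pong latency sketch (C). Illustrative only; the
 * 1-byte message and 10,000 round trips are arbitrary choices. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {               /* needs two ranks, ideally on */
        MPI_Finalize();           /* different nodes             */
        return 1;
    }

    const int iters = 10000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {          /* rank 0 sends, waits for echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {   /* rank 1 echoes each message  */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)                /* one-way latency = half the RTT */
        printf("latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);
    MPI_Finalize();
    return 0;
}

Run with, e.g., mpirun -np 2 across two nodes; the small-message latency it reports, rather than peak speed, is what the community comment identifies as the bottleneck.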

10. HPC System Performance Meeting
• Outcome:
  • More community workshops are needed
  • The current solicitation uses a mixture of “tried and true” benchmarking codes
