Explore strategies for acquiring high-performance computing systems, benchmarking codes, and vendor collaboration. Learn about the latest trends in HPC acquisition and performance requirements.
NSF High Performance Computing (HPC) Activities
• Established an HPC SWOT Team as part of the Cyber Planning Process (August 2005)
• Held a meeting to request information on HPC acquisition models (September 9, 2005)
• Held a meeting to request input on HPC performance requirements (October 18, 2005)
• Released a program solicitation for 1 or 2 HPC machines (November 10, 2005)
• Scheduled future releases of HPC acquisition solicitations (November 2007, 2008, and 2009)
NSF High Performance Computing (HPC) Background
• NSF planned to release one or more solicitations for the acquisition of high-performance computing (HPC) systems and support of subsequent HPC services.
• Prior to the release of the solicitation(s), NSF invited input on:
1. Processes for machine acquisition and service provision
2. Machine performance requirements of the S&E research community
HPC System Acquisition and Service Provision Meeting
• Goal: Receive feedback from machine vendors and resource providers (RPs) on the pros/cons of three possible acquisition models:
• Model 1: Solicitation for an RP(s), who then selects the machine
• Model 2: Solicitation for RP-Vendor Team(s)
• Model 3: Separate solicitations for the machine(s) and RP(s)
Other Topics for Discussion
• Metrics that could be used to define machine performance and reliability requirements
• Selection criteria that might be used as a basis for proposal evaluation
• Pros/cons of acquiring an HPC system that meets a specified performance curve as a one-time purchase or in phases
• Strengths and weaknesses of alternatives such as leasing HPC systems
Participants
• Vendors: Cray, Dell, Hewlett-Packard, IBM, Intel, Linux Networx, Rackable Systems, SGI, Sun Microsystems
• Other: Argonne National Lab, CASC, DoD HPCMP, Hayes Consulting, NCAR, ORNL, Raytheon
• Universities: Case Western Reserve Univ., Cornell Univ., Georgia Inst. of Technology, Indiana University, Louisiana State Univ., NCSA, Ohio Supercomputer Center, Purdue, PSC, SDSC, TACC, Univ. of North Carolina, Univ. of Utah, USC
HPC System Acquisition and Service Provision Meeting
• Outcome:
• Vendors said any of the models would work for them, but the RP-Vendor Team model was least favored
• RPs said all three models would work, but favored Model 1 (solicitation for RP(s))
• The acquisition solicitation used Model 1
HPC System Performance Requirements Meeting
• Goal: Obtain input from the S&E research community on:
• Performance metrics appropriate for use in HPC system acquisition (a sketch of one such metric follows below)
• Potential benchmark codes representative of classes of S&E applications
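As a rough illustration of the kind of performance metric discussed above, the sketch below measures sustained memory bandwidth with a STREAM-style "triad" loop in C. The array size, repetition count, and use of `clock_gettime` timing are illustrative assumptions, not metrics or codes specified in the NSF solicitation or at the meeting.

```c
/* Minimal sketch of a sustained-memory-bandwidth metric in the spirit of the
 * STREAM "triad" kernel. Array size and repetition count are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* ~16M doubles per array (about 128 MB each) */
#define REPS 10

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];   /* triad: one multiply, one add per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Three arrays of N doubles move per repetition: two reads plus one write. */
    double gbytes = 3.0 * N * sizeof(double) * REPS / 1e9;
    printf("triad bandwidth: %.2f GB/s (check value: %f)\n", gbytes / secs, a[N - 1]);

    free(a); free(b); free(c);
    return 0;
}
```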
BIO Participants
• David Bader – Georgia Institute of Technology
• James Beach – University of Kansas
• James Clark – Duke University
• William Hargrove – Oak Ridge National Lab
• Gwen Jacobs – University of Montana
• Phil LoCascio – Oak Ridge National Lab
• B.S. Manjunath – UC Santa Barbara
• Neo Martinez – Rocky Mountain Biological Lab
• Dan Reed – University of North Carolina
• Bruce Shapiro – JPL/NASA/Caltech
• Mark Schildhauer – UCSB/NCEAS
HPC System Performance Meeting
• S&E Community Comments:
• Many S&E codes are "boutique" and not well suited for benchmarking
• Machines that can run coupled codes are needed
• Speed is not the problem; latency is (see the latency sketch below)
• Usability and staying up and running are a priority
• HPC needs are not uniform (e.g., faster vs. more)
• The flexibility and COTS cost of clusters/desktop systems make them the systems of choice
• COTS systems are more flexible and have replaced specialty systems
• Software is a big bottleneck
• Benchmarks should include 20-30 factors
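To make the latency comment above concrete, here is a minimal sketch of the classic MPI ping-pong microbenchmark in C, which estimates one-way message latency between two ranks. The message size, iteration count, and program name are illustrative assumptions; this is not one of the benchmark codes referenced in the solicitation.

```c
/* Minimal MPI ping-pong latency sketch. Run with two ranks, e.g.:
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;
    char msg = 0;                     /* 1-byte message isolates latency from bandwidth */
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        /* Each iteration is a round trip (two messages), so divide by 2*iters. */
        printf("one-way latency: %.2f us\n", 1e6 * elapsed / (2.0 * iters));

    MPI_Finalize();
    return 0;
}
```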
HPC System Performance Meeting
• Outcome:
• More community workshops are needed
• The current solicitation uses a mixture of "tried and true" benchmarking codes