HPC Benchmarking and Performance Evaluation With Realistic Applications
Brian Armstrong, Hansang Bae, Rudolf Eigenmann, Faisal Saied, Mohamed Sayeed, Yili Zheng
Purdue University
Benchmarking has two important goals:
1. Assess the performance of high-performance computer platforms. This is important for machine procurements and for understanding where HPC technology is heading.
2. Measure and show opportunities for progress in HPC. This is important for quantifying and comparing scientific research contributions and for setting new directions for research.
Why Talk About Benchmarking? There is no progress if you can’t measure it.

12 ways to fool the scientist (with computer performance evaluation):
• Use benchmark applications unknown to others; give no reference.
• Use applications that have the same name as known benchmarks, but that show better performance of your innovation.
• Use only those benchmarks out of a suite that show good performance on your novel technique.
• Use only the benchmarks out of the suite that don’t break your technique.
• Modify the benchmark source code.
• Change data set parameters.
• Use the “debug” data set.
• Use only a few loops out of the full programs.
• Measure loop performance but label it as the full application.
• Don’t mention in your paper why you have chosen the benchmarks in this way and what changes you have made.
• Time only the interesting part of the program; exclude overheads.
• Measure only interesting overheads; exclude large unwanted items.
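The last two pitfalls are easy to demonstrate. The sketch below (a minimal illustration with hypothetical `setup` and `kernel` phases, not code from any SPEC benchmark) shows how timing only the "interesting" kernel understates the true cost of a run that includes setup and other overheads:

```python
import time

def setup():
    # Hypothetical setup phase standing in for I/O, initialization, etc.
    return list(range(100_000))

def kernel(data):
    # Hypothetical "interesting" compute kernel.
    return sum(x * x for x in data)

# Misleading measurement: time only the kernel, excluding all overheads.
data = setup()
t0 = time.perf_counter()
result = kernel(data)
kernel_time = time.perf_counter() - t0

# Honest measurement: time the full run, setup overhead included.
t0 = time.perf_counter()
result = kernel(setup())
full_time = time.perf_counter() - t0

print(f"kernel-only: {kernel_time:.4f}s, full run: {full_time:.4f}s")
```

Reporting `kernel_time` while labeling it as the application's performance is exactly the kind of distortion the list above warns against.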
Benchmarks Need to be Representative and Open
• Representative benchmarks: represent real problems.
• Open benchmarks: no proprietary strings attached; source code and performance data can be freely distributed.
With these goals in mind, SPEC’s High-Performance Group was formed in 1994.
Why is Benchmarking with Real Applications Hard?
• Simple benchmarks are overly easy to run.
• Realistic benchmarks cannot be abstracted from real applications.
• Today’s realistic applications may not be tomorrow’s applications.
• Benchmarking is not eligible for research funding.
• Maintaining benchmarking efforts is costly.
• Proprietary full-application benchmarks cannot serve as yardsticks.
SPEC HPC2002
Emphasis on the most realistic applications; no programming model favored.
• Includes three (suites of) codes:
• SPECchem, used in the chemical and pharmaceutical industries (gamess): 110,000 lines of Fortran and C
• SPECenv, a weather forecast application (WRF): 160,000 lines of Fortran and C
• SPECseis, used in the search for oil and gas: 20,000 lines of Fortran and C
• All codes include several data sets and are available in serial and parallel variants (MPI, OpenMP; hybrid execution is possible).
• SPEC HPC is used in the TAP list (Top Application Performers), the rank list of HPC systems based on realistic applications: www.purdue.edu/TAPlist
Rank List of Supercomputers Based on Realistic Applications (SPEC HPC, medium data set)
I/O Behavior: Disk Read/Write Times and Volumes
Only SPECseis performs parallel I/O. SPECenv and SPECchem perform I/O on a single processor. HPL has no I/O.
Conclusions
• There is a dire need to base performance evaluation and benchmarking results on realistic applications.
• The SPEC HPC suite meets the main criteria for real-application benchmarking: relevance and openness.
• Kernel benchmarks are the best choice for measuring individual system components. However, there is a large range of questions that can only be answered satisfactorily with real-application benchmarks.
• Benchmarking with real applications is hard and poses many challenges, but there is no replacement for it.