Supercomputer Performance Characterization
Wayne Pfeiffer
July 17, 2006
Here are some important computer performance questions
• What key computer system parameters determine performance?
• What synthetic benchmarks can be used to characterize these system parameters?
• How does performance on synthetics compare between computers?
• How does performance on applications compare between computers?
• How does performance scale (i.e., vary with processor count)?
Comparative performance results have been obtained on six computers at NCSA & SDSC, all with > 1,000 processors
These computers have shared-memory nodes of widely varying size, connected by different switch types
• Blue Gene
  • Massively parallel processor system with low-power, 2p nodes
  • Two custom switches for point-to-point and collective communication
• Cobalt
  • Cluster of two large, 512p nodes (also called a constellation)
  • Custom switch within nodes & commodity switch between nodes
• DataStar
  • Cluster of 8p nodes
  • Custom high-performance switch called Federation
• Mercury, Tungsten, & T2
  • Clusters of 2p nodes
  • Commodity switches
Performance can be better understood with a simple model
• Total run time can be split into three components: t_tot = t_comp + t_comm + t_io
• Overlap may exist. If so, it can be handled as follows:
  • t_comp = computation time
  • t_comm = communication time that can't be overlapped with t_comp
  • t_io = I/O time that can't be overlapped with t_comp & t_comm
• Relative values vary depending upon computer, application, problem, & number of processors
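The three-component model above can be sketched as a few lines of Python. This is only an illustration of the bookkeeping; the run-time values are made up, not measured on any of the systems in this talk.

```python
def total_run_time(t_comp, t_comm, t_io):
    """t_tot = t_comp + t_comm + t_io, where t_comm and t_io are
    only the parts that could NOT be overlapped with computation."""
    return t_comp + t_comm + t_io

# Illustrative run: 80 s compute, 15 s non-overlapped communication,
# 5 s non-overlapped I/O (assumed numbers, not measurements)
t_tot = total_run_time(80.0, 15.0, 5.0)
print(t_tot)            # 100.0
print(15.0 / t_tot)     # communication fraction: 0.15
```

The communication fraction computed at the end is the quantity tracked later in the strong-scaling discussion.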
Run-time components depend upon system parameters & code features. Differences between point-to-point & collective communication are important too
Compute, communication, & I/O speeds have been measured for many synthetic & application benchmarks
• Synthetic benchmarks
  • sloops (includes daxpy & dot)
  • HPL (Linpack)
  • HPC Challenge
  • NAS Parallel Benchmarks
  • IOR
• Application benchmarks
  • Amber 9 PMEMD (biophysics: molecular dynamics)
  • …
  • WRF (atmospheric science: weather prediction)
Normalized memory access profiles for daxpy show better memory access, but more memory contention, on Blue Gene compared with DataStar
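A daxpy (y = a*x + y) memory probe like the one behind these profiles can be sketched as follows. This is an assumption-laden toy, not the sloops benchmark itself: production runs use compiled C/Fortran kernels, and NumPy timings include Python overhead, but the bookkeeping (3 doubles moved per element, best-of-several trials) is the same idea.

```python
import time
import numpy as np

def time_daxpy(n, a=2.0, trials=5):
    """Time y += a*x on vectors of length n; return best-case GB/s.
    daxpy moves 3 doubles per element: read x, read y, write y."""
    x = np.random.rand(n)
    y = np.random.rand(n)
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        y += a * x
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * 8 * n          # 3 doubles x 8 bytes per element
    return bytes_moved / best / 1e9  # GB/s

# Sweeping n maps out the memory hierarchy: small n fits in cache
# (high apparent bandwidth); large n streams from main memory.
for n in (10_000, 1_000_000, 10_000_000):
    print(n, round(time_daxpy(n), 2))
```

Plotting bandwidth against vector length gives the kind of memory access profile compared here between Blue Gene and DataStar.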
Each HPCC synthetic benchmark measures one or two system parameters in varying combinations
Relative speeds are shown for HPCC benchmarks on 6 computers at 1,024p; 4 different computers are fastest depending upon benchmark; 2 of these are also slowest, depending upon benchmark
Data available soon at CIP Web site: www.ci-partnership.org
Absolute speeds are shown for HPCC & IOR benchmarks on SDSC computers; TG processors are fastest, BG & DS interconnects are fastest, & all three computers have similar I/O rates
Relative speeds are shown for 5 applications on 6 computers at various processor counts; Cobalt & DataStar are generally fastest
Good scaling is essential to take advantage of high processor counts
• Two types of scaling are of interest
  • Strong: performance vs processor count (p) for fixed problem size
  • Weak: performance vs p for fixed work per processor
• There are several ways of plotting scaling
  • Run time (t) vs p
  • Speed (1/t) vs p
  • Speed/p vs p
• Scaling depends significantly on the computer, application, & problem
• Use log-log plot to preserve ratios when comparing computers
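The strong-scaling quantities above (speedup = t0/t and parallel efficiency = speedup x p0/p, relative to the smallest processor count measured) can be computed with a short helper. The run times below are invented for illustration, not data from any of the six systems.

```python
def strong_scaling_metrics(times):
    """Given {p: run_time} for a fixed problem size, return
    {p: (speedup, efficiency)} relative to the smallest p measured."""
    p0 = min(times)
    t0 = times[p0]
    out = {}
    for p in sorted(times):
        speedup = t0 / times[p]          # how much faster than at p0
        efficiency = speedup * p0 / p    # 1.0 = ideal strong scaling
        out[p] = (speedup, efficiency)
    return out

# Illustrative times (seconds) for a fixed problem -- assumed, not measured
times = {64: 100.0, 128: 52.0, 256: 28.0, 512: 17.0}
for p, (s, e) in strong_scaling_metrics(times).items():
    print(p, round(s, 2), round(e, 2))
```

On a log-log plot of speed (1/t) vs p, ideal strong scaling is a straight line of slope 1, which is why log-log axes preserve the ratios when comparing computers.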
AWM 512^3 problem shows good strong scaling to 2,048p on Blue Gene & to 512p on DataStar, but not on TeraGrid cluster
Data from Yifeng Cui
MILC medium problem shows superlinear speedup on Cobalt, Mercury, & DataStar at small processor counts; strong scaling ends for DataStar & Blue Gene above 2,048p
NAMD ApoA1 problem scales best on DataStar & Blue Gene; Cobalt is fastest below 512p, but the same speed as DataStar at 512p
WRF standard problem scales best on DataStar; Cobalt is fastest below 512p, but the same speed as DataStar at 512p
Communication fraction generally grows with processor count in strong scaling scans, such as for WRF standard problem on DataStar
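Why the communication fraction grows in a strong-scaling scan can be seen with a toy cost model (an assumption for illustration, not fitted to the WRF data): compute time shrinks like work/p, while per-step communication cost, here modeled as growing like log2(p), does not shrink.

```python
import math

def comm_fraction(p, work=1000.0, comm_coeff=2.0):
    """Toy strong-scaling model: t_comp = work/p shrinks with p,
    t_comm ~ comm_coeff * log2(p) grows, so the communication
    fraction t_comm / (t_comp + t_comm) rises with p.
    All coefficients are assumed, not measured."""
    t_comp = work / p
    t_comm = comm_coeff * math.log2(p)
    return t_comm / (t_comp + t_comm)

for p in (64, 256, 1024):
    print(p, round(comm_fraction(p), 2))
```

Any model where t_comm shrinks more slowly than t_comp (or grows) produces the same qualitative trend seen in the WRF scan.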
A more careful look at Blue Gene shows many pluses
+ Hardware is more reliable than for other high-end systems installed at SDSC in recent years
+ Compute times are extremely reproducible
+ Networks scale well
+ I/O performance with GPFS is good at high p
+ Price per peak flop/s is low
+ Power per flop/s is low
+ Footprint is small
But there are also some minuses
- Processors are relatively slow
  • Clock speed is 700 MHz
  • Compilers seldom use the second FPU in each processor (though optimized libraries do)
- Applications must scale well to get high absolute performance
- Memory is only 512 MB/node, so some problems don't fit
  • Coprocessor mode can be used (with 1p/node), but this is inefficient
  • Some problems still don't fit even in coprocessor mode
- Cross-compiling complicates software development for complex codes
Major applications ported and being run on BG at SDSC span various disciplines
Speed of BG relative to DataStar varies about the clock-speed ratio (0.47 = 0.7/1.5) for applications on ≥ 512p; CO & VN mode perform similarly (per MPI p)
DNS scaling on BG is generally better than on DataStar, but shows unusual variation; VN mode is somewhat slower than CO mode (per MPI p)
Data from Dmitry Pekurovsky
If the number of allocated processors is considered, then VN mode is faster than CO mode, and both modes show unusual variation
Data from Dmitry Pekurovsky
IOR weak scaling scans using GPFS-WAN show BG in VN mode achieves 3.4 GB/s for writes (~DS) & 2.7 GB/s for reads (>DS)
Blue Gene has more limited applicability than DataStar, but is a good choice if the application is right
+ Some applications run relatively fast & scale well
+ Turnaround is good with only a few users
+ Hardware is reliable & easy to maintain
- Other applications run relatively slowly and/or don't scale well
- Some typical problems need to run in CO mode to fit in memory
- Other typical problems won't fit at all