Characterization of Computational Grid Resources Using Low-level Benchmarks
George Tsouloupas, Marios D. Dikaiakos {georget,mdd}@ucy.ac.cy
Dept. of Computer Science, University of Cyprus
Motivation
• Information about the performance of computational Grid resources is essential for intelligent resource allocation.
• Such information is lacking (to say the least) in current, widely deployed systems.
• Goals:
  • Complement the information found in information/monitoring systems.
  • Annotate resources with low-level performance metrics.
Approach
• Determine a small set of simple, clearly defined, easy-to-understand performance metrics.
• Identify or implement a set of minimally intrusive benchmarks that deliver these metrics.
• Facilitate the easy management of benchmarking experiments, conducted periodically or on demand (the focus of previous and ongoing work).
Resource Characterization
• Current practice relies on nominal values (e.g. MDS/BDII): memory size, CPU speed (MHz), number of CPUs.
• People DO recognize the need for measured performance, though (e.g. GlueHostBenchmarkSF00 and GlueHostBenchmarkSI00 in the Glue schema).
• Current sources are potentially inaccurate, whether accidentally or deliberately.
• Static information does not adequately reflect HW/SW/configuration changes.
Yesterday (4/12/06) on EGEE
• >10% of clusters had a SpecInt value of precisely 1000, and ~15% of clusters had a SpecInt value of precisely 381.
• ~40% publish a SpecFloat value of 0!
• The SI performance of a "P4" varies from 400 to 1515!
• The SI performance of a Xeon processor ranges from 381 to 1840!
• One cluster even reports a CPU model of "P6".
Characterization Through Benchmarking
• End-to-end approach.
• Micro-benchmarks provide a commonly accepted basis for comparison.
• Useful to different users for different reasons:
  • End-users
  • Administrators
• Capture a wide range of clearly understood information with little overhead.
• Portability, to address heterogeneity.
GridBench
• A set of tools that facilitate the characterization, performance evaluation, and performance ranking of Grid resources:
  • Organize and manage benchmarking experiments
  • Run benchmarks
  • Collect and archive measurements
  • Provide metadata for the measurements
  • Analyse results / rank resources
Metrics and Benchmarks
• CPU
  • EPWhetstone: a simple adaptation of the traditional Whetstone benchmark
    • A mixture of operations: integer arithmetic, floating-point arithmetic, function calls, trigonometric and other functions.
  • EPFlops: adapted from the "flops" benchmark (see the timing-loop sketch below)
    • Different mixes of floating-point operations.
  • EPDhrystone: adapted from the C version of the "dhrystone" benchmark
    • Integer operations.
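A minimal sketch of the kind of timed floating-point loop that benchmarks like EPFlops rely on; the operation mix, iteration count, and names here are illustrative assumptions, not GridBench's actual code:

```c
#include <stdio.h>
#include <time.h>

#define N 100000000UL  /* iterations; large enough to dominate timer noise */

int main(void)
{
    double a = 1.0, b = 0.999999, c = 0.000001;
    clock_t start = clock();

    for (unsigned long i = 0; i < N; i++) {
        a = a * b + c;   /* one multiply and one add per iteration */
    }

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* 2 floating-point operations per iteration; printing 'a'
       keeps the compiler from eliminating the loop */
    printf("%.1f MFLOPS (result %f)\n", 2.0 * N / secs / 1e6, a);
    return 0;
}
```

The real benchmarks vary the instruction mix (adds, multiplies, divides, function calls) across several such loops to characterize different aspects of the CPU.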
Metrics and Benchmarks
• Memory
  • EPMemsize: a "benchmark" that aims to measure memory capacity
    • Attempts to determine the maximum memory allocation possible without hitting swap.
  • EPStream: an adaptation of the C implementation of the well-known STREAM memory benchmark (see the triad sketch below)
    • Copy, scale, sum and triad kernels.
  • CacheBench: evaluates the performance of the local memory hierarchy.
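A minimal sketch of a STREAM-style "triad" kernel, the most demanding of the four STREAM operations; the array size is an assumption (it must exceed the last-level cache), and the real STREAM code adds warm-up runs, repetition, and result validation:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L  /* ~160 MB per array, to defeat caching */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    const double scalar = 3.0;
    clock_t start = clock();
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];          /* triad: a = b + s*c */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* triad touches 3 arrays: 2 loads + 1 store per element */
    printf("Triad bandwidth: %.1f MB/s\n",
           3.0 * N * sizeof(double) / secs / 1e6);
    free(a); free(b); free(c);
    return 0;
}
```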
Metrics and Benchmarks
• Interconnect (MPI)
  • MPPTest: MPI-implementation independent. Used to measure:
    • Latency, point-to-point bandwidth, bisection bandwidth (see the ping-pong sketch below).
• I/O
  • b_eff_io: evaluates the I/O performance of shared storage
    • Several file access patterns.
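A minimal sketch of the ping-pong pattern that tools like MPPTest use to estimate point-to-point latency; the message size, repetition count, and lack of warm-up rounds are simplifications. Run with two ranks, e.g. "mpirun -np 2 ./pingpong":

```c
#include <mpi.h>
#include <stdio.h>

#define REPS 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char byte = 0;
    double start = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)  /* each round trip is two messages, so halve it */
        printf("One-way latency: %.2f us\n",
               elapsed / REPS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Larger message sizes turn the same loop into a point-to-point bandwidth test.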
Experiments: Memory Performance
[Plots: memory bandwidth, cache performance, SMP/multi-core memory performance]
Experiments: MPI, I/O
[Plots: basic MPI performance, parallel disk I/O performance]
Practical Issues
• Using MPI to perform these experiments exposed problems:
  • Using MPI wrappers for benchmarks will be revisited.
  • E.g. outdated OpenSSH keys on a single WN will break MPI applications.
• Runaway/rogue processes will taint results, but abnormal results expose them (often overlooked by tests such as SFT/SAM).
• The cause of a problem is not always identified, but at least problems affecting performance are detected.
Summary
• Presented a concise set of metrics and associated benchmarks to characterize CPU, memory, interconnect and I/O.
• The approach imposes little overhead on the infrastructure.
• Low-level performance metrics can aid in:
  • Resource selection (ranking) when mapping application kernels to appropriate resources.
  • Validation of resource operational state.
  • Verification of "advertised" resource performance.
WIP and Future Work
• Remove the reliance on MPI; follow a "sampling" approach.
• Advanced measurement scheduling.
• Result filtering.
• Ranking models.
Questions? Thanks!