Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing
Alexandru Iosup, Nezih Yigitbasi, Dick Epema (Parallel and Distributed Systems Group, Delft University of Technology, The Netherlands)
Simon Ostermann, Radu Prodan, Thomas Fahringer (Distributed and Parallel Systems, University of Innsbruck, Austria)
Berkeley, CA, USA
About the Team
• Team's recent work in performance:
  • The Grid Workloads Archive (Nov 2006)
  • The Failure Trace Archive (Nov 2009)
  • The Peer-to-Peer Trace Archive (Apr 2010)
  • Tools: GrenchMark workload-based grid benchmarking, plus other monitoring and performance-evaluation tools
• Speaker: Alexandru Iosup
  • Systems work: Tribler (P2P file sharing), Koala (grid scheduling), POGGI and CAMEO (massively multiplayer online gaming)
  • Grid and peer-to-peer workload characterization and modeling
Many-Tasks Scientific Computing
• Jobs comprising many tasks (1,000s) necessary to achieve some meaningful scientific goal
• Jobs submitted as bags-of-tasks or over short periods of time
• High-volume users over long periods of time
• Common in grid workloads [Ios06][Ios08]
• No practical definition yet (ranging from "many" to "10,000 tasks/hour")
The Real Cloud
"The path to abundance"
• On-demand capacity
• Cheap for short-term tasks
• Great for web applications (EIP, web crawl, DB ops, I/O)
vs.
"The killer cyclone"
• Not-so-great performance for scientific applications (compute- or data-intensive) [1]
• Long-term performance variability [2]
[Image: Tropical Cyclone Nargis (NASA, ISSS, 04/29/08), http://www.flickr.com/photos/dimitrisotiropoulos/4204766418/]
1. Iosup et al., Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing (under submission).
2. Iosup et al., On the Performance Variability of Production Cloud Services, Technical Report PDS-2010-002. [Online] Available: http://pds.twi.tudelft.nl/reports/2010/PDS-2010-002.pdf
Research Question and Previous Work
Do clouds and Many-Tasks Scientific Computing fit well together, performance-wise?
• Virtualization overhead
  • Loss below 5% for computation [Barham03] [Clark04]
  • Loss below 15% for networking [Barham03] [Menon05]
  • Loss below 30% for parallel I/O [Vetter08]
  • Negligible for compute-intensive HPC kernels [You06] [Panda06]
• Cloud performance evaluation
  • Performance and cost of executing scientific workflows [Dee08]
  • Study of Amazon S3 [Palankar08]
  • Amazon EC2 for the NPB benchmark suite [Walker08] or selected HPC benchmarks [Hill08]
In theory, one could simply extrapolate from the virtualization-overhead results. In practice?
Agenda
• Introduction & Motivation
• Proto-Many Task Users
• Performance Evaluation of Four Clouds
• Clouds vs Other Environments
• Take Home Message
Proto-Many Task Users
MTC user: a user who submits at least J jobs in B bags-of-tasks
Trace-based analysis
• 6 grid traces, 4 parallel production environment traces
• Various criteria (combinations of values for J and B)
Results
• Selected criterion: number of BoTs submitted ≥ 1,000 and number of tasks submitted ≥ 10,000
• Easy to grasp; the selected users dominate most traces (in both jobs and CPU time); mostly 1-CPU jobs
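A minimal sketch (not the authors' code) of how the criterion above can be applied to a workload trace. The column names (user, bot_id, job_id) are assumptions about the trace format; real Grid Workloads Archive traces use different field names.

```python
import pandas as pd

MIN_BOTS = 1_000     # B: bags-of-tasks submitted per user
MIN_TASKS = 10_000   # J: tasks (jobs) submitted per user

def proto_mtc_users(trace: pd.DataFrame) -> pd.Index:
    """Return the IDs of users whose submissions meet both thresholds."""
    per_user = trace.groupby("user").agg(
        n_bots=("bot_id", "nunique"),   # distinct bags-of-tasks
        n_tasks=("job_id", "count"),    # individual tasks/jobs
    )
    selected = per_user[(per_user.n_bots >= MIN_BOTS) &
                        (per_user.n_tasks >= MIN_TASKS)]
    return selected.index

# Example usage (hypothetical file name):
# trace = pd.read_csv("gwa_trace.csv")
# print(proto_mtc_users(trace))
```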
Agenda
• Introduction & Motivation
• Proto-Many Task Users
• Performance Evaluation of Four Clouds
  • Experimental Setup
  • Selected Results
• Clouds vs Other Environments
• Take Home Message
Experimental Setup: Environments
Four commercial IaaS clouds (per the NIST definitions):
• Amazon EC2
• GoGrid
• Elastic Hosts
• Mosso
No Cluster Compute instances (not yet released during the Dec 2008 - Jan 2009 experiments).
Experimental Setup: Experiment Design
Principles
• Use complete test suites
• Repeat each experiment 10 times
• Use defaults, not tuning
• Use common benchmarks, so results can be compared with those of other systems
Types of experiments
• Resource acquisition and release
• Single-Instance (SI) benchmarking
• Multiple-Instance (MI) benchmarking
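A small sketch of the "repeat 10 times, use defaults" principle, assuming the benchmarks are driven from a harness script. The benchmark commands listed here are placeholders, not the exact suites used in the study.

```python
import subprocess
import time
import statistics

# Placeholder commands; the study used complete, standard test suites.
BENCHMARKS = {
    "lmbench": ["lmbench-run"],
    "bonnie": ["bonnie++", "-q"],
}
REPETITIONS = 10

def run_suite():
    """Run every benchmark with default settings, REPETITIONS times each."""
    results = {}
    for name, cmd in BENCHMARKS.items():
        samples = []
        for _ in range(REPETITIONS):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, capture_output=True)
            samples.append(time.perf_counter() - start)
        # Keep mean and standard deviation across the 10 repetitions.
        results[name] = (statistics.mean(samples), statistics.stdev(samples))
    return results
```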
Resource Acquisition Can Matter
• Acquisition times can be significant
  • For single instances (GoGrid)
  • For multiple instances (all clouds)
• Short-term variability can be high (GoGrid)
• Slow long-term growth
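A sketch of how acquisition and release times for a single instance can be measured. It uses today's boto3 API rather than the 2008-era EC2 tools used in the original experiments, and the AMI ID and instance type are placeholders.

```python
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def time_acquire_release(ami="ami-00000000", itype="m1.small"):
    """Measure request-to-running and terminate-to-released times (seconds)."""
    t0 = time.time()
    resp = ec2.run_instances(ImageId=ami, InstanceType=itype,
                             MinCount=1, MaxCount=1)
    iid = resp["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[iid])
    t_acquire = time.time() - t0   # from request until the instance is running

    t1 = time.time()
    ec2.terminate_instances(InstanceIds=[iid])
    ec2.get_waiter("instance_terminated").wait(InstanceIds=[iid])
    t_release = time.time() - t1   # from terminate until fully released
    return t_acquire, t_release
```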
Single Instances: Compute Performance Lower Than Expected
• 1 ECU = 4.4 GFLOPS at 100%-efficient code (1.1 GHz 2007 Opteron x 4 FLOPS/cycle, full floating-point pipeline)
• In our tests: 0.6-0.8 GFLOPS
• Likely causes: sharing of the same physical machines (working set); lack of code optimizations beyond -O3 -funroll-loops
• Metering requires more clarification
• Instances with excellent float/double addition performance may have poor multiplication performance (c1.medium, c1.xlarge)
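The arithmetic behind the ECU figure, as a quick check. Reading the measured 0.6-0.8 GFLOPS as a per-ECU figure is an interpretation of the slide, so the efficiency numbers below are only indicative.

```python
# One ECU is advertised as the capacity of a ~1.1 GHz 2007 Opteron;
# at 4 FLOPS/cycle (full floating-point pipeline) that is 4.4 GFLOPS.
clock_ghz = 1.1
flops_per_cycle = 4
peak_gflops_per_ecu = clock_ghz * flops_per_cycle        # 4.4 GFLOPS

# Assumption: the 0.6-0.8 GFLOPS measured in the tests is per ECU.
measured_gflops = (0.6, 0.8)
efficiency = [m / peak_gflops_per_ecu for m in measured_gflops]
print(efficiency)   # roughly 14% to 18% of the advertised peak
```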
Multi-Instance: Low Efficiency in HPL
Peak performance
• 2 x c1.xlarge (16 cores): 176 GFLOPS; compare HPCC-227 (Cisco, 16 cores): 102 GFLOPS and HPCC-286 (Intel, 16 cores): 180 GFLOPS
• 16 x c1.xlarge (128 cores): 1,408 GFLOPS; compare HPCC-224 (Cisco, 128 cores): 819 GFLOPS and HPCC-289 (Intel, 128 cores): 1,433 GFLOPS
Efficiency
• Cloud: 15-50%, even for small instance counts (<128 cores)
• HPC systems: 60-70%
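HPL efficiency is the measured performance (Rmax) divided by the theoretical peak (Rpeak). The Rpeak values below come from the slide; the Rmax figures are hypothetical placeholders chosen only to illustrate the 15-50% range reported for the cloud.

```python
def hpl_efficiency(rmax_gflops: float, rpeak_gflops: float) -> float:
    """Fraction of theoretical peak achieved by the HPL run."""
    return rmax_gflops / rpeak_gflops

# Hypothetical Rmax values, for illustration only (not measurements from the study).
print(hpl_efficiency(60.0, 176.0))      # 2 x c1.xlarge, 16 cores   -> ~0.34
print(hpl_efficiency(240.0, 1_408.0))   # 16 x c1.xlarge, 128 cores -> ~0.17
```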
Cloud Performance Variability
• Performance variability of production cloud services
  • Infrastructure: Amazon Web Services
  • Platform: Google App Engine
• Year-long performance information for nine services
• Finding: about half of the cloud services investigated exhibit yearly and daily patterns; the impact of performance variability depends on the application.
[Figure: Amazon S3, GET US HI operations]
A. Iosup, N. Yigitbasi, and D. Epema, On the Performance Variability of Production Cloud Services (under submission).
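One simple way to look for the daily patterns mentioned above is to bucket a year of monitoring samples by hour of day and compare the hourly medians. This is a sketch, not the analysis from the paper; the column names (timestamp, value) are assumptions about the monitoring log format.

```python
import pandas as pd

def hourly_profile(samples: pd.DataFrame) -> pd.Series:
    """Median of the monitored metric per hour of day, over the whole trace."""
    samples = samples.copy()
    samples["hour"] = pd.to_datetime(samples["timestamp"]).dt.hour
    return samples.groupby("hour")["value"].median()

# A flat profile suggests no daily pattern; large hour-to-hour spread suggests one.
```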
Agenda
• Introduction & Motivation
• Proto-Many Task Users
• Performance Evaluation of Four Clouds
• Clouds vs Other Environments
• Take Home Message
Clouds vs Other Environments
• Trace-based simulation with DGSim, a grid simulator
• Compute-intensive workloads, no data I/O
• Three scenarios: source environment vs cloud with source-like performance vs cloud with real (measured) performance
• Measured slowdown: about 7x for sequential jobs, 1-10x for parallel jobs
• Results
  • Response times are 4-10 times higher in real clouds
  • Clouds are still good for short-term, deadline-driven projects
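A sketch of how the "cloud with real performance" scenario can be modeled: stretch each job's runtime by the measured slowdown before replaying the trace in the simulator. The function and column semantics are assumptions; the factors match the slide (about 7x sequential, 1-10x parallel, with an illustrative 4x used here for parallel jobs).

```python
SEQ_SLOWDOWN = 7.0
PAR_SLOWDOWN = 4.0   # illustrative value picked from the 1-10x range

def cloudify_runtime(runtime_s: float, num_cpus: int) -> float:
    """Rescale a job runtime from the source environment to the measured cloud."""
    factor = SEQ_SLOWDOWN if num_cpus == 1 else PAR_SLOWDOWN
    return runtime_s * factor

# Each rescaled trace is then replayed (e.g., in DGSim) and response times are
# compared against the source environment.
```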
Take Home Message
• Many-Tasks Scientific Computing
  • Quantitative definition: at least J jobs submitted in B bags-of-tasks
  • Extracted proto-MT users from grid and parallel production environments
• Performance evaluation of four commercial clouds
  • Amazon EC2, GoGrid, Elastic Hosts, Mosso
  • Resource acquisition, Single- and Multi-Instance benchmarking
  • Low compute and networking performance
• Clouds vs other environments
  • Clouds need an order of magnitude better performance to match the other environments
  • Clouds are already good for short-term, deadline-driven scientific computing
Potential for Collaboration
• Other performance evaluation studies of clouds
  • The new Amazon EC2 instance type: Cluster Compute
  • Other clouds?
  • Data-intensive benchmarks
• General logs
  • Failure Trace Archive
  • Grid Workloads Archive
  • ...
Thank you! Questions? Observations?
email: A.Iosup@tudelft.nl
More information:
• The Grid Workloads Archive: gwa.ewi.tudelft.nl
• The Failure Trace Archive: fta.inria.fr
• The GrenchMark performance evaluation tool: grenchmark.st.ewi.tudelft.nl
• Cloud research: www.st.ewi.tudelft.nl/~iosup/research_cloud.html
• See the PDS publication database at www.pds.twi.tudelft.nl/
Big thanks to our collaborators: U. Wisc.-Madison, U. Chicago, U. Dortmund, U. Innsbruck, LRI/INRIA Paris, INRIA Grenoble, U. Leiden, Politehnica University of Bucharest, Technion, ...