Chapter 14 How to Test the Readiness of Open Source Cloud Computing Solutions Qunying Huang, Jizhe Xia, Min Sun, Kai Liu, Jing Li, Zhipeng Gui, Chen Xu, and Chaowei Yang
Learning Objectives • How to test the readiness of open-source cloud solutions
Learning Materials • Videos: • Chapter_14-Video_1.wmv • Chapter_14-Video_2.wmv • Chapter_14-Video_3.wmv • Chapter_14-Video_4.wmv • Chapter_14-Video_5.mp4 • Scripts, Files and others: • ubench-patch.txt
Learning Modules • Open-source cloud solutions • Test environment • Cloud readiness test • Tests of cloud operations • Tests of virtual computing resource • Tests of general applications • Cloud Readiness Test for GEOSS Clearinghouse • Cloud Readiness Test for Dust Storm Forecasting • Conclusion and discussions
Open-source cloud solutions Table 14.1 Customers and evaluations of the software solutions
Open-source cloud solutions Table 14.2 The general characteristics of the selected solutions
Learning Modules • Open-source cloud solutions • Test environment • Cloud readiness test • Tests of cloud operations • Tests of virtual computing resource • Tests of general applications • Cloud Readiness Test for GEOSS Clearinghouse • Cloud Readiness Test for Dust Storm Forecasting • Conclusion and discussions
Test Environment Figure 14.1. The open-source solution test environment Figure 14.2. Conceptual cloud architecture
Tests of cloud operations • The performance of allocating and releasing a single resource is tested. • The startup and release of resources, including VMs and storage volumes, are tested. • Each operation test is repeated five times for each instance type, with a resource acquisition followed by a release. • Time costs for starting, pausing, restarting, and deleting a VM with different virtual computing resources are recorded and compared (a timing sketch follows below).
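A minimal timing sketch for this test, assuming a Eucalyptus-style euca2ools command line; the image ID, keypair, and instance type are placeholders, and CloudStack and OpenNebula expose equivalent commands:

#!/bin/bash
# Time the startup and release of a single VM, repeated five times.
# emi-xxxxxxxx, "testkey", and m1.small are placeholders for the tested solution.
IMAGE=emi-xxxxxxxx
TYPE=m1.small
for run in 1 2 3 4 5; do
  t0=$(date +%s)
  ID=$(euca-run-instances "$IMAGE" -t "$TYPE" -k testkey | awk '/INSTANCE/ {print $2}')
  until euca-describe-instances "$ID" | grep -q running; do sleep 2; done   # wait for "running"
  t1=$(date +%s)
  euca-terminate-instances "$ID"
  while euca-describe-instances "$ID" | grep -q running; do sleep 2; done   # wait for release
  t2=$(date +%s)
  echo "run $run: startup $((t1 - t0)) s, release $((t2 - t1)) s" >> vm-operations.log
done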
Tests of virtual computing resource • CPU • Video: Chapter_14-Video_1.wmv • I/O performance • Video: Chapter_14-Video_2.wmv • Memory hierarchy performance • Video: Chapter_14-Video_3.wmv • Networking performance • Video: Chapter_14-Video_4.wmv • Example invocations of the corresponding benchmarkers are sketched below.
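The example invocations below are a sketch only; the Bonnie++ file size, the CacheBench flag set, and the Iperf server address (10.0.0.2) are assumptions to be adapted to the VM under test:

# CPU (and memory) score: UBench runs its built-in CPU and memory benchmarks
ubench
# Disk I/O: Bonnie++ writes and reads test files under /tmp
bonnie++ -d /tmp -s 2048 -u root
# Memory hierarchy: CacheBench from the LLCbench suite (flag set is an assumption)
cachebench -b -x1 -m24 -d5 -e1
# Network: start "iperf -s" on a second VM first, then measure for 60 seconds
iperf -c 10.0.0.2 -t 60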
Learning Modules • Open-source cloud solutions • Test environment • Cloud readiness test • Tests of cloud operations • Tests of virtual computing resource • Tests of general applications • Cloud Readiness Test for GEOSS Clearinghouse • Cloud Readiness Test for Dust Storm Forecasting • Conclusion and discussions
Virtual Computing Resource Test Benchmarkers Table 14.4. General benchmarkers for testing the performance of VMs
Virtual Computing Resource Test Workflow Figure 14.3. Virtual computing resource benchmark workflow Step 1: Build an image with the virtual computing resource benchmark software packages installed for each open-source solution. To build an image, a VM is launched as configured in Table 14.3 for further customization, and the benchmark software packages are installed on this basic VM (an installation sketch follows below).
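A sketch of the Step 1 customization, assuming a Debian/Ubuntu base VM; the package names, download URLs, and build steps are assumptions, with UBench patched by the ubench-patch.txt provided in the learning materials:

#!/bin/bash
# Install the benchmark packages on the basic VM before bundling it into an image.
sudo apt-get update
sudo apt-get install -y build-essential bonnie++ iperf
# UBench: download, apply the provided patch, and build (URL is a placeholder)
wget http://example.com/ubench-0.32.tar.gz
tar xzf ubench-0.32.tar.gz && cd ubench-0.32
patch -p1 < ~/ubench-patch.txt && make && sudo make install
cd ..
# CacheBench ships with the LLCbench suite (URL is a placeholder; build targets vary)
wget http://example.com/llcbench.tar.gz && tar xzf llcbench.tar.gz
make -C llcbench    # see the LLCbench documentation for the exact targets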
Virtual Computing Resource Test Workflow Figure 14.3. Virtual computing resource benchmark workflow Step 2: Start different types of VMs. The five VM types (Table 14.3) can be launched from the images built in Step 1 on each solution. Step 3: Run the test. This step requires a script that runs each benchmarker sequentially; typically, each benchmarker is repeated three times, with a 60-second wait between runs (a driver script is sketched below). Step 4: Collect and analyze the results. When the benchmarking is complete, check the log files (e.g., ubench.log [Box 14.5]) for the benchmarking results.
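A sketch of the Step 3 driver script, reusing the invocations shown earlier; the per-tool log names (e.g., ubench.log) follow the chapter, while the flags remain assumptions:

#!/bin/bash
# Run each benchmarker three times with a 60-second pause between runs,
# appending stdout/stderr to a per-tool log file (ubench.log, bonnie++.log, ...).
BENCHMARKS=("ubench"
            "bonnie++ -d /tmp -s 2048 -u root"
            "cachebench -b -x1 -m24 -d5 -e1"
            "iperf -c 10.0.0.2 -t 60")
for cmd in "${BENCHMARKS[@]}"; do
  log="$(echo "$cmd" | awk '{print $1}').log"
  for run in 1 2 3; do
    echo "=== run $run of: $cmd ($(date)) ===" >> "$log"
    eval "$cmd" >> "$log" 2>&1
    sleep 60
  done
done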
Virtual Computing Resource Test Result Figure 14.4. UBench output Figure 14.5. A Bonnie++ benchmarking report sample Figure 14.6. Cachebench output Figure 14.7. Iperf output
Tests of general applications • JRE • SQL
General Application Test Benchmarkers • Video: Chapter_14-Video_5.mp4 • JRE performance • SPECjvm2008 contains several real-world applications and benchmarks focusing on Java functionality; it benchmarks the performance of the JRE on a single machine. • MySQL performance • MySQL-Bench tests the performance of the MySQL database. It covers various types of database operations, such as creating tables, inserting and deleting records, and querying tables, which ensures a comprehensive test of the MySQL capabilities supported by different solutions. • Sample invocations of both benchmarkers are sketched below.
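A sketch of running the two benchmarkers from the command line; the installation paths and the MySQL credentials are assumptions:

# JRE test: run the SPECjvm2008 suite and keep the report in SPECjvm2008.log
cd /opt/SPECjvm2008 && java -jar SPECjvm2008.jar | tee ~/SPECjvm2008.log
# MySQL test: run the full sql-bench suite against a local MySQL server
cd /usr/share/sql-bench && \
  perl run-all-tests --server=mysql --user=root --password=secret | tee ~/sql-bench.log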
General Application Test Workflow Figure 14.3. Benchmark workflow Step 1: Build images with the JRE benchmark software package or MySQL-Bench installed on each cloud platform.
General Application Test Workflow Figure 14.3. Benchmark workflow Step 2: Start different types of VMs on different solutions. The five VM types (Table 14.3) can be launched from the images built in Step 1 on each solution. Step 3: Run the test. Step 4: Collect and analyze the results. The log file SPECjvm2008.log contains the results of the three JRE test runs, and the log file sql-bench.log contains the results of the SQL-Bench test.
General Application Test Result Figure 14.9. A sample test result of SPECjvm2008 Figure 14.10. Sample test result
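A sketch of the Step 4 collection, pulling the two log files named above back from each test VM and extracting a quick summary; the host names and grep patterns are placeholders:

#!/bin/bash
# Copy SPECjvm2008.log and sql-bench.log from each VM into results/<host>/.
for host in vm-eucalyptus vm-opennebula vm-cloudstack; do
  mkdir -p "results/$host"
  scp "$host:~/SPECjvm2008.log" "$host:~/sql-bench.log" "results/$host/"
done
# Rough summaries (patterns are placeholders; inspect the full logs as well)
grep -i "composite result" results/*/SPECjvm2008.log
grep -i "total time" results/*/sql-bench.log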
Learning Modules • Open-source cloud solutions • Test environment • Cloud readiness test • Tests of cloud operations • Tests of virtual computing resource • Tests of general applications • Cloud Readiness Test for GEOSS Clearinghouse • Cloud Readiness Test for Dust Storm Forecasting • Conclusion and discussions
GEOSS Clearinghouse Test Design Figure 14.11. Matrix test of GEOSS Clearinghouse
GEOSS Clearinghouse Test Workflow
1. Start the GEOSS Clearinghouse instance
2. Set up load balancing or auto-scaling
3. Set up CSW GetRecords requests
4. Set up the JMeter test plan
5. Run JMeter
6. Analyze the JMeter results (a request and JMeter sketch follows below)
Figure 12.5 Workflow of the load balancing and auto-scaling test
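A sketch for Steps 3-5: first verify a CSW GetRecords POST against the Clearinghouse endpoint, then run the JMeter test plan in non-GUI mode; the endpoint URL, search term, and the plan/result file names are placeholders:

#!/bin/bash
# Write a CSW 2.0.2 GetRecords request searching AnyText for a keyword.
cat > getrecords.xml <<'EOF'
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
    xmlns:ogc="http://www.opengis.net/ogc"
    service="CSW" version="2.0.2" resultType="results" maxRecords="10">
  <csw:Query typeNames="csw:Record">
    <csw:ElementSetName>summary</csw:ElementSetName>
    <csw:Constraint version="1.1.0">
      <ogc:Filter>
        <ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
          <ogc:PropertyName>AnyText</ogc:PropertyName>
          <ogc:Literal>%water%</ogc:Literal>
        </ogc:PropertyIsLike>
      </ogc:Filter>
    </csw:Constraint>
  </csw:Query>
</csw:GetRecords>
EOF
# Manual check of the endpoint (URL is a placeholder)
curl -s -H "Content-Type: application/xml" -d @getrecords.xml \
     http://clearinghouse.example.org/csw
# Run the concurrency test plan headless and write results for Step 6
jmeter -n -t TestPlan_1.jmx -l results_1.jtl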
GEOSS Clearinghouse Test Result Figure 12.6 Test results of GEOSS TestPlan_1
Dust Storm Test Design
• HPC vs. Clouds: A virtual cluster built from VMs is compared to a traditional cluster to quantify the overhead of transforming physical infrastructure into clouds. The result indicates how well the solutions support large-scale scientific computing. Within this experiment, different numbers of virtualized (from one to four VMs) and non-virtualized computing resources are compared to investigate the impact of virtualization on computing power, storage, and networking.
• Open-source Solution Comparison: This experiment tests the capability of the different solutions to support computing- and communication-intensive applications, using different numbers of VMs on the physical machines and the three cloud solutions respectively. The performance results indicate the relative performance of these cloud solutions for scientific computing.
• Virtualization Technology: This experiment compares the performance of the same amount of computing resources virtualized by KVM and Xen. To exclude the impact of the cloud computing solution, the resources virtualized by Xen and KVM are managed by the same cloud solution (a hypervisor check is sketched below).
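For the virtualization technology experiment, a quick sketch (assuming a Linux guest) to confirm from inside a VM whether it is backed by Xen or KVM before recording results:

#!/bin/bash
# Report the hypervisor backing this VM.
if [ -r /sys/hypervisor/type ]; then
  cat /sys/hypervisor/type                  # Xen guests report "xen" here
else
  dmesg | grep -i "hypervisor detected"     # KVM guests typically log "KVM"
fi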
Dust Storm Test Workflow
1. Build up images for the cloud platforms and set up the dust storm environment on the local cluster
2. Prepare the testing script
3. Start up the same number of VMs on EC2 and Nebula (e.g., 2 VMs)
4. Transfer the model source code and data to the master node of EC2, Nebula, and the local cluster
5. Transfer the testing script to the master node of EC2, Nebula, and the local cluster
6. Run the testing script on each platform
7. Check and analyze the testing results
8. Add two more VMs to the EC2 and Nebula platforms, and use four nodes of the local cluster to test the performance by repeating Steps 6 and 7
9. Add four more VMs to the EC2 and Nebula platforms, and use eight nodes of the local cluster to test the performance by repeating Steps 6 and 7
10. Recycle the VMs
Figure 12.10 The workflow of testing the performance of three different platforms with 2, 4, and 8 VMs using the dust storm model
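A sketch of the Step 2 testing script, which produces output like the sample on the next slide; the executable path, machinefile, MPI launcher flags, and process counts are assumptions taken from that sample:

#!/bin/bash
# Run the dust storm model with a series of MPI process counts and timestamp
# each run; stdout/stderr of each run goes to run_<np>.log.
EXE=/mnt/mirror/performancetest/test.1/nmmgmu3km.iop.0/model.parallel.8/exe   # placeholder path
echo "************Begin to run****************"; echo "$EXE"; date
for np in 128 120 112 104 96 80 64 48 32 16; do
  mpirun -np "$np" -machinefile ~/machines "$EXE" > "run_${np}.log" 2>&1
  echo "**************** after $np time is*************"; date
done
tar czf results.tar.gz run_*.log          # then copy the results back for analysis
echo "Finish tar begin copy data"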
Dust Storm Test Result
****** Test type EC2.2VMs*********
************Begin to run****************
/mnt/mirror/performancetest/test.1/nmmgmu3km.iop.0/model.parallel.8/exe
Fri Nov 11 20:58:57 UTC 2011
**************** after 128 time is*************
Fri Nov 11 21:25:18 UTC 2011
**************** after 120 time is*************
Fri Nov 11 21:50:14 UTC 2011
**************** after 112 time is*************
Fri Nov 11 22:13:45 UTC 2011
**************** after 104 time is*************
Fri Nov 11 22:37:07 UTC 2011
**************** after 96 time is*************
Fri Nov 11 22:59:59 UTC 2011
**************** after 80 time is*************
Fri Nov 11 23:22:04 UTC 2011
**************** after 64 time is*************
Fri Nov 11 23:44:44 UTC 2011
**************** after 48 time is*************
Sat Nov 12 00:11:35 UTC 2011
**************** after 32 time is*************
Sat Nov 12 00:39:50 UTC 2011
**************** after 16 time is*************
Sat Nov 12 01:14:56 UTC 2011
Finish tar begin copy data
Finish copy data
Wed May 23 19:48:13 EDT 2012
***** finish**********
Figure 12.11 Model test output on EC2 with two instances and different process numbers
Dust Storm Test Result Figure 12.12 Dust storm model performance with different platforms and process numbers
Discussion questions • What aspects should be considered when testing the virtual computing resources? • Which tools can be used to test the virtual computing resources? • What is the general workflow for testing virtual cloud computing resources? • How can the open-source solutions be tested with general applications? • What aspects should be considered when testing the open-source solutions? • How can the capability of an open-source solution to support concurrent intensity be tested? • How can the capability of an open-source solution to support computing intensity be tested? • Read the results paper (Huang et al., 2013) and describe the results in 500 words. Discuss the dynamics of the results, i.e., how the results may change.
References • Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A., 2003. Xen and the art of virtualization. In: Proceedings of the 19th ACM Symposium on Operating Systems Principles, Bolton Landing, New York, NY, pp. 164-177. http://doi.acm.org/10.1145/945445.945462. • Cachebench, 2012. http://www.gnutoolbox.com/linux-benchmark-tools/?page=detail&get_id=34&category=10. • CloudStack, 2012. http://CloudStack.org/. • Coker, R., 2008. Bonnie++. http://www.coker.com.au/bonnie++/. • Huang, Q., Yang, C., Liu, K., Xia, J., Xu, C., Li, J., Gui, Z., Sun, M., Li, Z., 2013. Comparing Open Source Cloud Computing Solutions for Geosciences. Computers & Geosciences. • KVM, 2010. Kernel-based Virtual Machine. http://www.linux-kvm.org. • Mucci, P., 2012. LLCbench (low-level characterization benchmarks). http://icl.cs.utk.edu/projects/llcbench/. • Tirumala, A., Qin, F., Dugan, J., Ferguson, J., Gibbs, K., 2012. Iperf: The TCP/UDP bandwidth measurement tool. http://iperf.sourceforge.net/. • UBench, 2012. http://www.tucows.com/preview/69604/Ubench.