Cost-effective Cloud HPC Resource Provisioning by Building Semi-Elastic Virtual Clusters Shuangcheng Niu¹, Jidong Zhai¹, Xiaosong Ma²,³, Xiongchao Tang¹, Wenguang Chen¹ — ¹THU, ²NCSU, ³ORNL
“HPC in Cloud” Is the Trend? • HPC in the cloud • On-demand • Elastic • No upfront cost • Lower management fees • … • More and more engineers are starting to run HPC workloads in the cloud
Is the “On-demand Model” Effective? • Reserved instance pricing model • Six reserved instance classes in Amazon EC2 CCI • Discounted charge rate in exchange for an upfront fee • [Chart annotations: 38.3%, 6.8%]
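The reserved-instance tradeoff above can be sketched numerically. The prices below are illustrative placeholders, not actual EC2 rates; the break-even rule simply compares the upfront fee against the hourly discount over the term.

```python
# Sketch: when does a reserved instance beat pure on-demand?
# All prices are illustrative placeholders, not actual EC2 rates.

def effective_hourly_rate(upfront, hourly, used_hours):
    """Amortized cost per used hour: upfront fee spread over usage plus the hourly fee."""
    return upfront / used_hours + hourly

def break_even_utilization(upfront, reserved_hourly, on_demand_hourly, term_hours):
    """Fraction of the term the instance must be busy for the reservation to pay off."""
    # Reservation wins when: upfront + reserved_hourly*h < on_demand_hourly*h
    return upfront / ((on_demand_hourly - reserved_hourly) * term_hours)

if __name__ == "__main__":
    term = 3 * 365 * 24  # a 3-year term, in hours
    u = break_even_utilization(upfront=10000.0, reserved_hourly=0.50,
                               on_demand_hourly=2.40, term_hours=term)
    print(f"break-even utilization: {u:.1%}")
```

Below the break-even utilization an individual user is better off on-demand, which is exactly why reservations are hard for individuals to exploit.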
The “On-demand Model” Is Underutilized! • Reserved instance pricing model • Difficult for individual users to utilize fully • SDSC Data Star system trace • 391 days • 460 users • [Chart annotation: 1 user, 1 3Y-Light]
Short Jobs • Hourly charging granularity • Several minutes of delay at startup • “Maybe I should pack my short jobs together to lower my rental cost.” • [Chart annotation: 70%]
Our Proposal • Semi-Elastic Cluster (SEC) computing model • Organization-owned • Cloud-based virtual cluster • Dynamic capacity • Resources shared between users
SEC Model • Traditional local cluster: wait time 15 min, utilization 56.7% • Pure on-demand cloud: wait time 0 min, utilization 70.8% • Semi-elastic cluster: wait time 0 min, utilization 77.3%
Aggregated Workloads • SEC trace slices with the SDSC Data Star workload • Instance mix: On-Demand 10.59%, 3Y-Light 15.75%, 3Y-Medium 73.66%
SEC Challenges • Finer-tuned capacity • Capacity intelligently controlled according to the job queue and submission history • Tradeoff between responsiveness and cost • Aggregated workloads • Predicting long-term resource requirements • Automatic resource provisioning • Evaluation without real SEC traces
Job Scheduling & Cluster Size Scaling • Problem definition • Configurable wait-time constraint • Minimize total cost • Batch scheduling • Extended backfilling algorithms • Dynamic resource provisioning • Resource provisioning strategies • Wait-time-bounded instance acquisition • Capacity expanded according to the job queue • Job placement policies
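The wait-time-bounded acquisition strategy above can be sketched as follows. This is a simplified reading of the strategy named on the slide (the paper's actual policy may differ); the queue representation and the startup-delay handling are assumptions:

```python
# Sketch of wait-time-bounded instance acquisition: expand the cluster just
# enough that no queued job's wait exceeds the configured threshold.

def nodes_to_acquire(queue, free_nodes, wait_threshold, startup_delay):
    """queue: list of (nodes_needed, minutes_waited) per queued job.
    Returns how many new nodes to request from the cloud right now."""
    demand = 0
    for nodes, waited in queue:
        # If waiting for the next startup round would push this job past the
        # wait-time bound, it must be covered by expanding capacity now.
        if waited + startup_delay >= wait_threshold:
            demand += nodes
    return max(0, demand - free_nodes)

# A 4-node job has waited 10 min; with an 8-min instance startup delay and a
# 15-min bound it must start now, so 2 nodes are acquired on top of 2 free ones.
print(nodes_to_acquire([(4, 10), (2, 3)], free_nodes=2,
                       wait_threshold=15, startup_delay=8))   # 2
```

Jobs still safely inside the bound stay queued, giving backfilling a chance to place them on residual capacity instead of triggering a purchase.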
Experimental Setup • Workload • 391-day trace from SDSC’s Data Star system • Cloud platform • Amazon's EC2 Cluster Compute Instances (CCIs) • Cluster Compute Eight Extra Large instances (cc2.8xlarge) • 16 cores (2 × eight-core Intel Xeon E5-2670) • 60.5 GB memory • 4 × 850 GB instance storage
SEC vs. On-demand Model • Individual • NoWait • SEC-On-Demand • SEC-Hybrid • Trace: SDSC DS • [Chart annotations: 13.3%, 61.0%]
SEC vs. Local Cluster • Traditional local cluster • SEC-Hybrid • Trace: SDSC DS
Offline Reserved Instance Configuration • Offline configuration problem • Input • Utilization matrix Un×m (from a given cluster capacity trace) • Pricing classes {C0, C1, C2, …, Ch} • Solution • Purchased instance matrix Rn×m, where Ri,k ≥ 0 • Optimization • Minimize the total rental cost • A hard problem!
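One way to write the offline objective, under assumptions not spelled out on the slide (each pricing class Ck carries an upfront fee fk and hourly rate pk; Ui,j is slot i's busy hours in interval j; Ri,k selects class Ck for slot i):

```latex
\min_{R}\; \sum_{i=1}^{n} \sum_{k=0}^{h} R_{i,k} \left( f_k + p_k \sum_{j=1}^{m} U_{i,j} \right)
\qquad \text{s.t.} \quad \sum_{k=0}^{h} R_{i,k} = 1, \quad R_{i,k} \ge 0
```

The coupling between the one-time fees fk and the per-interval usage is what makes the exact optimization hard and motivates the greedy and competitive algorithms that follow.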
Offline ForwardGreedy Algorithm • Choose a larger time interval, e.g., a week, to reduce computation granularity • Runs at the beginning of each time interval • Steps: 1) Calculate every instance's utilization level from the given future demands 2) Identify the first economical class for each instance 3) Summarize the provisioning plan 4) Compare the provisioning plan with the current inventory and decide the purchase amount 5) Adjust the active reserved instances
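The "first economical class" step can be sketched as below. The class names, prices, and terms are illustrative placeholders; the point is only the selection rule, which minimizes the amortized weekly cost at a slot's projected utilization:

```python
# Minimal sketch of the class-selection step in ForwardGreedy: for each
# instance slot, pick the cheapest pricing class given its projected busy
# hours per week. (Illustrative classes; not actual EC2 offerings.)

# Per class: (upfront_fee, hourly_rate, term_weeks). "on-demand" has no term.
CLASSES = {
    "on-demand": (0.0, 2.40, 1),
    "1y-light":  (1000.0, 1.00, 52),
    "3y-heavy":  (4000.0, 0.30, 156),
}

def first_economical_class(busy_hours_per_week):
    """Return the class minimizing amortized weekly cost at this utilization."""
    def weekly_cost(cls):
        upfront, hourly, term = CLASSES[cls]
        return upfront / term + hourly * busy_hours_per_week
    return min(CLASSES, key=weekly_cost)

# A lightly used slot stays on-demand; a busy one justifies a reservation.
print(first_economical_class(5))     # few busy hours a week
print(first_economical_class(150))   # nearly saturated week
```

Steps 3-5 then aggregate these per-slot choices into a plan and diff it against the reservations already in inventory.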
Offline Optimal-Competitive Algorithm • Transform the original pricing classes {Ck} into new classes {Ck′} such that TotalCost(Ck) ≥ TotalCost(Ck′)
Online Reserved Instance Configuration • Use weekly time intervals • Reduces computation complexity • Reduces short-term variance • Less impact on long-term reservation decisions • Evolution model • Assumes a quadratic polynomial model
Long-Term Demand Prediction • Classical Exponential Smoothing (ES) method • Relatively simple • Quite robust against non-stationary noise • Widely used • Our prediction method • Extends Holt's double-parameter ES method • Automatically adjusts the smoothing factors
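For reference, classical Holt's double-parameter smoothing, which the prediction method extends, looks like this. The smoothing factors here are fixed constants; the paper's extension adjusts them automatically, which this sketch does not attempt:

```python
# Holt's double-parameter exponential smoothing: one factor (alpha) smooths
# the level, the other (beta) smooths the trend; the forecast extrapolates
# level + horizon * trend.

def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Return the horizon-step-ahead forecast after smoothing the series."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

demand = [10, 12, 13, 15, 16, 18]       # e.g., weekly node-hours, growing
print(round(holt_forecast(demand, horizon=1), 2))   # 19.7
```

Capturing the trend term is what lets the forecast track steadily growing long-term demand rather than lagging behind it.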
Verifying Workloads • Validation workloads
SNS-based Synthetic Workloads • Synthetic workload generation • [Diagram: SNS search traffic → SNS active users → resource demand; combined with HPC trace slices → synthetic workload]
Reserved Instance Configuration Analysis • HPC trace
Reserved Instance Configuration Analysis • Synthetic workloads using SNS trace
Overhead Analysis with SEC Prototype • Overhead for data protection with instance reuse • Reformatting the EC2 ephemeral 4 × 845 GB disks • 3.4 seconds • Configuration overhead when requesting new instances • Configuring host names, the hosts file, the file system, etc. • About 8.0 seconds • Configuration overhead when releasing instances • About 5.0 seconds
Conclusion • SEC: a new execution model for HPC • Organization-owned, dynamic, cloud-based clusters • Reduced costs through workload aggregation • Better responsiveness through instance reuse • Higher utilization by efficiently using residual resources • SEC can potentially become a viable alternative to owning and managing physical clusters
Related Work
[1] Parallel Workloads Archive. http://www.cs.huji.ac.il/labs/parallel/workload/, 2012.
[2] SLURM: A Highly Scalable Resource Manager. https://computing.llnl.gov/linux/slurm/, 2012.
[3] StarCluster. http://web.mit.edu/star/cluster/, 2012.
[4] Google Trends. http://www.google.com/trends/, 2013.
[5] E. S. Gardner Jr. Exponential smoothing: The state of the art. Journal of Forecasting, 1985.
[6] W. Voorsluys, S. Garg, and R. Buyya. Provisioning spot market cloud resources to create cost-effective virtual clusters. Algorithms and Architectures for Parallel Processing, 2011.
[7] H. Zhao, M. Pan, X. Liu, X. Li, and Y. Fang. Optimal resource rental planning for elastic applications in cloud market. Parallel & Distributed Processing Symposium (IPDPS), IEEE, 2012.
Acknowledgments • We would like to thank • The HPC workloads archive • The anonymous reviewers and our shepherd • Research grants from the Chinese 863 project, NSF grants, a joint faculty appointment between ORNL and NCSU, and a senior visiting scholarship at Tsinghua University
Classical HPC traces • SDSC’s Data Star (SDSC DS) • SDSC's Blue Horizon (SDSC Blue) • SDSC's IBM SP2 (SDSC SP2) • Cornell Theory Center IBM SP2 (CTC SP2) • High Performance Computing Center North (HPC2N) • Sandia Ross cluster (Sandia Ross) • [Chart: variance in node-hours per active user]
Synthetic workloads • [Chart: SNS search trace from Google Trends]
Cost-responsiveness analysis • [Table: local cluster expense items]
Impact of scheduling parameters • [Chart: average wait time vs. wait-time threshold, per expanding strategy]
Impact of scheduling parameters • [Chart: average charge rate vs. wait-time threshold, per expanding strategy]
Overhead Analysis with SEC Prototype (detail) • Overhead for data protection with instance reuse • Reformatting the EC2 ephemeral 4 × 845 GB disks • 3.4 seconds • Configuration overhead when requesting new instances • Configuring host names, the hosts file, and the file system • Setting up user accounts and adding nodes to the SLURM partition • Configuration overhead when releasing instances • About 5.0 seconds