Proactive Prediction Models for Web Application Resource Provisioning in the Cloud _______________________________ Samuel A. Ajila & Bankole A. Akindele
Presentation Outline • Introduction to Problem Area • Motivation • Goals and Scope • Contributions • Related work • Machine learning algorithms • Implementation setup • Evaluation metrics • Selected results • Conclusion
Introduction • Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services (SaaS, PaaS and IaaS) • Challenges include data security threats, performance unpredictability and prompt (quick) resource scaling • Accurate virtual machine (VM) resource provisioning must avoid both under- and over-provisioning • Current techniques include control theory, constraint programming and machine learning
Motivation • VM instantiation (scaling) takes time, ranging from 5 to 12 minutes • Challenges that can result from the instantiation duration: • Possibility of Service Level Agreement (SLA) violations - Cloud providers • Poor customer Quality of Experience (QoE) – Cloud client • Reputation loss – Cloud client/providers • Presently, the monitoring metrics made available to clients are limited to CPU, memory and network utilization • The motivation is to "predict resource usage so that cloud providers can make adequate provisioning ahead of time" and to extend the monitoring metrics by including response time and throughput
Goals and Scope • Design and develop a cloud client prediction model for cloud resource provisioning in a multitier web application environment • The model should be capable of forecasting future resource usage to enable timely VM provisioning • To achieve this goal, SVM, NN and LR learning techniques are analysed using the Java implementation of TPC-W (workload) • The scope of this work is limited to IaaS • The prediction model is built around the web server tier; it is possible to extend it to other tiers
Contributions • Design and development of a cloud client prediction model that uses historical data to forecast future resource usage • The evaluation of the resource usage prediction capability of SVM, NN and LR using three benchmark workloads from TPC-W • The extension of the prediction model to include Throughput and Response time, thus providing wider and better scaling decision options for cloud clients • The comparison of the prediction capability of SVM, NN and LR models under random and steady traffic patterns
Related work Table 1: Auto-scaling techniques
Machine learning algorithms Linear Regression • The linear regression model has the form f(X) = β0 + Σ_{j=1}^{p} Xj βj (1) • X = (X1, …, Xp) is the input data • The βj's are the unknown parameters estimated from the training data. Estimation is done using the least squares method. • The coefficients β are picked to minimize the residual (difference between the actual and predicted value) sum of squares RSS(β) = Σ_{i=1}^{N} (yi − f(xi))², where yi is the actual value
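A minimal sketch of least-squares linear regression for resource-usage forecasting is shown below. The scikit-learn library, the feature layout and the placeholder data are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch: ordinary least-squares regression for CPU-utilization
# forecasting (hypothetical features and placeholder data, not the paper's dataset).
import numpy as np
from sklearn.linear_model import LinearRegression

# X: rows of monitored metrics (e.g. request rate, current CPU, memory);
# y: CPU utilization observed at the provisioning horizon.
X = np.random.rand(500, 3)                                   # placeholder historical samples
y = X @ np.array([0.5, 0.3, 0.2]) + 0.05 * np.random.randn(500)

model = LinearRegression()      # least squares estimation of the beta coefficients
model.fit(X, y)

print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted CPU for a new sample:", model.predict(X[:1]))
```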
Machine learning algorithms (cont’d) Neural Network • A two-stage regression or classification model represented by a network diagram • Derived features Zm are created from linear combinations of the inputs X, after which the target Y is modeled as a linear combination of the Zm • Like linear regression, unknown parameters called weights are sought to make the model fit the training data well Figure 1: Single hidden layer, feed-forward neural network
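A minimal sketch of a single-hidden-layer feed-forward network for the same regression task follows; the hidden-layer size, activation and solver are assumed values, not the hyper-parameters used in the experiments.

```python
# Minimal sketch: single-hidden-layer feed-forward neural network regression
# (assumed hyper-parameters and placeholder data).
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(500, 3)                        # placeholder metric samples
y = np.sin(X[:, 0]) + 0.3 * X[:, 1]               # placeholder nonlinear target

nn = MLPRegressor(hidden_layer_sizes=(10,),       # one hidden layer: derived features Zm
                  activation="logistic",          # sigmoid combinations of the inputs X
                  solver="lbfgs",
                  max_iter=2000,
                  random_state=0)
nn.fit(X, y)                                      # learns the weights of both stages
print("in-sample R^2:", nn.score(X, y))
```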
Machine learning algorithms (cont’d) Support Vector Regression • The goal of SVR is to find a function f(x) that has at most ε (the precision by which the function is to be approximated) deviation from the actually obtained targets for all the training data • Mathematically, f(x) = ⟨w, x⟩ + b • Input data is mapped to a higher dimensional feature space via the kernel function, then a linear regression is performed • The goal is to find the optimal weights w and threshold b
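Below is a minimal ε-SVR sketch with an RBF kernel; the kernel choice, C and epsilon values are illustrative assumptions rather than the parameters tuned in the paper.

```python
# Minimal sketch: epsilon-SVR with an RBF kernel (illustrative parameters).
import numpy as np
from sklearn.svm import SVR

X = np.random.rand(500, 3)                   # placeholder metric samples
y = np.sin(3 * X[:, 0]) + 0.2 * X[:, 2]      # placeholder nonlinear target

svr = SVR(kernel="rbf",      # maps inputs to a higher-dimensional feature space
          C=10.0,            # penalty on deviations larger than epsilon
          epsilon=0.05)      # width of the insensitive tube around f(x)
svr.fit(X, y)                # finds the optimal weights w and threshold b

print("support vectors used:", len(svr.support_))
print("prediction for a new sample:", svr.predict(X[:1]))
```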
Architecture Figure 2 Implementation architecture
Implementation setup (cont’d) • The total length of the experiment was about 10 hours • Selected experimental workload mix
Evaluation metrics • Evaluation is done on the 60% training and 40% held-out test datasets • The held-out dataset is used to forecast up to a maximum interval of 12 minutes (the VM instantiation time reported by other authors)
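A minimal sketch of the chronological 60/40 split and error computation is given below; RMSE and MAPE are assumed metrics (the slide does not list the exact metrics used), and the series and naive forecast are placeholders.

```python
# Minimal sketch: chronological 60/40 split and two common error metrics
# (RMSE and MAPE are assumptions; placeholder data and forecast).
import numpy as np
from sklearn.metrics import mean_squared_error

def evaluate(y_true, y_pred):
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return rmse, mape

series = np.random.rand(600) + 0.5            # placeholder utilization series
split = int(0.6 * len(series))                # first 60% for training
train, test = series[:split], series[split:]  # last 40% held out for forecasting

naive_forecast = np.full_like(test, train.mean())   # stand-in for a trained model
print("RMSE, MAPE:", evaluate(test, naive_forecast))
```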
Selected results (cont’d) CPU utilization test performance metric Figure 5 CPU Utilization Actual and Predicted test model results
Selected results (cont’d) Throughput test performance metric Figure 6 Throughput Actual and Predicted test model results
Selected results (cont’d) Response time test performance metric Figure 7 Response time Actual and Predicted test model results
Conclusion • SVR displayed superior prediction accuracy over both LR and NN on a typically nonlinear workload that is not defined a priori: • In the CPU utilization prediction model, SVR outperformed LR and NN by 58% and 120% respectively • For the Throughput prediction model, SVR again outperformed LR and NN by 12% and 76% respectively; and finally, • The Response time prediction model saw SVR outperforming LR and NN by 26% and 80% respectively. • Based on these experimental results, SVR may be accepted as the best prediction model for a nonlinear system that is not defined a priori
Future work • SVR and other machine learning algorithms are good for forecasting; however: • training and retraining remain a challenge • parameter selection is still empirical • Combining SVR with other prediction techniques may mitigate these challenges • Other future directions include: • Inclusion of the database tier for a more robust scaling decision • Resource prediction on other, non-web application workloads
Questions Thank You