
Intelligent Resource Prediction for OpenFlow-based Datacenter Testbed


Presentation Transcript


  1. Intelligent Resource Prediction for OpenFlow-based Datacenter Testbed. Advisor: 王國禎; Student: 洪維藩; Institute of Computer Science and Engineering, National Chiao Tung University; Mobile Computing and Broadband Networking Laboratory

  2. Introduction • What is resource prediction: • Predict the future resources required under the conditions of the Service Level Agreement (SLA) • Resource prediction objectives: • Power saving • Efficient allocation of resources • Immediate provisioning of services

  3. Related work • [1] presents neural network and regression methods for predicting future workload on Grid or Cloud platforms (a minimal illustrative sketch follows).
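For illustration only (this is our own minimal sketch, not the model from [1]): fit a linear trend to recent CPU-load samples with ordinary regression and extrapolate one step ahead.

    import numpy as np

    def predict_next_load(history):
        """Fit a linear trend to past CPU-load samples and extrapolate one step ahead."""
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, np.asarray(history, dtype=float), 1)
        return slope * len(history) + intercept

    print(predict_next_load([40, 45, 52, 58, 63]))  # roughly 69 (% CPU)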

  4. Related work (cont.) • [2] proposes a fast analytical predictor and an adaptive machine-learning-based predictor, and also shows how a deadline scheduler could use these predictions to help providers make the most of their resources. • Prediction: • Tref is a weighted reference of the job's execution time • If Deadline > Now, %CPU is a positive number; the deadline can potentially be met if %CPU ≤ 100 (see the reconstruction below)
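A hedged reconstruction of the relation these bullets describe (our reading of the garbled slide text; the exact formula in [2] may differ):

    % Required CPU share from the reference execution time and the time left to the deadline.
    \[
      \%\mathrm{CPU} \;=\; \frac{T_{\mathrm{ref}}}{\mathrm{Deadline} - \mathrm{Now}} \times 100
    \]
    % If Deadline > Now, the right-hand side is positive; the deadline can
    % potentially be met when %CPU <= 100.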

  5. Related work (cont.) • [2] and the related literature predict the server's CPU and memory usage; this approach leads to the following two problems: • We only know that the server is heavily loaded • The validity of the historical data is questionable

  6. Related work (cont.)

  7. Related work (cont.) • The main contributions of [3] are: • A dynamic virtual machine (VM) provisioning manager capable of balancing application SLA compliance with energy consumption • A dynamic VM placement manager that consolidates VMs onto the minimum number of physical servers • A two-level resource management middleware framework

  8. Proposed method

  9. Proposed method • We propose a method to predict the total amount of resources an application requires • When the application is first run (step 1): • Adjust the number of virtual machines according to the CPU and memory usage during execution • Accumulate the CPU, memory, GPU, and hard-disk I/O utilization rates together with time information, and feed them as training input to the neural network (see the sketch after this list)
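A minimal sketch of step 1 under assumed data shapes (illustrative numbers and column layout, not the authors' implementation): accumulate per-interval utilization samples plus the hour of day, then train a small neural network to map the current sample to the next interval's utilizations.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Each row: [hour_of_day, cpu%, mem%, gpu%, disk_io%] for one monitoring interval.
    samples = np.array([
        [9, 35, 40, 10, 20],
        [10, 50, 55, 15, 25],
        [11, 65, 60, 20, 30],
        [12, 70, 68, 25, 35],
        [13, 60, 62, 22, 28],
    ], dtype=float)

    # Train the network to predict the next interval's utilizations from the current sample.
    X, y = samples[:-1], samples[1:, 1:]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X, y)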

  10. Proposed method (cont.) • When the neural network training is completed (step 2): • The neural network predicts the future resource requirements (CPU, memory, GPU, and hard-disk I/O utilization rates, along with time information), and the appropriate virtual machines are allocated to supply the application services (see the sketch below)
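A sketch of step 2, continuing the step-1 sketch above (it reuses the trained model and the same column layout; the input values are illustrative):

    # Predict the next interval's CPU/memory/GPU/disk-I/O utilizations from the
    # current measurements and hour of day; these drive the VM allocation below.
    next_input = [[14, 60, 62, 22, 28]]          # [hour, cpu%, mem%, gpu%, disk_io%]
    cpu, mem, gpu, io = model.predict(next_input)[0]
    print(f"predicted next interval: cpu={cpu:.0f}% mem={mem:.0f}% gpu={gpu:.0f}% io={io:.0f}%")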

  11. Artificial Neural Network

  12. Proposed method (cont.) • Reason for including time information in the neural network's training input: • In the Internet environment, the number of users varies at different times (for example, online game players are mostly active during non-working hours) • Using time information as a reference when training the neural network allows it to predict more accurately

  13. Resource allocation • Taking memory requirements as an example: • Serving the application requires 2 GB of memory • Provide two virtual machines with 1 GB of memory each (see the sketch below)
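A sketch of this memory example (the 1 GB per-VM size is the flavor assumed on the slide):

    import math

    required_mem_gb = 2.0            # memory the application service requires
    vm_mem_gb = 1.0                  # memory provided by each virtual machine
    num_vms = math.ceil(required_mem_gb / vm_mem_gb)
    print(num_vms)                   # -> 2 VMs with 1 GB each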

  14. OpenFlow Mininet Simulator • Mininet creates scalable (up to hundreds of nodes, depending on your configuration) software-defined (e.g., OpenFlow) networks on a single PC by using Linux processes in network namespaces.
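A minimal Mininet usage sketch (standard Mininet Python API; it assumes Mininet is installed and is run with root privileges):

    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo

    # One OpenFlow switch with four hosts, emulated as processes in network namespaces.
    net = Mininet(topo=SingleSwitchTopo(k=4))
    net.start()
    net.pingAll()        # check connectivity between all host pairs
    net.stop()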

  15. References • [1] Miskhat, S. F., et al., “Neural network and regression based processor load prediction for efficient scaling of grid and cloud resources,” in Proc. 14th International Conference on Computer and Information Technology (ICCIT), Dec. 2011, pp. 333-338. • [2] Reig, et al., “Prediction of job resource requirements for deadline schedulers to manage high-level SLAs on the cloud,” in Proc. 9th IEEE International Symposium on Network Computing and Applications (NCA), July 2010, pp. 162-167.

  16. References • [3] Tran, et al., “Performance and power management for cloud infrastructures,” in Proc. 3rd IEEE International Conference on Cloud Computing (CLOUD), July 2010, pp. 329-336.

  17. Thank you!
