
Thermal Aware Workload Scheduling with Backfilling for Green Data Centers



Presentation Transcript


  1. Thermal Aware Workload Scheduling with Backfilling for Green Data Centers Lizhe Wang, Gregor von Laszewski, Jai Dayal, Thomas R. Furlani (RIT, IU, UB)

  2. Outline • Background and related work • Models • Research problem definition • Scheduling algorithm • Performance study • Conclusion

  3. Context • Cyberaide: a project that aims to make advanced cyberinfrastructure easier to use • GreenIT & Cyberaide: how do we use advanced cyberinfrastructure efficiently? • FutureGrid: a newly NSF-funded project to provide a testbed that integrates dynamic provisioning of resources (Geoffrey C. Fox is PI) • GPGPUs: application of special-purpose hardware as part of the cyberinfrastructure

  4. FutureGrid • The goal of FutureGrid is to support the research that will invent the future of distributed, grid, and cloud computing. • FutureGrid will build a robustly managed simulation environment, or testbed, to support the development and early scientific use of new technologies at all levels of the software stack: from networking to middleware to scientific applications. • The environment will mimic TeraGrid and/or general parallel and distributed systems. • This testbed will enable dramatic advances in science and engineering through collaborative evolution of science applications and related software.

  5. Other Participant Sites • University of Virginia (UV) • Technical University Dresden / GWT-TUD GmbH, Germany • University of Tennessee – Knoxville (UTK)

  6. FutureGrid Hardware

  7. FutureGrid Partners • Indiana University • Purdue University • San Diego Supercomputer Center at University of California San Diego • University of Chicago/Argonne National Labs • University of Florida • University of Southern California Information Sciences Institute • University of Tennessee Knoxville • University of Texas at Austin/Texas Advanced Computing Center • University of Virginia • Center for Information Services and GWT-TUD from Technische Universität Dresden

  8. Green computing • The study and practice of using computing resources efficiently, so that their impact on the environment is as small as possible • Hazardous materials are used as little as possible • Computing resources are used efficiently in terms of energy, and recyclability is promoted

  9. Cyberaide Project • A middleware for Clusters, Grids and Clouds • A collaboration between IU, RIT, KIT, … • Project led by Dr. Gregor von Laszewski

  10. Objective • Towards next-generation cyberinfrastructure: middleware for data centers, grids, and clouds • Respect for the environment: reduce the temperatures of computing resources in a data center, thereby reducing cooling cost and improving system reliability • Methodology: thermal-aware workload distribution

  11. Model • Data center: a set of nodes; each node has a location ⟨x, y, z⟩, an ambient temperature t_a, and a temperature Temp(t) over time • Thermal map: TherMap = Temp(⟨x,y,z⟩, t), the temperature at location ⟨x,y,z⟩ at time t • Workload: Job = {job_j}, where job_j = (p, t_arrive, t_start, t_req, Δtemp(t)): required processors, arrival time, start time, requested run time, and task-temperature profile
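
A minimal sketch of these model entities in Python; the field names (position, ambient, and so on) and the use of dataclasses are my own additions, since the slide only lists the tuples:

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Node:
    position: Tuple[int, int, int]        # <x, y, z> location in the data center
    ambient: float                        # t_a, ambient temperature at the node
    temp: Callable[[float], float]        # Temp(t), node temperature over time

@dataclass
class Job:
    processors: int                       # p, number of processors required
    t_arrive: float                       # arrival time
    t_start: float                        # start time assigned by the scheduler
    t_req: float                          # requested (estimated) run time
    delta_temp: Callable[[float], float]  # Δtemp(t), task-temperature profile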

  12. Thermal model • Each node node_i at location ⟨x,y,z⟩ is modeled by a lumped RC-thermal model with power consumption P, thermal resistance R, and thermal capacitance C • Its ambient temperature comes from the thermal map, TherMap = Temp(node_i.⟨x,y,z⟩, t) • Starting from the initial temperature Node_i.Temp(0), the node temperature decays exponentially toward the steady-state value P·R + Temp(node_i.⟨x,y,z⟩, t): Node_i.Temp(t) = P·R + Temp(node_i.⟨x,y,z⟩, t) + (Node_i.Temp(0) − P·R − Temp(node_i.⟨x,y,z⟩, t))·e^(−t/(R·C)) • The online task-temperature combines this model with the task-temperature profile of the task running on node_i
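
A small sketch of this model in Python, assuming the standard closed-form solution of the lumped RC circuit above (parameter names are mine):

import math

def rc_temperature(t, power, r_thermal, c_thermal, temp0, ambient):
    """Node temperature at time t under the lumped RC-thermal model:
    decays exponentially from temp0 toward the steady state P*R + ambient
    with time constant R*C."""
    steady = power * r_thermal + ambient
    return steady + (temp0 - steady) * math.exp(-t / (r_thermal * c_thermal))

# Example (illustrative numbers only): a 200 W task, R = 0.12 K/W,
# C = 340 J/K, a node starting at 40 C in a 25 C ambient, after 60 s.
print(rc_temperature(60.0, 200.0, 0.12, 340.0, 40.0, 25.0))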

  13. Research problem definition • Given a data center, a workload, and the maximum permitted temperature of the data center: • Minimize the response time T_response • Minimize the data center temperature

  14.–18. Concept framework (one diagram, built up across slides 14–18) [Figure] The data center model and the workload model are inputs to the TASA-B scheduler, which outputs a schedule (workload placement) and drives the cooling-system control. A profiling tool supplies task-temperature profiles; together with the RC-thermal model, these feed an online task-temperature calculation. A monitoring service and a CFD model provide the information used to calculate the thermal map, which in turn informs scheduling.
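
As a rough Python sketch of the "calculate thermal map" step, reusing the rc_temperature function from slide 12 (the node attributes power, r_thermal, c_thermal, and temp0 are assumed here; in the paper's framework the map is also informed by a monitoring service and a CFD model):

def refresh_thermal_map(nodes, elapsed_s):
    """One periodic refresh of the thermal map: predict each node's
    current temperature from its RC-model parameters."""
    return {
        node.position: rc_temperature(elapsed_s, node.power, node.r_thermal,
                                      node.c_thermal, node.temp0, node.ambient)
        for node in nodes
    }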

  19. Scheduling framework [Figure] Submitted jobs enter a job queue; TASA-B schedules jobs from the queue onto the racks of the data center, and data center information is updated periodically.

  20. Task scheduling algorithm with backfilling (TASA-B) • Sort all jobs in decreasing order of task-temperature profile • Sort all resources in increasing order of predicted temperature • Allocate hot jobs to cool resources • Predict resource temperatures from the online task-temperature • Backfill jobs where possible (see the sketch below)
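
A simplified Python sketch of these steps (not the paper's exact algorithm: predicted_temp and peak_temp are assumed helper functions embodying steps 2 and 4, and t_max is the maximum permitted temperature from slide 13):

def tasa_b(jobs, nodes, t_max, predicted_temp, peak_temp):
    """Hot jobs first, coolest nodes first; jobs that do not fit are
    left for the backfilling pass (slides 21-22)."""
    schedule, pending = [], []
    # Step 1: sort jobs in decreasing order of task-temperature profile.
    for job in sorted(jobs, key=peak_temp, reverse=True):
        # Step 2: sort resources in increasing order of predicted temperature.
        nodes.sort(key=predicted_temp)
        # Step 3: allocate the hot job to the coolest nodes that would
        # stay below the temperature bound while running it.
        cool = [n for n in nodes
                if predicted_temp(n) + peak_temp(job) <= t_max]
        if len(cool) >= job.processors:
            schedule.append((job, cool[:job.processors]))
        else:
            pending.append(job)  # Step 5: candidate for backfilling
    return schedule, pending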

  21. Backfilling: time dimension [Figure: node availability timeline] For each node node_k, the backfilling hole is the idle interval [node_k.t_bfsta, node_k.t_bfend] between the current time t0 and the time the busiest nodes (node_max1, node_max2) become available; a backfilled job must fit entirely inside this hole.

  22. Backfilling: temperature dimension [Figure: node temperature profile] Each hole also has a temperature budget: a backfilled job on node_k must keep the node's temperature between node_k.Temp_bfsta (start temperature for backfilling) and node_k.Temp_bfend (end temperature for backfilling), and below the bound Temp_bfmax.
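
Continuing the sketch, a hedged version of the combined check the two slides describe; the attribute names (t_bfsta, t_bfend, temp_bfend) follow the slide labels, and predicted_end_temp is an assumed helper:

def can_backfill(job, node_k, now, temp_bf_max, predicted_end_temp):
    """True if job fits node_k's backfilling hole in both dimensions."""
    # Time: the job must start and finish inside [t_bfsta, t_bfend].
    fits_time = (now >= node_k.t_bfsta and
                 now + job.t_req <= node_k.t_bfend)
    # Temperature: running the job must not push node_k past either the
    # hole's end temperature or the global bound Temp_bfmax.
    fits_temp = predicted_end_temp(node_k, job) <= min(temp_bf_max,
                                                       node_k.temp_bfend)
    return fits_time and fits_temp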

  23. Simulation • Data center: the Center for Computational Research at UB, a Dell x86_64 Linux cluster of 1056 nodes (13 Tflop/s) • Workload: 22,385 jobs recorded from 20 Feb 2009 to 22 Mar 2009

  24. Simulation result

  25. Simulation result

  26. Our work on green data center computing • Power-aware virtual machine scheduling (Cluster'09) • Power-aware parallel task scheduling (submitted) • TASA (I-SPAN'09) • TASA-B (IPCCC'09) • ANN-based temperature prediction and task scheduling (submitted)

  27. Final remarks • Green computing • Thermal-aware data center computing • TASA-B • Justified by a simulation study
