
Enhancing Both Fairness and Performance Using Rate-Aware Dynamic Storage Cache Partition



  1. Enhancing Both Fairness and Performance Using Rate-Aware Dynamic Storage Cache Partition Yong Li, Dan Feng and Zhan Shi, School of Computer, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology. DISCS-2013 – November 18, 2013

  2. TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions

  3. Background • Data centers are becoming increasingly consolidated • What consolidation gives us • Reduces the cost of storage management • Avoids low storage utilization • Achieves better data sharing • The effect of consolidation • Concurrently executing applications compete for shared resources, such as the cache

  4. Motivation Example • Two applications with diverse access rates • App A: access rate = 1 block/s • App B: access rate = 4 blocks/s • The cache size is 6 blocks • Using the LRU replacement scheme

  5. Running Alone • Hit ratio of application A: 25%

  6. Running Alone • Hit ratio of application A: 25% • Hit ratio of application B: 20%

  7. Running Concurrently • Hit ratio of application A: 0% • Hit ratio of application B: 20%
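The starvation of the slower application can be reproduced with a small LRU simulation. The access patterns below are illustrative assumptions (the slides do not give the exact traces), so the hit ratios differ from the numbers above, but the qualitative result is the same: the slow application loses all of its hits once it shares the cache with a faster one.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU block cache; access() returns True on a hit."""
    def __init__(self, size):
        self.size = size
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return True
        if len(self.blocks) >= self.size:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = True
        return False

def run_alone(ticks=1000):
    """App A alone: 1 access/tick over a 5-block working set."""
    cache = LRUCache(6)
    hits = sum(cache.access(("A", t % 5)) for t in range(ticks))
    return hits / ticks

def run_shared(ticks=1000):
    """App A issues 1 access/tick; app B issues 4 accesses/tick."""
    cache = LRUCache(6)
    a_hits = b_hits = 0
    for t in range(ticks):
        a_hits += cache.access(("A", t % 5))   # A cycles over 5 blocks
        for i in range(4):
            b_hits += cache.access(("B", i))   # B cycles over 4 blocks
    return a_hits / ticks, b_hits / (4 * ticks)

print(run_alone())    # 0.995: A's 5 blocks all fit in the 6-block cache
print(run_shared())   # (0.0, 0.999): B's higher rate evicts every A block before reuse
```

Under sharing, B's four accesses per tick stretch A's reuse distance past the 6-block capacity, so A never hits again after warm-up.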

  8. Motivating Experiment • IOzone: 11.9%~11.6% • TPC-C: 31.1%~20.1%

  9. TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions

  10. Overview of the Mechanism • Partition-based scheme • Explicitly divides the cache into several partitions, one per application • Three modules in the scheduler • Utility predictor • Candidate finder • Segment dispatcher
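A minimal sketch of the partition idea (not the paper's implementation): each application is confined to its own LRU partition, so a fast application can no longer evict a slow one's blocks. The partition sizes and workloads below are arbitrary assumptions for illustration.

```python
from collections import OrderedDict

class PartitionedCache:
    """Per-application LRU partitions; sizes would be set by the scheduler."""
    def __init__(self, sizes):
        self.sizes = dict(sizes)                       # app -> partition size
        self.parts = {app: OrderedDict() for app in sizes}

    def access(self, app, block):
        part = self.parts[app]
        if block in part:
            part.move_to_end(block)
            return True
        if len(part) >= self.sizes[app]:
            part.popitem(last=False)   # eviction stays within this app's partition
        part[block] = True
        return False

# A keeps its 4-block working set even though B runs 4x faster.
cache = PartitionedCache({"A": 4, "B": 2})
a_hits = 0
for t in range(1000):
    a_hits += cache.access("A", t % 4)
    for i in range(4):
        cache.access("B", i)
print(a_hits / 1000)   # 0.996: A is isolated from B's traffic
```

The scheduler's three modules then decide how to size these partitions: the utility predictor estimates each partition's gain or loss per block, the candidate finder picks partitions to shrink, and the segment dispatcher moves the blocks.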

  11. Analysis of Utility • Utility on fairness • Use the maximum-slowdown metric to evaluate fairness • Utility on performance • Refers to the change in overall performance from adding or removing a single cache block
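As a sketch of the fairness metric (notation assumed, since the slides only name it): take each application's slowdown as its shared-cache latency divided by its run-alone latency, and fairness as the maximum slowdown across applications. The latency numbers below are hypothetical.

```python
def slowdown(t_shared, t_alone):
    """Per-application slowdown relative to running alone."""
    return t_shared / t_alone

def max_slowdown(apps):
    """Fairness metric: the worst slowdown across all applications.
    `apps` maps name -> (t_shared, t_alone)."""
    return max(slowdown(ts, ta) for ts, ta in apps.values())

# Hypothetical average latencies (ms) for two consolidated applications.
apps = {"A": (40.0, 10.0), "B": (12.0, 10.0)}
print(max_slowdown(apps))   # 4.0 -- A bears almost all of the consolidation cost
```

A perfectly fair allocation drives the maximum slowdown toward the average slowdown; a rate-oblivious shared cache lets it balloon for the slower application.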

  12. Victim Partition Selection • How to select the victim partition • The 1st principle: the allocation must satisfy the fairness goal • We specify the fairness goal by setting a variable SDU • We want the maximum slowdown to stay below this value • The 2nd principle: the performance degradation must be acceptable • We set a lower bound for ΔP, denoted ΔPL • If the performance degradation exceeds ΔPL, the allocation is considered excessive and should be avoided
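The two principles above can be sketched as a filter over candidate partitions (a hypothetical rendering; the predicted-slowdown and predicted-ΔP inputs stand in for the utility predictor's output, and SDU and ΔPL are the scheduler's thresholds):

```python
def select_victim(partitions, sdu, delta_p_l):
    """Pick a partition to shrink by one block, or None.
    `partitions` maps name -> (slowdown_after, delta_p), where
    slowdown_after is the predicted maximum slowdown if this partition
    loses a block, and delta_p is the predicted change in overall
    performance (negative = degradation)."""
    candidates = [
        name for name, (slowdown_after, delta_p) in partitions.items()
        if slowdown_after < sdu     # principle 1: fairness goal still met
        and delta_p >= delta_p_l    # principle 2: degradation within the bound
    ]
    if not candidates:
        return None                 # no shrink is acceptable right now
    # Among acceptable victims, shrink the one that hurts performance least.
    return max(candidates, key=lambda name: partitions[name][1])

parts = {"A": (1.8, -0.01), "B": (1.2, -0.05), "C": (2.5, -0.02)}
print(select_victim(parts, sdu=2.0, delta_p_l=-0.03))   # "A"
```

Here B is rejected because shrinking it degrades performance past ΔPL, and C because its predicted maximum slowdown would violate the SDU fairness goal.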

  13. TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions

  14. Experimental Evaluation • Enhanced the DULO cache simulator • Used DULO together with the DiskSim simulator • Table 1: the parameters of the DiskSim simulator

  15. Workloads • We use several benchmarks • Table 2: the workloads

  16. Fairness

  17. Performance

  18. Impact of mixed access rate

  19. Impact of changing access rate

  20. Impact of changing cache size

  21. Impact of varying cache size

  22. TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions

  23. Conclusion • This paper focuses on solving unfair cache allocation among multiple heterogeneous applications • We propose a novel adaptive cache-management algorithm that aims to guarantee fair allocation of cache space among heterogeneous applications while maximizing overall performance • We demonstrate its performance and fairness in a series of wide-ranging experiments

  24. Thank you! li.yong.xyz@gmail.com, {dfeng, shi}@hust.edu.cn F309, Wuhan National Lab for Optoelectronics, Wuhan, China, 430074
