Enhancing Both Fairness and Performance Using Rate-Aware Dynamic Storage Cache Partition Yong Li, Dan Feng and Zhan Shi School of Computer Wuhan National Laboratory for Optoelectronics Huazhong University of Science and Technology DISCS-2013 – November 18, 2013
TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions
Background • Data centers are becoming increasingly consolidated • What consolidation gives us • Reduces the cost of storage management • Avoids low storage utilization • Achieves better data sharing • The effect of consolidation • Concurrently executing applications compete for shared resources, such as the cache
Motivation Example • Two applications with diverse access rates • App A: access rate = 1 block/s • App B: access rate = 4 blocks/s • The cache size is 6 blocks • Both use the LRU replacement scheme
Running Alone • Hit ratio of application A: 25% • Hit ratio of application B: 20%
Running Concurrently • Hit ratio of application A: 0% • Hit ratio of application B: 20%
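The starvation above can be reproduced with a small LRU simulation. The traces below are hypothetical, chosen only to illustrate the effect; they are not the paper's exact access patterns, so the hit ratios differ from the slide's numbers:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache that tracks hit statistics."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = 0
        self.accesses = 0

    def access(self, block):
        self.accesses += 1
        if block in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block)      # mark most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        self.blocks[block] = True
        return False

# App A alone: its 3-block working set fits easily in the 6-block cache.
alone = LRUCache(6)
for t in range(120):
    alone.access(("a0", "a1", "a2")[t % 3])

# A and B share the cache; B issues 4 accesses for each of A's, streaming
# over fresh blocks that flush A's working set between its reuses.
shared = LRUCache(6)
a_hits, b_next = 0, 0
for t in range(120):
    a_hits += shared.access(("a0", "a1", "a2")[t % 3])   # A: 1 block/s
    for _ in range(4):                                   # B: 4 blocks/s
        shared.access(f"b{b_next}")
        b_next += 1

print(f"A alone:  {alone.hits}/{alone.accesses} hits")   # 117/120
print(f"A shared: {a_hits}/120 hits")                    # 0/120
```

The slower application loses all cache reuse once the faster stream shares the same LRU list, which is exactly the unfairness the partition scheme targets.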
Motivation experiment • Iozone: 11.9% → 11.6% • TPC-C: 31.1% → 20.1%
TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions
Overview of Mechanism • Partition-based scheme • Explicitly divides the cache into several partitions, one per application • Three modules in the scheduler • Utility predictor • Candidate finder • Segment dispatcher
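A minimal sketch of the partition-based idea (the class and method names are illustrative, not the paper's API): each application gets a private LRU partition, so a fast stream cannot evict a slower application's blocks, and the dispatcher adjusts partition sizes.

```python
from collections import OrderedDict

class PartitionedCache:
    """Sketch: the cache is explicitly split into per-application
    LRU partitions; evictions never cross partition boundaries."""
    def __init__(self, sizes):
        # sizes: {app_id: partition capacity in blocks}
        self.caps = dict(sizes)
        self.parts = {app: OrderedDict() for app in sizes}

    def access(self, app, block):
        part, cap = self.parts[app], self.caps[app]
        if block in part:
            part.move_to_end(block)        # mark most recently used
            return True
        if len(part) >= cap:
            part.popitem(last=False)       # evict within this partition only
        part[block] = True
        return False

    def resize(self, app, new_cap):
        # Dispatcher's job: grow or shrink one partition; shrinking
        # evicts that partition's least recently used blocks.
        self.caps[app] = new_cap
        part = self.parts[app]
        while len(part) > new_cap:
            part.popitem(last=False)

cache = PartitionedCache({"A": 3, "B": 3})
for blk in ("a0", "a1", "a2"):
    cache.access("A", blk)
for i in range(100):
    cache.access("B", f"b{i}")     # B's stream stays inside B's partition
print(cache.access("A", "a0"))     # True: a0 survived B's burst
```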
Analysis of Utility • Utility on Fairness • Uses the maximum slowdown metric to evaluate fairness • Utility on Performance • Refers to the change in overall performance from growing or shrinking a partition by a single cache block
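The maximum-slowdown metric can be sketched as follows. The slides do not give the exact formula, so this uses a common formulation: each application's slowdown is its latency when sharing the cache divided by its latency when running alone, and fairness is judged by the worst case.

```python
def slowdown(alone_latency, shared_latency):
    """Slowdown of one application: how much slower it runs when
    sharing the cache than when running alone (1.0 = no slowdown)."""
    return shared_latency / alone_latency

def max_slowdown(alone, shared):
    """Maximum slowdown across all co-running applications.
    Lower (closer to 1.0) means a fairer allocation."""
    return max(slowdown(a, s) for a, s in zip(alone, shared))

# Hypothetical numbers: app A slows down 3x, app B barely at all,
# so the allocation is judged by A's 3.0x slowdown.
print(max_slowdown(alone=[10.0, 20.0], shared=[30.0, 21.0]))  # 3.0
```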
Victim partition selection • How to select the victim partition • The first principle: the allocation must satisfy the fairness goal • We specify the fairness goal by setting a variable SDU • We want the maximum slowdown to stay below this value • The second principle: the performance degradation must be acceptable • We set a lower bound for ΔP, denoted ΔPL • If the performance degradation exceeds ΔPL, the allocation is considered excessive and should be avoided
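The two principles can be sketched as a filter over candidate victim partitions. Here `predict`, `sdu`, and `dp_lower` are placeholders standing in for the paper's utility predictor and the SDU / ΔPL thresholds; the tie-breaking choice of the least harmful surviving candidate is an assumption.

```python
def select_victim(partitions, predict, sdu, dp_lower):
    """For each candidate victim, predict the resulting maximum slowdown
    and the overall performance change dP (negative = degradation) if a
    block is taken from it; keep only candidates satisfying both
    principles, then pick the one with the smallest performance loss."""
    best, best_dp = None, None
    for victim in partitions:
        max_sd, dp = predict(victim)
        if max_sd > sdu:          # principle 1: fairness goal violated
            continue
        if dp < dp_lower:         # principle 2: degradation is excessive
            continue
        if best_dp is None or dp > best_dp:
            best, best_dp = victim, dp
    return best                   # None if no allocation is acceptable

# Hypothetical predictions: (max slowdown, dP) per candidate partition.
predictions = {"A": (1.2, -0.01), "B": (2.5, -0.005), "C": (1.1, -0.2)}
victim = select_victim(["A", "B", "C"], lambda v: predictions[v],
                       sdu=2.0, dp_lower=-0.05)
print(victim)  # A: B violates fairness, C degrades performance too much
```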
TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions
Experimental Evaluation • Enhanced the DULO cache simulator • Used DULO together with the DiskSim simulator • Table 1: parameters of the DiskSim simulator
Workloads • We use several benchmarks • Table 2: the workloads
TOC • Background & Motivation • Solution • Experimental Evaluation • Conclusions
Conclusion • This paper focuses on solving unfair cache allocation among multiple heterogeneous applications • We propose a novel adaptive cache management algorithm that aims to guarantee fair allocation of cache space among heterogeneous applications while maximizing overall performance • We demonstrate its performance and fairness in a series of wide-ranging experiments
Thank you! li.yong.xyz@gmail.com, {dfeng, shi}@hust.edu.cn F309, Wuhan National Lab for Optoelectronics, Wuhan, China, 430074