FairCloud: Sharing the Network in Cloud Computing Computer Communication Review (2012) Authors: Lucian Popa, Arvind Krishnamurthy, Sylvia Ratnasamy, Ion Stoica Presenter: 段雲鵬
Outline • Introduction • Challenges in sharing networks • Properties for network sharing • Mechanisms • Conclusion
Some concepts • Bisection bandwidth • Each node has a unit weight • Each link has a unit weight • Flow definition • The standard five-tuple in packet headers • Notation • B denotes bandwidth • T denotes traffic • W denotes the weight of a VM
Background • Resources in cloud computing • Network, CPU, memory • Network allocation • More difficult: it depends on the source, the destination, and cross traffic • Tradeoff • Payment proportionality vs. bandwidth guarantees
Introduction • Network allocation • Unknown to users, so predictability is poor • Fairness issues • Allocate per flow, per source-destination pair, per source alone, or per destination alone? • Differences from other resources • Interdependent users • Interdependent resources
Assumptions • Take a per-VM viewpoint • Be agnostic to VM placement and routing algorithms • Operate within a single datacenter • Remain largely orthogonal to work on network topologies that improve bisection bandwidth
Traditional Mechanisms • Per-flow fairness • Unfair when a tenant simply instantiates more flows (see the sketch below) • Per source-destination pair • Unfair when one VM communicates with more VMs • Per source • Unfair to destinations • Asymmetric: can only be fair to sources or to destinations, not both
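The following is a minimal sketch (not from the paper) of why per-flow fairness is manipulable: splitting the same traffic across more flows captures a larger share of a bottleneck link. The tenant names, flow counts, and the 1 Gbps capacity are illustrative assumptions.

```python
# Sketch: per-flow fair sharing of one bottleneck link (illustrative numbers).
# Tenant B grabs a larger share simply by opening more flows.

LINK_CAPACITY_GBPS = 1.0  # assumed capacity, not from the paper

def per_flow_share(flows_per_tenant):
    """Split the link equally among flows, then sum the shares per tenant."""
    total_flows = sum(flows_per_tenant.values())
    per_flow = LINK_CAPACITY_GBPS / total_flows
    return {tenant: n * per_flow for tenant, n in flows_per_tenant.items()}

# Tenant A uses 1 flow; tenant B opens 9 flows between the same VM pair.
print(per_flow_share({"A": 1, "B": 9}))  # {'A': 0.1, 'B': 0.9}
```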
Examples • Per source-destination pair: if there is little traffic on A-F and B-E, then B(A) = B(B) = B(E) = B(F) = 2*B(C) = 2*B(D) = B(G) = B(H) • Per source: B(E) = B(F) = 0.25*B(D); in the opposite direction, B(A) = B(B) = 0.25*B(C) • (Figure of the example topology omitted)
Properties for network sharing (1) • Strategy proofness • A tenant cannot increase its bandwidth by modifying behavior at the application level • Pareto efficiency • If the link between X and Y is bottlenecked, then increasing B(X-Y) must decrease some other allocation such as B(A-B); otherwise congestion would only get worse • (Figure with 1 Mbps and 10 Mbps links omitted)
Properties for network sharing (2) • Non-zero flow allocation • A strictly positive bandwidth allocation is expected between each communicating pair • Independence • When traffic T2 increases, bandwidth B1 should not be affected • Symmetry • If the directions of all flows are switched, the allocation should remain the same • (Figure with links L1 and L2 omitted)
Network weight and users' payment • Weight fidelity (provides incentive) • Strict monotonicity (and monotonicity) • If W(VM) increases, then all of its traffic allocations must increase (or at least not decrease) • Proportionality (see the sketch after this list) • Example: subset P holds 2/3 of the total weight, subset Q holds 1/3, with no communication between P and Q • Guaranteed bandwidth • Admission control • These properties conflict, so a tradeoff is needed
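As an illustration of the proportionality property, the sketch below splits a shared link between the two subsets from the slide in proportion to their aggregate weights; the capacity value is an arbitrary assumption.

```python
# Sketch: proportionality on a shared link (illustrative capacity).
# Subset P holds 2/3 of the total weight, subset Q holds 1/3, and there is
# no P-Q traffic, so each subset should get a proportional share of the link.

LINK_CAPACITY = 9.0  # arbitrary units, assumed for the example

weights = {"P": 2.0, "Q": 1.0}  # 2/3 vs. 1/3 of the total weight
total_weight = sum(weights.values())
allocation = {s: LINK_CAPACITY * w / total_weight for s, w in weights.items()}
print(allocation)  # {'P': 6.0, 'Q': 3.0}
```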
Per Endpoint Sharing (PES) • Can explicitly trade between weight fidelity and guaranteed bandwidth • N(A) denotes the number of VMs that A is communicating with • W(S-D) = f(W(S), W(D)), and W(A-B) = W(B-A) • Weights are normalized (L1 normalization) • Drawback: static method (not discussed further here)
Example • W(A-D) = W(A)/N(A) + W(D)/N(D) = 1/2 + 1/2 = 1 • W(A-C) = W(B-D) = 1/2 + 1/1 = 1.5 • Total weight = 1 + 1.5 + 1.5 = 4 (4 VMs) • So W(A-D) = 1/4 = 0.25 and W(A-C) = W(B-D) = 1.5/4 = 0.375 (a small code sketch follows)
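A small sketch that reproduces the computation above, assuming the communication pattern A-D, A-C, B-D with unit VM weights; the data structures and helper names are my own, not the paper's.

```python
# Sketch: basic PES pair weights, W(A-B) = W(A)/N(A) + W(B)/N(B),
# then L1-normalized by the total pair weight on the link.
from collections import Counter

vm_weight = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}  # unit weights (assumed)
pairs = [("A", "D"), ("A", "C"), ("B", "D")]          # communication pattern

# N(X) = number of VMs that X communicates with
degree = Counter()
for src, dst in pairs:
    degree[src] += 1
    degree[dst] += 1

raw = {(s, d): vm_weight[s] / degree[s] + vm_weight[d] / degree[d]
       for s, d in pairs}
total = sum(raw.values())                 # 1 + 1.5 + 1.5 = 4
shares = {p: w / total for p, w in raw.items()}
print(raw)     # {('A', 'D'): 1.0, ('A', 'C'): 1.5, ('B', 'D'): 1.5}
print(shares)  # {('A', 'D'): 0.25, ('A', 'C'): 0.375, ('B', 'D'): 0.375}
```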
PES • On a given link, a pair's bandwidth can be made proportional to the weight of the closer VM rather than the remote VM • This gives higher guarantees in the worst case • W(A-B) = W(B-A) = α*W(A)/N(A) + β*W(B)/N(B) • α and β can be tuned to trade off bandwidth guarantees against weight fidelity
One-Sided PES (OSPES) • Designed for tree-based topologies • W(A-B) = W(B-A) = α*W(A)/N(A) + β*W(B)/N(B) • On links closer to A, α = 1 and β = 0 • On links closer to B, α = 0 and β = 1
OSPES • Fair sharing of the traffic towards or from the tree root • The allocation on a link depends on its position relative to the root • Provides non-strict monotonicity • Example (see the sketch below): W(A) = W(B); if the access link is 1 Gbps, then each VM is guaranteed 500 Mbps, with W(A-VM1) = 1/1 and W(B-VMi) = 1/10 for i = 2, 3, ..., 11
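Below is a rough sketch of the OSPES weight rule from the stated formula, treating the first endpoint as the one the link is closer to (so α = 1 and β = 0 on that link); the function name is my own, and the example values mirror the slide's access-link scenario.

```python
# Sketch: OSPES pair weight W(A-B) = alpha*W(A)/N(A) + beta*W(B)/N(B).
# Convention here: the first endpoint is the one the link is closer to,
# so alpha = 1 and beta = 0 on that link.

def ospes_weight(w_near, n_near, w_far, n_far, alpha=1.0, beta=0.0):
    return alpha * w_near / n_near + beta * w_far / n_far

# Access link of VM A, which talks to a single VM: W(A-VM1) = 1/1 = 1.0
print(ospes_weight(w_near=1.0, n_near=1, w_far=1.0, n_far=1))   # 1.0
# Access link of VM B, which talks to 10 VMs: each W(B-VMi) = 1/10 = 0.1
print(ospes_weight(w_near=1.0, n_near=10, w_far=1.0, n_far=1))  # 0.1
```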
Max-Min Fairness • The minimum data rate that a flow achieves is maximized • The bottleneck link is fully utilized • Can be applied on top of the per-link weights above (see the sketch below)
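A minimal sketch of max-min fairness via progressive filling on a single bottleneck link; the demands and the capacity are made-up numbers, and weights are ignored for simplicity.

```python
# Sketch: max-min fair allocation on one link via progressive filling.
# Flows whose demand is below the current fair share are satisfied exactly;
# the leftover capacity is split evenly among the remaining flows.

def max_min_fair(capacity, demands):
    rates, remaining, cap = {}, dict(demands), capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            # No flow is limited by its demand: split the rest equally.
            rates.update({f: share for f in remaining})
            return rates
        for f, d in satisfied.items():
            rates[f] = d
            cap -= d
            del remaining[f]
    return rates

print(max_min_fair(10.0, {"f1": 2.0, "f2": 8.0, "f3": 10.0}))
# {'f1': 2.0, 'f2': 4.0, 'f3': 4.0}
```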
Conclusion • Problem: sharing the network within a cloud computing datacenter • There is a tradeoff between payment proportionality and bandwidth guarantees • The paper proposes mechanisms that navigate this tradeoff between conflicting requirements