Presentation Transcript


  1. An Efficient Flow Cache Algorithm with Improved Fairness in Software-Defined Data Center Networks. Bu Sung Lee 1, Renuga Kanagavelu 2 and Khin Mi Mi Aung 2. 1 Nanyang Technological University, Singapore; 2 A-STAR (Agency for Science and Technology), Data Storage Institute, Singapore

  2. Changing scene in DC • Data Center size has grown to a scale we never imagined (http://storageservers.wordpress.com/2013/07/17/facts-and-stats-of-worlds-largest-data-centers/ ). • Google: 900,000 servers across 13 data centers • Amazon: 450,000 servers in 7 locations • Virtualisation • Changing Data Center Network traffic (North-South to East-West) • Traffic types: mice and elephant flows

  3. Constraints • An OpenFlow switch's flow table can hold only up to about 1,500 entries. • TCAM capacity can be increased, but it consumes significant ASIC area, power and cost. • The centralized controller can become overloaded.

  4. Limitations of 3-tier network architecture • Redundant paths are not used (due to STP) => total bandwidth reduction issue • Forwarding table size increases proportionally to the number of servers => scalability issue [Figure: three-tier topology of core, aggregation and top-of-rack switches connecting racks of servers, annotated with example server MAC addresses and forwarding-table entries]

  5. Traffic types

  6. Technology used • Flow cache organised into separate buckets for elephant and mice flows. • Flow type is determined using a threshold of 100 MBytes within 5 seconds. • The VLAN priority code bit (PCB) is used to mark the flow type. • Dynamic index hashing is used for the cache index. • Cache replacement strategy: Least Recently Used (LRU). (A sketch of the classification and LRU buckets follows below.)
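To make the bucket and replacement scheme concrete, here is a minimal Python sketch of a flow cache with separate LRU buckets for mice and elephant flows, using the 100 MBytes / 5 seconds threshold from the slide. The class and field names, the bucket capacity and the flow-key handling are illustrative assumptions, not the authors' implementation.

```python
from collections import OrderedDict
from dataclasses import dataclass

# Threshold from the slide: a flow moving 100 MBytes within 5 seconds is an elephant.
ELEPHANT_BYTES = 100 * 1024 * 1024
WINDOW_SECONDS = 5.0
BUCKET_CAPACITY = 128   # illustrative size; the actual bucket sizes are not given

@dataclass
class FlowStats:
    action: str            # cached forwarding action for this flow
    bytes_in_window: int
    window_start: float

class FlowCache:
    """Flow cache split into separate LRU buckets for mice and elephant flows."""

    def __init__(self, capacity=BUCKET_CAPACITY):
        # OrderedDict insertion order doubles as LRU order.
        self.mice = OrderedDict()
        self.elephants = OrderedDict()
        self.capacity = capacity

    def lookup(self, flow_key, pkt_len, now):
        """On a hit, update the byte count, reclassify, and refresh LRU position."""
        for bucket in (self.mice, self.elephants):
            if flow_key in bucket:
                stats = bucket.pop(flow_key)
                if now - stats.window_start > WINDOW_SECONDS:
                    stats.window_start, stats.bytes_in_window = now, 0
                stats.bytes_in_window += pkt_len
                target = (self.elephants
                          if stats.bytes_in_window >= ELEPHANT_BYTES
                          else self.mice)
                if len(target) >= self.capacity:
                    target.popitem(last=False)      # evict least recently used
                target[flow_key] = stats            # re-insert as most recently used
                return stats.action
        return None                                 # cache miss: ask the controller

    def insert(self, flow_key, action, now):
        """Install a new entry (new flows start as mice), evicting LRU if full."""
        if len(self.mice) >= self.capacity:
            self.mice.popitem(last=False)
        self.mice[flow_key] = FlowStats(action, 0, now)
```

Keeping mice and elephants in separate buckets means a burst of short mice flows cannot evict long-lived elephant entries, which is the fairness point the slides emphasise.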

  7. Experimental set-up

  8. Architecture

  9. Dynamic Index Hashing

  10. Bucket Expansion
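Slides 9 and 10 name dynamic index hashing and bucket expansion without further detail. The sketch below assumes an extendible-hashing-style scheme in which the bucket count doubles and entries are rehashed when a bucket overflows; the class name `DynamicHashIndex` and the sizes are illustrative.

```python
import hashlib

class DynamicHashIndex:
    """Illustrative dynamic hash index: when any bucket exceeds its capacity,
    the number of buckets doubles and entries are rehashed (bucket expansion)."""

    def __init__(self, num_buckets=4, bucket_size=4):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key: bytes) -> int:
        # A SHA-1 digest of the flow key gives a well-spread bucket index.
        digest = hashlib.sha1(key).digest()
        return int.from_bytes(digest[:4], "big") % self.num_buckets

    def insert(self, key: bytes, value):
        bucket = self.buckets[self._index(key)]
        bucket.append((key, value))
        if len(bucket) > self.bucket_size:
            self._expand()

    def get(self, key: bytes):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

    def _expand(self):
        # Bucket expansion: double the bucket count and redistribute all entries.
        items = [item for bucket in self.buckets for item in bucket]
        self.num_buckets *= 2
        self.buckets = [[] for _ in range(self.num_buckets)]
        for key, value in items:
            self.buckets[self._index(key)].append((key, value))
```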

  11. Performance Evaluation: Comparison of Cache Hit Ratio

  12. Performance Evaluation

  13. Performance Evaluation: Lookup Time

  14. Performance Evaluation

  15. Memory Cache Architecture [Block diagram: the packet header is hashed with SHA-1; the digest indexes the cache through a memory controller with a look-aside interface to three 16-bit DDR3 SDRAM devices; supported operations are look-up, update, drop and add entry; entries pass through input and output buffers; each cached entry stores the header hash and action in 64 bits (8 Bytes)]
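The diagram shows headers hashed with SHA-1 and 64-bit header/action entries read and written through the memory controller. A minimal sketch of that lookup path follows; the entry layout (`ENTRY_FMT`), the action codes and the dictionary standing in for the SDRAM controller are assumptions, since the slide does not spell them out.

```python
import hashlib
import struct

# Assumed 8-byte (64-bit) entry layout: 4-byte header-digest tag,
# 2-byte action code, 2 bytes of padding. The real layout is not specified.
ENTRY_FMT = "!IHH"

def header_tag(header: bytes) -> int:
    """SHA-1 the packet header (as in the look-aside interface) and keep 4 bytes."""
    return int.from_bytes(hashlib.sha1(header).digest()[:4], "big")

def add_entry(memory: dict, header: bytes, action_code: int) -> None:
    """Pack a 64-bit entry and write it; 'memory' stands in for the SDRAM controller."""
    tag = header_tag(header)
    memory[tag] = struct.pack(ENTRY_FMT, tag, action_code, 0)

def lookup(memory: dict, header: bytes):
    """Return the cached action code for a header, or None on a cache miss."""
    entry = memory.get(header_tag(header))
    if entry is None:
        return None
    _tag, action_code, _pad = struct.unpack(ENTRY_FMT, entry)
    return action_code
```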

  16. Conclusions • A simple and effective means to address the overload on the controller • Fast lookup • Reduced cache miss ratio with LRU replacement • An NVRAM version of the cache has been developed for plugging into switches.

  17. Future work • DC VM placement strategy • Power aware • Network aware • Resilience • Inter-domain OpenFlow • Software-defined everything
