
Configuring a Load-Balanced Switch in Hardware

Presentation Transcript


  1. Configuring a Load-Balanced Switch in Hardware Srikanth Arekapudi, Shang-Tse (Da) Chuang, Isaac Keslassy, Nick McKeown Stanford University

  2. Outline • Load Balanced Switch • Scalability • Reconfiguration Algorithm • Hardware Implementation

  3. Typical Router Architecture [Figure: N linecards, each with an input and an output at rate R, connected through an N x N switch fabric; a centralized scheduler configures the fabric.]

  4. Load-Balanced Switch [Figure: two back-to-back meshes. Each input at rate R spreads arriving packets (1, 2, 3 in the example) over the load-balancing mesh, whose N links each run at R/N; the forwarding mesh, also with R/N links, carries packets from the intermediate linecards to the outputs.]

  5. Load-Balanced Switch [Figure: the same switch one step later; packets 1, 2, 3 now cross the forwarding mesh to their destination outputs.]

  6. Load-Balanced Switch • 100% throughput for a broad class of traffic • No scheduler needed, so the design is scalable [Figure: the two meshes again; every link runs at rate R/N.]
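Since the original figure is lost, a toy model may help make the two meshes concrete. The sketch below is a minimal illustration under assumed names (`spread`, `voq`) and an assumed size N = 4; it is not the authors' hardware. It shows why no scheduler is needed: stage 1 spreads arrivals round-robin regardless of destination, and stage 2 walks a fixed cyclic connection pattern.

```python
# Minimal sketch of the two-stage load-balanced switch (assumptions noted above).
from collections import deque

N = 4  # switch size, assumed for the example

# Stage-1 pointers: input i sends its next packet to middle linecard
# spread[i], then advances round-robin, ignoring the packet's destination.
spread = [0] * N

# Middle stage: voq[m][o] queues packets at middle linecard m bound for output o.
voq = [[deque() for _ in range(N)] for _ in range(N)]

def stage1_arrive(inp, dst):
    """Load-balancing mesh: spread arrivals evenly over the middle stage."""
    mid = spread[inp]
    voq[mid][dst].append((inp, dst))
    spread[inp] = (mid + 1) % N

def stage2_deliver(t):
    """Forwarding mesh: at slot t, middle linecard m serves output (m + t) % N.
    The pattern is fixed and cyclic, so no central scheduler is involved."""
    delivered = []
    for m in range(N):
        out = (m + t) % N
        if voq[m][out]:
            delivered.append(voq[m][out].popleft())
    return delivered

# Example: every input sends to output 0; stage 1 still spreads the
# load evenly, so no single middle linecard is overloaded.
for inp in range(N):
    stage1_arrive(inp, dst=0)
for t in range(N):
    print(t, stage2_deliver(t))
```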

  7. A Single Combined Mesh: the two meshes can be merged into one mesh of links at 2R/N each, since per linecard N * 2R/N = 2R = R + R. [Figure: four linecards joined by a single full mesh of 2R/N channels.]

  8. A Single Combined Mesh: a linecard's traffic to itself never crosses the mesh, so each linecard needs only N-1 external channels, and (N-1) * 2R/N < R + R. [Figure: the combined mesh with the self-channels removed.]

  9. Scalability, N = 8 [Figure: 8 linecards in a full mesh; every pair is joined by a channel of rate 2R/8.]

  10. When N is Too Large: decompose into groups (or racks) [Figure: the 8 linecards split into two racks of 4. Each linecard still contributes 2R of mesh capacity; each rack aggregates 4R toward the other rack, carried as four channels of 4R/4.]

  11. When N is Too Large: decompose into groups (or racks) [Figure: G groups (racks) of L linecards each. Every linecard contributes 2R, so a group aggregates 2RL of mesh capacity, split evenly into G trunks of 2RL/G, one toward each group.]
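As a quick sanity check on the slide's rate arithmetic, here is a worked example with assumed values for R, G, and L (none of these numbers come from the talk):

```python
# Worked example of the group/rack rate arithmetic (all values assumed).
R = 10e9   # line rate per linecard, bits/s
G = 4      # number of groups (racks)
L = 16     # linecards per group

per_linecard = 2 * R            # each linecard sources R and sinks R
per_group = per_linecard * L    # 2RL of mesh capacity per group
per_trunk = per_group / G       # 2RL/G toward each of the G groups

print(f"group aggregate: {per_group / 1e9:.0f} Gb/s")      # 2RL = 320 Gb/s
print(f"trunk per group pair: {per_trunk / 1e9:.0f} Gb/s") # 2RL/G = 80 Gb/s
```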

  12. When Linecards are Missing: failures, incremental additions, and removals leave groups with different numbers of linecards, so uniform 2RL/G trunks no longer match the traffic. • Solution: replace the mesh with a sum of permutations, recombining 2RL/G channels (2RL/G + 2RL/G + ... = 2RL) to follow the actual population of each group. [Figure: G racks, some partially populated.]

  13. When Linecards Fail [Figure: G racks of L linecards whose 2R channels pass through MEMS optical switches; reconfiguring the MEMS switches routes capacity around failed linecards.]

  14. Questions • Number of MEMS Switches? • TDM Schedule?

  15. Example – 3 Linecards [Figure: 3 linecards, each with an input and an output at rate R, fully interconnected by channels of rate 2R/3.]

  16. Example – 2 Groups [Figure: the 3 linecards placed in two racks: two linecards in group 1 (aggregate 4R) and one in group 2 (aggregate 2R). Required capacities: 8R/3 within group 1, 4R/3 between the groups in each direction, and 2R/3 within group 2.]

  17. Example – 2 Groups [Figure: the same example with the 8R/3 requirement within group 1 carried as two channels of 4R/3, so that all inter-rack capacity comes in fixed-rate channels.]

  18. Number of MEMS Switches • The number of MEMS switches between groups i and j is set by the required inter-group capacity, rounded up to whole channels • Total number of MEMS switches: M ≤ L + G - 1

  19. Questions • Number of MEMS Switches? • TDM Schedule?

  20. TDM Schedule [Figure: two groups, A and B, of two linecards each, joined by 4R trunks. The schedule must satisfy constraints on linecards at each time-slot and constraints on groups at each time-slot.]

  21. Rules for TDM Schedule At each time-slot: • Each transmitting linecard sends one packet • Each receiving linecard receives one packet • (MEMS constraint) Each transmitting group i sends at most one packet to each receiving group j through each MEMS switch connecting them In a schedule of N time-slots: • Each transmitting linecard sends exactly one packet to each receiving linecard (A checker sketch for these rules follows below.)
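These rules are easy to state as code. The sketch below is a hedged illustration: the schedule representation (a list of slots, each mapping transmitting linecard to receiving linecard), the `group` map, and the `mems` table are assumptions made for the example, not the paper's data structures.

```python
# Checker for the TDM-schedule rules above (data model assumed, see lead-in).
from collections import Counter

def check_schedule(schedule, group, mems, n):
    for t, slot in enumerate(schedule):
        # Rules 1-2: every linecard sends one packet and receives one packet,
        # i.e. each slot is a permutation of the n linecards.
        if sorted(slot) != list(range(n)) or sorted(slot.values()) != list(range(n)):
            return f"slot {t}: not a permutation of linecards"
        # MEMS constraint: group i sends at most one packet to group j
        # per MEMS switch connecting them.
        load = Counter((group[tx], group[rx]) for tx, rx in slot.items())
        for (gi, gj), pkts in load.items():
            if pkts > mems[(gi, gj)]:
                return f"slot {t}: groups {gi}->{gj} exceed their MEMS count"
    # N-slot rule: every (tx, rx) linecard pair occurs exactly once.
    pairs = Counter((tx, rx) for slot in schedule for tx, rx in slot.items())
    if any(pairs[(tx, rx)] != 1 for tx in range(n) for rx in range(n)):
        return "not a complete TDM round"
    return "OK"

# Example: 4 linecards in 2 groups, 2 MEMS between every group pair,
# and the cyclic-shift schedule slot t: tx -> (tx + t) % 4.
group = {0: 0, 1: 0, 2: 1, 3: 1}
mems = {(a, b): 2 for a in range(2) for b in range(2)}
sched = [{tx: (tx + t) % 4 for tx in range(4)} for t in range(4)]
print(check_schedule(sched, group, mems, 4))  # -> OK
```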

  22. TDM Schedule [Figure: a slot-by-slot schedule table for transmitting groups A and B, being filled in.]

  23. TDM Schedule [Figure: the schedule table for transmitting groups A and B, filled in further.]

  24. Bad TDM Schedule [Figure: a schedule for transmitting groups A and B that violates the per-slot rules.]

  25. TDM Schedule Algorithm The algorithm constructs three consecutive schedules: • Sending groups to receiving groups (a connection assignment problem) • Sending linecards to receiving groups (a matrix decomposition problem) • Sending linecards to receiving linecards (a matrix decomposition problem)

  26. TDM Schedule [Figure: the resulting schedule.]

  27. Good TDM Schedule [Figure: a schedule for transmitting groups A and B that satisfies all the rules.]

  28. Good TDM Schedule [Figure: the good schedule, continued.]

  29. Connection Assignment Problem [Figure: groups G1, G2, G3 with counts of connections between each pair, shown before scheduling ("Not Scheduled") and after ("Scheduled").]

  30. Connection Assignment Problem [Figure: the G1/G2/G3 assignment after the greedy pass, and after backtracking repairs the dead end.]
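The greedy-plus-backtracking idea can be shown on a toy version of the assignment: match each sending group to a distinct receiving group that still has demand. The sketch below simplifies heavily (one connection per group per slot; the demand matrix is invented for the example) and is not the hardware algorithm, but the backtracking step is exactly what rescues the greedy pass from a dead end.

```python
# Toy connection assignment for one time-slot (simplifications per lead-in).

def assign_one_slot(demand, g):
    """demand[i][j]: remaining connections from group i to group j."""
    match = [None] * g   # match[i] = receiving group chosen for sender i
    used = [False] * g   # receiving groups already taken this slot

    def place(i):
        if i == g:
            return True
        for j in range(g):
            # Greedy choice: first free receiver with remaining demand.
            if not used[j] and demand[i][j] > 0:
                match[i], used[j] = j, True
                if place(i + 1):
                    return True
                match[i], used[j] = None, False   # backtrack and retry
        return False

    return match if place(0) else None

# Example with 3 groups (demand values assumed).
demand = [[2, 0, 1],
          [1, 1, 0],
          [0, 1, 1]]
print(assign_one_slot(demand, 3))  # e.g. [0, 1, 2]
```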

  31. Matrix Decomposition Problem [Figure: a connection matrix written as the sum of permutation matrices, one permutation per time-slot.]

  32. Matrix Decomposition Problem • Exploits the sparsity of the matrices: each 1 entry is stored as a (row, column) pair • Consists of two stages: a greedy algorithm followed by the Slepian-Duguid algorithm • Decomposes all the permutation matrices at once • Operates directly on the (row, column)-pair list structure (A decomposition sketch follows below.)
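To make "sum of permutations" concrete, here is a hedged sketch of the decomposition itself. It substitutes a plain backtracking search for both the greedy stage and the Slepian-Duguid algorithm, and uses dense matrices rather than the (row, column)-pair lists, so it shows what the hardware computes, not how the chip computes it.

```python
# Decompose an integer matrix with equal row and column sums into
# permutation matrices (method simplified, see lead-in).

def extract_permutation(m, n):
    """Find a permutation hitting only positive entries, by backtracking."""
    perm = [None] * n
    used = [False] * n

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if not used[col] and m[row][col] > 0:
                perm[row], used[col] = col, True
                if place(row + 1):
                    return True
                perm[row], used[col] = None, False
        return False

    return perm if place(0) else None

def decompose(m):
    """Return permutations p1, p2, ... whose matrix sum equals m."""
    n = len(m)
    m = [row[:] for row in m]          # work on a copy
    perms = []
    while any(any(row) for row in m):
        perm = extract_permutation(m, n)
        for row, col in enumerate(perm):
            m[row][col] -= 1           # peel this permutation off
        perms.append(perm)
    return perms

# Example: row and column sums are all 2, so two permutations suffice.
matrix = [[1, 1, 0],
          [1, 0, 1],
          [0, 1, 1]]
for p in decompose(matrix):
    print(p)   # each printed permutation is one time-slot's configuration
```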

  33. Synthesis • 40 groups and 640 linecards • 0.13 µm process • Cycle time within 4 ns • Connection assignment problem: 10K gates, 24 Kbit of memory • Matrix decomposition problem: 25K gates, 230 Kbit of memory

  34. Reconfiguration Time

  35. Thank you.
