
A 2.5Tb/s LCS Switch Core. Nick McKeown, Costas Calamvokis, Shang-tse Chuang.



  1. PMC-Sierra: Accelerating The Broadband Revolution. A 2.5Tb/s LCS Switch Core. Nick McKeown, Costas Calamvokis, Shang-tse Chuang.

  2. Outline
  • LCS: Linecard to Switch Protocol
    • What is it, and why use it?
  • Overview of the 2.5Tb/s switch.
  • How to build scalable crossbars.
  • How to build a high-performance, centralized crossbar scheduler.

  3. Next-Generation Carrier Class Switches/Routers [diagram: racks of linecards connected to a central switch core over distances of up to 1000ft]

  4. Benefits of LCS Protocol
  • Large number of ports.
    • Separation enables a large number of ports across multiple racks.
    • Distributes system power.
  • Protection of end-user investment.
    • Future-proof linecards.
    • In-service upgrades: replace the switch or linecards without service interruption.
  • Enables differentiation/intelligence on the linecard.
    • The switch core can be bufferless and lossless; QoS, discard, etc. are performed on the linecard.
  • Redundancy and fault-tolerance.
    • Full redundancy between switches to eliminate downtime.

  5. Main LCS Characteristics
  • Credit-based flow control (see the sketch below).
    • Enables separation.
    • Enables a bufferless switch core.
  • Label-based multicast.
    • Enables scaling to larger switch cores.
  • Protection.
    • CRC protection.
    • Tolerant to loss of requests and data.
  • Operates over different media.
    • Optical fiber, coaxial cable, and backplane traces.
    • Adapts to different fiber, cable, or trace lengths.

  6. LCS Ingress Flow Control [diagram: 1: the linecard sends a request to the switch port; 2: the switch scheduler returns a grant/credit carrying a sequence number; 3: the linecard sends the data cell into the switch fabric]
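The request/grant/data exchange above can be made concrete with a minimal sketch of credit-based flow control. This is a toy model, not the LCS specification: the credit window depth, the class names, and the single-queue simplification are all assumptions.

```python
from collections import deque

CREDITS = 8  # assumed credit window; the real depth depends on cable length

class SwitchPort:
    """Bufferless-core model: a grant is issued only while credits remain."""
    def __init__(self):
        self.credits = CREDITS
        self.seq = 0

    def grant(self):
        if self.credits == 0:
            return None            # no credit: the request stays outstanding
        self.credits -= 1
        self.seq += 1
        return self.seq            # 2: grant/credit carries a sequence number

    def cell_departed(self):
        self.credits += 1          # cell crossed the fabric: credit returned

class Linecard:
    """Buffers cells and sends one only after a grant/credit arrives."""
    def __init__(self, port):
        self.port = port
        self.queue = deque()

    def send(self, cell):
        self.queue.append(cell)    # 1: enqueue and request service
        seq = self.port.grant()
        if seq is None:
            return None            # flow controlled: cell waits on the linecard
        return (seq, self.queue.popleft())   # 3: data tagged with seq number

port = SwitchPort()
lc = Linecard(port)
print(lc.send("cell-0"))           # (1, 'cell-0')
```

Because the linecard holds the cell until a credit arrives, the switch core never needs to buffer or drop it, which is what allows the core to stay bufferless and lossless.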

  7. Adapting to Different Cable Lengths [diagram: linecards at different distances run LCS to the switch fabric and switch scheduler inside the switch core]

  8. LCS Over Optical Fiber: 10Gb/s Linecards [diagram: a 10Gb/s linecard connects to a 10Gb/s switch port over 12 multimode fibers in each direction, using GENET quad serdes and 2.5Gb/s LVDS]

  9. Example of an OC192c LCS Port [diagram: 12 serdes channels carry the LCS protocol to an OC192 linecard]

  10. Outline
  • LCS: Linecard to Switch Protocol
    • What is it, and why use it?
  • Overview of the 2.5Tb/s switch.
  • How to build scalable crossbars.
  • How to build a high-performance, centralized crossbar scheduler.

  11. Main Features of Switch Core
  2.5Tb/s single-stage crossbar switch core with centralized arbitration and an external LCS interface.
  • Number of linecards:
    • 10G/OC192c linecards: 256
    • 2.5G/OC48c linecards: 1024
    • 40G/OC768c linecards: 64
  • LCS (Linecard to Switch Protocol):
    • Distance from linecard to switch: 0-1000ft.
    • Payload size: 76+8B.
    • Payload duration: 36ns.
    • Optical physical layers: 12 x 2.5Gb/s.
  • Service classes: 4 best-effort + TDM.
  • Unicast: true maximal size matching.
  • Multicast: highly efficient fanout splitting.
  • Internal redundancy: 1:N.
  • Chip-to-chip communication: integrated serdes.

  12. 2.56Tb/s IP Router [diagram: linecards at ports #1-#256 connect over LCS, up to 1000ft/300m away, to the 2.56Tb/s switch core]

  13. Switch Core Architecture [diagram: at each of ports #1-#256, optics and an LCS protocol block feed a port processor; cell data passes through the crossbar, while requests and grants/credits flow between the port processors and the central scheduler]

  14. Outline
  • LCS: Linecard to Switch Protocol
    • What is it, and why use it?
  • Overview of the 2.5Tb/s switch.
  • How to build scalable crossbars.
  • How to build a high-performance, centralized crossbar scheduler.

  15. How to Build a Scalable Crossbar
  • Increasing the data rate per port:
    • Use bit-slicing (e.g. Tiny Tera); a sketch follows below.
  • Increasing the number of ports:
    • Conventional wisdom: N² crosspoints per chip is the problem.
    • In practice: today, crossbar chip capacity is limited by I/Os.
  • It's not easy to build a crossbar from multiple chips.
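A hedged sketch of bit-slicing, assuming a byte-interleaved split: each cell is striped over k identical crossbar planes that share one configuration, multiplying per-port bandwidth by k without growing any single chip. The function names and the striping granularity are illustrative assumptions, not the Tiny Tera design.

```python
def slice_cell(cell: bytes, k: int) -> list[bytes]:
    """Stripe the bytes of one cell round-robin over k crossbar planes."""
    return [cell[i::k] for i in range(k)]

def reassemble(slices: list[bytes]) -> bytes:
    """Re-interleave the k slices back into the original cell."""
    k = len(slices)
    out = bytearray(sum(len(s) for s in slices))
    for i, s in enumerate(slices):
        out[i::k] = s
    return bytes(out)

cell = bytes(range(84))              # 76+8B LCS payload from slide 11
planes = slice_cell(cell, k=4)       # 4 planes, each carrying 1/4 of the bits
assert reassemble(planes) == cell    # every plane uses the same configuration
```

Because every plane carries the same input-to-output pattern, one central scheduler configures all k planes identically, and port rate scales with k.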

  16. Scaling: Trying to Build a Crossbar from Multiple Chips [diagram: a 16x16 crossbar switch assembled from 4-input, 4-output building blocks; each building block ends up needing eight inputs and eight outputs]

  17. Scaling Using "Interchanging": 4x4 Example [diagram: two 2x4 crossbars (2 I/Os each) sit behind interchangers (INT) that reconfigure every half cell time, while the 2x4 crossbars themselves reconfigure only every cell time; together they behave as one 4x4 crossbar]

  18. 2.56Tb/s Crossbar Operation [diagram: 128 inputs pass through fixed 2x2 "TDM" interchangers into two 128x256 crossbars, A and B; a second set of interchangers re-interleaves their outputs onto 128 outputs]
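A minimal sketch of the interchanging idea, under the assumption that the input interchanger is a fixed TDM stage steering alternate half-cells to planes A and B. The list-based model and names are illustrative, not the hardware design.

```python
def interchange(half_cells):
    """Fixed 2x2 TDM stage: even half-cells to plane A, odd ones to plane B.
    Each crossbar plane then reconfigures only once per full cell time."""
    return half_cells[0::2], half_cells[1::2]

def merge(plane_a, plane_b):
    """Output interchanger: re-interleave the two planes' half-cells."""
    merged = []
    for a, b in zip(plane_a, plane_b):
        merged += [a, b]
    return merged

halves = ["c0.h0", "c0.h1", "c1.h0", "c1.h1"]   # two cells, two halves each
a, b = interchange(halves)                       # plane A gets c0.h0, c1.h0
assert merge(a, b) == halves                     # order restored at the output
```

The payoff is that the fast reconfiguration burden falls on the tiny fixed interchangers, while the large crossbar chips switch at the slower, full-cell rate.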

  19. Outline
  • LCS: Linecard to Switch Protocol
    • What is it, and why use it?
  • Overview of the 2.5Tb/s switch.
  • How to build scalable crossbars.
  • How to build a high-performance, centralized crossbar scheduler.

  20. How to Build a Centralized Scheduler with True Maximal Matching? Usual approaches:
  • Use sub-maximal matching algorithms (e.g. iSLIP).
    • Problem: reduced throughput.
  • Increase arbitration time: load-balancing.
    • Problem: imbalance between layers leads to blocking and reduced throughput.
  • Increase arbitration time: deeper pipeline.
    • Problem: usually relies on out-of-date queue occupancy information, hence reduced throughput.

  21. How to Build a Centralized Scheduler with True Maximal Matching? Our approach is to maintain high throughput by:
  • Using a true maximal matching algorithm (a sketch follows below).
  • Using a single centralized scheduler to avoid the blocking caused by load-balancing.
  • Using a deep, strict-priority pipeline with up-to-date information.
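To make "true maximal matching" concrete, here is a greedy sketch over a request matrix: input-output pairs are added until no request remains whose input and output are both unmatched. This illustrates the maximality property only; it is not the shipped scheduler's algorithm, and the names are assumptions.

```python
def maximal_match(requests):
    """requests[i][j] is True when input i has a cell queued for output j."""
    n = len(requests)
    in_free = [True] * n
    out_free = [True] * n
    match = []
    for i in range(n):
        for j in range(n):
            if requests[i][j] and in_free[i] and out_free[j]:
                match.append((i, j))
                in_free[i] = False
                out_free[j] = False
                break              # input i is matched; move to the next input
    return match

reqs = [[False, True,  True ],
        [True,  False, False],
        [True,  False, False]]
print(maximal_match(reqs))         # [(0, 1), (1, 0)]: no request is left
                                   # whose input and output are both free
```

A maximal match never strands a request that could still be served in this cell time, which is where the throughput advantage over sub-maximal algorithms comes from.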

  22. Strict Priority Scheduler Pipeline [diagram: one scheduler plane per priority, p=0 through p=3; each plane handles 2.56Tb/s, one priority, unicast and multicast; port processors connect through optics and the LCS protocol]

  23. Strict Priority Scheduler Pipeline [diagram: over time slots 0-6, requests at priorities p=0 through p=3, plus multicast, move through the four scheduler planes in a staggered pipeline, highest priority first]
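A hedged sketch of how strict priority composes with matching, reusing maximal_match() from the sketch above: priority p=0 is matched first, and each lower priority sees only the ports left free by the priorities above it. The staggered pipeline timing of the real scheduler is not modeled; this shows the priority ordering only, and the names are assumptions.

```python
def strict_priority_schedule(requests_by_prio):
    """requests_by_prio[p][i][j]: input i requests output j at priority p;
    p=0 is the highest priority and is matched first."""
    n = len(requests_by_prio[0])
    in_free = [True] * n
    out_free = [True] * n
    schedule = []
    for p, requests in enumerate(requests_by_prio):
        masked = [[requests[i][j] and in_free[i] and out_free[j]
                   for j in range(n)] for i in range(n)]
        for i, j in maximal_match(masked):   # lower priorities see leftovers
            schedule.append((p, i, j))
            in_free[i] = False
            out_free[j] = False
    return schedule

high = [[True, False], [False, False]]   # p=0: input 0 wants output 0
low  = [[True, False], [True,  False]]   # p=1: inputs 0 and 1 want output 0
print(strict_priority_schedule([high, low]))
# [(0, 0, 0)]: the p=1 requests for output 0 lose to the p=0 request
```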

  24. Strict Priority Scheduler Pipeline
  Why implement strict priorities in the switch core when the router needs to support services such as WRR or WFQ?
  • Providing these services is a Traffic Management (TM) function.
  • A TM can provide these services using a technique called Priority Modulation together with a strict-priority switch core (see the sketch below).
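The slide does not define Priority Modulation, so the following is one plausible reading, sketched as an assumption: the TM promotes each flow into the high-priority class for a share of cell slots proportional to its weight, so the strict-priority core delivers WRR-like bandwidth shares. The function name and slot model are hypothetical.

```python
def modulated_priority(weights, slot):
    """Pick which flow rides the strict-priority class in this slot,
    WRR-style; all other flows fall back to best-effort priorities."""
    position = slot % sum(weights)
    for flow, w in enumerate(weights):
        if position < w:
            return flow            # this flow is promoted to p=0 this slot
        position -= w

weights = [3, 1]                   # flow 0 should get ~3x flow 1's bandwidth
print([modulated_priority(weights, t) for t in range(8)])
# [0, 0, 0, 1, 0, 0, 0, 1]: bandwidth split 3:1 over the repeating cycle
```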

  25. Outline
  • LCS: Linecard to Switch Protocol
    • What is it, and why use it?
  • Overview of the 2.5Tb/s switch.
  • How to build scalable crossbars.
  • How to build a high-performance, centralized crossbar scheduler.
