H3C SR8800 Series 10G Core Router Technology Analysis
Date: 2008-3-23 | Secret level: Public
Hangzhou H3C Technologies Co., Ltd.
Catalog • Core Router Development Tendency • H3C SR8800 Overview • H3C SR8800 Technical Characteristics • H3C SR8800 Networking • H3C SR8800 Applications
Data Network Demand Analysis • Basis: an informationalized basis platform covering all enterprises, raising working efficiency and enterprise competitiveness • Reliability: reliable topologies, reliable devices, reliable links • Quality: a high-quality network with voice free of delay and fluent video image quality • Security: logical separation of different services and defense against various attacks • Service: localized services and fast on-site support from the original vendor • Advance: advanced products and technologies with high expandability, satisfying development needs for years to come [Figure: the six demand categories surrounding the telecommunications data network]
Tendency for IP Multi-Services Bearer Network [Figure: today's separate service networks (PSTN, ATM, FR/X.25, SDH/PDH, cable, wireless) converging onto a unified IP/MPLS core carrying voice, data, video, online gaming, messaging, storage and other services over access technologies such as DSL, FTTP/HFC, Ethernet, 3G RAN, CDMA and GSM/GPRS] • Integration of networks brings integration of services and applications, creating more opportunities and therefore more benefits. • Running each service on an independent network results in high investment, difficult operation and maintenance, and weak capability to deliver new services. • Integrating different networks onto the IP network is an inevitable trend; multiple services will be carried on a single IP/MPLS network. • Carriers all over the world are building new-generation IP multi-service carrier networks.
Network Application Development Tendency [Figure: timeline of performance, expandability and services — data sharing in the 1990s, the Internet and broadband around 2000, and high performance, extensibility and service integration today] • Services: standardized services → user-defined services • Applications: data and the Internet → integration of the three networks → unified communication • Connection: best effort → carrier-class device reliability → carrier-class quality of service • Performance: high-density narrowband aggregation → broadband and narrowband aggregation → high-capacity broadband and narrowband aggregation with services
Developed for Industry Users --- H3C SR8800 • Advanced system architecture: distributed 10G NP-based hardware platform with flexible service expansion capability and high processing performance • High reliability: dedicated design for device and network reliability, delivering carrier-class reliability • High security: protection against attacks for both the device itself and the services in the user network • Granular QoS: H-QoS provides three-level queue scheduling, offering granular SLAs for different users and services
Catalog • Core Router Development Tendency • H3C SR8800 Overview • H3C SR8800 Technical Characteristics • H3C SR8800 Networking • H3C SR8800 Applications
Orientation of H3C SR8800 Products [Figure: product positioning by performance class — SR8802/SR8805/SR8808/SR8812 as 10G core routers, SR6602/SR6608 at 2.5G, MSR 50 at GE, and MSR 30/MSR 20 at 100M]
H3C SR8800 Family • The H3C SR8800 series 10G core routers, developed by H3C, are the flagship products of our router family. • The SR8800 is designed for IP backbone networks, the core and distribution layers of dedicated IP networks, POPs, and the distribution layer of carrier networks. • The series comprises four models: SR8802, SR8805, SR8808 and SR8812.
Catalog • Core Router Development Tendency • H3C SR8800 Overview • H3C SR8800 Technical Characteristics • System Architecture • Service Capability • High Reliability • High Security • H3C SR8800 Networking • H3C SR8800 Applications
Architecture-Distributed 10G NP Hardware Platform [Figure: system block diagram — two SRPUs, each with a routing engine and two crossbars, connected over data channels to the LPUs; each LPU contains NP service engines, table lookup engines, QoS engines, an OAM engine, ingress/egress packet buffers, and PICs with ports 0..N] • Built on a distributed 10G NP hardware platform, the SR8800 offers excellent software upgradeability, new-service scalability, and service processing performance.
Architecture-Unique Three-Engine Forwarding Structure [Figure: the distributed architecture diagram, highlighting the three engines on each LPU — the NP service engine (all-service distributed processing with high performance and flexible service scalability), the table lookup engine (line-speed entry lookup), and the QoS engine (line-speed flow management)] • To improve core router performance, several issues must be addressed: ever-growing services, demanding QoS requirements, and the processing time and resources consumed by entry lookup and QoS scheduling. • Because the requirements and processing models for entry lookup and QoS scheduling are stable, implementing them in ASICs improves router performance significantly. • Combining the high performance of ASICs with the flexible scalability of NPs, the SR8800 adopts a three-engine forwarding structure: the NP service engine, the QoS engine, and the entry lookup engine. The service engine uses NP technology for flexible service scalability and upgrades, while the QoS engine and entry lookup engine use ASIC technology for high-performance QoS and entry lookup.
Architecture-High Capacity and Performance [Figure: the distributed architecture diagram, highlighting the SRPUs and fabric adapters] • Each SRPU has two embedded crossbars to save slot space and is sufficient for normal operation by itself; two SRPUs provide 1+1 hot standby and 1.44 Tbps of switching capacity, fully meeting the switching requirements of a core router. • The fabric adapters and crossbars work together to implement VoQ and E2E flow control, providing granular switch-fabric-level QoS and genuine SLA services to customers.
Architecture-Unique Link Fault OAM Design [Figure: the distributed architecture diagram, highlighting the dedicated OAM engine on each LPU, which supports 30 ms fault location and 20 ms service switchover] • Traditional routers use the CPU on the SRPU for fault detection and for generating and forwarding link detection protocol packets. When many services are running, the CPU becomes too busy to generate and send link detection packets in time, causing false fault detections and network oscillation; 50 ms fault location and service switchover cannot be achieved. • With its distributed OAM architecture, each LPU of the SR8800 uses a dedicated OAM engine for link fault detection, which reduces CPU load and improves link fault detection performance and CPU security, achieving 30 ms fault location and 20 ms service switchover (a small sketch of hello-based detection follows below).
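The detection figure follows from how often hellos are expected and how many misses are tolerated before a fault is declared. The sketch below is only an illustration of that mechanism; the 10 ms interval and 3-miss threshold are assumptions, not documented SR8800 parameters.

```python
# Minimal sketch of hello-based link fault detection, as run by a dedicated
# OAM engine. Interval and miss threshold are illustrative assumptions only.

HELLO_INTERVAL_MS = 10      # how often a hello/keepalive is expected
MISS_THRESHOLD = 3          # consecutive misses before declaring a fault


def detection_delay_ms(interval_ms=HELLO_INTERVAL_MS, misses=MISS_THRESHOLD):
    """Worst-case time between link failure and fault declaration."""
    return interval_ms * misses


def declare_fault(hello_history):
    """hello_history: one boolean per interval (True = hello seen).
    Returns True once MISS_THRESHOLD consecutive intervals saw no hello."""
    consecutive = 0
    for seen in hello_history:
        consecutive = 0 if seen else consecutive + 1
        if consecutive >= MISS_THRESHOLD:
            return True
    return False


print(detection_delay_ms())                              # 30 (ms)
print(declare_fault([True, True, False, False, False]))  # True
```

Running the same timers on a busy control CPU would stretch the effective interval and inflate the detection time, which is the problem the dedicated OAM engine avoids.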
Architecture-High Capacity Buffers [Figure: the distributed architecture diagram, highlighting the high-capacity ingress and egress buffers used to absorb burst traffic] • Delay is the round-trip time of a packet between two nodes. All real-time services are delay-sensitive; VoIP, for example, requires a delay below 200 ms, otherwise voice quality becomes unacceptable. • If a router can buffer less than 200 ms of traffic, packets are dropped during congestion and high-quality QoS cannot be offered; if it buffers more than 200 ms, the added delay prevents services such as VoIP from working normally. A buffer depth of about 200 ms is therefore appropriate for a core router. • Each NP of the SR8800 provides a 200 ms ingress buffer and a 200 ms egress buffer (see the worked example below).
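The memory behind a delay-based buffer specification is simply line rate multiplied by buffer time. A quick worked example, assuming a 10 Gbps port rate as used elsewhere in these slides:

```python
# Buffer depth needed to hold `buffer_ms` worth of traffic at `rate_gbps`.
def buffer_bytes(rate_gbps: float, buffer_ms: float) -> float:
    bits = rate_gbps * 1e9 * (buffer_ms / 1000.0)
    return bits / 8


# 200 ms at 10 Gbps -> 250,000,000 bytes (~250 MB) per direction.
print(buffer_bytes(10, 200))   # 250000000.0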
Architecture-High-Performance Routing Engine [Figure: the distributed architecture diagram, highlighting the high-performance routing engines on the SRPUs] • The SR8800 uses high-performance CPUs as its routing engines, significantly improving route calculation capability. • Each SRPU integrates two crossbars and a three-level clock to ensure non-blocking switching, provide WAN clocking, and save equipment-room space and overall system power consumption, making the SR8800 a compact, energy-saving core router.
High-Performance SRPU [Figure: SRPU board layout — two crossbars, SDRAM, a high-performance CPU, a three-level clock interface and clock chip, a USB interface, and a CF card slot] • The high-performance SRPU is the core of the SR8800. It provides powerful routing capabilities, a variety of storage modes using the CF card and USB interface, and precise three-level clock sources.
Architecture-Separate Base Board and Interface Card [Figure: the distributed architecture diagram, highlighting the separation of base boards (LPUs) and pluggable interface cards (PICs)] • Interface cards are separate from base boards to support flexible service configurations. The base boards support all services, while the interface cards provide various interface types, allowing flexible configuration for different network environments. • This design protects customer investment to the maximum. For example, to upgrade POS interfaces from 155M to 2.5G, you only need to replace the interface cards; there is no need to purchase new LPUs.
Full-Service Base Board [Figure: base board layout — table lookup engines, NP service engine, QoS engine, buffer, OAM engine (under the CPU), and CPU slot; shown with and without an interface card installed] • The full-service NP base board supports services such as IPv4, IPv6, MPLS VPN, QoS/H-QoS, GRE, and multicast VPN.
155M POS/622M POS/GE Switchover Using Commands [Figure: a super interface card switched between 155M POS and 622M POS operation via the command line] • You can switch a super interface card among 155M POS, 622M POS, and GE using commands, giving you a wide range of interface speeds with limited investment.
Interface Cards-WAN Interfaces Super PSP4L PUP1L
Interface Cards-WAN Interfaces ET8G8L CL1G8L CL2G8L
Interface Cards-RPR Interfaces RUP1L RSP2L
Interface Cards-Ethernet Interfaces GP10L GP20R GT20R XP1L
Architecture-Hierarchical Multicast Replication [Figure: the distributed architecture diagram showing multicast traffic entering at a multicast ingress interface and being replicated at three levels on its way to the multicast egress interfaces] • Switch-fabric-level replication: the switch fabric replicates multicast traffic to all FAs with multicast services. • FA-level replication: the FA replicates multicast traffic to the other FAs and crossbar connections within the board. • NP-level replication: the NP replicates multicast traffic to the FA and to its own multicast egress interfaces. • With this three-level multicast replication, the SR8800 avoids wasting bandwidth while delivering high-performance multicast (illustrated below). • Routers that do not support switch-fabric multicast replication must treat multicast as broadcast and send the traffic to boards with no multicast services, wasting bandwidth and degrading router performance.
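One way to see the bandwidth saving is to count copies of a multicast packet crossing the switch fabric. The sketch below is a simplification under stated assumptions (the card counts are illustrative, not SR8800 configuration data): without fabric replication the packet is flooded to every line card, while hierarchical replication sends one fabric copy per card that actually has receivers and lets the FA/NP fan the traffic out locally.

```python
# Compare fabric copies of one multicast packet. Card counts are illustrative.

def fabric_copies_without_fabric_replication(cards_total: int) -> int:
    # The packet is broadcast to every line card, including cards with
    # no multicast receivers, wasting fabric bandwidth.
    return cards_total


def fabric_copies_with_hierarchical_replication(cards_with_receivers: int) -> int:
    # Fabric level: one copy only to each card that has receivers.
    # FA/NP levels then replicate locally to the card's egress ports.
    return cards_with_receivers


print(fabric_copies_without_fabric_replication(12))      # 12 copies across the fabric
print(fabric_copies_with_hierarchical_replication(3))    # 3 copies across the fabric
```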
Architecture-One Chassis, Four Planes Structure [Figure: the four planes — control, forwarding, OAM and monitor — spanning the SRPUs and LPUs: CPUs, crossbars and NPs for control and forwarding, OAM engines on each LPU, and system/line-card monitor units] • The control plane comprises the SRPU CPU system, the LPU CPU systems, and components such as the management channels on the backplane. Its main functions are protocol calculation, routing table maintenance, device management, and operation and maintenance management; it is the core of the router. • The forwarding plane comprises the switch-fabric crossbars, the three forwarding engines, the data channels, and other backplane components. It processes services and forwards data, for example ACL/QoS, IP forwarding, MPLS VPN and multicast. • The OAM plane comprises the SRPU CPUs, the LPU OAM engines, and the OAM channels. It handles network protocol detection and service switchover, such as BFD for BGP/IS-IS/OSPF/RSVP/VPLS PW/VRRP, providing 30 ms fault detection and 20 ms service switchover. • The monitor plane comprises the monitoring control systems and channels. It monitors, raises alarms on, and controls the power and fan systems. • The four planes are independent of one another.
Architecture-Unique Open Application Architecture (OAA) [Figure: the base board connected to plug-in modules — a firewall module, an IPS module, and other service modules, each with its own system, storage and interfaces] • Based on the OAA, the SR8800 provides standard application interfaces that allow customers and third-party vendors to develop their own services on the router, such as an embedded firewall or embedded IPS. This enables value-added services and speeds up the development of intelligent IP networks. • The SR8800 is the only open core router in the industry.
Catalog • Core Router Development Tendency • H3C SR8800 Overview • H3C SR8800 Technical Characteristics • System Architecture • Service Capability • High Reliability • High Security • H3C SR8800 Networking • H3C SR8800 Applications
Granular QoS Capability [Figure: end-to-end QoS pipeline — inbound L2/L3/L4 traffic classification, CAR policing, congestion avoidance (RED/WRED) and the ingress buffer; VoQ and E2E flow control across the switch fabric; outbound classification, CAR, GTS shaping, H-QoS queue scheduling (PQ/LLQ/WFQ/CBWFQ) and the egress buffer] • Switch-fabric-level deep QoS, supporting VoQ and E2E flow control and avoiding HOL blocking • 200 ms ingress and egress packet buffering to absorb burst traffic • H-QoS three-level queue scheduling for granular QoS • Massive ACL rules (64K per NP), with inbound and outbound CAR at 1K granularity (the policing mechanism is sketched below) • Advanced congestion avoidance mechanisms (RED/WRED) • Diverse traffic shaping modes, port-based or queue-based
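The inbound and outbound CAR stages in this pipeline are rate policers. As a generic illustration only (not SR8800 code; the committed rate and burst size are arbitrary), a single-rate token bucket decides whether each arriving packet conforms to the committed rate or should be dropped or re-marked:

```python
# Minimal single-rate token-bucket policer, the mechanism behind CAR-style
# traffic policing. Parameters are illustrative, not SR8800 defaults.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # refill rate in bytes/second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0                   # time of last update (seconds)

    def conforms(self, now: float, packet_bytes: int) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                   # in profile: forward
        return False                      # out of profile: drop or re-mark


policer = TokenBucket(rate_bps=1_000_000, burst_bytes=10_000)  # 1 Mbps CIR
print(policer.conforms(0.000, 1500))    # True  (bucket starts full)
print(policer.conforms(0.001, 15000))   # False (exceeds remaining tokens)
```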
Origination of HQoS [Figure: VPN 1 gold users, VPN 2 silver users and VPN n bronze users, each sending OA, production, monitoring and phone traffic from their CE devices to a PE router] • With common QoS, the PE schedules all traffic of all users uniformly by service priority. In the figure, the monitoring traffic of silver and bronze users gets through while the monitoring traffic of gold users is dropped because its bandwidth is preempted; similarly, the OA traffic of bronze users gets through while that of gold and silver users is dropped. Gold and silver users therefore do not receive their higher SLAs. • Common QoS cannot provide differentiated services based on both the user and the service types of each user; it cannot distinguish the service types of an individual user while differentiating between users. • Hierarchical QoS (HQoS) provides granular differentiated services based on both users and the service types of each user.
HQoS Applications [Figure: the same scenario with HQoS on the PE — level-1 scheduling by service level (packet priority), level-2 scheduling by user level with guaranteed bandwidth per user, and level-3 scheduling across user service levels] • With HQoS, the monitoring traffic of gold and silver users gets through while that of bronze users is dropped, and the OA traffic of gold, silver and bronze users is all forwarded; every user is served according to its SLA. • HQoS enables each user to be served according to its SLA, strictly guaranteeing the proper bandwidth for each user, while still providing high-quality QoS for high-priority services.
HQoS Scheduling Model [Figure: scheduling hierarchy — services 1-4 of each VPN are mapped into service classes 1-4, the service classes of VPN 1..n are scheduled per VPN, and the VPNs are scheduled onto the physical port; level-1 scheduling is by service level, level-2 by user level, and level-3 by user service level, using advanced H-QoS queue scheduling (PQ/LLQ/WFQ/CBWFQ)] • The SR8800 supports three-level HQoS queue scheduling with 384K queues, providing granular SLA services for users (a simplified model follows below).
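A compact way to express the three-level idea is: priority scheduling among a user's service classes, weighted sharing among users (VPNs), and a single physical port draining the hierarchy. The sketch below is a simplified software model of that hierarchy; the queue counts, weights and scheduling disciplines are assumptions for illustration, not the SR8800's internal algorithm.

```python
from collections import deque

# Level 1: strict priority among service classes inside one user/VPN.
# Level 2: weighted round-robin among users/VPNs.
# Level 3: one physical port draining the hierarchy.

class User:
    def __init__(self, name, weight, num_classes=4):
        self.name = name
        self.weight = weight                                  # level-2 share
        self.queues = [deque() for _ in range(num_classes)]   # 0 = highest

    def enqueue(self, service_class, packet):
        self.queues[service_class].append(packet)

    def dequeue(self):
        for q in self.queues:                                 # level-1 strict priority
            if q:
                return q.popleft()
        return None


def port_schedule(users, budget):
    """Level 3: drain up to `budget` packets from the port, visiting users
    in weighted round-robin order (level 2)."""
    out = []
    while len(out) < budget:
        progressed = False
        for user in users:
            for _ in range(user.weight):
                pkt = user.dequeue()
                if pkt is None:
                    break
                out.append(pkt)
                progressed = True
                if len(out) == budget:
                    return out
        if not progressed:
            break                                             # all queues empty
    return out


gold = User("gold", weight=3)
bronze = User("bronze", weight=1)
for i in range(5):
    gold.enqueue(2, f"gold-oa-{i}")          # gold's lower-priority OA traffic
    bronze.enqueue(0, f"bronze-voice-{i}")   # bronze's own high-priority voice
gold.enqueue(0, "gold-voice-0")              # jumps ahead within gold's own classes

# ['gold-voice-0', 'gold-oa-0', 'gold-oa-1', 'bronze-voice-0', 'gold-oa-2', 'gold-oa-3']
print(port_schedule([gold, bronze], budget=6))
```

The run shows both properties at once: gold's voice overtakes gold's OA traffic (per-user service differentiation), while bronze still gets its weighted share of the port (per-user SLA).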
QoS-VOQ I [Figure: a crossroad analogy for head-of-line blocking] • At a crossroad with only one lane, even if the road to the north is clear, the ambulance (green car) stuck behind the red cars cannot get through until every car ahead of it has crossed. With a single lane, traffic in different directions cannot be scheduled independently; congestion in one direction congests all the others. • This situation is called Head-of-Line (HOL) blocking. • Now add a lane for turning right. Even when the straight-ahead lane is blocked, the right-turn lane remains available and the green car passes through normally. • The best solution to HOL blocking is therefore to assign a separate lane to each direction.
QoS-VOQ II [Figure: user A, user B, server A and server B each connected to the router through a 10G port; congestion toward server A blocks user B's traffic toward server B at the crossbar] • HOL blocking can also occur inside a router. User A sends data to server A at 10 Gbps; user B sends data to server A at 5 Gbps and to server B at 5 Gbps. • When the outgoing port toward server A is congested, the incoming port is instructed to suspend forwarding. Because user B's traffic shares a single queue, the congestion of the data destined for server A also blocks the data destined for server B: the packets behind the head of the queue cannot be forwarded in time.
QoS-VOQ III [Figure: user B's ingress holds separate queues toward server A and server B, scheduled independently across the crossbar] • The root cause of HOL blocking is that a single queue serves all forwarding directions, so queue scheduling cannot be performed independently per direction. If each forwarding direction has its own queue, packet forwarding can be optimized and HOL blocking avoided by scheduling (for example, round-robin) between the queues, as sketched below. • In the diagram, user B's crossbar port keeps separate queues for server A and server B. With scheduling between the queues, data for server B is sent according to the queue schedule instead of waiting until all data for server A has been sent. • A Virtual Output Queue (VOQ) implements multiple output queues, one per output direction, on a single physical channel.
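As a toy model of the idea on this slide (illustrative only, not router code), the sketch keeps one queue per output direction at the ingress and round-robins between them, so a backlog toward server A no longer blocks traffic toward server B:

```python
from collections import deque
from itertools import cycle

# One Virtual Output Queue per output direction at the ingress port.
voqs = {"serverA": deque(), "serverB": deque()}

# Server A's direction is heavily backlogged; server B's is not.
for i in range(4):
    voqs["serverA"].append(f"to-A-{i}")
voqs["serverB"].append("to-B-0")


def schedule(voqs, rounds):
    """Round-robin across per-direction queues: a full queue toward one
    destination cannot block traffic queued toward another (no HOL)."""
    sent = []
    directions = cycle(voqs)
    for _ in range(rounds * len(voqs)):
        d = next(directions)
        if voqs[d]:
            sent.append(voqs[d].popleft())
    return sent


# 'to-B-0' goes out in the first round instead of waiting behind 4 A-packets.
print(schedule(voqs, rounds=2))   # ['to-A-0', 'to-B-0', 'to-A-1']
```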
QoS-VOQ IV [Figure: top — one VoQ per direction; bottom — four priority VoQs per direction toward server A and server B, scheduled by SP] • Assigning one output queue per output direction solves HOL blocking, but packets of different priorities heading in the same direction still cannot be scheduled against each other within the single VoQ. • Assigning multiple output queues per output direction allows packets of different priorities in the same direction to be scheduled, ensuring that high-priority, delay-sensitive packets are forwarded by the crossbar port preferentially toward their destination.
QoS-VOQ V [Figure: VoQs in the FA — four priority VoQs (3/2/1/0) for each crossbar egress port (port 0, port 1, ..., port 10) plus four global priority VoQs (to M) for multicast and broadcast traffic] • The SR8800's VoQs reside in the fabric adapters (FAs) of the LPUs. An FA provides four priority VoQs for each egress of the crossbar port (see the sketch below). • For multicast and broadcast traffic, an FA provides four global VoQs (the queues to M in the figure). These are dedicated to multicast and broadcast and can forward data to multiple directions at the same time according to traffic priority, improving switching efficiency. • Comparison: the crossbar ports of the Cisco 7600 RSP720 engine also support cell-based switching and VoQs, but the RSP720 provides only one VoQ per direction, so it cannot schedule packets in the same direction by priority.
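Building on the per-direction queues sketched earlier, giving each direction four priority sub-queues lets strict priority run inside a direction while directions still cannot block one another. A small hedged sketch (the priority ordering and SP discipline here are assumptions used for illustration, not the FA's actual scheduler):

```python
from collections import deque

NUM_PRIORITIES = 4   # priority 0 is highest in this sketch

# Four priority VoQs per output direction, as described for the FA.
voqs = {
    "serverA": [deque() for _ in range(NUM_PRIORITIES)],
    "serverB": [deque() for _ in range(NUM_PRIORITIES)],
}

voqs["serverA"][3].append("A-low-0")
voqs["serverA"][0].append("A-voice-0")   # arrives later but is high priority
voqs["serverB"][2].append("B-med-0")


def dequeue_direction(priority_queues):
    """Strict priority inside one direction: always serve the highest
    non-empty priority queue first."""
    for q in priority_queues:
        if q:
            return q.popleft()
    return None


# Round-robin across directions, strict priority within each direction.
for direction in ("serverA", "serverB", "serverA"):
    print(direction, "->", dequeue_direction(voqs[direction]))
# serverA -> A-voice-0   (overtakes A-low-0)
# serverB -> B-med-0
# serverA -> A-low-0
```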
QoS-E2E Flow Control • Application scenario: server A, user B and user C are each connected to the router through a 10G port. • User C sends high-priority traffic to server A at 10G, and user B sends low-priority traffic to server A at 10G. Congestion occurs on the outgoing port toward server A.
QoS-E2E Flow Control II [Figure: E2E flow control path — the eight priority Tx queues of port A report congestion of the low-priority queues to their FA, the FA notifies the other FAs, and the ingress FAs suspend forwarding from the affected low-priority VoQs] • The crossbar port toward server A has 12G of bandwidth. When the 10G from user B and the 10G from user C converge on it, fair sharing initially allows 6G for each. • The outgoing port has only 10G of bandwidth, so of the 12G arriving from the crossbar, the 6G of high-priority traffic is scheduled and forwarded first, 4G of low-priority traffic can still be forwarded, and the remaining 2G of low-priority traffic is dropped due to congestion in the low-priority queues (the arithmetic is checked below). • Once the Tx queues report the low-priority congestion and the ingress FAs suspend forwarding from the low-priority VoQs, the full 10G of high-priority traffic is permitted to pass through, scheduled and forwarded. • With the E2E flow control mechanism, high-priority traffic is scheduled preferentially when the outgoing port is congested, guaranteeing QoS for high-priority services. • Comparison: at present, the SR8800 is the only router in the industry that supports crossbar E2E flow control.
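A worked check of the 6G/4G/2G figures quoted for the moment before the low-priority back-pressure takes effect, using the 12G fabric-port and 10G egress rates stated on the slide:

```python
# Worked arithmetic for the E2E flow-control example on this slide.
fabric_port_gbps = 12          # crossbar port toward server A
egress_gbps = 10               # physical port to server A

offered_high = 10              # user C, high priority
offered_low = 10               # user B, low priority

# The fabric port is shared fairly, 6 G each, before the egress is reached.
admitted_high = min(offered_high, fabric_port_gbps / 2)            # 6 G
admitted_low = min(offered_low, fabric_port_gbps / 2)              # 6 G

# At the 10 G egress, high-priority traffic is scheduled first.
forwarded_high = min(admitted_high, egress_gbps)                   # 6 G
forwarded_low = min(admitted_low, egress_gbps - forwarded_high)    # 4 G
dropped_low = admitted_low - forwarded_low                         # 2 G

print(forwarded_high, forwarded_low, dropped_low)   # 6.0 4.0 2.0
```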
QoS-Ingress Buffer [Figure: the same E2E flow-control scenario, with the ingress FAs of user B and user C buffering the back-pressured low-priority packets in their ingress buffers] • When E2E flow control takes effect, packets arriving at the ingress must be buffered; otherwise they would be lost. • The ingress buffer size determines how much traffic can be held back, and therefore directly affects the QoS capability of the router. • The ingress buffer of the SR8800 can hold 200 ms of packets.
QoS-Egress Buffer [Figure: 802.3x flow control between two routers — the congested receiver sends pause frames, and the sender holds packets in its egress buffer behind the eight priority Tx queues of the port] • When flow control is enabled on the routers at both ends of a link and the receiving router becomes congested, it sends 802.3x pause frames to ask the sending router to suspend transmission; on receiving a pause frame, the sender stops sending to that router for a certain period. • To avoid packet loss during the pause, the sending router holds the packets in its egress buffer; the egress buffer size therefore determines the sender's packet buffering capability. • The egress buffer of the SR8800 can hold 200 ms of packets.
QoS-Crossbar Cell-Based Switching [Figure: the FA fragments each ingress packet into cells of 4 to 128 bytes, the crossbar switches the cells, and the egress FA reassembles them into packets] • A cell is far smaller than a packet, so cell-based switching reduces traffic jitter thanks to its finer granularity, smooths the aggregate traffic, improves system QoS, and prevents short packets from waiting behind long ones. • Cell-based switching usually uses fixed-length cells, which causes the N+1 problem. Suppose the cell length is fixed at 128 bytes: a 262-byte packet is segmented into two 128-byte cells, and the remaining 6 bytes are padded with 122 bytes to form a third 128-byte cell. The 122 bytes of padding push the overhead rate to 31.8%. • The SR8800 crossbar supports variable-length cell switching, with cell lengths of 4 to 128 bytes in 4-byte steps (4, 8, 12, 16, ..., 128 bytes). This effectively solves the N+1 problem: the same 262-byte packet is fragmented into two 128-byte cells, and the remaining 6 bytes are padded with just 2 bytes to form a valid 8-byte cell, for an overhead rate of only 0.76% (a check of both figures follows below). • Comparison: the Cisco 7600 uses fixed-length cell switching, so its overhead is higher.
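The overhead figures quoted above can be reproduced directly; the 4-byte granularity for variable-length cells is as stated on this slide.

```python
import math

def fixed_cell_overhead(packet_bytes: int, cell_bytes: int = 128):
    """Fixed-length cells: every cell, including the last, is padded to cell_bytes."""
    cells = math.ceil(packet_bytes / cell_bytes)
    carried = cells * cell_bytes
    padding = carried - packet_bytes
    return padding, padding / carried


def variable_cell_overhead(packet_bytes: int, max_cell: int = 128, step: int = 4):
    """Variable-length cells (multiples of `step`, up to `max_cell`):
    only the last cell is padded, and only up to the next multiple of step."""
    remainder = packet_bytes % max_cell
    full_bytes = packet_bytes - remainder
    last_cell = math.ceil(remainder / step) * step if remainder else 0
    carried = full_bytes + last_cell
    padding = carried - packet_bytes
    return padding, padding / carried


print(fixed_cell_overhead(262))      # (122, 0.3177...)  ~31.8% overhead
print(variable_cell_overhead(262))   # (2, 0.00757...)   ~0.76% overhead
```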
MPLS VPN Service Isolation [Figure: CE devices for data (VPN1), voice (VPN2), video (VPN3) and other services (VPN4) connected through PE routers across the MPLS core] • Different services such as voice, video and data are distinguished on the PE device, encapsulated into separate VPNs, and isolated from one another for security • Carrying multiple services over MPLS VPN provides security protection equivalent to that of leased lines • Supports HoPE to extend VPNs • Supports IPv6-based MPLS VPN (6PE) • Supports various MPLS VPN access methods, including PPP, ATM, and Eth/VLAN • Supports static routes, EBGP, RIP, and OSPF between PE and CE • Supports inter-AS schemes: VRF-to-VRF, MP-EBGP, and multi-hop MP-EBGP • Supports point-to-point Layer 2 MPLS VPN: Martini/Kompella VLL • Supports point-to-multipoint Layer 2 MPLS VPN: Martini/Kompella VPLS
QoS for MPLS VPN [Figure: CE-PE-P-P-PE-CE path showing IP DiffServ at the edges and MPLS DiffServ in the core] • At the ingress, classify service traffic and mark it with 802.1p, CoS or DSCP values • At the ingress PE, map the IP priority to the EXP field of the MPLS label and perform queue scheduling on the outgoing interface (a sketch of the mapping follows below) • In the core, the P routers identify the label and schedule the packet (by bandwidth or priority) according to the EXP field • At the egress PE, perform queue scheduling according to IP CoS or DSCP to deliver the flows to the CE
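The key operation at the ingress PE is copying the IP priority into the 3-bit MPLS EXP field so that P routers, which only see labels, can still schedule by class. A hedged sketch of such a mapping follows; using the DSCP class-selector bits is a common convention shown for illustration, not the SR8800's default mapping table.

```python
# Map an IP DSCP value (0-63) to a 3-bit MPLS EXP value at the ingress PE.
# Taking the top three DSCP bits is a common convention; it is shown here
# for illustration only, not as the SR8800 default mapping.

def dscp_to_exp(dscp: int) -> int:
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be 0-63")
    return dscp >> 3        # top three bits -> 0-7, fits the EXP field


# EF (46) -> 5, AF41 (34) -> 4, best effort (0) -> 0
print([dscp_to_exp(d) for d in (46, 34, 0)])   # [5, 4, 0]
```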
Link Emulation by PWE3 • Pseudo Wire Emulation Edge-to-Edge (PWE3) emulates ATM, Ethernet and PPP services across a packet-switched network. A pseudo wire encapsulates the PDUs of a specific service, carries them over the path or tunnel between the inbound and outbound interfaces, manages their timing and sequencing, and emulates the functions the service requires. • The SR8800 supports the PWE3 feature with 16K PWE3 connections, fully satisfying users' requirements for PWE3 connections.
Multicast VPN [Figure: an MPLS core with P and PE routers connecting VPN-A sites 1-3 and VPN-B sites 1-2 over IBGP; a multicast source behind CE-A1 reaches receivers at the other VPN-A sites] • MPLS/BGP VPN is widely deployed, and users within a VPN require it to carry multicast services as well. • Earlier versions of draft-rosen-vpn-mcast described three MVPN solutions; in the latest version (08), two have been removed and only the MD (Multicast Domain) solution remains. • The SR8800 supports MD multicast VPN, which can be distributed across separate service cards, providing high performance and flexible configuration.
Inter-Domain Multicast (PIM/MBGP/MSDP) [Figure: three MANs, each running PIM/PIM6 with its own RP (Anycast RP with MSDP inside the domain), interconnected by MBGP and MSDP; Layer 2 networks use IGMP proxy/snooping on multicast switches down to home gateways, TVs and PCs] • PIM-SM or PIM-SSM runs within each domain, while MBGP and MSDP run between domains. Every AS must support PIM, MBGP and MSDP. • The PIM/MBGP/MSDP combination is a mature solution for inter-domain multicast networks. • Between ASs, external MBGP peers are configured on the edge routers and external MSDP peers on the RPs. Within an AS, internal MBGP peers are configured on internal routers as required, and internal MSDP peers on the internal RPs running Anycast RP. All ASs run PIM.
NAT/NAT Multi-Instance [Figure: internal mail and web servers (10.1.1.3, 10.1.1.4, 10.1.1.20) on a private network reaching the Internet through NAT with the public address 202.10.88.2] • Supports port reuse with automatic 5-tuple collision detection, enabling NAPT to support unlimited connections (illustrated below) • Supports blacklists in NAT/NAPT/internal server • Supports limits on the number of connections • Supports session logging • Supports NAT multi-instance
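"Port reuse with automatic 5-tuple collision detection" means a public port can be shared by many private sessions as long as the full translated 5-tuple stays unique. A minimal illustrative sketch follows; the destination addresses and the allocation policy are hypothetical, only the private addresses and the public address 202.10.88.2 come from the slide.

```python
# Minimal NAPT translation table with 5-tuple collision avoidance.
# Destination addresses, ports and policy below are illustrative assumptions.

PUBLIC_IP = "202.10.88.2"

class Napt:
    def __init__(self):
        self.used = set()      # translated 5-tuples already in use

    def translate(self, proto, src_ip, src_port, dst_ip, dst_port):
        # Reuse the original source port when possible, otherwise probe
        # upward until the translated 5-tuple no longer collides.
        port = src_port
        while (proto, PUBLIC_IP, port, dst_ip, dst_port) in self.used:
            port = port % 65535 + 1
        self.used.add((proto, PUBLIC_IP, port, dst_ip, dst_port))
        return PUBLIC_IP, port


napt = Napt()
# Two hosts using the same source port toward the same server: the second
# session gets a different public port, but the same public port can be
# reused toward a different server because the 5-tuple still differs.
print(napt.translate("tcp", "10.1.1.3", 5000, "198.51.100.7", 80))   # ('202.10.88.2', 5000)
print(napt.translate("tcp", "10.1.1.4", 5000, "198.51.100.7", 80))   # ('202.10.88.2', 5001)
print(napt.translate("tcp", "10.1.1.20", 5000, "203.0.113.9", 80))   # ('202.10.88.2', 5000)
```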
NetStream--Network Traffic Analysis [Figure: the router's NetStream module exports traffic records through XLog to the NTAS (network traffic analysis system)] • On the router, NetStream analyzes network packets, extracts traffic statistics matching the preset criteria, and outputs the statistics (the aggregation principle is sketched below). • The NTAS resolves the NTE packets, collects the statistics into a database, and analyzes the data to generate traffic reports. • NetStream works seamlessly with XLog to provide accurate and detailed network traffic analysis reports.
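At its core, NetStream-style accounting aggregates packets into flow records keyed by the 5-tuple and periodically exports the counters to a collector. A simplified illustrative sketch follows; the field set, addresses and export format are assumptions, not the NetStream record layout.

```python
from collections import defaultdict

# Aggregate packets into per-5-tuple flow records (packets, bytes), the core
# idea of NetStream-style traffic accounting. Record layout is illustrative.

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})


def account(proto, src_ip, src_port, dst_ip, dst_port, length):
    key = (proto, src_ip, src_port, dst_ip, dst_port)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += length


def export():
    """Hand finished records to the collector (XLog/NTAS in this slide)."""
    records = list(flows.items())
    flows.clear()
    return records


account("tcp", "10.1.1.3", 5000, "198.51.100.7", 80, 1500)
account("tcp", "10.1.1.3", 5000, "198.51.100.7", 80, 40)
account("udp", "10.1.1.4", 53, "10.1.1.3", 33000, 120)
for record in export():
    print(record)
```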
RPR [Figure: a 10G/2.5G RPR ring interconnecting PE routers at sites 1-8] • Resilient Packet Ring (RPR), a MAC-layer protocol, has the following advantages: • High utilization of ring bandwidth • Self-healing • Automatic topology discovery and node plug-and-play • Protection switching using steering or wrapping, with recovery within 50 ms, meeting carrier-class requirements • Weighted fair algorithm for bandwidth allocation • The SR8800 complies with the IEEE 802.17 standard, supports 10G/2.5G RPR, and supports cross-board RPR.