

  1. Quick Start Guide :: FabricPath
  Architecture & Solutions Group, US Public Sector Advanced Services
  Mark Stinnette, CCIE Data Center #39151
  Date: 13 August 2013 | Version 1.10.2

  2. This Quick Start Guide (QSG) is a cookbook-style guide to deploying data center technologies, with end-to-end configurations for several commonly deployed architectures. The configurations are mapped directly to common data center topologies and are broken down, in an animated step-by-step process, into a complete, clean end-to-end configuration based on Cisco best practices and strong recommendations. Each QSG contains stage-setting content, technology component definitions, recommended best practices, and, most importantly, several scenario topologies mapped directly to complete end-to-end configurations. This QSG is geared toward network engineers, network operators, and data center architects, allowing them to quickly and effectively deploy these technologies in their data center infrastructure based on proven, commonly deployed designs.

  3. FabricPath Design :: 2 SPINE (Routing at Aggregation) FabricPath Configuration • Active/Active gateways (via vPC+ or Anycast HSRP) • VLAN anywhere (no trunk ports) • Option for vPC+ for legacy access switches and compute connectivity • Easily deploy L4-7 services • Simplest design option :: traditional Aggregation / Access designs • Simplified configuration • Removal of STP • Traffic distribution over all uplinks without vPC port-channels Natural Evolution of the vPC Design

  4. FabricPath Design :: 4 SPINE (Routing at Aggregation w/ Anycast HSRP) FabricPath Configuration • Scale out; n-way Active HSRP in FabricPath (up to 4 today) • No longer need vPC+ at SPINE for active/active HSRP • No peer-link or peer-keepalive link required • Leaf software needs to understand Anycast HSRP in FabricPath

  5. FabricPath Design :: Dedicated SPINE (Centralized Routing) FabricPath Configuration

  6. Alternative View FabricPath Design :: Dedicated SPINE (Centralized Routing) FabricPath Configuration • Paradigm shift with respect to typical designs (CLOS fabric topology) • Simplifies SPINE design • Traditional “Aggregation” layer becomes a pure FabricPath SPINE • Design helps ensure that any two application nodes are at most two hops apart • FabricPath LEAF switches provide server connectivity as in traditional designs • FabricPath LEAF switches also provide the L2/L3 boundary, inter-VLAN routing, and North-South routing FabricPath Deployment in Preparation For Dynamic Fabric Automation (DFA)

  7. FabricPath Design :: Multi POD (w/ FP Multi-Topology) FabricPath Configuration NX-OS 6.2 • Where to place the DC-wide L2/L3 boundary (vPC+ or Anycast HSRP): in the FabricPath Core, or pick any Aggregation POD • Routed sub-interfaces toward the Routed Core / WAN Edge via CE edge ports • The default topology always includes all FabricPath core ports • Map DC-wide VLANs to the default topology • POD-local core ports are also mapped to a POD-local topology • Map POD-local VLANs to the POD-local topology • Provides DC-wide vs. POD-local VLAN segmentation / isolation • Can support VLAN ID reuse in multiple PODs • Define FabricPath VLANs :: map VLANs to a topology :: map the topology to FabricPath core ports • Optional design for “disconnected” PODs • Each POD can use the same non-default FP topology; a FabricPath Core is not needed since each POD is on its own island

  8. FabricPath Terminology FabricPath Configuration

  9. FabricPath Encapsulation FabricPath Configuration

  10. Benefits Overview FabricPath Configuration • FabricPath is a next-generation Layer 2 technology from Cisco that provides multi-path Ethernet capabilities in L2 switching networks. FabricPath combines the benefits of L2 switching, such as easy configuration and workload flexibility, with greater scalability and availability. Specifically, FabricPath adds to L2 switching routing-type capabilities such as all-active links, fast convergence, and loop-avoidance mechanisms in the data plane. It allows Layer 2 networking without Spanning Tree Protocol. • FabricPath provides the following benefits: • Eliminates Spanning Tree Protocol (STP), with built-in loop prevention and mitigation (TTL & RPF) • Single control plane for unknown unicast, unicast, broadcast, and multicast traffic • VLAN anywhere • FP is transparent to L3 protocols • Easy to configure • Easy to manage • Flexibility • Create any arbitrary topology • Multiple designs to integrate L2/L3 boundaries • Start small and expand as needed (bandwidth growth) • Efficient and Scalable • Layer 3-like availability features • Leverage parallel paths • Expanding available bandwidth at the L2/L3 Default Gateway level • MAC address table scale (conversational learning) :: all FabricPath VLANs use conversational MAC address learning • Fast convergence and low latency • Enhances mobility and virtualization in the FabricPath network • Capable of running vPC (called vPC+) to connect devices to the edge in a port channel • Multi-tenant support, traffic engineering, and security separation requirements via FabricPath topologies

  11. Feature Configuration FabricPath Configuration

  12. Feature Configuration FabricPath Configuration

  13. Initial Baseline (Only 4 Commands !!) FabricPath Configuration

  Step 1 :: install | validate the Enhanced L2 license
  Step 2 :: install FabricPath
  Step 3 :: enable FabricPath
  Step 4 :: configure FabricPath VLANs
  Step 5 :: configure FabricPath core ports

  License:
  install license bootflash:///enhanced_layer2_pkg.lic
  show license usage

  7K-1 and 7K-2 (install feature-set in the Default / Admin VDC only):
  feature lacp
  install feature-set fabricpath
  feature-set fabricpath
  vlan 1 - 200
    mode fabricpath
  interface po2
    switchport mode fabricpath
  interface e3/1, e4/1
    channel-group 2 mode active
  interface e5/1, e5/2
    switchport mode fabricpath

  5K-1 and 5K-2:
  feature lacp
  install feature-set fabricpath
  feature-set fabricpath
  vlan 1 - 200
    mode fabricpath
  interface po2
    switchport mode fabricpath
  interface e1/1, e1/2
    channel-group 2 mode active
  interface e1/3, e1/4
    switchport mode fabricpath

  Note: FabricPath is available beginning with Cisco NX-OS Release 5.1 on the Nexus 7000 with F-Series modules, and with NX-OS Release 5.1(3)N1(1) on the Nexus 5500.

  14. Manually Set the FabricPath Switch-ID & Root FabricPath Configuration

  Step 1 :: set the FP Switch-ID
  Step 2 :: set the FP Root

  7K-1 (SW 10, highest priority :: root for FTAG 1):
  fabricpath switch-id 10
  fabricpath domain default
    root-priority 255

  7K-2 (SW 11 :: root for FTAG 2):
  fabricpath switch-id 11
  fabricpath domain default
    root-priority 254

  5K-1 (SW 100):
  fabricpath switch-id 100

  5K-2 (SW 101):
  fabricpath switch-id 101

  Multidestination Tree 1 (ftag 1) – broadcast, unknown unicast, multicast
  Multidestination Tree 2 (ftag 2) – multicast
  F2/F2E line cards use both trees for unknown unicast / broadcast / multicast; F1 uses MDT 2 for multicast only.
  It is recommended to place the roots on the SPINE switches. The higher the root-priority number, the better (start at 255 and work backwards), or start at 200 in case you need to introduce another MDT later (for example, an expanded 4-way SPINE).
  Each device should have a unique, globally significant switch-ID; setting it manually makes the FP network more deterministic.
  Suggested switch-ID scheme: SPINE :: 2-digit ID; LEAF :: 3-digit ID; Emulated Switch (vPC+) :: 4-digit ID

  15. Manually Set the Spanning-Tree :: Single Virtual Root Bridge FabricPath Configuration

  Step 1 :: set the FP domain to be the STP root bridge

  7K-1 / 7K-2 (optional at the SPINE):
  vlan 1 - 200
    mode fabricpath
  spanning-tree pseudo-information
    vlan 1 - 200 root priority 0

  5K-1 / 5K-2 (recommended at the access-edge LEAF switches):
  vlan 1 - 200
    mode fabricpath
  spanning-tree pseudo-information
    vlan 1 - 200 root priority 0

  Attached Classical Ethernet switch:
  vlan 20, 40
  spanning-tree vlan 20, 40 priority 8192

  The entire FabricPath domain will look like one virtual bridge to the CE domain – set the best (lowest) STP root priority on the vPC+ peers (recommended at least on the access-edge LEAF switches); just make sure the priority is lower than anything else in the Classical Ethernet network. FP uses the same bridge ID, c84c.75fa.6000, on every switch; the root and sender bridge MAC addresses of the pseudo-information are therefore the same on every switch in the Cisco FabricPath domain.
  Note that the spanning-tree priority command would also work; however, it would change the spanning-tree priority regardless of whether the switch is sending regular BPDUs (when Cisco FabricPath is not running) or BPDUs with the pseudo-information (when Cisco FabricPath is operational on the switch). In some scenarios this change can have undesirable side effects.
  All ports at the edge of a Cisco FabricPath network behave as if root guard were configured (you do not need to configure this feature); such a port is blocked if it receives superior Spanning Tree Protocol BPDUs.

  16. Tune Timers for Fast Convergence FabricPath Configuration

  Step 1 :: tune the IS-IS timers in FabricPath
  Step 2 :: (optional) tune the FabricPath linkup-delay

  All switches (7K-1, 7K-2, 5K-1, 5K-2):
  fabricpath domain default
    spf-interval 50 50 50
    lsp-gen-interval 50 50 50
  fabricpath timers linkup-delay 60

  Problem set: the IS-IS adjacency is established and the access edge starts sending traffic to the aggregation edge, but the control plane is not yet ready to forward the traffic to the next hop. The default spf and lsp-gen intervals are 8 seconds, which contributes to the long convergence. To address this, the spf and lsp-gen intervals of {max-wait, initial-wait, second-wait} are brought down to 50 msec; with this configuration, aggregation-edge restoration yields sub-second convergence for Layer 2 traffic.
  To achieve fast convergence during node failure and recovery scenarios, it is recommended to tune the IS-IS timers in Cisco FabricPath. This tuning is particularly important when a switch is inserted into the topology, and it is recommended on all switches in the network.
  Optionally, to provide better network convergence after a Cisco FabricPath switch restart, set the Cisco FabricPath linkup-delay timer to 60.
  Note: future enhancements such as Layer 2 IS-IS overload-bit support in 6.2 will help improve unicast and multicast convergence during FabricPath node-failure scenarios when default IS-IS timers are used.

  17. Enable vPC+ :: Dual Attachment & Active/Active HSRP FabricPath Configuration

  Step 1 :: enable vPC+
  Step 2 :: set the emulated switch-id
  Step 3 :: enable dual-active exclude for vPC SVIs

  7K-1 / 7K-2 (role priority 1 on one peer, role priority 2 on the other; everything else identical):
  feature vpc
  vpc domain 1
    role priority 1
    peer-keepalive destination [….] source [….]
    ….
    ip arp synchronize
    fabricpath switch-id 1000
    dual-active exclude interface-vlan 20
  interface po2
    switchport mode fabricpath
    vpc peer-link

  With vPC+, a FabricPath switch is emulated between the CE and FabricPath domains. All packets originating behind the emulated switch are marked with the source switch-ID of the emulated switch. Assign the same emulated switch-ID on both vPC peers, but the emulated switch-ID must be unique between different vPC domains.
  vPC+ is an extension of vPC for FabricPath. It allows dual-homed connections from Classical Ethernet (CE) switches and hosts capable of port channels, and it provides active-active HSRP. Peer-link and peer-keepalive links are required, as with traditional vPC.
  Enabling IP ARP synchronization of ARP entries between vPC peers improves convergence for North-South and East-West Layer 3 traffic when one of the vPC+ peers is brought back up.
  In a vPC environment, the secondary vPC switch brings down its SVIs by default when the peer-link goes down. This behavior is fine in a CE environment because the vPC legs are also brought down on the secondary vPC switch. In a vPC+ environment, however, the downlinks to the access-edge switches are FabricPath core ports; in the absence of the vPC+ peer-link, the SVIs can still communicate through the FabricPath core ports. The vpc dual-active exclude interface-vlan command configures a VLAN list so that the SVIs can stay up on the secondary vPC switch even if the vPC+ peer-link is down.
  Note: since FabricPath does not rely on Spanning Tree Protocol, and the vPC+ peer-link is a FabricPath core port, the peer-switch command is not needed under the vpc domain [x] configuration.

  18. Enable vPC+ :: Active/Active HSRP @ SPINE (Full Configuration) FabricPath Configuration
  Note: in a FabricPath vPC+ environment both HSRP peers forward actively; there is no need to configure preemption, different priorities, or fast hello timers.

  Step 1 :: enable vPC+
  Step 2 :: set the emulated switch-id
  Step 3 :: enable dual-active exclude for vPC+ SVIs

  7K-1:
  feature interface-vlan
  feature hsrp
  feature lacp
  feature vpc
  vlan 1 - 200
    mode fabricpath
  spanning-tree pseudo-information
    vlan 1 - 200 root priority 0
  vpc domain 1
    role priority 1
    system-priority 4096
    peer-keepalive destination [….] source [….]
    peer-gateway
    auto-recovery reload-delay [delay]
    delay restore 30
    ip arp synchronize
    fabricpath switch-id 1000
    dual-active exclude interface-vlan 20
  interface po2
    switchport mode fabricpath
    vpc peer-link
  interface e3/1, e4/1
    channel-group 2 mode active
  interface vlan 20
    ip address 20.20.20.5/24
    no ip redirects
    hsrp 20
      ip 20.20.20.254

  7K-2: identical, except role priority 2 and ip address 20.20.20.6/24 on interface vlan 20.

  19. Enable vPC+ :: Dual Attachment @ LEAF FabricPath Configuration

  Step 1 :: enable vPC+
  Step 2 :: set the emulated switch-id
  Step 3 :: add devices redundantly with vPC+

  5K-1:
  feature lacp
  feature vpc
  vlan 1 - 200
    mode fabricpath
  spanning-tree pseudo-information
    vlan 1 - 200 root priority 0
  vpc domain 10
    role priority 1
    peer-keepalive destination [….] source [….]
    ….
    ip arp synchronize
    fabricpath switch-id 1001
  interface po2
    switchport mode fabricpath
    vpc peer-link
  interface e1/1, e1/2
    channel-group 2 mode active
  interface port-channel 20
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 20 - 40
    vpc 20
  interface e1/5
    channel-group 20 force mode active

  5K-2: identical, except role priority 2.

  VLANs carried on vPC+ member ports must be FabricPath mode VLANs.

  20. FabricPath Authentication FabricPath Configuration

  Step 1 :: configure the key chain
  Step 2 :: configure global FabricPath authentication
  Step 3 :: configure FabricPath core port authentication

  7K-1 / 7K-2:
  key chain FP-KEYS
    key 0
      key-string Cisc0!
      accept-lifetime 00:00:00 Sep 1 2012 infinite
      send-lifetime 00:00:00 Sep 1 2012 infinite
  fabricpath domain default
    authentication-type md5
    authentication key-chain FP-KEYS
  interface port-channel2
    switchport mode fabricpath
    fabricpath isis authentication-type md5
    fabricpath isis authentication key-chain FP-KEYS

  FabricPath provides two levels of authentication, and the key chain is used in both forms:
  • Global-level authentication :: authenticates and controls the FP LSPs and PSNPs
  • Interface-level authentication :: authenticates the HELLOs, i.e., the FP IS-IS adjacency
  Supported combinations are shown in the slide table.
  You can configure the accept lifetime and send lifetime for a key. By default, accept and send lifetimes for a key are infinite, which means the key is always valid.
  accept-lifetime [local] start-time {duration duration-value | infinite | end-time}
  send-lifetime [local] start-time {duration duration-value | infinite | end-time}

  21. NX-OS 6.2 Anycast HSRP FabricPath Configuration (the step-by-step configuration for this topic exists only as an animated topology diagram in the original slide; a hedged sketch follows)
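  The animation itself is not recoverable from the transcript, so the following is only a minimal Anycast HSRP sketch, assuming the NX-OS 6.2 anycast-bundle syntax (hsrp anycast, switch-id, vlan, priority, no shutdown) on each Nexus 7000 SPINE; the bundle ID, anycast switch-ID 1100, VLAN range, and addressing are illustrative, not taken from the slide:
  feature hsrp
  hsrp anycast 1 ipv4                ! anycast bundle (illustrative ID)
    switch-id 1100                   ! anycast switch-ID shared by all participating SPINEs
    vlan 10-20                       ! VLANs served by this anycast gateway
    priority 250
    no shutdown
  interface vlan 10
    no ip redirects
    ip address 10.1.10.2/24          ! unique per SPINE
    hsrp 10
      ip 10.1.10.1                   ! shared virtual gateway address
  With such a bundle, up to four SPINEs can forward for the same gateway address (matching the n-way active HSRP called out on slide 4), without vPC+, a peer-link, or a peer-keepalive link.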

  22. NX-OS 6.2 Overload Bit FabricPath Configuration (the step-by-step configuration for this topic exists only as an animated topology diagram in the original slide; a hedged sketch follows)
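  As a placeholder for the missing animation, a minimal sketch of the Layer 2 IS-IS overload bit, assuming it is set under the FabricPath domain in the same way as for routed IS-IS; the 300-second on-startup value is illustrative:
  fabricpath domain default
    set-overload-bit on-startup 300   ! advertise overload for 300 s after reload so traffic avoids the node until it is ready
  The intent, per the convergence notes on slide 16, is to keep a rebooting switch out of the forwarding paths until its control plane has converged, reducing the need for aggressive SPF/LSP timers.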

  23. NX-OS 6.2 Multiple Topologies & Multi-Destination Trees (MDT) FabricPath Configuration (the step-by-step configuration for this topic exists only as an animated topology diagram in the original slide; a hedged sketch follows)
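  The slide's steps are not recoverable, so this is only a minimal multi-topology sketch following the workflow described on slide 7 (define FP VLANs :: map VLANs to a topology :: map the topology to FP core ports), assuming the fabricpath topology, member vlan, and fabricpath topology-member commands; the topology ID, VLAN range, and interface are illustrative:
  fabricpath topology 2               ! POD-local topology (illustrative ID)
    member vlan 100-199               ! POD-local VLANs mapped to this topology
  interface e5/1
    switchport mode fabricpath
    fabricpath topology-member 2      ! this POD-local core port also carries topology 2
  The default topology continues to include all FabricPath core ports, so DC-wide VLANs remain reachable everywhere.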

  24. NX-OS 6.2 FabricPath Static Routes :: Traffic Engineering Use Cases FabricPath Configuration (the step-by-step configuration for this topic exists only as an animated topology diagram in the original slide)

  25. FabricPath is Easy & Simple !! FabricPath Configuration

  vPC Configuration (switch 1):
  interface e1/5
    vrf member vpc-keepalive
    ip address 192.168.1.1/24
  vpc domain 1
    peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf vpc-keepalive
  interface port-channel 1000
    switchport mode trunk
    vpc peer-link
  interface e1/1-2
    switchport mode trunk
    channel-group 1000 mode active
  interface e1/3
    switchport mode trunk
    channel-group 1 mode active
  interface port-channel 1
    vpc 1

  vPC Configuration (switch 2): identical, except interfaces e2/x, ip address 192.168.1.2/24, and peer-keepalive destination 192.168.1.1 source 192.168.1.2.

  Attached device (vPC case):
  interface e3/1-2
    switchport mode trunk
    channel-group 1 mode passive

  FabricPath Configuration (switch 1):
  interface e1/1-3
    switchport mode fabricpath

  FabricPath Configuration (switch 2):
  interface e2/1-3
    switchport mode fabricpath

  Attached device (FabricPath case):
  interface e3/1-2
    switchport mode fabricpath

  26. vPC to FabricPath Migration
  Common Design Migration Starting Point :: 7k Aggregation, 5k/2k Access Pods, Dual-Layer vPC, mix of F1 / M1 line cards
  After Migration Completion :: 7k in the SPINE role, 5k in the LEAF role, vPC links converted to FabricPath core ports, Peer-Link also an FP core port = vPC+ (only F1/F2 support FabricPath)
  Additional Reading Here :: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-709336.html

  27. Strong Recommendations and Key Notes FabricPath Configuration • FabricPath VLANs must be configured on all switches in the FP domain • It is recommended to configure the switch-ID manually on all FabricPath switches • For Active-Active HSRP capability, it is recommended to configure vPC+ on the Aggregation-Edge switches even if there are no vPC legs. Note: this is subject to vPC rules, so no dynamic routing over vPC to firewalls, the Core layer, or the WAN edge • Implement a Layer 3 routing backup path (see the sketch after this list) • Separate L3 port channel; point-to-point links • Separate L2 port channel; use a dedicated VLAN in Classical Ethernet (CE) mode as a transit VLAN inside this L2 trunk • Disable IP redirects on SVIs and configure passive interfaces to avoid any routing adjacency over SVIs • The ARP sync feature with vPC+ is recommended for improved traffic convergence during Aggregation-Edge failure and restoration • It is recommended to configure the highest and second-highest MDT root priority on the Aggregation-Edge switches • You have the option of choosing single links or port-channels between the Aggregation-Edge and Access-Edge for ECMP. If port channels are used, configuring the IS-IS metric is preferred: with path costing, a member-link failure is transparent to the IS-IS protocol, so traffic continues to use the same path • Raise the FP IS-IS metric for the vPC+ Peer-Link to prefer other FP core links • interface po2 • fabricpath isis metric 200
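  A minimal sketch of the Layer 3 routing backup path recommended above, assuming OSPF and a dedicated routed port-channel between the two Aggregation-Edge switches; the interface numbers, addressing, and OSPF process/area are illustrative, not taken from the slide:
  feature ospf
  router ospf 1
  interface port-channel 30           ! dedicated point-to-point L3 backup link between the AGG pair
    no switchport
    ip address 10.99.99.1/30          ! use 10.99.99.2/30 on the peer
    ip ospf network point-to-point
    ip router ospf 1 area 0.0.0.0
  interface vlan 20                   ! server SVI: advertise the subnet but form no adjacency over it
    no ip redirects
    ip router ospf 1 area 0.0.0.0
    ip ospf passive-interface
  The passive SVI plus disabled redirects matches the bullet above; the routed port-channel gives the AGG pair a backup L3 path that does not depend on the vPC+ peer-link.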

  28. Strong Recommendations and Key Notes FabricPath Configuration • It is recommended to have the lowest path cost on the links between the AGG devices so the multicast hello packets always take the peer-link, which is the direct link between the AGG devices • It is recommended to tune the Layer 2 IS-IS SPF and LSP-generation timers to achieve better convergence during failure and restoration scenarios. These timers should be tuned to 50 msec, with a 50 msec initial wait and second wait. This is a requirement until overload-bit support is available in Layer 2 IS-IS • Use the default reference bandwidth (the default is 400 Gbps) • fabricpath domain default • reference-bandwidth ? • IS-IS metric cost (1Gb = cost 400, 10Gb = cost 40, 20Gb = cost 20) • The IS-IS link metric for a port-channel depends on the NX-OS version • Up to NX-OS 6.0: the IS-IS metric for a port-channel is calculated from the number of configured member ports, meaning you may need to use the LACP min-links feature to tear down the port-channel if the number of active member ports goes below a specific limit • Since NX-OS 6.1: the IS-IS metric for a port-channel is calculated from the number of active ports • The dual-active exclude VLAN configuration is recommended so that SVIs can remain active on the secondary vPC+ peer in the event of a peer-link failure. This also helps to stay with default HSRP timers, thereby reducing the control-plane load associated with aggressive HSRP timers • Do not use the dual-active exclude command for VLANs if you have vPC-attached devices, for example at the access (leaf)

  29. Strong Recommendations and Key Notes FabricPath Configuration • In typical vPC deployments it is not necessary to tune the HSRP hello timers from the defaults (3/10 s). In a FabricPath deployment it is recommended to use aggressive timers (1/3) to minimize flooding of South-to-North traffic from the edge switches. This allows the active HSRP virtual MAC to be learned faster at all edge switches • hsrp 1 • preempt delay minimum 180 • timers 1 3 • ip …. • In CE-FabricPath hybrid networks, it is recommended to configure the lowest Spanning-Tree root priority on all FabricPath edge switches • The MAC timer should be consistent on all devices in the Layer 2 topology. The MAC and ARP aging timers can be left at their defaults, 1800 sec and 1500 sec respectively (see the sketch after this list) • The M1/F1 mixed VDC currently supports up to 16K MAC/ARP entries. This limitation will be lifted with the Layer 2 proxy-learning feature in an upcoming NX-OS release • With M1, M1-XL, M2 & F2E in a mixed VDC topology (i.e., when an F2E is placed in a chassis with M-series cards), the F2E operates in Layer 2 mode only and leverages the M-series for Layer 3 (proxy L3 forwarding); this enables 128K MAC/ARP scale • If an ASA cluster is attached to the Nexus 7000 series Aggregation-Edge switches, source-dest-ip or src-dst ip-l4port is the recommended load-balance algorithm if the ASA cluster is in single-context mode or if the VLANs are few in multi-context mode. This prevents traffic polarization on links toward ASA cluster members
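  The MAC and ARP defaults above normally need no configuration; if they ever have to be set explicitly (or restored), a minimal sketch assuming the global mac address-table aging-time command and the per-SVI ip arp timeout command, with the default values shown:
  mac address-table aging-time 1800    ! L2 MAC aging, seconds (default)
  interface vlan 20
    ip arp timeout 1500                ! ARP aging, seconds (default)
  Keeping the MAC timer consistent on every switch in the L2 topology avoids one device aging out entries that its neighbors still hold.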

  30. Strong Recommendations and Key Notes FabricPath Configuration • It is better to use port-channels instead of individual links, for the following two reasons • Decreases the number of direct IS-IS adjacencies (1 for the whole port-channel instead of X IS-IS adjacencies if X individual links are used between the 2 switches) • Allows the whole port-channel capacity to be used for multidestination tree #1 or #2 (if multiple parallel individual links exist between 2 switches, only 1 link will be selected for tree #1 and potentially another link for tree #2) • ECMP vs. Port-Channel • You can use ECMP, port-channels, or both simultaneously • Port-channels have one main advantage over ECMP – they are treated as a single logical link in FabricPath IS-IS. An individual link failure is invisible to upper-layer protocols. They also allow more bandwidth for branches of the multidestination trees • With a 4-member port-channel, the whole interface becomes a single branch of the tree with 40G of bandwidth • With 4 parallel ECMP paths, only one of the 4 interfaces becomes part of the tree • ECMP with port-channel :: 2 levels of load-balancing decision • First level :: FP core link selection (based on L3/L4 fields by default) • Second level :: port-channel member selection (based on src-dst ip by default)

  31. Strong Recommendations and Key Notes FabricPath Configuration • Do not use UDLD with FabricPath • UDLD (normal or aggressive) does not bring any benefit on single physical links or port-channels with FP enabled (for port-channels, activate LACP instead of relying on UDLD to detect member-port issues) • Physical link-level protection and the bidirectional IS-IS hellos should take care of all (or nearly all) potential link-level issues • HSRP preemption does not add any value but may hurt at large VLAN scale, when you need to maintain an HSRP adjacency for each of the VLANs. The control plane would just burn cycles with no positive impact on the data path. Consider not using HSRP preemption in the FabricPath design.

  32. Building FabricPath Routing Tables :: Control Plane Operation FabricPath Configuration
  (The original slide shows the resulting FabricPath routing tables on S10, S11, S100, and S140.)
  Step 1 :: Enable FabricPath on the desired interfaces
  Step 2 :: L2 IS-IS hellos are sent out on all FabricPath ports
  Step 3 :: Establish L2 IS-IS adjacency
  Step 4 :: Send L2 IS-IS updates to exchange local link-states
  Step 5 :: All FabricPath switches calculate unicast paths to all other switches in the L2 fabric and create the ‘FabricPath Routing Table’ based on the results
  • Forwarding path selection is based on the destination Switch-ID
  • The switch table basically contains (Switch-ID, Output Interface)
  • Up to 16 ‘Next-Hop’ interfaces (ECMP) per Switch-ID
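  A quick way to verify the control-plane state built in these steps is with the FabricPath show commands; a minimal verification sketch (output omitted):
  show fabricpath isis adjacency       ! L2 IS-IS neighbors on the FP core ports
  show fabricpath switch-id            ! local and learned switch-IDs in the fabric
  show fabricpath route                ! the FabricPath routing table (switch-ID to next-hop interface)
  show fabricpath isis interface       ! per-core-port IS-IS status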

  33. FabricPath Forwarding :: Broadcast (ARP Request) FabricPath Configuration
  (Frame on the FabricPath core: outer header DSID = FF (flood), Ftag = 1, SSID = 100; inner frame DMAC = FF, SMAC = A. On the CE side the frame carries only DMAC = FF, SMAC = A. The roots for Tree 1 and Tree 2 are at the SPINE.)
  FTAG/tree 1 handles unknown unicast, broadcast, and some multicast; FTAG/tree 2 handles multicast only.
  Step 1 :: Host A communicates with Host B for the first time – it sends an ARP request for B
  Step 2 :: S100 adds A into its MAC table as the result of new source learning on a CE port [MACs of directly connected devices are learned unconditionally]
  Step 3 :: Since the destination MAC is all ‘F’, S100 floods this frame out all CE ports
  Step 4 :: Meanwhile, S100 selects ‘Tree 1’, marks this in the FabricPath header (encapsulation), and floods the frame out all FabricPath ports (L1, L2) that are part of Tree 1
  Step 5 :: S10 floods the frame further, out (L3, L5), based on its local information about Tree 1
  Step 6 :: S101 and S140 remove the FabricPath header (decapsulation) and flood the frame out all local CE ports. They do not learn the remote MAC, since the DMAC is unknown / it is a flooded frame

  34. FabricPath Forwarding :: Unknown Unicast (ARP Reply) FabricPath Configuration
  (Frame on the FabricPath core: outer header DSID = MC1 (flood), Ftag = 1, SSID = 140; inner frame DMAC = A, SMAC = B. On the CE side the frame carries only DMAC = A, SMAC = B.)
  FTAG/tree 1 handles unknown unicast, broadcast, and some multicast; FTAG/tree 2 handles multicast only.
  Step 1 :: Host B sends the ARP reply back to Host A
  Step 2 :: S140 adds B into its MAC table from source learning on a CE port
  Step 3 :: Since MAC A is unknown at S140, it floods the frame out all CE ports
  Step 4 :: Meanwhile, S140 selects Tree 1, marks this in the FabricPath header (encapsulation), and floods the frame out all FabricPath ports (L5) that are part of Tree 1
  Step 5 :: S10 floods the frame further (L1, L3) along Tree 1
  Step 6 :: S100 floods the frame further (L2) along Tree 1. Also, upon removing the FabricPath header, S100 finds that Host A was learned locally; it therefore adds B to its MAC table as remote, associated with S140 [if the DMAC is known, learn the remote MAC]

  35. FabricPath Forwarding :: Known Unicast (Data) FabricPath Configuration
  (Frame on the FabricPath core: outer header DSID = 140, Ftag = 1, SSID = 100; inner frame DMAC = B, SMAC = A. On the CE side the frame carries only DMAC = B, SMAC = A.)
  The destination Switch-ID is used to make routing decisions through the FabricPath core; no MAC learning or lookups are required inside the FP core.
  Step 1 :: Host A starts sending traffic to Host B after ARP resolution
  Step 2 :: S100 finds that B was learned as remote, associated with S140, and encapsulates all subsequent frames to B with S140 as the destination in the FP header
  Step 3 :: The S100 routing table indicates multiple paths to S140 (L1, L2); S100 runs the ECMP hash, and this time selects L2 as the next hop
  Step 4 :: The routing-table lookup at S11 indicates L6 as the next hop for S140
  Step 5 :: S140 finds itself as the destination in the FabricPath header, and B is also known locally; it decapsulates the FP header and adds A as a remote MAC associated with S100
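  The conversational learning in these three walkthroughs can be observed on a LEAF with the MAC table; a minimal sketch (VLAN 20 is illustrative, and the exact output columns vary by platform): remote entries are associated with the owning switch-ID rather than a local port.
  show mac address-table dynamic vlan 20   ! on S100: MAC A local on its CE port, MAC B remote via switch-ID 140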

  36. FabricPath Loop Mitigation FabricPath Configuration
  (The slide animation shows the TTL decrementing hop by hop: TTL=3, TTL=2, TTL=1, TTL=0.)
  When the frame is originally encapsulated, the system sets the TTL to 32; at each hop through the FabricPath network, the switch decrements the TTL by 1. If the TTL reaches 0, the frame is discarded. This prevents any loop that may form in the network from persisting.
  Loop prevention and mitigation are available in the data plane, helping ensure safe forwarding unmatched by any transparent-bridging technology. Cisco FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a reverse-path forwarding (RPF) check is applied for multicast based on the ‘Tree’ information.

  37. Mixed Chassis Mode :: Supported Topologies FabricPath Configuration (Interop F2 & F2E VDC) • With NX-OS 6.1 and prior releases :: • Always use identical line cards on either side of the vPC+ Peer-Link, vPC member ports, and FabricPath core member ports (legs to the downstream device) • The F1-series line cards can mix with M-series line cards • The F2-series line cards have to be in their own VDC, VDC type [F2], meaning they cannot mix with F1 or M-series cards in the same VDC

  38. Mixed Chassis Mode :: Supported Topologies FabricPath Configuration • Starting in NX-OS 6.2 and later releases :: • The VDC type [F2, F2E, F2 F2E] must match between the 2 vPC+ peer devices when F2 & F2E are used in the same VDC; meaning it is OK to have F2 on vPC peer device 1 and F2E on vPC peer device 2 for the vPC Peer-Link, vPC member ports, or FabricPath core member ports • Note: in an F2 & F2E type of design, only features related to F2 apply (lowest common denominator) • Always use identical line cards on either side of the vPC Peer-Link, vPC member ports, and FabricPath core member ports when M1, M1-XL, M2 & F2E are in the same VDC [M-F2E] or system • When an F2E is placed in a chassis with M-series cards it operates in Layer 2 mode only, leveraging the M-series for Layer 3 (proxy L3 forwarding); this provides 128K MAC scale

  39. FabricPath vs. TRILL

  40. Additional Resources & Further Reading FabricPath Configuration External (public) Cisco FabricPath Best Practices http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c07-728188.pdf Scale Data Centers with Cisco FabricPath http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-605488.html Cisco FabricPath for Cisco Nexus 7000 Series Switches http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-687554.html Nexus 7000/6000/5000 Configuration Guides http://www.cisco.com/en/US/products/ps9402/products_installation_and_configuration_guides_list.html http://www.cisco.com/en/US/products/ps9670/products_installation_and_configuration_guides_list.html http://www.cisco.com/en/US/partner/products/ps12806/products_installation_and_configuration_guides_list.html FabricPath Scaling limits http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NX-OS_Verified_Scalability_Guide.html#reference_3AD0536C32FF4B499A0936409729951D http://www.cisco.com/en/US/docs/switches/datacenter/nexus5500/sw/configuration_limits/b_N5500_Config_Limits_602N11_chapter_01.html Great External Resources

  41. Additional Resources & Further Reading FabricPath Configuration Quick Start Guide :: Virtual Port Channel (vPC) https://communities.cisco.com/docs/DOC-35728
