Workshop della Commissione Calcolo e Reti dell'INFN
Data Center Networking
Genova, 28 May 2013
Fabio Bellini, Network Sales Engineer EMEA WER, +39 335 7781550, Fabio_Bellini@dell.com
What is Active Fabric?
Active Fabric is a family of high-performance, cost-effective interconnect products purpose-built for stitching together server, storage and software elements in virtualized and cloud data centers.
• Multiple chassis
• Fully redundant meshed fabric
• L2/L3 multipath, active-active
• Spanning-tree-free architecture
• Scaling out next-generation DC infrastructure
• Networking within and between data centers
Networking that offers choice and flexibility (diagram: distributed servers vs. mainframe; distributed cores vs. chassis core)
Traditional Network Design: Introduction
• Layer 3 core, Layer 2 or Layer 3 aggregation, Layer 2 access
• VRRP removes half of the uplink bandwidth
• Spanning Tree disables half of the uplinks
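The halving effect is easy to quantify. Below is a minimal sketch, with illustrative figures (an access switch with two 40G uplinks) that are an assumption rather than numbers from the slide:

```python
# Usable uplink bandwidth per access switch: a classic STP/VRRP design blocks
# one of two uplinks, while an active-active fabric (VLT or ECMP) forwards on both.
uplinks, uplink_gbps = 2, 40                 # illustrative access-switch uplinks

stp_usable = (uplinks // 2) * uplink_gbps    # Spanning Tree blocks half the uplinks
active_usable = uplinks * uplink_gbps        # all uplinks forwarding
print(f"STP/VRRP design: {stp_usable}G usable; active-active design: {active_usable}G")
```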
Traditional Network Design vs. Active Fabric
• Traditional three-tier design: Layer 3 core, Layer 2 or 3 aggregation, Layer 2 access
• Active Fabric leaf/spine: 2 spine switches (L2/L3) and 16 leaf switches (L2), giving 768 server ports
Scale-out Layer 3 Leaf/Spine Fabric (L3 spine layer, L2 leaf layer)
• Fabric sizes shown: 2, 8 or 16 spines with 16, 64 or 128 leaves, scaling from 768 through 1980 and 3072 up to 6144 server ports
• Dell Fabric Manager: design templates, automated documentation, automated configuration & deployment, deployment validation, expansion & changes, fabric design documentation, CLI configuration
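The server-port figures above follow from simple leaf/spine arithmetic. A sketch, assuming an S4810-style leaf with 48 x 10GbE server ports and 4 x 40GbE uplinks (the per-leaf port counts are our assumption):

```python
# Leaf/spine sizing math for the fabrics quoted on the slide.
def fabric_capacity(leaves, server_ports_per_leaf=48,
                    uplinks_per_leaf=4, uplink_gbps=40, downlink_gbps=10):
    server_ports = leaves * server_ports_per_leaf
    oversub = (server_ports_per_leaf * downlink_gbps) / (uplinks_per_leaf * uplink_gbps)
    return server_ports, oversub

for spines, leaves in [(2, 16), (8, 64), (16, 128)]:
    ports, ratio = fabric_capacity(leaves)
    print(f"{spines} spines / {leaves} leaves -> {ports} server ports, "
          f"{ratio:.1f}:1 oversubscription per leaf")
```

With these assumptions the three configurations land on 768, 3072 and 6144 server ports, matching the slide.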
Active Fabric Solutions: Layer 3 Network Design - TODAY (L3 spine layer, L2 leaf layer)
• Implement a Layer 3 protocol: OSPF, IS-IS, or BGP (plus an IGP)
• No Spanning Tree needed
• Full bandwidth usage via Equal-Cost Multipath (ECMP)
• Fast link-layer failover via Bidirectional Forwarding Detection (BFD)
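Real switches compute the ECMP hash in hardware; the sketch below only illustrates why hashing on the flow 5-tuple keeps every spine uplink active while packets of a given flow stay on one path (the addresses and hash choice are illustrative):

```python
# Per-flow ECMP next-hop selection over equal-cost spine uplinks.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.md5(flow).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]   # same flow -> same uplink

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(ecmp_next_hop("10.0.1.10", "10.0.2.20", 49152, 443, "tcp", spines))
```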
Active Fabric Solutions: Layer 3 Network Design - FUTURE (L3 spine layer, NVO gateway at the leaf layer, servers below)
• NVO: Network Virtualization Overlay
• VXLAN for VMware
• NVGRE for Microsoft Hyper-V
Network Virtualization Overlay: virtual Layer 2 segments (e.g. Segment ID 10 and Segment ID 20) span the leaf/spine fabric; VMs attach through the vSwitch on each host, and a tenant subnet becomes a software VLAN.
Active Fabric Solutions: Layer 3 Network Design - FUTURE (L3 spine layer, Layer 2 overlay, NVO gateway at the leaf layer, servers below)
• NVO: Network Virtualization Overlay
• Uses the existing L3 Active Fabric technology we have TODAY
• Builds a virtual L2 infrastructure on top of it
• Hypervisors and the gateway operate together
• Virtual servers believe they are on an L2 network
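As a rough illustration of what the overlay does, the sketch below wraps a tenant L2 frame in a VXLAN-style header carrying a 24-bit segment ID; the outer UDP/IP headers between the two hypervisors' VTEPs are left to the host stack. This is conceptual only, not a description of a specific Dell or VMware implementation:

```python
# Conceptual VXLAN-style encapsulation: the tenant's Ethernet frame is prefixed
# with an 8-byte header (flags, 24-bit VNI) and carried over UDP/IP across the
# L3 fabric between the source and destination VTEPs.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    header = struct.pack("!BBHI", 0x08, 0, 0, vni << 8)  # I flag set, VNI in bits 8-31
    return header + inner_frame

payload = vxlan_encapsulate(b"\xff" * 64, vni=10)   # a frame on Segment ID 10
print(len(payload), "bytes handed to UDP port", VXLAN_UDP_PORT)
```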
Active Fabric Solutions: Layer 2 Network Design - TODAY (VLT at the spine layer, LAG/LACP to the leaf layer)
• VLT = Virtual Link Trunking: multi-chassis LAG
• Dual control plane, L2/L3 active-active
• Multipath via standard 802.3ad LAG (LACP)
• Spanning-tree free: fast convergence relies on LACP
• Node redundancy: no SPOF from access to core
• Scale-out via mVLT (multiple VLT), scale based on product selection
• New products and higher port densities improve scalability
Virtual Link Trunking (VLT): the key to our Layer 2 Active Fabric
• VLTi (VLT interconnect) links each spine pair and each leaf pair (L2/L3 at the spine, L2 at the leaf)
• Downstream devices attach with a standard LAG (LACP): rack servers, access switches, blade servers, iSCSI storage
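A toy model of why convergence can rely on LACP: when one VLT peer or uplink drops out of the LAG, flows are simply rehashed across the surviving member links. Names and the hash are illustrative:

```python
# Per-flow hashing over LAG members, before and after a member failure.
def lag_member(flow_id, members):
    return members[hash(flow_id) % len(members)]

members = ["uplink-to-spine-1", "uplink-to-spine-2"]
print(lag_member(42, members))        # normal operation: both links carry traffic
members.remove("uplink-to-spine-1")   # spine-1 (or its link) fails
print(lag_member(42, members))        # the flow moves to the surviving link
```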
Active Fabric Solutions: Layer 2 Network Design - Converged (VLT spine layer, leaf layer, SAN fabric over FC or FCoE, iSCSI)
• Converged switches at the leaf layer: Ethernet/FCoE/FC
• Unified storage capabilities in the design: iSCSI/FC/FCoE
• If you need dense 40Gb and DCB for iSCSI at the spine…
Active Fabric Solutions: Layer 2/3 Network Design - Scale-out Server Farm (VLT spine layer, LAG/LACP up to 240G to the leaf layer)
• Scale-out server farm with blade switches: Ethernet/FCoE/FC
• Scale out computational density without compromising
• Half the infrastructure costs (chassis & switches)
• Reduced cabling to ToR switches
• Lower power per node
Active Fabric Solutions: Layer 2/3 Network Design - Scale-out Server Farm
• VLT spine layer; LAG/LACP up to 240G from each leaf VLT domain
• Multiple VLT domains at the leaf layer, each server attached with a 2 x 10G LAG (LACP)
• Theoretical oversubscription 1.3:1; operational 1:1
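The quoted 1.3:1 theoretical ratio is consistent with a blade switch exposing 32 x 10GbE internal server ports behind up to 240G of uplink capacity; the 32-port figure is our assumption, only the 240G comes from the slide:

```python
# Where the "1,3:1" theoretical oversubscription plausibly comes from.
server_ports, server_speed_g = 32, 10        # assumed MXL-style internal ports
uplink_capacity_g = 240                      # "up to 240G" per the slide

ratio = (server_ports * server_speed_g) / uplink_capacity_g
print(f"theoretical oversubscription: {ratio:.2f}:1")   # ~1.33:1
# Operationally not every port runs at line rate at once, hence the 1:1 figure.
```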
Active Fabric Solutions: Layer 2/3 Network Design - Scale-out Server Farm
• VLTi between spine switches and between leaf switches; LAG/LACP up to 240G from each blade enclosure
• Building block: 32 blades, 64 CPUs, 512 cores
• Scales to thousands of servers/VMs
Active Fabric ingredients: maximum functionality, maximum programmability
• Layer 3 multipath (OSPF/BGP/IS-IS): Z9000, S4810, S4820T, MXL/IOA, S5000
• Converged LAN/SAN (Ethernet/iSCSI/FCoE/FC): Z9000, S4810, S4820T, MXL, S5000
• Layer 2 multipath (VLT/mVLT): Z9000, S4810, S4820T, MXL/IOA (9.2), S5000
• Software programmability (OpenFlow, REST/XML, Perl, Python): S4810, S4820T, MXL/IOA, S5000
Dell Networking S4810 switch: SFP+ 10/40G top-of-rack switch
Proven 10/40G top-of-rack performance; better together with Dell servers & storage
• Low-latency 10/40 GbE: 64 x 10GbE, or 48 x 10GbE + 4 x 40GbE
• Layer 2 multipathing (VLT) support; stacking (up to 6 nodes)
• DCB support; EqualLogic and Compellent certified
• Built-in automation support (bare-metal provisioning, scripting, programmatic management)
• Built-in virtualization support (VMware, Citrix)
Dell Networking S4820T switch: 1/10/40G 10GBase-T top-of-rack switch
Accelerate the 1G to 10G migration; better together with Dell servers & storage
• Fully featured FTOS-powered top-of-rack switch
• 48 x 1/10G 10GBase-T ports
• 4 x 40G fabric uplinks (or 16 x 10G)
• Built-in virtualization support (VMware, Citrix)
• DCB support for SAN/LAN convergence (iSCSI, FCoE)
• Integrated automation, scripting and programmatic management
NEW! Dell Networking S5000 converged LAN/SAN switch: first-of-its-kind modular 1RU top-of-rack and fabric switch
1.5x higher port density per RU than the Cisco Nexus 5548; 3x higher than the Brocade VDX 6720-24
• Pay-as-you-grow, customizable modularity powered by FTOS
• 10GbE and 40GbE; 2, 4, 8G Fibre Channel
• Future-proof, multi-stage design for next-gen I/O without rip & replace
• Unified storage networking with complete support for iSCSI, RoCE, and FCoE with FC fabric services
• Reduced management complexity; integrated automation, scripting and software programmability
• Easy integration and strong interoperability with major adapter, switch and storage solutions
Dell Networking Z9000: high-density 10/40G fabric switch for the distributed core architecture
Scaling the data center core up, down and out; Internet Telephony product of the year award; Tolly report at www.force10networks.com/tollyreport
• 2.5 Tbps in a 2RU footprint
• High density: 32 line-rate 40GbE ports or 128 line-rate 10GbE ports
• Low power consumption: 800 W max (6.25 W per 10GbE), 600 W typical (4.68 W per 10GbE)
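The per-port power figures follow directly from the 128-port 10GbE configuration:

```python
# Per-port power check for the Z9000 numbers quoted above.
ports_10g = 128
for label, watts in (("max", 800), ("typical", 600)):
    print(f"{label}: {watts / ports_10g:.2f} W per 10GbE port")
# -> max 6.25 W; typical 4.69 W (the slide truncates to 4.68)
```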
Dell Networking MXL blade switch: high-performance, full-featured 1/10/40GbE Layer 2 & Layer 3 switch blade
• Robust and scalable I/O performance, low latency and high bandwidth
• Support for native 40GbE ports
• Open standards-based, feature-rich enterprise FTOS
• Converged Ethernet and Fibre Channel support
Flex I/O modules:
• 4-port SFP+ module: 1GbE & 10GbE ports, 10GbE optical & DAC copper twinax
• 4-port 10GBASE-T module (2x more than the M8024-k): 1GbE & 10GbE ports
• 2-port QSFP+ module: 2 x 40GbE ports, 10GbE support using breakout cables
40GbE QSFP+ transceivers & cables: QSFP+ transceivers, QSFP+ to 4 x SFP+ direct-attach breakout cable, QSFP+ direct-attach cable
Active Fabric Design Options
• Fabrics for any size data center: build a spine/leaf Active Fabric that fits your needs
Active Fabric key takeaways
• Interfaces: supports various interface types; copper 100/1000/10000Base-T, fiber 1G, 10G, 40G
• Data: supports different traffic types on the same fabric: Ethernet, FCoE, iSCSI
• Growth: an Active Fabric can grow from tens to hundreds of thousands of end devices using the same equipment models
Ethernet 40/100G: IEEE 802.3ba-2010
IEEE 802.3 approved objectives for 40 and 100 Gbps:
• At least 7 m over a copper cable assembly
• At least 100 m over OM3 multimode fiber
• At least 150 m over OM4 multimode fiber
• At least 2 km over single-mode fiber (40G only)
• At least 10 km over single-mode fiber
• At least 40 km over single-mode fiber (100G only)
Key project dates:
• Study group formed in July 2006
• Project authorization in December 2007
• Task force formed in January 2008
• 40/100G standard completed in July 2010
40G eSR4 QSFP+
• QSFP+ eSR4 modules meet the link-distance specifications for 40G Ethernet applications
• 40G eSR4 parallel optics, extended reach over OM3/OM4: 300 m / 400 m
SFP/SFP+/QSFP+ modules and cables (DAC = Direct Attach Cable, AOC = Active Optical Cable)
• SFP/SFP+ and QSFP+ modules
• Passive twinax SFP+ DAC (7 m)
• 40GE QSFP+ MTP AOC, active fiber (50 m)
• 40GE QSFP+ DAC, passive copper (5 m)
• 40GE QSFP+ to 4 x SFP+ passive copper breakout (5 m)
• MTP to 4 x LC optical breakout cable: 5 m, plus 100 m over OM3 or 150 m over OM4
Small Active Fabric use case (VLT spine layer, leaf layer)
• Customer has about 300 servers
• Needs high availability, as the servers are used 24/7
• Needs ISSU to support an SLA of 99.999% uptime
Scale:
• 48 servers per rack, with redundant connections from the servers
• 6 racks today, expanding to 20
• 2 x 20G uplink connections
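A quick sizing check of this use case, assuming dual-homed servers on a VLT pair of 48-port ToR leaves in every rack (the leaf model and port count are assumptions):

```python
# Rack-by-rack sizing for the small Active Fabric use case.
servers_per_rack, racks_today, racks_future = 48, 6, 20

for racks in (racks_today, racks_future):
    servers = servers_per_rack * racks
    leaves = racks * 2                   # one VLT pair of ToR switches per rack
    print(f"{racks} racks -> {servers} servers, {leaves} leaf switches")
# 6 racks give roughly the 300 servers in place today; 20 racks give 960.
```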
Medium Active Fabric use case (VLT spine layer, leaf layer, NFS storage system)
• An enterprise customer with HPC pods
• Requires large numbers of servers and cores
• High-availability uptime for the servers
• Large upstream pipe for data transfers
• Shrink the number of cables in the data center
Scale:
• 10G to the servers; active-standby today, active-active in the future
• 80G uplink connections
• 4 M1000e chassis per rack = 32 chassis; 16 blades per M1000e chassis = 512 blades; 12 cores per blade = 6144 cores
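The blade and core counts follow from the slide's own figures, with the number of racks (8) inferred from 4 chassis per rack and 32 chassis in total:

```python
# Deriving the chassis, blade and core totals for the HPC-pod use case.
chassis_per_rack, racks = 4, 8              # 8 racks inferred, not stated
blades_per_chassis, cores_per_blade = 16, 12

chassis = chassis_per_rack * racks          # 32 chassis
blades = chassis * blades_per_chassis       # 512 blades
cores = blades * cores_per_blade            # 6144 cores
print(chassis, blades, cores)
```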
Large Active Fabric use case (spine layer, leaf layer)
• Customer has a large L3 network
• Requires tens of thousands of servers, with room to grow
• Needs the smallest oversubscription possible
• 1G and 10G servers
Scale:
• Capable of supporting the 1G to 10G migration
• Start with 10,000 servers, growing to 100,000 servers
• Expand with little or no service impact
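For a rough sense of scale, assuming hypothetical 48-port leaf switches (the slide does not fix a model), the leaf count alone grows as follows; at 100,000 servers this is well beyond the 6144-port two-tier fabrics shown earlier, which is why the emphasis is on expanding the L3 fabric with little or no service impact:

```python
# Minimum leaf-switch count for the large use case, at 48 server ports per leaf.
ports_per_leaf = 48
for servers in (10_000, 100_000):
    leaves = -(-servers // ports_per_leaf)   # ceiling division
    print(f"{servers} servers -> at least {leaves} leaf switches")
```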
Fabio Bellini Network Sales Engineer Mobile: +39 335 7781550 Email: fabio_bellini@dell.com