
Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center


Presentation Transcript


  1. Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center - Joe Rabasca, Solutions Lead, EMC Corporation

  2. Objectives
  After this session you will be able to:
  • Understand FCoE and iSCSI and how they fit into existing storage and networking infrastructures.
  • Compare and contrast the structure and functionality of the FCoE and iSCSI protocol stacks.
  • Understand how FCoE and iSCSI solutions provide storage networking options for Ethernet, including 10 Gb Ethernet.

  3. Environment Today
  • Servers connect to LAN, NAS and iSCSI SAN with NICs
  • Servers connect to FC SAN with HBAs
  • Many environments today are still 1 Gigabit Ethernet
  • Multiple server adapters, multiple cables, power and cooling costs
  • Storage is a separate network (including iSCSI)
  Note: NAS will continue to be part of the solution. Everywhere that you see Ethernet or 10 Gb Ethernet in this presentation, NAS can be considered part of the unified storage solution.
  [Diagram: rack-mounted servers with 1 Gigabit Ethernet NICs to the Ethernet LAN and iSCSI SAN, and Fibre Channel HBAs to the FC SAN and storage]

  4. 10 Gb Ethernet Allows for a Converged Data Center
  • Maturation of 10 Gigabit Ethernet
  • 10 Gigabit Ethernet allows replacement of n x 1 Gb adapters with a much smaller number (start with 2) of 10 Gb adapters
  • Many storage applications require > 1 Gb bandwidth
  • 10 Gigabit Ethernet simplifies server, network and storage infrastructure
  • Reduces the number of cables and server adapters
  • Lowers capital expenditures and administrative costs
  • Reduces server power and cooling costs
  • Blade servers and server virtualization drive consolidated bandwidth
  A single 10 GbE wire for network and storage (SAN and LAN): 10 Gigabit Ethernet is the answer! iSCSI and FCoE both leverage this inflection point.

  5. Why iSCSI?
  The iSCSI protocol stack is the same on initiator and target and runs over a standard IP network:
  • SCSI / iSCSI: delivery of iSCSI Protocol Data Units (PDUs) carrying SCSI functionality (initiator, target, data read / write, etc.)
  • TCP: reliable data transport and delivery (TCP windows, ACKs, ordering, etc.)
  • IP / IPsec: provides IP routing (Layer 3) capability so packets can find their way through the network
  • Link: provides physical network capability (Layer 2 Ethernet, Cat 5, MAC, etc.)
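  To make the layering concrete, here is a minimal Python sketch of the initiator side, assuming a hypothetical target at 192.0.2.10. Everything below the iSCSI layer is the operating system's ordinary TCP/IP stack; the initiator simply opens a TCP connection to the well-known iSCSI port (3260) and exchanges iSCSI PDUs over it.

```python
import socket

# Hypothetical iSCSI target address; 3260 is the well-known iSCSI TCP port.
TARGET = ("192.0.2.10", 3260)

try:
    # The OS supplies everything below the iSCSI layer: Ethernet / MAC (Link),
    # IP routing (Layer 3) and TCP's reliable, ordered delivery.
    with socket.create_connection(TARGET, timeout=5) as sock:
        # From here on the initiator exchanges iSCSI PDUs (Login Request,
        # SCSI Command, Data-In / Data-Out, ...) as payload of this TCP stream.
        print("Connected to iSCSI target:", sock.getpeername())
except OSError as exc:
    print("No iSCSI target answered at", TARGET, "-", exc)
```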

  6. Why a New Option for FC Customers?
  • FC has a large and well-managed install base
  • Want a solution that is attractive for customers with FC expertise / investment
  • Previous convergence options did not allow for incremental adoption
  • Requirement for a Data Center solution that can provide I/O consolidation
  • 10 Gigabit Ethernet makes this option available
  • Leveraging Ethernet infrastructure and skill set has always been attractive
  FCoE allows an Ethernet-based SAN to be introduced into the FC-based Data Center without breaking existing administrative tools and workflows.

  7. Protocol Comparisons
  Every option carries SCSI block traffic for applications; they differ in the encapsulation layer and the base transport:
  • FCoE: FC frames over Ethernet, no TCP/IP; preserves FC management
  • FCIP: FC over TCP/IP; used for FC replication over IP
  • iFCP: FC over TCP/IP
  • iSCSI: block storage with TCP/IP over Ethernet
  • SRP: SCSI over an Infiniband base transport; a new transport and drivers, low latency and high bandwidth

  8. FCoE Extends FC on a Single Network
  • On the server, the FCoE stack can be a software stack with a network driver on a standard 10G NIC, or a Converged Network Adapter (CNA) with both an FC driver and a network driver
  • Lossless Ethernet links connect the server to a Converged Network Switch
  • From the Converged Network Switch there are 2 options for reaching FC storage: through the existing FC network, or over the Ethernet network
  • The server sees storage traffic as FC; the SAN sees the host as FC

  9. iSCSI and FCoE Framing
  • iSCSI is SCSI functionality transported using TCP/IP for delivery and routing in a standard Ethernet/IP environment
  • iSCSI frame: Ethernet header | IP | TCP | iSCSI | data | CRC
  • TCP/IP and iSCSI require CPU processing
  • FCoE is FC frames encapsulated in Layer 2 Ethernet frames, designed to utilize a Lossless Ethernet environment
  • FCoE frame: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS
  • The large maximum size of FC frames requires Ethernet Jumbo Frames
  • No TCP, so a lossless environment is required
  • No IP routing

  10. FCoE Frame Format
  Frame fields: Destination MAC Address | Source MAC Address | IEEE 802.1Q Tag | EtherType = FCoE | Ver | Reserved | SOF | Encapsulated FC Frame (including FC-CRC) | EOF | Reserved | FCS
  • Ethernet frames give a 1:1 encapsulation of FC frames
  • No segmenting of FC frames across multiple Ethernet frames
  • FCoE flow control is Ethernet based
  • BB Credit / R_RDY replaced by the Pause / PFC mechanism
  • FC frames are large and require Jumbo frames
  • Max FC payload size is 2112 bytes
  • Max FCoE frame size is 2180 bytes
  • An FCoE Initialization Protocol (FIP) was also created for:
  • Discovery
  • Login
  • Determining whether the MAC address is server-provided (SPMA) or fabric-provided (FPMA)
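  As a quick check on the 2112-byte and 2180-byte figures above, the sketch below adds up per-field sizes of a maximum-size FCoE frame. The individual field widths follow the FC-BB-5 encapsulation layout; the slide itself only lists the fields and the totals, so treat the per-field numbers as an assumption.

```python
# Per-field sizes, in bytes, of a maximum-size FCoE frame.
# Field widths per the FC-BB-5 encapsulation; the slide only states the totals.
FIELDS = {
    "Destination MAC": 6,
    "Source MAC": 6,
    "IEEE 802.1Q tag": 4,
    "EtherType (0x8906 = FCoE)": 2,
    "Version + reserved bits": 13,            # 4-bit version + 100 reserved bits
    "Encapsulated FC frame": 24 + 2112 + 4,   # FC header + max payload + FC-CRC
    "SOF": 1,
    "EOF": 1,
    "Reserved": 3,
    "Ethernet FCS": 4,
}

total = sum(FIELDS.values())
print(f"Maximum FCoE frame size: {total} bytes")           # 2180, as on the slide
print("Exceeds a standard 1518-byte Ethernet frame:", total > 1518)  # True -> jumbo frames
```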

  11. Lossless Ethernet
  • Limit the environment to the Data Center only
  • FCoE is Layer 2 only
  • IEEE 802.1 Data Center Bridging (DCB) is the standards task group
  • Converged Enhanced Ethernet (CEE) is an industry consensus term which covers three link-level features:
  • Priority Flow Control (PFC, IEEE 802.1Qbb)
  • Enhanced Transmission Selection (ETS, IEEE 802.1Qaz)
  • Data Center Bridging Exchange (DCBX, currently part of IEEE 802.1Qaz, leverages IEEE 802.1AB (LLDP))
  • Data Center Ethernet is a Cisco term for CEE plus additional functionality, including Congestion Notification (IEEE 802.1Qau)
  Enhanced Ethernet provides the lossless infrastructure that will enable FCoE and augment iSCSI storage traffic.

  12. PAUSE and Priority Flow Control
  • PAUSE transforms Ethernet into a lossless fabric
  • Classical 802.3x PAUSE is rarely implemented since it stops all traffic
  • Priority Flow Control (PFC), formerly known as Per Priority PAUSE (PPP) or Class Based Flow Control, will be limited to the Data Center
  • PFC is a new PAUSE function that can halt traffic according to its priority tag while allowing traffic at other priority levels to continue
  • Creates lossless virtual lanes
  • Per-priority, link-level flow control
  • Only affects traffic that needs it
  • Can be enabled per priority
  • Not simply 8 x 802.3x PAUSE
  [Diagram: PAUSE frames exchanged between Switch A and Switch B]
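  For illustration, here is a sketch of the difference on the wire, assuming the standard MAC Control frame layouts (classic 802.3x PAUSE: opcode 0x0001 and a single timer; PFC per IEEE 802.1Qbb: opcode 0x0101, an 8-bit class-enable vector and eight per-priority timers). The source MAC and the choice of priority 3 for storage traffic are illustrative assumptions.

```python
import struct

MAC_CONTROL_DA = bytes.fromhex("0180C2000001")   # reserved multicast used by PAUSE / PFC
ETHERTYPE_MAC_CONTROL = 0x8808

def mac_control_frame(src_mac, payload):
    frame = MAC_CONTROL_DA + src_mac + struct.pack("!H", ETHERTYPE_MAC_CONTROL) + payload
    return frame.ljust(60, b"\x00")              # pad to minimum size (FCS not shown)

def classic_pause(src_mac, quanta):
    """802.3x PAUSE: opcode 0x0001 and one timer - halts ALL traffic on the link."""
    return mac_control_frame(src_mac, struct.pack("!HH", 0x0001, quanta))

def pfc_pause(src_mac, quanta_per_priority):
    """PFC (802.1Qbb): opcode 0x0101 with an enable bit and a timer per priority,
    so only the classes that need lossless behaviour are paused."""
    assert len(quanta_per_priority) == 8
    enable = sum(1 << p for p, q in enumerate(quanta_per_priority) if q > 0)
    return mac_control_frame(src_mac, struct.pack("!HH8H", 0x0101, enable, *quanta_per_priority))

src = bytes.fromhex("02AABBCCDDEE")              # illustrative source MAC
# Pause only priority 3 (often used for FCoE storage traffic) for 0xFFFF quanta.
print(pfc_pause(src, [0, 0, 0, 0xFFFF, 0, 0, 0, 0]).hex())
print(classic_pause(src, 0xFFFF).hex())
```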

  13. Enhanced Transmission Selection and Data Center Bridging Exchange Protocol (DCBX)
  Enhanced Transmission Selection (ETS) provides a common management framework for bandwidth management
  • Allows HPC and storage traffic to be configured with appropriately higher priority
  • When a given load in a class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth
  • Maintains low-latency treatment of certain traffic classes
  Data Center Bridging Exchange Protocol (DCBX) is responsible for configuration of link parameters for DCB functions
  • Determines which devices support Enhanced Ethernet functions
  [Chart: offered traffic vs. realized utilization of a 10 GE link at t1, t2, t3 for HPC, storage and LAN traffic classes; HPC and storage stay near their 3 G/s loads while LAN traffic (offered 4-6 G/s) absorbs bandwidth the other classes leave unused, e.g. 5 G/s when HPC offers only 2 G/s]
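  The sharing behaviour in the chart can be approximated with a toy model, sketched below: each class is guaranteed its configured fraction of the 10 GE link, and whatever a class leaves unused is redistributed to classes with more offered load. The 30/30/40 weights and the exact offered loads are assumptions for illustration; the slide does not state them.

```python
def ets_allocate(link_gbps, weights, offered):
    """Toy ETS model: guarantee each class weight * link, then hand any
    unused capacity to classes whose offered load exceeds their guarantee."""
    alloc = {c: min(offered[c], weights[c] * link_gbps) for c in weights}
    spare = link_gbps - sum(alloc.values())
    hungry = {c: offered[c] - alloc[c] for c in weights if offered[c] > alloc[c]}
    while spare > 1e-9 and hungry:
        share = spare / len(hungry)          # split spare capacity evenly
        for c in list(hungry):
            take = min(share, hungry[c])
            alloc[c] += take
            hungry[c] -= take
            spare -= take
            if hungry[c] <= 1e-9:
                del hungry[c]
    return alloc

weights = {"HPC": 0.3, "Storage": 0.3, "LAN": 0.4}   # assumed ETS configuration
offered = {"HPC": 2, "Storage": 3, "LAN": 6}         # Gb/s, roughly as in the chart
print(ets_allocate(10, weights, offered))
# -> HPC keeps 2, Storage keeps 3, LAN borrows the spare 1 Gb/s and gets 5
```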

  14. 40 & 100 Gigabit Ethernet
  • The IEEE P802.3ba Task Force states that bandwidth requirements for computing and networking applications are growing at different rates, which necessitates two distinct data rates, 40 Gb/s and 100 Gb/s
  • The IEEE target for completion of the 40 GbE & 100 GbE standard is 2010
  • 40 GbE products shipping today support the existing fiber plant; the plan is for 100 GbE to also support 10 m copper, 100 m MMF (use OM4 for extended reach) and SMF
  • The cost of 40 GbE or 100 GbE is currently 5 - 10x that of 10 GbE
  • Adoption will become more economically attractive at about 2.5x, which will take a couple of years
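  A quick bit of arithmetic, sketched below, illustrates why roughly 2.5x is the threshold: a 40 GbE port carries four times the bandwidth of 10 GbE, so its cost per Gb/s only beats 10 GbE decisively once the price premium drops well below 4x. The 10x, 5x and 2.5x multiples are the ones quoted above.

```python
# Price per Gb/s of a 40 GbE port relative to a 10 GbE port,
# for the price multiples quoted on the slide (5-10x today, 2.5x target).
for price_multiple in (10, 5, 2.5):
    relative_cost_per_gbps = price_multiple / 4      # 40 GbE = 4x the bandwidth
    print(f"{price_multiple}x the port price -> "
          f"{relative_cost_per_gbps:.2f}x the cost per Gb/s of 10 GbE")
```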

  15. Deployments - FCoE and iSCSI
  Both ride on Ethernet: leverage Ethernet/IP expertise, 10 Gigabit Ethernet and Lossless Ethernet
  • FCoE: builds on FC expertise and install base; FC management; Layer 2 Ethernet only (use FCIP for distance); standards in process
  • iSCSI: no FC expertise needed; supports distance connectivity (L3 IP routing); strong virtualization affinity; standards since 2003

  16. iSCSI Deployment
  • iSCSI grew to > 10% of SAN market revenue in 2008 *
  • Many deployments are small environments which replace DAS
  • Strong affinity in SMB / commercial markets
  • Seeing strong growth of Unified Storage, which supports iSCSI, FC and NAS
  • iSCSI with 10 Gigabit Ethernet is becoming available
  * According to IDC, 3/09

  17. FCoE Server Phase (Today)
  • FCoE with direct attach of the server to a Converged Network Switch at the top of the rack or end of the row
  • Tightly controlled solution
  • Server 10 GE adapters may be CNAs or NICs
  • Storage is still a separate network
  [Diagram: rack-mounted servers with 10 GbE CNAs (alongside 1 Gb NICs and FC HBAs) connecting to a Converged Network Switch, which attaches to the Ethernet LAN and, via FC, to the Fibre Channel SAN and storage]

  18. FCoE Network Phase (2009 / 2010)
  • Converged Network Switches move out of the rack, from a tightly controlled environment into a unified network
  • Maintains existing LAN and SAN management
  • Overlapping domains may compel cultural adjustments
  [Diagram: rack-mounted servers with 10 GbE CNAs connecting to an Ethernet network (IP, FCoE) of Converged Network Switches, which attaches to the Ethernet LAN and, via FC, to the Fibre Channel SAN and storage]

  19. Convergence at 10 Gigabit Ethernet
  Two paths to a Converged Network:
  • iSCSI is purely Ethernet
  • FCoE allows for a mix of FC and Ethernet (or all Ethernet); the FC that you have today or buy tomorrow will plug into this in the future
  Choose based on scalability, management and skill set
  [Diagram: rack-mounted servers with 10 GbE CNAs connecting through a Converged Network Switch to the Ethernet LAN, iSCSI/FCoE storage and the FC SAN]

  20. Time To Widespread Adoption
  • Ethernet: defined 1973, standard 1983, widespread 1993
  • Fibre Channel: defined 1985, standard 1994, widespread 2000
  • iSCSI: defined 2002, standard 2003, widespread 2008
  • 10 Gigabit Ethernet: standard 2002, widespread 2009?
  • FCoE: defined 2007, standard 2009?, widespread ??

  21. Summary
  • A converged data center environment can be built using 10 Gb Ethernet
  • Ethernet enhancements are required for FCoE and will assist iSCSI
  • Choosing between FCoE and iSCSI will be based on the customer's existing infrastructure and skill set
  • 10 Gigabit Ethernet solutions will take time to mature
  • Active industry participation is creating standards that allow solutions to integrate into existing data centers
  • FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabit in the future
  The Converged Data Center allows Storage and Networking to leverage operational and capital efficiencies.

  22. Office of the CTO EMC Corporation
