10Gig Emergence in the Data Center
Marc Staimer, CDS, Dragon Slayer Consulting
marcstaimer@earthlink.net
Agenda • 10Gig “101” • Applications • Value Prop • Issues • Market Forecast • Conclusions
10Gig “101” • What is 10Gig? • Why should I care? • 10Gig applications? • What is the value proposition? • When will it matter?
What is 10Gig really? • Usually refers to the usable bandwidth • 10Gbps • Ethernet, Fibre Channel, & SONET OC-192 • 10x 1Gig • 12.5Gbps total including encoding overhead • InfiniBand (IBA) is slightly lower • 10Gbps total and 8Gbps net, a.k.a. 4X
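The line-rate vs. usable-bandwidth arithmetic above can be sketched with the 8B/10B coding ratio (10 bits on the wire per 8 data bits), which is where both the 8Gbps IBA net figure and the 12.5Gbps gross figure come from:

```python
# Rough relationship between encoded line rate and usable bandwidth,
# assuming 8B/10B encoding. Figures match the slide's round numbers.

def net_rate_gbps(line_rate_gbps: float, data_bits: int = 8, code_bits: int = 10) -> float:
    """Usable data rate for a given encoded line rate."""
    return line_rate_gbps * data_bits / code_bits

# 4X InfiniBand: 10 Gbps on the wire, 8 Gbps net after 8B/10B
print(net_rate_gbps(10))      # 8.0
# 10 Gbps usable implies a 12.5 Gbps line rate under 8B/10B
print(net_rate_gbps(12.5))    # 10.0
```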
Why should I care? • BW is increasing faster than the ability to use it • My server I/O can’t use it • And my backbones are being swamped
10Gig applications • Switch Trunking • Server-to-storage fan-out • HPCC • DBMS clustering • Shared I/O eliminating bus contention
Switch Trunking • Ethernet, FC, & IBA • Ethernet: 10/100/1000 to the edge, 10Gig core • FC: 1, 2, 4 Gig to edge, 10Gig core • IBA: 4X (10Gig) edge, 12X (30Gig) core
[Diagram: 1Gig Ethernet edge links trunked into a 10Gig Ethernet / 10Gig SONET / 10Gig iSCSI core]
FC Throughput Gain Example Source: Fibre Channel Industry Association
Server-Storage Fan-Out Results • FC definitely, GigE (RDMA) maybe • Server-storage port fan-out increases from 8:1 to 48:1
[Diagram: 40 (1U) IA application servers → 10Gig/4-2Gig FC switches → 10Gig FC SAN storage]
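One way to reason about whether a 48:1 fan-out is viable is to compute the oversubscription of the shared storage link. The HBA and uplink speeds below are illustrative assumptions, not figures from the slide:

```python
# Oversubscription = aggregate edge bandwidth / shared uplink bandwidth.
# Fan-out works because application servers rarely drive their links
# at line rate simultaneously. Speeds here are illustrative assumptions.

def oversubscription(servers: int, edge_gbps: float, uplink_gbps: float) -> float:
    return servers * edge_gbps / uplink_gbps

# e.g. 48 servers with 2Gig FC HBAs sharing one 10Gig FC storage link:
print(oversubscription(48, 2, 10))  # 9.6
```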
HPCC • InfiniBand • Potential 10GigE (RDMA) down the road • Key node-to-node issues • Very low latency (minimal fabric hops) • Very high bandwidth
HPCC Illustrated: 128-node 4X IBA Fabric
• Vertical rack space: 12U (128 4X IBA ports in 12U)
• 10Gig connection: copper
• Full bi-sectional bandwidth: 10Gig/port
• Max node-to-node hops: 3 switches, 5 ASICs
• Latency: memory-to-memory = 6ms
• List pricing: < $1K/port
DBMS Clustering • Increasing DBMS performance • IBA (primary focus) • GigE • Value Prop • Lower latency • > IOPS • Higher throughput • Fewer connections • < complexity • < mgt
[Diagram: Oracle RAC or DB2 cluster running IPoIB, uDAPL, SDP, SRP, & FCP over IBA, attached to FC SAN storage]
Shared I/O: Eliminating Bus Contention • 4X IBA HCA on PCI-X bus • Provides I/O for • TCP/IP to Ethernet • FCP to Fibre Channel • iSCSI to Ethernet • Shares 10Gig pipes • Transparent to apps • < cables • < costs • < complexity • Potentially doable on FC or Ethernet w/RDMA
[Diagram: Lintel/Wintel servers sharing a 10Gig pipe to FC SAN storage]
Blade Server “Fan-in” • Boot OS from external storage • More blades per storage device • High activity at startup
[Diagram: blade servers with 1/2/4 Gig links fanning in to 10Gig SAN storage]
10Gig Issues • I/O Infrastructure • Timing, Availability, & Cost • Copper vs. Optical • Compatibility • MSAs
10Gig I/O Infrastructure • Current I/O buses cannot handle 10Gig throughput • PCI = max 4Gbps • PCI-X = max 8Gbps • Each additional bus card cuts the max in half • Future I/O infrastructure is slipping to the right • PCI-X 2.0 = max 26.4Gbps • PCI Express 1.0 = max 128Gbps • Servers utilizing new I/O not available until late ’04/early ’05 • Storage utilizing new I/O not available until late ’04/early ’05
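The parallel-bus ceilings above fall out of width × clock arithmetic. A minimal sketch, assuming the common 64-bit configurations (the clock rates are my assumptions for illustration, not taken from the slide):

```python
# Peak throughput of a shared parallel bus is bus width x clock rate.
# A shared bus divides this peak among all cards on it, which is why
# each added card roughly halves the per-card maximum.

def bus_gbps(width_bits: int, clock_mhz: float) -> float:
    return width_bits * clock_mhz / 1000.0

print(round(bus_gbps(64, 66), 1))   # PCI 64-bit/66MHz: ~4.2 Gbps
print(round(bus_gbps(64, 133), 1))  # PCI-X 64-bit/133MHz: ~8.5 Gbps
```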
10Gig Timing, Availability & Costs
Infrastructure
• Optics: ~$1.2K to $5K/port
• Ethernet switches: ~$29K/port, decreasing ~28%/yr (~$8K by 2007)
• FC switches: ~$1.5K/port (w/o optics), ~$0.5K/port by 2007
• IBA switches: ~$1K/port (w/o optics)
Adapters
• 10Gig Ethernet NICs: ~$6K; timing: 2005
• 10Gig FC HBAs & target ASICs: ~$5K; timing: 2005
• 4X IBA HCAs: timing: now; 12X (30Gig): late ’04/early ’05
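The ~$8K-by-2007 Ethernet switch projection is consistent with compounding the quoted ~28%/yr decline over the four years from 2003:

```python
# Sanity-check the projected per-port price, assuming the slide's
# ~28%/yr decline compounds from 2003 to 2007 (4 years).

def projected_price(start_price_k: float, annual_decline: float, years: int) -> float:
    return start_price_k * (1 - annual_decline) ** years

print(round(projected_price(29, 0.28, 4), 1))  # ~7.8 ($K/port), i.e. the slide's ~$8K
```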
Gartner 10Gig Market Forecasts *Note: IBA numbers are calculated from the Gartner/Dataquest forecast
10Gig Copper vs. Optical
Copper
• Low cost
• Limited distance: ~15 meters
• Not Cat 5 or 6 compatible
• Cat 7 work going on: ~100 meters
Optical
• High cost
• Multi-mode (common): distance limited to 300 to 550 meters
• Designed for single mode: dark fiber 10 Km, 40 Km, up to 64 Km
NOTE: Meter = 3.28 feet
10Gig Compatibility Question • Is 10Gig backwards compatible? • Ethernet • Yes • No • Fibre Channel • Yes • No
10Gig Compatibility • 10Gig Ethernet & FC • Not backwards compatible • It’s the optics • And the encoding • 8B/10B (1Gig) • 64B/66B (10Gig)
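The two encodings differ not just in framing but in efficiency, which a quick ratio calculation makes concrete: 8B/10B carries 8 data bits in every 10 line bits, while 64B/66B carries 64 in 66.

```python
# Line-coding efficiency = data bits / encoded bits on the wire.

def efficiency(data_bits: int, code_bits: int) -> float:
    return data_bits / code_bits

print(round(efficiency(8, 10) * 100, 1))   # 80.0 -> 20% line overhead (1Gig)
print(round(efficiency(64, 66) * 100, 1))  # 97.0 -> ~3% line overhead (10Gig)
```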
10Gig Definitions
• XAUI: 10Gig attachment unit interface
• XGMII: 10Gig media independent interface
• Transponder: module containing optical transmitter & receiver & a mux that changes line rate
• MSA: multi-source agreement
• 802.3ak: 10Gig over copper
• RDMA: remote direct memory access
• RDDP: remote direct data placement (RDMA on TCP/IP & GigE)
10Gig MSAs
Transponder MSAs
• Xenpak: Intel, Agilent, Infineon, JDSU, Picolight
• XPAK: Agilent, Intel, Picolight
• X2: Agilent, JDSU, Mitsubishi, OpNext, Optillion
• IBPAK: Agilent, Infineon, InfiniCon, Mindspeed, Molex, OCP, Picolight, SUN, Tyco, W. L. Gore
Transceiver MSA
• XFP: 10Gig serial transceiver (JDSU)
IBPAK • Module holder • 106-ckt Z-axis connector • 4x optical pluggable module • 4x copper pluggable module • 4x copper cable • 12x optical pluggable module • MPO optical cable • Dual MPO optical cable
10Gig Value Prop • Reduced Infrastructure Costs • < Cabling • < Connections • < Complexity • < Management • Increase Performance • > Throughput • < Latency
10Gig Market Emergence • IBA: Now • Mainstream 2004 • Ethernet: Now • Mainstream 2005 • Fibre Channel: 2004 • Mainstream 2005/2006
10Gig Conclusions • It’s coming • There are real, cost-justifiable applications • 1st applications are in 2004 • Mainstream market = 2005/2006 • Hockey stick = 2007