
Track 4: How to build trouble-free large SANs up to thousand(s) of ports

Learn how to build trouble-free large Storage Area Networks (SANs) with thousands of ports, and eliminate the common scaling pain. This presentation discusses current architectures, issues with traditional approaches, and solutions for guaranteed bandwidth, change management, database bloat, QoS, and troubleshooting.



Presentation Transcript


  1. Track 4: How to build trouble-free large SANs up to thousand(s) of ports Dragon Slayer Consulting Marc Staimer, President & CDS marcstaimer@earthlink.net 26 April 2004

  2. Large SANs Agenda • SAN definition 2004 • Current large SAN architectures • Issues with current architectures • Eliminating SAN scaling pain • Summary

  3. Dragon Slayer Background • 7 yrs sales • 7 yrs sales mgt • 10 yrs marketing & bus dev • Storage & SANs: 6 years consulting • Launched or participated in 20 products • Paid consulting: > 70 vendors • Unpaid consulting: > 200 end users • Known industry expert • Speaks ~ 5 events/yr • Writes ~ 3 trade articles/yr

  4. SAN Definition 2004

  5. Audience Response Raise your hand if you now have or plan to have within 12 months an all-encompassing SAN infrastructure into the thousands of ports.

  6. Large SAN Architectures • Traditional (a.k.a. Victorian) • Planned/Gated Communities • Urban Sprawl

  7. Audience Response By a show of hands, what SAN architecture have you implemented? • Core-to-edge • Mesh • SAN Islands • Not sure

  8. Traditional: a.k.a. Victorian • Mesh • Switch-switch interconnect • Core-to-edge • Guaranteed hop count & latency • Dual fabric typical for both

  9. Issues with Traditional Approaches • Change management • Guaranteed bandwidth • Fabric disruption propagation

  10. Change Management • Change “No” management • A lot of coordination • Servers, storage, SAN, cables & facilities • Re-architecting • Switch ports have to be reallocated for ISLs • Zones, cabling, and LUN masking must be redone • Followed up with shakedown & troubleshooting • Sometimes requiring backing out of the change

  11. Guaranteed Bandwidth • Lack of user definable QoS • Some applications have higher priorities than others

  12. Fabric Disruption Propagation • Fabric disruptions anywhere… • …propagate throughout the fabric everywhere • RSCNs • Zone changes, added switches or HBAs

  13. Traditional Approaches have led to Urban Sprawl: a.k.a. SAN Islands • IT is dynamic • Most organizations do not plan well • Minimizes disruption effects of change • Doesn’t eliminate disruptions

  14. Issues with SAN Islands • Limits SAN benefits • Storage consolidation limited by island • Management touches expand

  15. Eliminating SAN Scaling Pain: The Market Requirements • Fabric disruptions • Large fabric latency • Intra-fabric switch ASIC hops • Database bloat • QoS • Change management • Correlating storage provisioning, SANs, & policies • Troubleshooting

  16. Fabric Disruptions • RSCNs • Switch, HBA, and zoning changes • Market requirement • Fewer fabric disruptions

  17. Intra-Fabric Switch ASIC hops • Hop number affects latency • Latency is cumulative • Affects end user response times • Users demand predictability • Mesh and/or SAN islands = unpredictable • Locality = predictability again • Core-edge = predictable • Market requirement • Minimize latency
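The slide's point that latency is cumulative per ASIC hop can be sketched with a toy calculation. The per-hop figure below is an illustrative assumption, not a number from the presentation:

```python
PER_HOP_LATENCY_US = 2.0  # assumed per-switch forwarding latency, microseconds


def path_latency_us(hops: int, per_hop_us: float = PER_HOP_LATENCY_US) -> float:
    """Latency is cumulative: every inter-switch hop adds its forwarding delay."""
    return hops * per_hop_us


# Core-to-edge fixes the hop count (edge -> core -> edge), so latency is predictable.
core_to_edge = path_latency_us(2)
# In an ad-hoc mesh or chained SAN islands, the hop count varies with topology,
# so response time is unpredictable.
worst_case_mesh = path_latency_us(5)
```

The contrast (4 µs fixed vs. a variable figure) is the "locality = predictability" argument in miniature.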

  18. SAN Database Bloat • As fabrics get larger • FSPF databases get larger…and slower • Name services get larger…and slower • Market requirement • Keep databases small

  19. QoS • Policy based bandwidth matching • Providing each application bandwidth based on • User defined requirements and thresholds • Market requirement • Optimize bandwidth • Not to waste it
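"Policy based bandwidth matching" can be illustrated with a minimal proportional-share sketch. The application names, weights, and link speed are assumptions for illustration; real SAN QoS engines (per the vendor slides later) enforce this in switch hardware:

```python
def allocate_bandwidth(link_mbps: float, policies: dict[str, int]) -> dict[str, float]:
    """Split link bandwidth in proportion to user-defined priority weights,
    so each application gets bandwidth matched to its policy rather than
    contending on a first-come basis."""
    total_weight = sum(policies.values())
    return {app: link_mbps * weight / total_weight for app, weight in policies.items()}


# OLTP outranks bulk data migration, as the slide's example suggests.
shares = allocate_bandwidth(2000.0, {"oltp": 6, "backup": 3, "migration": 1})
```

With these assumed weights, OLTP receives 1200 Mbps of the 2000 Mbps link while the migration job is held to 200 Mbps: bandwidth is optimized per policy, not wasted.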

  20. Change Management • Market requirements include • Automation • Negative impact minimization • Audit trail • Change simulation, planning, & validation • Correlation of LUN maps, zones, pathing policies • Work plans for all of the departments involved • Simple, “brain-dead” troubleshooting

  21. Correlating Storage Provisioning, SANs, & Policies • Efficient storage mgt = less SAN mgt • Market requirements include • One interface for both storage & SAN mgt • Policy based • Enforcement capable

  22. Troubleshooting • Market requirements include • Make it brain-dead simple • Make it quick • Make it easy AND cheap

  23. Audience Response By a show of hands, which is your worst SAN scaling pain? • Fabric disruptions • Large fabric latency • Database bloat • QoS • Change management • Storage, SANs, policies correlation • Troubleshooting

  24. Solutions that Eliminate SAN Scaling Pain • HBA RSCN switch suppression • Automated change mgt software • SAN masking, a.k.a. SAN routing • SAN segmentation • Planned communities • QoS • SAM • Troubleshooting tools

  25. HBA RSCN Switch Suppression • Stops unimportant HBA RSCN disruptions • From disrupting other HBAs • Significantly reduces zoning requirements • Vendors include • QLogic • McDATA
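The suppression idea can be modeled in a few lines: deliver a state-change notification only to ports that share a zone with the changed device. This is a conceptual sketch of the behavior, not the QLogic or McDATA implementation; zone and port names are hypothetical:

```python
def rscn_targets(changed_port: str, zones: dict[str, set[str]]) -> set[str]:
    """Return only the ports zoned with the changed port; every other HBA's
    RSCN is suppressed, so the disruption never reaches it."""
    targets: set[str] = set()
    for members in zones.values():
        if changed_port in members:
            targets |= members - {changed_port}
    return targets


zones = {
    "zone_a": {"hba1", "tgt1"},  # hypothetical zone memberships
    "zone_b": {"hba2", "tgt2"},
}
# A state change on tgt1 disturbs only hba1; hba2 never sees the RSCN.
affected = rscn_targets("tgt1", zones)
```

This is also why suppression "significantly reduces zoning requirements": zoning is no longer the only tool for containing notification storms.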

  26. Automated Change Management Software • Plans change • Predicts impact • Implements change • Validates change • Logs change history • Correlates storage & SAN changes • LUNs, zones, pathing policies • Vendors include • Onaro

  27. SAN Masking, a.k.a. SAN Routing • Analogous to LUN masking • Routes specific data between SAN islands • Visibility between specific WWNs • Eliminates disruptions between SAN islands • Increases SAN scalability • Max switches from 239 to 57,121 • Simplifies management • Both ongoing & change mgt • Heterogeneous SANs • Address translation (domain & WWN) • Eliminates ATL forced fabric merges • Increases availability
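The 239-to-57,121 jump is straightforward arithmetic: a Fibre Channel fabric is capped at 239 switch domain IDs, and routing between autonomous fabrics squares that ceiling because each routed fabric can itself hold 239 switches:

```python
MAX_DOMAIN_IDS = 239  # Fibre Channel domain-ID limit per fabric

single_fabric_switches = MAX_DOMAIN_IDS
routed_switches = MAX_DOMAIN_IDS * MAX_DOMAIN_IDS  # 239 fabrics x 239 switches each
```

239 × 239 = 57,121, matching the slide's figure.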

  28. SAN Masking continued • Works over FC and IP networks • iFCP and FCIP • Vendors include • McDATA Eclipse/IPS • Cisco MDS VSAN Routing • Brocade Multiprotocol Router • LightSand 8100

  29. SAN Segmentation: a.k.a. Planned Communities • Analogous to a large storage controller • Start large & subdivide • One physical fabric, many logical ones • Vendors include • Cisco MDS VSANs • McDATA Dynamic Partitioning • CNT (’04)

  30. Quality of Service: QoS • SAN throughput allocation based on IT priorities • Policy based • Recognizes app performance requirements differ • OLTP > data migration, etc. • Vendors include • SANdial Shadow 1400 (inter- & intra-switch) • Cisco MDS (intra-switch) • McDATA (’04) • CNT (’04)

  31. System Area Management: SAM • SRM + SAN mgt • Storage provisioning • Block & file • Heterogeneous • Policy based mgt • Policy enforcement tools • One look & feel • App performance mgt • Optimizes ecosystem • Vendors include • EMC • Softek • AppIQ • HP • IBM • Creekpath • VERITAS • Storability • TekTools • CA

  32. Easier Troubleshooting Tools • Simplified problem isolation • Problem resolution • Performance issues • Vendors include • Cisco SPAN, rSPAN • SANdial Network Performance Analyzer

  33. How Big Can SANs Grow? • Switches • Currently up to 256 ports • Up to 1024 in 2H 2004 • Fabrics • Traditional: 239 switches • 239 × 256 = > 61K ports • Theoretical (new technologies) • 239 switch domains × 239 switches/domain × 256 ports/switch = > 14M ports
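The port math on the slide above checks out and can be verified directly:

```python
SWITCH_DOMAINS = 239     # Fibre Channel domain-ID limit per fabric
PORTS_PER_SWITCH = 256   # largest switch port count cited for 2004

# Traditional single fabric: 239 switches x 256 ports.
traditional_ports = SWITCH_DOMAINS * PORTS_PER_SWITCH  # 61,184 -> "> 61K ports"

# Theoretical with routing/segmentation: 239 domains x 239 switches x 256 ports.
theoretical_ports = SWITCH_DOMAINS * SWITCH_DOMAINS * PORTS_PER_SWITCH  # "> 14M ports"
```

61,184 and 14,622,976 respectively, i.e. the slide's "> 61K" and "> 14M" figures.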

  34. Conclusion • SAN Scaling today is painful • New generation software & hardware • Provides pain relief • Test & verify

  35. Thank you. Questions?

  36. Mr. Staimer will be available in the Ask-the-Expert booth in the Exhibit Hall: Monday 5-6 PM
