
StorageWorks LeftHand SAN


Presentation Transcript


  1. StorageWorks LeftHand SAN. Jasen Baker, Storage Architect, jasen@hp.com

  2. HP LeftHand Solutions
  • Technology leader: founded in 1999 as LeftHand Networks; a recognized leader in iSCSI SANs
  • More than 14,000 installations across over 4,000 customers
  • Ranked as a “Visionary” in Gartner’s Magic Quadrant
  • Acquired by HP in November 2008
  • All-inclusive feature-set focus
  • “Pay-as-you-grow” architecture: performance, scalability, availability, simple management

  3. Storage requirements

  4. HP StorageWorks array portfolio (spanning consolidation and performance to business continuity and availability)
  • XP (XP24000, XP20000): always-on availability; data center consolidation + disaster recovery; large-scale Oracle/SAP applications; HP-UX, Windows, and 20+ more platforms, including NonStop and mainframe
  • EVA: outstanding TCO; FC and iSCSI host ports; FC and FATA HDD; storage consolidation + disaster recovery; simplification through virtualization; Windows, HP-UX, Linux, VMware, Citrix, Hyper-V, Mac OS, and more
  • LeftHand SAN: scalable, clustered IP SAN; scalable performance; SAS and SATA HDD; cost-effective high availability; simplified management for virtualization; VMware, Citrix, Hyper-V, Windows, Linux, and more
  • MSA2000: low-cost consolidation; 4Gb FC, 1GbE iSCSI, and 3Gb SAS; SAS and SATA together; controller-based snapshot/clone; Windows, Linux, VMware; web, Exchange, SQL
  • All-in-One: simple unified storage; iSCSI SAN and optimized NAS with integrated snapshots, backup, replication, and simple management; Windows, Linux, VMware, and more

  5. The Value of iSCSI SAN Architecture: Installation and Management
  • Build a dedicated GigE subnet
  • Load the iSCSI initiator on the server
  • Install/configure the Network Storage Modules (NSMs)
  • Install the Centralized Management Console
  • Mount and format volumes (host-side commands sketched below)
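
As a concrete illustration of the host-side steps, here is a minimal sketch assuming a Linux server with the open-iscsi initiator installed; the portal address and target IQN below are placeholder values, not taken from the presentation.

```python
# Minimal sketch: discover and log in to an iSCSI target from a Linux host.
# Assumes open-iscsi (iscsiadm) is installed; portal IP and IQN are placeholders.
import subprocess

PORTAL = "10.0.0.10:3260"   # cluster virtual IP (example value)
TARGET = "iqn.2003-10.com.lefthandnetworks:mgmt-group:volume1"  # hypothetical IQN

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the targets advertised at the portal address
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the discovered target; a new block device (e.g. /dev/sdX) appears
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# 3. Format and mount the volume (device name depends on the host)
# run(["mkfs.ext4", "/dev/sdX"])
# run(["mount", "/dev/sdX", "/mnt/volume1"])
```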

  6. 5 features with every LeftHand SAN

  7. Typical Storage Array Architecture: Scale-up Storage
  • Monolithic array, not scalable
  • Controller head becomes the bottleneck
  • Scales capacity only
  • Single point of failure
  • Forklift upgrades

  8. SAN/iQ Storage Clustering: true clustering brings reliability, performance, and ease of management
  • Storage clustering: aggregates critical components; data is load-balanced across nodes (a toy striping model follows below); predictable scalability
  • Grow on your terms: scale-out storage; non-disruptive; throttle bandwidth
  • Storage tiering: optimized for the type of data; online volume migration
  • Simple centralized management: one-step server-volume mapping; integrated performance management
  [Diagram: storage node components (CPU, NICs, cache, SAS/SATA disks, hardware RAID, redundant power and cooling) running the SAN/iQ storage software, managed from the Centralized Management Console]
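
To make "data is load-balanced" and "scale-out" concrete, here is a toy model of striping a volume's pages across cluster nodes. It is an illustration only, not the actual SAN/iQ placement algorithm, and the page size and node names are assumptions.

```python
# Toy model of scale-out striping: a volume's pages are spread across all nodes
# in the cluster, so adding a node adds capacity, spindles, and network ports.
# Illustrative only; not the actual SAN/iQ placement algorithm.

PAGE_SIZE_MB = 256  # assumed page size for illustration

def page_owner(page_index: int, nodes: list[str]) -> str:
    """Map a logical page of the volume to a storage node (simple round robin)."""
    return nodes[page_index % len(nodes)]

cluster = ["NSM-1", "NSM-2", "NSM-3"]
for page in range(6):
    print(f"page {page} (offset {page * PAGE_SIZE_MB} MB) -> {page_owner(page, cluster)}")

# Growing the cluster changes the mapping: data is re-balanced over more nodes
# without taking the volume offline.
cluster.append("NSM-4")
print(page_owner(3, cluster))  # now NSM-4 instead of NSM-1
```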

  9. High Availability: the Need is Greater with Server Virtualization
  • Application server consolidation onto fewer physical servers exposes users to more application downtime in the event of a hardware failure.
  • Storage consolidation also creates a single point of failure.
  • Traditional servers: one server failure, one application goes down. Virtualized servers: one server failure, ALL applications go down.

  10. Network Storage Modules

  11. Full-Featured Virtual SAN: Virtual SAN Appliance (VSA) for VMware ESX
  • SAN/iQ runs within an ESX virtual machine and virtualizes an ESX server’s internal disk resources
  • Significant storage footprint (up to 10TB)
  • Only SAN appliance on the VMware SAN/Storage HCL
  • Highly available storage across multiple ESX systems; shared storage for VMs
  • High availability for server and storage; suited to remote/branch offices

  12. LeftHand SANs: cost-effective storage for virtualization, easy to implement
  [Diagram: two-site layout, Site 1 and Site 2]

  13. Live Demonstration

  14. SAN/iQ Network RAID: RAID 50 and RAID 100
  • RAID 50 and RAID 100 are a combination of RAID 5 or RAID 10 with RAID 0: data is protected at the drive level with RAID 5 or 10 to allow recovery from a disk drive failure, and the data volume (LUN) is then striped or spanned across the RAID sets with no additional protection (RAID 0). RAID 5 + RAID 0 = RAID 50; RAID 10 + RAID 0 = RAID 100.
  • With RAID 50, a drive in one or more units can fail and the volume remains available; with RAID 100, two drives in one or more units can fail and the volume remains available.
  • Loss of an NSM within the cluster, however, is fatal: with RAID 50 or RAID 100, the loss of an entire unit results in the loss of the volume, since a portion of the data that comprises the volume is lost. This can take the server or application down.
  • This is actually the highest level of protection some competitors offer (either within a single enclosure across multiple RAID 5/10 sets, or spanned across multiple units configured with RAID 5/10). SAN/iQ Network RAID overcomes this limitation.
  [Diagram: an application server accessing Volume 1 (blocks A, B, C) striped across a RAID 50/100 cluster]

  15. SAN/iQ Network RAID: across-cluster protection
  • SAN/iQ Network RAID stripes and mirrors multiple copies of data across the storage modules in a cluster, completely eliminating any single point of failure.
  • Network RAID is specified on a per-volume basis and can be used in conjunction with RAID 0 for performance, or RAID 5 or 10 for added protection and reduced rebuild times. Replication levels: none (0), 2, or 3.
  • Example: Volume 1 is striped across 5 individual NSMs configured with RAID 5 disk protection, with replication level 2.
  • Benefits: load balancing; single-drive-loss protection in each NSM (from RAID 5); protection against the loss of multiple alternate NSMs in the cluster (from Network RAID); 10 GigE connection points to Volume 1; a highly redundant, fault-tolerant system; data spread over a dynamic number of spindles.
  • On loss of an entire NSM within the cluster: data access speeds remain constant; no parity calculation is required; no interruption of application access to data; the cluster can sustain the loss of multiple NSMs. A toy placement model follows below.
  [Diagram: application servers writing Volume 1 blocks A-D, each block stored on two NSMs, labeled SAN/iQ Network RAID level 2]
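
The toy model below shows why replication level 2 tolerates the loss of any single NSM, and of multiple non-adjacent NSMs, but not of two adjacent NSMs that hold both copies of the same page. The adjacent-placement rule is an assumption made for illustration, not a statement of SAN/iQ internals.

```python
# Toy model of Network RAID replication level 2: every page of the volume is
# written to two adjacent NSMs, so one surviving copy is enough to keep the
# volume online. Illustrative only; not the actual SAN/iQ layout.

def replica_nodes(page: int, nodes: list[str], level: int = 2) -> list[str]:
    start = page % len(nodes)
    return [nodes[(start + r) % len(nodes)] for r in range(level)]

def volume_online(failed: set[str], nodes: list[str], pages: int, level: int = 2) -> bool:
    """The volume stays online while every page has at least one surviving copy."""
    return all(
        any(n not in failed for n in replica_nodes(p, nodes, level))
        for p in range(pages)
    )

cluster = ["NSM-1", "NSM-2", "NSM-3", "NSM-4", "NSM-5"]
print(replica_nodes(0, cluster))                              # ['NSM-1', 'NSM-2']
print(volume_online({"NSM-3"}, cluster, pages=10))            # True: any single NSM can fail
print(volume_online({"NSM-1", "NSM-3"}, cluster, pages=10))   # True: non-adjacent NSMs
print(volume_online({"NSM-1", "NSM-2"}, cluster, pages=10))   # False: adjacent NSMs share pages
```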

  16. SAN/iQ Network RAID: replication level 3 example
  • Example: Volume 1 is striped across 5 individual NSMs with RAID 5 disk protection and replication level 3.
  • Benefits: load balancing; single-drive-loss protection in each NSM; protection against the loss of multiple adjacent NSMs in the cluster; 10 GigE connection points to Volume 1; a highly redundant, fault-tolerant system; data spread over a dynamic number of spindles.
  • On loss of NSMs within the cluster: data access speeds remain constant; no parity calculation is required; no interruption of application access to data; the cluster can sustain the loss of multiple adjacent NSMs.
  [Diagram: application servers writing Volume 1 blocks A-D, each block stored on three NSMs, labeled SAN/iQ Network RAID level 3]

  17. SAN/iQ Multi-Site SAN: real-time protection from site failure
  • Protect storage by: rack, room, floor, building, or site
  • Keep data online during: facility disruption, natural disaster, site maintenance
  [Diagram: a single SAN/iQ cluster stretched across two locations as a SAN/iQ Multi-Site SAN; volumes remain online]

  18. SAN/iQ Multi-Site SAN and VMware ESX Cluster
  • The ESX cluster is configured with an equal number of hosts in each site; the SAN/iQ cluster is configured with equal storage in each site.
  • SAN/iQ Network RAID replicates data between the sites synchronously.
  • In the event of a site failure, SAN/iQ keeps volumes available and ESX High Availability boots up the virtual machines lost at the failed site.
  • When the failed site comes back online, ESX rebalances virtual machines (DRS).
  [Diagram: University of Maryland School of Medicine campus example; a VMware ESX HA cluster and SAN/iQ Multi-Site SAN spanning two sites (6 blocks); volumes remain online]

  19. Remote Office Solution Pack: cost-effective disaster recovery, no hardware required
  • Replication for up to 10 remote sites, with no SAN hardware at the remote sites
  • No other vendor can offer this; all other offerings are host-based or require hardware at remote sites
  • Included at no additional cost with the Virtualization SAN and Multi-Site SAN
  • Support is included with the physical SAN
  [Diagram: remote sites running SAN/iQ replicating to central SAN/iQ clusters]

  20. SAN/iQ Thin Provisioning: raise storage utilization, increase return on investment, and reduce costs
  • Cost savings: purchase only what you need today; allocate only as data is written (auto-grow; a toy model follows below); no reserve necessary; integrated with SmartClones, Snapshots, and Remote Copy
  • Delay future storage purchases: grow capacity as warranted
  • Simple to manage: enable/disable with a radio button on a per-volume basis
  [Diagram: LeftHand thin provisioning vs. traditional provisioning; labels show a 100 GB volume size (virtual capacity), 30 GB actual capacity, 50 GB stranded storage, and allocation that auto-grows in steps]
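
A toy model of the auto-grow behaviour described above, assuming a hypothetical 10 GB grow step; the real step size and mechanics belong to SAN/iQ and are not specified here.

```python
# Toy model of thin provisioning with auto-grow: physical space is allocated in
# steps as data is written, instead of reserving the full virtual capacity up
# front. Illustrative only; the 10 GB grow step is an assumption.

class ThinVolume:
    def __init__(self, virtual_gb: int, grow_step_gb: int = 10):
        self.virtual_gb = virtual_gb    # size presented to the server
        self.allocated_gb = 0           # physical space actually consumed
        self.written_gb = 0
        self.grow_step_gb = grow_step_gb

    def write(self, gb: int) -> None:
        if self.written_gb + gb > self.virtual_gb:
            raise ValueError("write exceeds the volume's virtual capacity")
        self.written_gb += gb
        while self.allocated_gb < self.written_gb:   # auto-grow in fixed steps
            self.allocated_gb += self.grow_step_gb

vol = ThinVolume(virtual_gb=100)
for chunk_gb in (12, 9, 9):                          # 30 GB written in total
    vol.write(chunk_gb)
print(vol.virtual_gb, vol.allocated_gb)              # 100 30 (vs. 100 GB reserved up front)
```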

  21. SAN/iQ SmartClone: reduces storage costs with efficient volume/snapshot copies
  • Space-efficient copies of volumes and/or snapshots with no duplicated data: proactive rather than reactive deduplication
  • New SmartClone volumes contain no data; only changes are stored (a copy-on-write sketch follows below)
  • Perfect for storing system images efficiently (boot from SAN and desktop)
  • Instant provisioning of volume copies for test and development: testing can be done with real data, with no impact to production
  • SmartClone volumes are fully functional volumes
  [Diagram: full copies of a 100GB volume consume 500GB with 400GB of duplicated data; four SmartClone volumes of the same 100GB original consume 0GB each, since all clones use the single original copy]
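
The copy-on-write idea behind "only changes are stored" can be sketched as follows; this is a conceptual model, not SAN/iQ's on-disk format.

```python
# Toy copy-on-write model of space-efficient clones: each clone shares the
# original volume's blocks and stores only the blocks it changes.
# Conceptual model only; not SAN/iQ's on-disk format.

class BaseVolume:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks                    # e.g. a golden OS image

class CloneVolume:
    def __init__(self, base: BaseVolume):
        self.base = base
        self.delta: dict[int, bytes] = {}       # starts empty: a "0 GB" clone

    def write(self, block: int, data: bytes) -> None:
        self.delta[block] = data                # only changed blocks consume space

    def read(self, block: int) -> bytes:
        return self.delta.get(block, self.base.blocks[block])

golden = BaseVolume({i: b"os-image-block" for i in range(4)})
desktops = [CloneVolume(golden) for _ in range(4)]   # four desktops from one image
desktops[0].write(2, b"user-profile-data")
print(sum(len(d.delta) for d in desktops))           # 1: a single changed block stored
```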

  22. SAN/iQ Performance Manager
  • Details: performance monitor in the user interface; performance SNMP MIBs available; performance data export capability for trending and analysis (a trending sketch follows below)
  • Managed objects: cluster, application servers, volumes and snapshots, storage node
  • Performance data available: throughput (MB per second), IOPS (I/Os per second), latency (milliseconds), queue depth (I/Os pending), cache hits (% cache hits), CPU (% usage), memory (% usage), network utilization (% utilization)
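
As an example of using the export capability for trending, the sketch below averages IOPS and latency from an exported file. The CSV column names and the file name are assumptions for illustration; the actual export format comes from the Centralized Management Console.

```python
# Sketch: trend exported performance data. The column names ("iops",
# "latency_ms") and the file name are assumptions for illustration.
import csv
from statistics import mean

def summarize(path: str) -> None:
    iops, latency_ms = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            iops.append(float(row["iops"]))
            latency_ms.append(float(row["latency_ms"]))
    print(f"avg IOPS: {mean(iops):.0f}, avg latency: {mean(latency_ms):.2f} ms")

# summarize("cluster1_volume1_export.csv")   # hypothetical export file
```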

  23. Matching Virtualization Features

  24. Questions?

  25. SAN/iQ Network Connect: Adaptive Load Balancing (ALB) bonding
  • Fault-tolerant
  • Easy to configure: no switch configuration necessary
  • 2 Gbit read / 1 Gbit write
  [Diagram: servers connected over GigE trunks to the storage cluster, with active and passive paths per NSM]

  26. SAN/iQ Network Connect: Link Aggregation (802.3ad/LACP) bonding
  • NOT fault-tolerant
  • Requires switch configuration
  • 2 Gbit read/write per NSM
  [Diagram: servers connected to storage cluster A over bonded GigE links]

  27. Virtual IP: store-and-forward architecture
  • Con: the NSM hosting the VIP holds all data connection sessions, creating a single bottleneck.
  [Diagram: Server-1 through Server-4 all connect through the VIP-hosting NSM, which forwards I/O for data blocks A-I spread across NSM-1, NSM-2, and NSM-3]

  28. VIP: iSCSI load balancing
  • Pro: datapath load is distributed round-robin across all NSMs; each NSM carries close to an equal number of sessions. A toy model follows below.
  [Diagram: Server-1 through Server-4 connect via the VIP, and their sessions are spread across NSM-1, NSM-2, and NSM-3, which hold data blocks A-I]
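
A toy model of the round-robin idea, contrasting with the store-and-forward picture on the previous slide; the redirection mechanics are simplified and only the server/NSM names are taken from the diagram labels.

```python
# Toy model of VIP-based iSCSI load balancing: new sessions arriving at the
# virtual IP are spread round-robin across the NSMs instead of all landing on
# the single NSM that hosts the VIP. Illustrative only.
from itertools import cycle

nsms = ["NSM-1", "NSM-2", "NSM-3"]
assign = cycle(nsms)

sessions = {f"Server-{i}": next(assign) for i in range(1, 5)}
print(sessions)   # each NSM ends up with a near-equal share of the sessions
```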

  29. LeftHand MPIO (DSM)
  • Pro: datapath load is distributed across all NSMs; each NSM hosts a share of the distributed I/O load for all volumes. A path-selection sketch follows below.
  • Con: Windows Server 2003 or 2008 only
  [Diagram: Server-1 through Server-4 each with paths to NSM-1, NSM-2, and NSM-3, which hold data blocks A-I]
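
A sketch of the MPIO idea under the assumption that the DSM keeps an active path to every NSM and sends each I/O to the node holding the addressed page (reusing the toy striping model from slide 8); treat the behaviour and the IP addresses as illustrative assumptions, not a description of the actual DSM.

```python
# Sketch of MPIO path selection (assumed behaviour, not the actual DSM): the
# server holds an active path to every NSM and sends each I/O down the path to
# the node that stores the addressed page, so no forwarding hop is needed.

def path_for_io(page: int, paths: dict[str, str]) -> str:
    nodes = sorted(paths)                    # ['NSM-1', 'NSM-2', 'NSM-3']
    owner = nodes[page % len(nodes)]         # same round-robin striping as slide 8
    return f"{owner} via {paths[owner]}"

paths = {"NSM-1": "10.0.0.11", "NSM-2": "10.0.0.12", "NSM-3": "10.0.0.13"}  # example IPs
for page in range(6):
    print(f"I/O for page {page} -> {path_for_io(page, paths)}")
```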
