The VNXe Series Technical Presentation Name Title October 2011
Topics
• Architecture
• Hardware
• Operating Environment
• Storage Pools
• Data Services: Thin Provisioning, File-level Deduplication & Compression
• Data Protection: Snapshots, Replication
• Data Services (continued): Antivirus, File Level Retention, Backup
• Windows Integration & Support
• Management
• Support Ecosystem
Simple. Efficient. Affordable. A new way to think about shared storage
• Configure in seconds
• Get more capacity at less cost
• Rest easy, rock solid
• Starting under $10K
VNXe's Purpose-Designed Architecture
• Integrated unified storage designed for the IT generalist.
• Maximizes simplicity and ease of use (no X-Blades, Control Stations, etc.).
• Provides balanced performance, scalability, and flexibility.
• Two Storage Processors (SP A and SP B), each running the VNXe OE (File, Block, EMC Apps, Unisphere) on multi-core processors with shared memory resources.
• iSCSI, CIFS, and NFS host access; FLARE-managed back-end disks with flexible drive options: Performance (SAS) and Highest Capacity (NL-SAS).
VNXe Series Hardware Simple. Efficient. Affordable.
VNXe3100 2U Platform Overview
• Chassis: 2U DPE (Disk Processor Enclosure); 6Gb SAS-2 drive interface; max 12 3.5” drives per enclosure; dual Energy Star Gold power supplies; power consumption ~330W; 1 or 2 SPs (CPU module canisters); battery-backed write cache (BBWC) for single-SP configurations; dimensions 20in depth x 3.5in height x ~19in width.
• 2 CPU modules, each containing: Intel dual-core processor; two DDR3 1066MHz DIMM slots; integrated 3-cell Li-ion BBU; solid state disk (SSD), eFLASH uSSD.
• I/O technology per SP: one back-end SAS-2 port (6 Gb/s x4); two embedded GbE host ports; one I/O slot per CPU module (4 x 1GbE); GbE management port; serial console and USB port.
(Front view: 12 x 3.5” disks. Rear: storage processors.)
Cost-effective Data Persistence – Single Storage Processor Configuration
• Low-cost entry point in the VNXe Series; upgradeable to a dual-processor configuration.
• Cache Protection Module: fits into the second storage processor slot; uses Vault-to-Flash (VTF) technology; runs off low-power battery technology, so no standby power supplies are needed.
• System recovery: memory contents are restored from SSD, buffer and cache pools/pages are recreated at the same physical addresses as at the initial boot, and the system verifies the restore operation before resuming I/O processing.
(Figure: VNXe3100 single-SP configuration with the Cache Protection Module installed.)
VNXe3300 3U Platform Overview
• Chassis: 3U DPE (Disk Processor Enclosure); 6Gb SAS-2 drive interface; max 15 3.5” drives per enclosure; holds 2 SP “suitcases” with integrated cooling; dual Energy Star Gold power supplies; power consumption ~500W with 15 drives; battery backup module in the power supply; dimensions 24in depth x 5.25in height x ~19in width; NEBS Level 3; DC power option.
• 2 CPU modules, each containing: quad-core processors; 3 DIMM slots with 3 channels @ 800MHz; PCI-E x4 CMI path; solid state disk (for persistence).
• I/O technology per SP: one back-end SAS-2 port (6 Gb/s x4); four embedded GbE host ports; one I/O slot per CPU module (4 x 1GbE or 2 x 10GbE); GbE management port; serial console and USB port.
(Front view: 15 x 3.5” disks. Rear: storage processors.)
Flexible Storage Capacity
• Disk options, 3.5” drives: SAS Performance 300GB and 600GB at 15K RPM; Near-line SAS Capacity 1TB and 2TB at 7.2K RPM; Flash (Tier 0) 100GB (VNXe3300 only).
• Scale: up to 7 additional disk enclosures per system; max 120 drives in the VNXe3300 and 96 drives in the VNXe3100; drive types can be mixed within an enclosure.
• Optional “disk pack” offerings: SAS Performance pack (RAID 5 or RAID 10), SAS Capacity pack (RAID 6), Flash pack (RAID 5).
• Scale to 240 TB with 2 TB drives.
Modular Design
• All major components are customer-replaceable units (CRUs): easy to identify and access, simple to service and upgrade.
• Clear visual repair instructions, tool-less access to replaceable components, and “how-to” videos.
• Minimal component sparing and an efficient CRU replacement process.
VNXe Interoperability and Standards
• US Dept of Defense: Common Criteria, DISA STIG
• Telecom: Telcordia NEBS Level 3
• Host operating systems: Microsoft Windows; Linux (SUSE, Red Hat, Asianux); UNIX (AIX, HP-UX, Solaris); VMware; Hyper-V; Citrix XenServer
• Storage applications: NDMP backup software, anti-virus scanning software
High Availability • Platform • No single point of failure • n + 1 power and battery backup • Redundant, hot-pluggable components • Function • RAID protection • Active/Active controllers • Data cache mirroring between controllers • Battery-backed cache for single SP configurations • Dynamic failover/failback • Automatic management control failover • Vault-to-Flash technology • Service • Non-disruptive upgrades • Hot-swap CRU and drive replacement • Remote maintenance, call-home, automatic diagnostics
VNXe Vault-to-Flash – Cache Persistence
1. AC power is lost to the array; a battery holds up the SP and the SSD device, and the SP transitions to a special power-fail routine.
2. The power-down routine sheds unnecessary power draws, checksums the contents of memory to be dumped, and initiates the memory dump to SSD.
3. The power-down code waits for the dump to complete, then disables the battery hold-up (a code sketch of this sequence follows).
(Figure: CIFS and Exchange hosts at 192.168.5.1 (\\cifs\myshare) and 192.168.5.2 (IQN:C0T0L0) driving I/O to the VNXe; each Storage Processor dumps its cache contents to SSD over the power-fail path.)
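As a rough illustration of the power-down and restore steps above, here is a minimal Python sketch. The function names, the SHA-256 checksum, and the generic writable stream standing in for the internal SSD are all assumptions for illustration, not actual VNXe firmware interfaces.

```python
import hashlib

def power_fail_routine(memory_image: bytes, ssd_dump):
    """Illustrative power-fail path: shed load, checksum the cache, dump to SSD.

    `ssd_dump` is any writable binary stream standing in for the internal SSD;
    the real firmware interface is not public, so these names are hypothetical.
    """
    # Step 2: shed unnecessary power draws so the battery only has to hold up
    # the CPU/memory complex and the SSD (represented here by a no-op).
    shed_nonessential_loads = lambda: None
    shed_nonessential_loads()

    # Step 2: checksum the contents of memory so the restore can be verified.
    checksum = hashlib.sha256(memory_image).hexdigest().encode()

    # Step 2: initiate the memory dump to SSD.
    ssd_dump.write(checksum + b"\n" + memory_image)

    # Step 3: wait for the dump to complete, after which the battery hold-up
    # can be released (flush() stands in for "dump complete").
    ssd_dump.flush()
    return checksum

def restore_after_power_loss(dump_bytes: bytes) -> bytes:
    """On the next boot, verify the dump before recreating cache pages and
    resuming I/O (mirrors the single-SP recovery steps described earlier)."""
    stored_checksum, _, memory_image = dump_bytes.partition(b"\n")
    if hashlib.sha256(memory_image).hexdigest().encode() != stored_checksum:
        raise RuntimeError("cache dump failed verification; do not resume I/O")
    return memory_image
```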
VNXe – Deployed and Operating Normally
• CIFS and Exchange hosts drive I/O to the VNXe: the CIFS share \\cifs\myshare is served at 192.168.5.1, and the Exchange iSCSI LUN (IQN:C0T0L0) at 192.168.5.2.
• Each Storage Processor runs the full stack: management services, platform services, file services, block services, and the network/I/O service shims over the NIC, SAS, and PCIe interfaces.
VNXe HA Services Support – SP Failure
1. The Cluster Manager detects a problem with the other node, validates the node failure, and invokes the HA failover service.
2. The HA service starts a new file service on the surviving SP, IP network failover is invoked, and the iSCSI LUN is exported.
3. The Health/Alert business logic detects the status change, raises an alert to the user, and emails an alert to EMC Support and the configured email addresses.
(Host access to \\cifs\myshare at 192.168.5.1 and to IQN:C0T0L0 at 192.168.5.2 continues on the surviving Storage Processor.)
VNXe HA Services Support – Service Failure
1. The Local Resource Manager detects the problem and tries restarting the failed service three times before invoking the HA failover service (sketched below).
2. The HA service starts the Management Service on the other node; IP network failover is invoked and the Management Services come up on the second Storage Processor, taking the management IP (192.168.5.1) with them.
3. The Health/Alert business logic detects the status change, raises an alert to the user, and emails EMC Support and the configured email addresses.
(CIFS and iSCSI host I/O continues unaffected throughout.)
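The restart-then-failover behavior described for the Local Resource Manager amounts to a small supervision loop. The sketch below is illustrative only; `start_service` and `invoke_ha_failover` are hypothetical callables, not VNXe internals.

```python
import time

def supervise_service(start_service, invoke_ha_failover, max_restarts=3, backoff_s=1.0):
    """Restart a failed service up to `max_restarts` times; if it still will not
    stay up, hand it over to the HA failover service on the peer SP."""
    for attempt in range(1, max_restarts + 1):
        proc = start_service()          # hypothetical: returns an object with wait()
        if proc.wait() == 0:            # clean shutdown, nothing to recover
            return "stopped"
        print(f"service exited abnormally (attempt {attempt}/{max_restarts}), restarting")
        time.sleep(backoff_s)
    invoke_ha_failover()                # restarts exhausted: fail over to the other SP
    return "failed_over"
```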
VNXe HA Networking Feature Support
• Link Aggregation: IEEE 802.3ad standard for port grouping; improves availability. If one port fails, the other ports take over. Automatically configured, with statistical load balancing based on source and destination MAC addresses (see the sketch after this slide). Does not increase single-client throughput, but may increase aggregate throughput when there are multiple clients.
• FailSafe Networking (FSN): a network configuration that protects against switch and cable failures, providing end-to-end network availability through primary and standby paths; automatically configured between SPs.
• VLAN (802.1Q): the VNXe participates in a VLAN-enabled network for added security and ease of management, e.g. separate VLANs for accounting, sales, engineering, e-mail, and file servers.
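As a rough illustration of MAC-based statistical load balancing, the sketch below hashes a flow's source and destination MAC addresses to pick a member port. This shows the general 802.3ad-style distribution idea, not EMC's exact algorithm.

```python
def select_port(src_mac: str, dst_mac: str, num_ports: int) -> int:
    """Pick a member port by XOR-hashing the source and destination MACs.

    A given client/server MAC pair always hashes to the same port, which is why
    aggregation does not raise single-client throughput but spreads load across
    ports once multiple clients are active.
    """
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_ports

# Two clients talking to the same (hypothetical) four-link port group may land
# on different member ports:
print(select_port("00:16:3e:aa:bb:01", "00:60:16:12:34:56", 4))
print(select_port("00:16:3e:aa:bb:02", "00:60:16:12:34:56", 4))
```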
VNXe Storage Pools
• Stage 1: physical disks are bound into global performance RAID groups, and LUs are carved from those RAID groups.
• Stage 2: the LUs are imported as disk volumes, striped together into a stripe volume, and presented as the pool volume from which storage resources are provisioned.
• Build sequence: Disks → RAID Groups (global performance) → LUs → Disk Volumes → Stripe Volume → Pool Volume (illustrated in the sketch below).
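A toy Python model of the two-stage layering above, under simplifying assumptions: the class names are illustrative only, and RAID overhead is approximated at 20% (as for a 4+1 RAID 5 group). It is not how the VNXe software represents these objects.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Disk:
    capacity_gb: int

@dataclass
class RaidGroup:                    # Stage 1: disks bound into a RAID group
    disks: List[Disk]
    raid_level: str

    def carve_luns(self, lun_size_gb: int) -> List["Lun"]:
        # Approximate usable space (e.g. 4+1 RAID 5 keeps ~80% of raw capacity).
        usable = sum(d.capacity_gb for d in self.disks) * 0.8
        return [Lun(lun_size_gb, self) for _ in range(int(usable // lun_size_gb))]

@dataclass
class Lun:                          # Stage 1 output: LUs handed to the pool layer
    size_gb: int
    raid_group: RaidGroup

@dataclass
class PoolVolume:                   # Stage 2: LUs become disk volumes striped together
    stripe_members: List[Lun]

    @property
    def capacity_gb(self) -> int:
        return sum(lun.size_gb for lun in self.stripe_members)

rg = RaidGroup([Disk(600) for _ in range(5)], raid_level="RAID 5")
pool = PoolVolume(rg.carve_luns(lun_size_gb=100))
print(pool.capacity_gb)             # 2400 GB presented by the pool volume
```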
Thick Provisioning – Shared Folders
• The user provisions a 10 GB CIFS Shared Folder; the full 10 GB (Size) is reserved up front.
• Under the covers: FLARE LUNs carved from the RAID groups and global pools become disk volumes, are striped into a stripe volume, and surface as the pool volume (18.5 GB in this example) backed by the disks.
Twice the Efficiency
• More storage, better utilization, lower cost.
• File deduplication and compression combined with thin provisioning deliver up to 2X the efficiency of traditional storage.
Thin Provisioning – Shared Folders
• The user provisions a CIFS Shared Folder with 15 GB of reserved storage (Maximum Size), but only 5 GB is allocated initially (Initial Size).
• Under the covers: FLARE LUs carved from the RAID groups and global pools become disk volumes, are striped into a stripe volume, and surface as the pool volume (18.5 GB in this example) backed by the disks.
VNXe Thin Provisioning
• Capacity oversubscription for file systems and iSCSI LUNs: the logical size is greater than the physical size, and physical capacity is allocated in real time as data is written, up to the logical size.
• VNXe Virtual Provisioning safeguards: Automatic File System Extension and iSCSI Dynamic LUN Extension past the logical size.
• In the figure, Users A, B, and C each see a 10 GB logical resource, while the physical allocation is 4 GB and the physically consumed storage is 2 GB + 2 GB (a code sketch follows).
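The oversubscription idea reduces to a few lines. The numbers below are illustrative, loosely based on the figure (three 10 GB thin resources over a much smaller physical allocation).

```python
def oversubscription(logical_sizes_gb, physical_pool_gb):
    """Return the total logical capacity presented and the oversubscription ratio."""
    logical_total = sum(logical_sizes_gb)
    return logical_total, logical_total / physical_pool_gb

# Three thin resources of 10 GB each backed by 8 GB of physical pool space:
logical_total, ratio = oversubscription([10, 10, 10], physical_pool_gb=8)
print(f"presented {logical_total} GB over 8 GB physical -> {ratio:.1f}x oversubscribed")
```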
VNXe File-Level Deduplication and Compression Architecture
• Deduplication processes: a policy engine scans all production files looking for data that meets certain criteria (last accessed/modified time, minimum/maximum size, file extension, directory), or files are selected via API. Candidate files are compressed, duplicated files are removed using SHA-1 or byte-by-byte data comparison, and stubs are left pointing to the files in the hidden deduplication store (see the sketch after this slide).
• Deduplication is transparent to clients: the process runs in the background, files look the same before and after processing, and the hidden deduplication store is a working area that is not visible to users.
• Reads decompress data in memory, not on disk; writes are stored alongside the deduplicated data; and modifying one instance of a file has no impact on other deduplicated instances.
(Figure: in a deduplication-enabled file system, duplicate copies in the user-visible area are replaced by stubs pointing into the hidden deduplication store, which holds a single compressed copy of each file; active files remain in place until processed.)
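A toy sketch of the hashing-and-stub idea behind file-level deduplication, assuming a plain directory stands in for the file system and a second directory for the hidden deduplication store. Real VNXe deduplication also compresses candidates and can fall back to byte-by-byte comparison; this only shows the SHA-1/stub mechanism.

```python
import hashlib
import os
import shutil

def dedupe_directory(user_visible_dir: str, dedup_store: str) -> None:
    """Hash each file with SHA-1, keep one copy per unique hash in the hidden
    store, and leave a small stub in the user-visible area pointing at it."""
    os.makedirs(dedup_store, exist_ok=True)
    for name in os.listdir(user_visible_dir):
        path = os.path.join(user_visible_dir, name)
        if not os.path.isfile(path) or name.endswith(".stub"):
            continue                                   # skip stubs and subdirectories
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        store_copy = os.path.join(dedup_store, digest)
        if not os.path.exists(store_copy):
            shutil.move(path, store_copy)              # first instance: keep the data
        else:
            os.remove(path)                            # duplicate: data already stored
        with open(path + ".stub", "w") as stub:        # stub points into the hidden store
            stub.write(store_copy)
```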
VNXe File Deduplication and Compression: Data Modification
• File modification does not cause the file to be reinflated. This allows targeting large (terabyte-sized) files, and files that show some level of activity, while preserving the user experience when those files are accessed.
• Deduplicated files can be modified while still preserving commonality: new data is stored alongside the old data in the hidden reduced-data store.
• The file is either reinflated or reprocessed once new plus old data reaches the logical file size: reinflated if it was selected for deduplication by the internal policy engine, reprocessed if it was selected via API.
VNXe Takes Care of the Complexities of Data Protection
• Problem: calculating the space required for data protection is complicated; it requires formulas and an intimate understanding of the underlying snapshot and replication technology.
• Typical result: data protection either gets slapped together in a “probably should work” fashion, or a Professional Services engagement is required.
• VNXe eliminates the complexity by integrating best practices for provisioning data protection reserve space, making snapshot schedules and application-consistent snapshots easy to configure, and simplifying replication configuration and management.
Data Protection Storage Recommendations
• VNXe provisioning best practices also cover data protection storage, with application-specific best practices for data protection space.
• Thin-provisioning aware: applications using thin provisioning will not require as much data protection storage.
• Data protection space is automatically adjusted as the production storage resource grows.
The differences between Shared Folder and iSCSI snapshots
• Shared Folder snapshots use Copy on First Write (COW): snapshot creation and deletion are very quick, but the space required for snapshots continues to grow as changes are made to the production volume.
• iSCSI snapshots use Redirect on Write (ROW): snapshot creation is very quick, the approach suits latency-sensitive applications, and snapshots do not grow once created.
• A sketch contrasting the two approaches follows.
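The COW/ROW distinction can be made concrete with two tiny block-store classes. This is a conceptual sketch only; it says nothing about the on-disk layout VNXe actually uses.

```python
class CowVolume:
    """Copy on First Write: before a production block is overwritten, the old
    contents are copied into snapshot save space, so snapshot space grows as
    the production volume changes (the Shared Folder behavior)."""
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snap_save = {}                    # block index -> preserved old data

    def write(self, idx, data):
        if idx not in self.snap_save:          # first write to this block since the snap
            self.snap_save[idx] = self.blocks[idx]
        self.blocks[idx] = data

    def read_snapshot(self, idx):
        return self.snap_save.get(idx, self.blocks[idx])


class RowVolume:
    """Redirect on Write: new writes are redirected to fresh locations while the
    snapshot keeps pointing at the frozen original blocks, so the snapshot does
    not grow after creation (the iSCSI behavior)."""
    def __init__(self, blocks):
        self.snapshot_view = list(blocks)      # frozen at snapshot time
        self.redirects = {}                    # block index -> redirected new data

    def write(self, idx, data):
        self.redirects[idx] = data             # no copy of the old block is needed

    def read_current(self, idx):
        return self.redirects.get(idx, self.snapshot_view[idx])
```

Note how every first overwrite costs CowVolume a copy into its save space, whereas RowVolume only records where the new data went.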
Example: VNXe Shared Folder Snapshot
• Provides a read-only or read-write, point-in-time view of data at the file-system or iSCSI LUN level.
• Multiple snapshot versions: 96 read-only plus 16 read-write.
• Typically carries about a 10% capacity overhead.
• Primary applications: simple user-initiated file un-deletes, and efficient logical backup and restore.
• Copy-on-first-write technology avoids fragmentation of the primary file system; one side-file data protection space (SavVol) holds all of a file system's snaps, and that space extends automatically.
• Online, full file system instant restore from any snap; snaps can be deleted out of order.
(Figure: the production file system at 4 PM with checkpoints .ckpt_PFS_9AM, .ckpt_PFS_12PM, .ckpt_PFS_3PM, and .ckpt_PFS_4PM in a single SavVol; the SavVol extends as needed, and the PFS can be restored instantly to the 12:00 p.m. snap.)
Snapshot Consistency Options
• Crash-consistent snapshots: VNXe can create manual and scheduled crash-consistent snapshots through Unisphere.
• Shared Folder: crash-consistent snaps are acceptable.
• iSCSI: good for an emergency, but only applicable to applications with a single virtual disk.
Snapshots and Application Consistency
• iSCSI application-consistent snapshots can be taken using 3rd-party backup applications, which leverage the VSS framework for Microsoft applications to create “hardware snapshots” for the application.
• Preferred solution: the Application Protection Suite, which includes Replication Manager (RM) software. An RM agent runs on the host and guarantees that the applications using the storage are quiesced prior to creating snapshots, again leveraging the VSS framework for Microsoft applications.
• For applications with multiple virtual disks, Application Sets ensure consistency across all of the storage resources.
• Exchange automatically undergoes snapshot consistency checks after the snapshot is created (the sequence is sketched below).
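The ordering Replication Manager enforces can be summarized in a short sketch. All of the callables here are hypothetical stand-ins; they are not the RM or VSS APIs.

```python
def application_consistent_snapshot(quiesce, create_hardware_snapshot, resume,
                                    storage_resources, verify=None):
    """Quiesce the application (via VSS for Microsoft apps), snapshot every
    storage resource in the application set while it is quiesced so they stay
    mutually consistent, resume I/O, then run any post-snapshot check (e.g.
    an Exchange consistency verification)."""
    quiesce()
    try:
        snaps = [create_hardware_snapshot(res) for res in storage_resources]
    finally:
        resume()                   # never leave the application quiesced
    if verify is not None:
        verify(snaps)
    return snaps
```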
Data Recovery Options
• Snapshots
• Shared Folder: user-initiated recovery using the hidden .ckpt directory or the Microsoft “Previous Versions” interface.
• iSCSI: snapshot promotion and revert performed using either Unisphere or RM.
(Figure: the Microsoft Previous Versions tab.)
VNXe Replication
• Replication can be*: VNXe to VNXe, VNXe to VNX, VNXe to Celerra (DART 6.0+), or Celerra (DART 6.0+) to VNXe.
• VNXe-to-VNXe fan-out and fan-in ratio of 5 to 1.
• Same technology for file and block.
• Snapshots of the file system or LUN (on a VDM) at the production site are replicated across the WAN to the disaster recovery remote site.
* See the “EMC Simple Support Matrix” (ESSM) for the VNXe Series for any prerequisites and requirements.
Replication Frequency
• Shared Folder replication is driven from Unisphere: manual synchronization, or automatic synchronization based on RPO (Recovery Point Objective); an RPO check is sketched below.
• iSCSI replication uses RM: schedule-driven synchronization frequency with guaranteed application consistency.
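An RPO-driven schedule amounts to a simple elapsed-time check, as in the sketch below; the real scheduler also has to account for transfer times and in-flight sessions, and the function name is hypothetical.

```python
import time

def maybe_synchronize(rpo_minutes, last_sync_epoch, synchronize, now=None):
    """Trigger a replication sync whenever the time since the last successful
    sync would otherwise exceed the configured RPO."""
    now = time.time() if now is None else now
    if now - last_sync_epoch >= rpo_minutes * 60:
        synchronize()
        return now                 # new "last successful sync" timestamp
    return last_sync_epoch
```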
VNXe Anti-Virus Support
• A shared bank of virus-checking servers; multiple vendors' engines can be deployed concurrently.
• The virus-checking server only reads part of each file, and file access is blocked until the file has been checked.
• Scan after update: a write/close, or the first read after a new virus-definition file, triggers a virus-checking request.
• Scan on first read, with automatic access-time update.
• Notification on virus detection, and an anti-virus sizing tool.
• Runs over the VNX Event Enabler.
(Figure: a user's file operation on the VNXe generates a virus-checking request to the virus-checking server, which holds the virus-checking signatures.)