
Storage & Hyper-V: The Choices you can make and the things you need to know


Presentation Transcript


  1. Storage & Hyper-V: The Choices you can make and the things you need to know Jeff Woolsey Principal Group Program Manager Windows Server, Hyper-V WSV312

  2. Session Objectives And Takeaways • Understand the storage options for Hyper-V, as well as the use cases for DAS and SAN • Learn what’s new in Windows Server 2008 R2 for storage and Hyper-V • Understand the different high-availability options for Hyper-V with SANs • Learn about the performance improvements in the VHD, pass-through, and iSCSI Direct scenarios

  3. Storage Performance/Sizing • It’s important to scale storage performance to the total workload requirements of each VM • Spindles are still key • Don’t migrate 20 physical servers with 40 spindles each to a Hyper-V host with 10 spindles • Don’t use leftover servers as a production SAN

  4. Windows Storage Stack • Buses – scans up to 8 buses (Storport) • Targets – up to 255 per bus • LUNs – up to 255 per target • Support for volumes up to 256TB • Volumes >2TB have been supported since Server 2003 SP1 • Common question: what is the supported maximum transfer size? • It depends on the adapter/miniport (e.g., QLogic/Emulex)

  5. Hyper-V Storage Parameters • Maximum VHD size: 2040GB • Physical disk size is not limited by Hyper-V • Up to 4 IDE devices • Up to 4 SCSI controllers with 64 devices each • Optical devices on IDE only

  6. Storage Connectivity • From the parent partition: • Direct attached (SAS/SATA) • Fibre Channel • iSCSI • Network-attached storage is not supported, except for ISOs • Hot add and remove: virtual disks on the SCSI controller only

  7. ISOs on Network Shares • The Hyper-V host’s machine account needs access to the share • Constrained delegation is required when attaching ISOs from a remote management console (see the sketch below)
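A minimal sketch of the share-access requirement, assuming the SmbShare cmdlets from Windows Server 2012+ (on 2008 R2, use the share and folder ACL dialogs or net share/icacls); the share path, domain, and host name are hypothetical:

```powershell
# Hypothetical names: "ISOs" share, CONTOSO domain, HYPERV01 host.
# Grant the host's machine account (note the trailing $) share-level read access.
Grant-SmbShareAccess -Name "ISOs" -AccountName "CONTOSO\HYPERV01$" -AccessRight Read -Force

# Matching NTFS read permission on the underlying folder.
icacls "D:\Shares\ISOs" /grant "CONTOSO\HYPERV01$:(OI)(CI)R"
```

Constrained delegation itself is configured on the host’s computer object in Active Directory (delegate the "cifs" service to the file server) rather than by a single command.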

  8. SCSI Support in VMs • Supported in: • Windows XP Professional x64 • Windows Server 2003 • Windows Server 2008 & 2008 R2 • Windows Vista & Windows 7 • SuSE Linux • Not supported in: • Windows XP Professional x86 • All other operating systems • Requires Integration Services to be installed

  9. Antivirus and Hyper-V • Exclude: • VHDs & AVHDs (or their directories) • The VM configuration directory • VMMS.exe and VMWP.exe • May not be required on a Core installation with no other roles • Run antivirus inside the virtual machines

  10. Encryption and Compression • BitLocker on the parent partition: supported • Encrypting File System (EFS): • Not supported on the parent partition • Supported in virtual machines • NTFS compression (parent partition): • Allowed in Windows Server 2008 • Blocked in Windows Server 2008 R2

  11. Hyper-V Storage & Pass-Through… Step-by-Step Instructions

  12. Hyper-V Storage... • Performance-wise, from fastest to slowest: • Fixed VHDs / pass-through disks – identical performance as of R2 • Dynamically expanding VHDs – grow as needed • Pass-through disks in more detail: • Pro: the VM writes directly to a disk/LUN without encapsulation in a VHD • Cons: VM snapshots can’t be used, and an entire disk/LUN is dedicated to one VM • A sketch of creating each VHD type follows below
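The fixed/dynamic distinction is made at creation time. A minimal sketch using New-VHD from the Hyper-V PowerShell module (Windows Server 2012+; on 2008 R2 the same choice is made in Hyper-V Manager’s New Virtual Hard Disk wizard), with illustrative paths and sizes:

```powershell
# Fixed VHD: all space is allocated up front; fastest, most predictable I/O.
New-VHD -Path "D:\VHDs\sql-data.vhd" -SizeBytes 100GB -Fixed

# Dynamically expanding VHD: starts small and grows as the guest writes.
New-VHD -Path "D:\VHDs\test-lab.vhd" -SizeBytes 100GB -Dynamic
```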

  13. More Hyper-V Storage • Hyper-V provides flexible storage options • DAS: SCSI, SATA, eSATA, USB, FireWire • SAN: iSCSI, Fibre Channel, SAS • High Availability/Live Migration requires block-based, shared storage • Guest clustering: via iSCSI only

  14. VM Settings: No Pass-Through

  15. Computer Management: Disk

  16. Taking a disk offline

  17. Disk is offline…

  18. Pass-Through Configured (a scripted equivalent of these steps follows below)
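The screenshots above walk through the GUI; as a scripted sketch, the same two steps look like this with the Storage and Hyper-V modules from Windows Server 2012+ (on 2008 R2 the equivalent is diskpart’s select disk/offline disk plus Hyper-V Manager). The disk number and VM name are illustrative:

```powershell
# 1. A disk must be Offline in the parent before a VM can own it directly.
Set-Disk -Number 3 -IsOffline $true

# 2. Attach the raw disk to the VM's SCSI controller as a pass-through disk.
Add-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -DiskNumber 3
```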

  19. Disk Types & Performance

  20. Disk type comparison (Read)

  21. Hyper-V R2 Fixed Disks • Fixed virtual hard disks (write): • Windows Server 2008 (R1): ~96% of native • Windows Server 2008 R2: equal to native • Fixed virtual hard disks vs. pass-through: • Windows Server 2008 (R1): ~96% of pass-through • Windows Server 2008 R2: equal to pass-through

  22. Hyper-V R2 Dynamic Disks • Massive performance boost for dynamically expanding VHDs • 64K sequential write: • Windows Server 2008 R2: 94% of native – equal to Hyper-V R1 fixed disks • 4K random write: • Windows Server 2008 R2: 85% of native

  23. Disk Layout – FAQ • Assuming Integration Services are installed, do I use: • IDE or SCSI? • One IDE channel or two? • One VHD per SCSI controller? • Multiple VHDs on a single SCSI controller? • R2: VHDs can be hot-added to virtual SCSI (see the sketch below)…
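The hot-add point is the one with a scripting angle. A sketch in cmdlet form (Hyper-V module, Windows Server 2012+), with illustrative names; note the SCSI restriction, since IDE devices cannot be hot-added:

```powershell
# Create a new VHD and attach it to a *running* VM - SCSI controller only.
New-VHD -Path "D:\VHDs\extra-data.vhd" -SizeBytes 50GB -Dynamic
Add-VMHardDiskDrive -VMName "App01" -ControllerType SCSI -ControllerNumber 0 `
    -Path "D:\VHDs\extra-data.vhd"
```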

  24. Disk layout - results

  25. Differencing VHDs: Performance vs. Chain Length

  26. Pass-Through Disks: When to Use • Performance is not the only consideration • Use them if you need support for storage management software • Backup & recovery applications that require direct access to the disk • VSS/VDS hardware providers • Allows the VM to communicate via in-band SCSI, unfiltered (application compatibility)

  27. Storage Device Ecosystem • Storage device support maps to the same support that exists for physical servers • Advanced scenarios such as Live Migration require shared storage • Hyper-V supports both Fibre Channel & iSCSI SANs connected from the parent • Fibre Channel SANs still represent the largest install base for SANs and see heavy use with virtualization • Live Migration is supported with storage arrays that have obtained the Designed for Windows logo and that pass Cluster Validation

  28. Storage Hardware & Hyper-V • Storage hardware that is qualified for Windows Server is qualified for Hyper-V • Applies to devices running from the Hyper-V parent • Storage devices qualified for Server 2008 R2 are qualified for Server 2008 R2 Hyper-V • No additional storage device qualification is required for Hyper-V R2

  29. SAN Boot and Hyper-V • Booting the Hyper-V host from SAN is supported • Fibre Channel or iSCSI from the parent • Booting a child VM from SAN is supported using an iSCSI boot-with-PXE solution (e.g., emBoot/Double-Take) • Must use the legacy NIC • Native VHD boot: • Booting a physical system from a local VHD is a new feature in Server 2008 R2 (sketch below) • Booting from a VHD located on a SAN (iSCSI or FC) is not currently supported (being considered for the future)
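Native VHD boot is configured with bcdedit. A sketch of the documented steps, run from an elevated prompt; the {guid} placeholder stands for the identifier that the /copy command prints, the path is illustrative, and the braces are quoted so PowerShell passes them through literally:

```powershell
bcdedit /copy '{current}' /d "Windows Server 2008 R2 (VHD boot)"  # prints the new entry's {guid}
bcdedit /set '{guid}' device 'vhd=[C:]\VHDs\boot.vhd'
bcdedit /set '{guid}' osdevice 'vhd=[C:]\VHDs\boot.vhd'
bcdedit /set '{guid}' detecthal on
```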

  30. iSCSI Direct • The Microsoft iSCSI Software Initiator runs transparently from within the VM • The VM operates with full control of the LUN • The LUN is not visible to the parent • The iSCSI initiator communicates with the storage array over the TCP stack • Best for application transparency • LUNs can be hot-added & hot-removed without rebooting the VM (2008 and 2008 R2) • VSS hardware providers run transparently within the VM • Backup/recovery runs in the context of the VM • Enables the guest clustering scenario (see the sketch below)
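A sketch of iSCSI Direct from inside the guest. The initiator cmdlets shown require a Windows Server 2012+ guest (older guests use iscsicli.exe or the iSCSI Initiator control panel applet); the portal address and IQN fragment are hypothetical:

```powershell
# Register the array's portal, then log in to the VM's own LUN.
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.50"
Get-IscsiTarget |
    Where-Object NodeAddress -like "*vm-data*" |   # hypothetical IQN fragment
    Connect-IscsiTarget -IsPersistent $true        # survive guest reboots
```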

  31. High-Speed Storage & Hyper-V • Larger virtualization workloads require higher throughput • True for all scenarios: • VHD • Pass-through • iSCSI Direct • 8Gb Fibre Channel & 10Gb iSCSI will become more common • As throughput grows, the requirement to support higher IO to the disks also grows

  32. High-Speed Storage & Hyper-V • Customers concerned about performance should not use a single 1Gb Ethernet NIC port to connect to iSCSI storage • Using multiple NIC ports and aggregating throughput with MPIO or MCS is recommended • The Microsoft iSCSI Software Initiator performs very well at 10Gb wire speed • 10Gb Ethernet adoption is ramping up, driven by the increasing use of virtualization

  33. Jumbo Frames • Offer a significant performance gain for TCP connections, including iSCSI • Maximum frame size of 9K • Reduce TCP/IP overhead by up to 84% • Must be enabled at all end points (switches, NICs, target devices) • The virtual switch is defined as an end point • The virtual NIC is defined as an end point

  34. Jumbo Frames in Hyper-V R2 • Added support in the virtual switch • Added support in the virtual NIC • Integration components required • To validate that jumbo frames are configured end to end (full sketch below): • ping -n 1 -l 8000 -f <hostname> • -l (payload length in bytes) • -f (don’t fragment the packet into multiple Ethernet frames) • -n (count)
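Putting both halves together, a sketch of enabling jumbo frames on a dedicated iSCSI NIC and validating end to end. Set-NetAdapterAdvancedProperty is from Server 2012+ (on 2008 R2 use the NIC driver’s Advanced properties tab), and the exact display name/value pair ("Jumbo Packet", "9014 Bytes") varies by driver; the adapter name and target address are illustrative:

```powershell
Set-NetAdapterAdvancedProperty -Name "iSCSI1" `
    -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# An 8000-byte, don't-fragment ping only succeeds if every hop allows jumbo frames.
ping -n 1 -l 8000 -f 10.0.1.50
```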

  35. Windows Server 2008 Hyper-V Network I/O Path (diagram: the management OS and VMs connect over VMBus to the virtual machine switch, which performs routing, VLAN filtering, and a data copy in front of the physical NIC’s miniport driver) • Data packets are sorted and routed to their respective VMs by the VM switch

  36. Windows Server 2008 R2 VMQ (diagram: as above, but the Ethernet controller adds per-VM queues Q1/Q2, a default queue, and a switch/routing unit; a cmdlet sketch follows below) • Data packets are sorted into multiple queues in the Ethernet controller based on MAC address and/or VLAN tags • The sorted and queued data packets are then routed to the VMs by the VM switch • Enables data packets to DMA directly into the VMs • Removes the data copy between the memory of the management OS and the VM’s memory
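In this era VMQ is enabled through the NIC driver; as a forward-looking sketch, the cmdlet form from Windows Server 2012+ looks like this (adapter and VM names hypothetical):

```powershell
Get-NetAdapterVmq                      # is the NIC VMQ-capable, and is it enabled?
Enable-NetAdapterVmq -Name "10GbE-VMs"
Set-VMNetworkAdapter -VMName "Web01" -VmqWeight 100   # 0 disables VMQ for this vNIC
```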

  37. More than 25% throughput gain with VMDq/VMQ as VMs scale • Intel tests with Microsoft VMQ (source: Microsoft lab, Mar 2009) • Quad-core Intel server, Windows Server 2008 R2 Beta, ntttcp benchmark, standard frame size (1500 bytes) • Intel 82598 10 Gigabit Ethernet Controller • Near line-rate throughput with VMDq for 4 VMs • Throughput increase from 5.4Gbps to 9.3Gbps • *Other names and brands may be claimed as the property of others.

  38. Hyper-V Performance Improvements • For the virtual network interface and iSCSI in Windows 7 / Windows Server 2008 R2 • Hyper-V parent (R1/R2): RSS, TCP Chimney, LSO v1, LSO v2, Jumbo Frames, MPIO & MCS • Hyper-V 2008 R2 child: TCP Chimney, LSO v1, LSO v2, Jumbo Frames, MPIO & MCS • Performance benefits for iSCSI Direct connections

  39. Enterprise Storage Features • Performance: iSCSI digest offload • iSCSI increased performance • MPIO new load-balancing algorithm • Improved solid-state disk performance (70% reduction in latency) • Manageability: iSCSI Quick Connect • Improved SAN configuration and usability • Storage management support for SAS • Scalability: Storport support for >64 cores • Scale-up storage workloads • Improved scalability for iSCSI & Fibre Channel SANs • Automation: MPIO datacenter automation • MPIO automated setting of the default load-balance policy • Diagnosability: Storport error log extensions • Multipath health & statistics reporting • Configuration reporting for MPIO • Configuration reporting for iSCSI • Reliability: Additional redundancy for boot from SAN – up to 32 paths

  40. iSCSI Quick Connect: New in Windows 7/Windows Server 2008 R2

  41. High Availability with Hyper-V Using MPIO & a Fibre Channel SAN (diagram: clients, switches, Windows Server hosts, and the fabric/Fibre Channel network in front of LUNs holding VHDs) • In Hyper-V, Fibre Channel LUNs are supported as: • Pass-through disk – connect from the parent, map to the VM; the VM formats it with NTFS • VHD – connect from the Hyper-V host, format with NTFS from the host, and create VHDs for each guest

  42. MCS & MPIO with Hyper-V • Provide high availability for access to storage arrays • Especially important in virtualized environments to reduce single points of failure • Load balancing & failover using redundant HBAs, NICs, switches, and fabric infrastructure • Aggregate bandwidth for maximum performance • MPIO is supported with Fibre Channel, iSCSI, and shared SAS • 2 options for multipathing with iSCSI: • Multiple Connections per Session (MCS) • Microsoft MPIO (Multipathing Input/Output) • Protect against loss of a data path during firmware upgrades on the storage controller

  43. Configuring MPIO with Hyper-V • MPIO: connect from the parent (setup sketch below) • Applies to: • Creating VHDs for each VM • Pass-through disks • Additional sessions to the target can also be added through MPIO directly from the guest • Additional connections can be added through MCS with iSCSI using iSCSI Direct
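A sketch of the parent-side MPIO setup. On 2008 R2 the built-in configuration tool is mpclaim.exe; the MPIO cmdlets shown below require Windows Server 2012+:

```powershell
Add-WindowsFeature Multipath-IO                       # install the MPIO feature
Enable-MSDSMAutomaticClaim -BusType iSCSI             # claim iSCSI LUNs for the Microsoft DSM (2012+)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR    # default to round-robin across paths (2012+)
```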

  44. iSCSI Perf Best Practices with Hyper-V • Standard networking & iSCSI best practices apply • Use jumbo frames • Use dedicated NIC ports for: • iSCSI traffic (server to SAN) – multiple ports to scale • Client ↔ server traffic (LAN) – multiple ports to scale • Cluster heartbeat (if using a cluster) • Hyper-V management

  45. Hyper-V Enterprise Storage Testing: Performance Configuration • Windows Server 2008 R2 Hyper-V • Microsoft MPIO, 4 sessions • 64K request size, 100% read • Microsoft iSCSI Software Initiator • Intel 10Gb/E NIC • RSS enabled (applicable to the parent only) • Jumbo frames (9000-byte MTU) • LSO v2 (offloads packets up to 256K) • LRO • Hyper-V Server 2008 R2 • NetApp FAS 3070

  46. Configuring Hyper-V for Networking & iSCSI

  47. Hyper-V Networking • Two 1Gb/E physical network adapters at a minimum: • One for management • One (or more) for VM networking • Dedicated NIC(s) for iSCSI • Connect the parent to a back-end management network • Only expose the guests to internet traffic (see the sketch below)
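A sketch of that separation in cmdlet form (Hyper-V module, Windows Server 2012+; in 2008 R2 the same layout is built in Virtual Network Manager), with hypothetical adapter names:

```powershell
# VM traffic: external switch on NIC2, not shared with the parent partition.
New-VMSwitch -Name "VM-External" -NetAdapterName "NIC2" -AllowManagementOS $false

# NIC1 stays with the parent for management; further NICs stay un-switched,
# dedicated to iSCSI (with MPIO), per the guidance above.
```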

  48. Hyper-V Network Configurations • Example 1: • The physical server has 4 network adapters • NIC 1: assigned to the parent partition for management • NICs 2/3/4: assigned to virtual switches for virtual machine networking • Storage is non-iSCSI, such as: • Direct attached • SAS or Fibre Channel

  49. Hyper-V Setup & Networking 1
