View Sizing and Best Practices
Agenda • VDI Sizing Methodology: Study the Worker Profile, Design Server and Storage Architecture, Assess Performance • Server and Storage Sizing Best Practices • Add-on: View 3 Reference Architecture
VDI Capacity Planning • Proper sizing is critical to successful VDI project • Affects TCO/ROI • Affects end-user experience • VDI technology advancing at rapid rate • Sizing considerations changing rapidly
Step 1: Measure Physical Desktop Usage • Study usage patterns of physical desktop (perfmon) • VMware Capacity Planner can also be used • We measured typical desktop running common apps (Word, Excel, PPT, Adobe, IE, VirusScan)
Step 2: Estimate CPU Requirements • Examine physical usage patterns • Perfmon data provides CPU requirements from physical desktop • Observed 130MHz average on physical desktop • Targeting 64 virtual desktops per server (8 VMs per core) • 64 VMs x 130MHz = 8.3 GHz • Additional considerations when virtualizing • Storage virtualization • Network virtualization • Connection protocol • Additional headroom for spikes
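As a rough illustration of this step, the aggregate CPU arithmetic can be scripted. A minimal Python sketch follows, assuming a 15% headroom allowance that is not part of the measured data:

```python
# CPU sizing sketch using the per-desktop average measured with perfmon.
# The 15% headroom figure is an illustrative assumption for spikes and
# virtualization overhead; substitute measured values for your environment.

AVG_DESKTOP_MHZ = 130    # perfmon average for one physical desktop
VMS_PER_HOST = 64        # target consolidation ratio (8 VMs per core, 8 cores)
HEADROOM = 0.15          # assumed allowance for spikes, storage/network
                         # virtualization and the display protocol

base_mhz = AVG_DESKTOP_MHZ * VMS_PER_HOST        # 8,320 MHz, i.e. ~8.3 GHz
required_ghz = base_mhz * (1 + HEADROOM) / 1000.0

print(f"Base requirement : {base_mhz / 1000.0:.1f} GHz")
print(f"With {HEADROOM:.0%} headroom: {required_ghz:.1f} GHz")
```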
Step 3: Estimate Memory Requirements • Examine physical usage patterns • Memory allocated: 512MB • Memory consumed: 258MB • Additional considerations when virtualizing • ESX can reduce memory requirements through page sharing • Windows XP can be reduced to a 125MB footprint on ESX • Similar reductions on ESX for common applications • Memory footprint of four idle XP VMs quickly decreased to 300MB due to aggressive page sharing; four idle Vista VMs decreased to 800MB (Vista has a larger memory footprint)
Step 3: Estimate Memory Requirements (cont.) • High watermark (no page sharing): • 64 VMs * 512MB = 32GB • Low watermark: • 64 VMs * 125MB (Win XP) = 8GB • 64 VMs * (15MB Word) = 1GB • 64 VMs * (15MB Excel) = 1GB • 64 VMs * (10MB PowerPoint) = 1GB • 64 VMs * (Adobe, IE, Winzip) = 3GB • 64 VMs * (VM memory overhead) = 3GB • Low watermark: 17GB • Potential range: 17GB to 32GB
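The watermark arithmetic above can be expressed as a short sketch; the per-line rounding mirrors the slide, and all per-VM figures are the ones quoted there:

```python
# Memory watermark sketch for 64 Windows XP desktops, using the per-VM
# figures quoted above. Actual consumption depends on how much page
# sharing ESX achieves across the VMs.
import math

VMS = 64

def gb(per_vm_mb):
    """Aggregate MB across all VMs, rounded up to whole GB as on the slide."""
    return math.ceil(VMS * per_vm_mb / 1024)

high_watermark = gb(512)          # no page sharing: 32 GB

low_watermark = sum([
    gb(125),   # Windows XP footprint with page sharing -> 8 GB
    gb(15),    # Word        -> 1 GB
    gb(15),    # Excel       -> 1 GB
    gb(10),    # PowerPoint  -> 1 GB
    3,         # Adobe, IE, Winzip (aggregate figure from the slide)
    3,         # per-VM virtualization memory overhead (aggregate figure)
])

print(f"High watermark: {high_watermark} GB")   # 32 GB
print(f"Low watermark : {low_watermark} GB")    # 17 GB
```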
Step 4: Estimate Network Requirements • Examine physical usage patterns • Estimated traffic / NIC speed = # of NICs • Perfmon showed a 245 bytes/sec average on the physical desktop • 64 VMs x 245 bytes/sec ≈ 15,680 bytes/sec (~16 KBps) • Additional considerations when virtualizing • Remote display protocol • Shared, redirected folders (My Documents etc.) • Printing • Multimedia • Multi-port NICs, bus speed (PCI, PCI-X)
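A small sketch of the estimated traffic / NIC speed rule, using the perfmon figure above; the 1GbE link speed and 80% usable-bandwidth factor are assumptions for illustration, and display-protocol traffic is excluded:

```python
# Network sizing sketch: aggregate guest traffic divided by usable NIC
# bandwidth gives the NIC count. The perfmon figure covers guest traffic
# only; the remote display protocol, redirected folders, printing and
# multimedia add traffic on top of this.
import math

VMS = 64
AVG_BYTES_PER_SEC = 245                       # perfmon average per desktop
NIC_USABLE_BYTES_PER_SEC = 0.8 * 125_000_000  # 1GbE at an assumed 80% usable

aggregate = VMS * AVG_BYTES_PER_SEC           # ~15,680 bytes/sec (~16 KBps)
nics = max(1, math.ceil(aggregate / NIC_USABLE_BYTES_PER_SEC))

print(f"Aggregate guest traffic: {aggregate / 1024:.1f} KBps")
print(f"NICs for guest traffic : {nics}")
```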
Step 5: Estimate Storage Requirements (Capacity) • (Size of vmdk) + (VM RAM) + (suspend/resume) + (100MB per VM for logs) • Sample calculation for 64 VMs (32 VMs per LUN): • 32 VMs * 10GB per VM (vmdk) = 320GB • 32 VMs * 512MB (VM RAM) = 16GB • 32 VMs * 512MB (suspend/resume) = 16GB • 32 VMs * 100MB (logs) = ~4GB • Total per LUN = 356GB • 356GB + 15% free space = 410GB • 410GB x 2 LUNs (32 VMs per LUN) = 820GB • **Does not include space for user data
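A minimal sketch of the per-LUN capacity formula; the function name and default values are illustrative, and user data drives are excluded as noted above:

```python
# Storage capacity sketch mirroring the formula above: vmdk + VM RAM
# (the per-VM swap file) + suspend/resume + logs, per LUN, plus free space.
# User data drives are not included.

MB_PER_GB = 1024

def lun_capacity_gb(vms_per_lun, vmdk_gb=10, vm_ram_mb=512,
                    logs_mb=100, free_space=0.15):
    per_vm_mb = vmdk_gb * MB_PER_GB + vm_ram_mb + vm_ram_mb + logs_mb
    raw_gb = vms_per_lun * per_vm_mb / MB_PER_GB
    return raw_gb * (1 + free_space)

per_lun = lun_capacity_gb(32)   # ~408 GB (the slide rounds up to ~410 GB)
total = 2 * per_lun             # 64 VMs across 2 LUNs -> ~817 GB (~820 GB)

print(f"Per LUN: {per_lun:.0f} GB")
print(f"Total  : {total:.0f} GB")
```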
Step 5b: Estimate Storage Requirements (Performance) • Examine physical usage patterns • Perfmon showed 5 IOPS and 115 KBps (average) • 64 VMs x 5 IOPS = 320 IOPS • 64 VMs x 115 KBps = 7360 KBps • Additional considerations when virtualizing • Other systems/VMs sharing the same spindles • VMware ESX disk I/O • Boot periods, desktop search, defrag, virus scans, etc.
Summarizing the Worker Profile • Minimum estimated CPU: • 8.3GHz + virtualization overhead (typically 5-10%) • Minimum estimated memory: • 17-32GB (dependent on page sharing across VMs) • Minimum estimated network: • 16KBps + virtualization overhead • Minimum estimated storage: • 820GB • 320 IOPS / 7360 KBps • **Based on study of the physical desktop
Server Architecture Considerations • Traditional rack-mount or blade? • Blades offer smaller datacenter footprint • Higher up-front cost • PCI slots (quantity, PCI-X/PCI-E, redundancy) • CPU/Memory • Target CPU utilization (typically 65-80%) • Processor family (VMotion compatibility) • # of sockets/cores vs. qty/cost of RAM • Impact of single server failure
Storage Architecture Considerations • Protocol (FC, iSCSI, NFS) • Existing infrastructure • Tiered solutions (e.g. FC for system drives, NAS for data drives) • How many virtual machines per LUN? • 30-40 .vmdk for average I/O VMs • 15-20 .vmdk for heavy I/O VMs • Highly dependent on the storage array: http://www.vmware.com/resources/techresources/1059 • Disk drive rotational speed and capacity • Read/write mix, RAID type • VMware ESX boot-from-SAN
Comparing Disk Drive Models (Speed/Capacity) • Calculating required spindles, example 1 (146GB 10K drive): • 820GB / 146GB (RAID 5) = 7 drives (approx) • 320 IOPS / 130 IOPS per drive = 3 drives (approx) • Capacity requires 7 drives • Calculating required spindles, example 2 (500GB SATA drive): • 820GB / 500GB (RAID 5) = 3 drives (approx) • 320 IOPS / 60 IOPS per drive = 6 drives (approx) • Performance requires 6 drives • Sample disk drive performance stats: ~130 IOPS per 146GB 10K drive, ~60 IOPS per 500GB SATA drive
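Both examples follow the same pattern: compute the drives needed for capacity and the drives needed for IOPS, then take the larger. A sketch using the same sample figures (RAID 5 parity and write penalty are not modeled):

```python
# Spindle-count sketch: take the larger of the capacity-driven and
# IOPS-driven drive counts. RAID 5 parity and write penalty are not
# modeled here, which is why the slide's capacity example shows 7 drives.
import math

def spindles(capacity_gb, iops, drive_gb, drive_iops):
    for_capacity = math.ceil(capacity_gb / drive_gb)
    for_iops = math.ceil(iops / drive_iops)
    return max(for_capacity, for_iops)

# Example 1: 146GB 10K drives (~130 IOPS each) -> capacity-bound
print(spindles(820, 320, drive_gb=146, drive_iops=130))  # 6 (+1 parity ~ 7)

# Example 2: 500GB SATA drives (~60 IOPS each) -> performance-bound
print(spindles(820, 320, drive_gb=500, drive_iops=60))   # 6
```

The takeaway is that larger, slower drives can flip a design from capacity-bound to performance-bound even though fewer drives are needed for raw space.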
Data Center Architecture Considerations • Datacenter design considerations • Balance cost, # of VMs, and # of VMware ESX hosts to manage • Use a dedicated ESX cluster for VDI? • Integrate into an existing cluster? • Configuration maximums • Max 128 vCPUs per VMware ESX host • Max 20 vCPUs per core • Max 32 servers per HA/DRS cluster • 256 VMFS volumes per server • 200 ESX hosts per VirtualCenter server • 2000 virtual machines per VirtualCenter server http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_config_max.pdf
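As an illustration, a proposed design can be checked against these maximums. The sketch below uses the 64-VM building block from this deck; the cluster size and VirtualCenter counts are assumed purely for the example:

```python
# Sanity check of a proposed VDI design against the VI 3.5 configuration
# maximums listed above. Design values use the 64-VM building block from
# this deck; cluster and VirtualCenter counts are assumed for illustration.

MAXIMUMS = {
    "vCPUs per ESX host":          128,
    "vCPUs per core":              20,
    "hosts per HA/DRS cluster":    32,
    "VMFS volumes per host":       256,
    "ESX hosts per VirtualCenter": 200,
    "VMs per VirtualCenter":       2000,
}

design = {
    "vCPUs per ESX host":          64,   # 64 single-vCPU desktops per host
    "vCPUs per core":              8,    # 8 VMs per core
    "hosts per HA/DRS cluster":    8,    # assumed cluster size
    "VMFS volumes per host":       2,    # 2 LUNs per host
    "ESX hosts per VirtualCenter": 8,    # assumed
    "VMs per VirtualCenter":       512,  # 8 hosts x 64 VMs
}

for item, limit in MAXIMUMS.items():
    status = "OK" if design[item] <= limit else "EXCEEDS MAXIMUM"
    print(f"{item:30} {design[item]:>5} / {limit:<5} {status}")
```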
Virtual Desktop Considerations • Choosing an operating system • Windows XP (512MB or higher) • Vista (1GB or higher) • Choosing resource allocations • # of vCPU • Quantity of RAM • Choosing Storage requirements • System drive capacity (GB) • Data drive capacity (GB)
Sample Building Block • PowerEdge 2950 • 2 x quad-core Xeon 2.7GHz • 32GB RAM • ESX 3.5.0 Update 2 • Dell PS5000X • 146GB 10K drives • RAID 5 • 32 VMs per public NIC • 32 VMs per s/w iSCSI LUN • Windows XP SP2
Simulating VDI Workloads • Capacity planning is difficult but critical • True human activity is difficult to replicate • Requirements for simulating VDI workloads • Must be repeatable • Must be measurable • Must be scalable • Must closely resemble human PC usage patterns • Must be tunable via parameters
Workload Based on AutoIT • Freeware for simulating the Windows GUI: http://www.autoitscript.com/autoit3/ • Executes the following: • Word (open, modify random pages, save and close) • Excel (write to cells, sorting, formulas, charts) • PowerPoint (slideshow and edit slides) • Adobe Acrobat (open and browse random pages) • Internet Explorer (browse plain-text pages and a web album) • Winzip (install/uninstall) • McAfee VirusScan (continuous on-access scan)
Assess Performance • Tools for performance monitoring • esxtop • vscsiStats • PS Series Monitor (beta) • Areas to watch: • CPU: % Processor Time • Memory: Free MBytes, page sharing, swapping • Network: bytes transferred • Disk: reads and writes/sec (IOPS), throughput (MB/s)
CPU observations – Steady State • 8 cores @ 2.67GHz • Averaged approximately 55% • High spikes observed • s/w iSCSI: 55% • h/w iSCSI 49% • Headroom available • **Does not include display protocol
Memory Observations – Steady State • 32GB physical RAM • 64 VMs (512MB RAM each) • Identical OS/apps • Average consumed: 19GB • Page sharing: 13GB • No ESX swapping observed • Page sharing increased/decreased as common apps were opened/closed
Storage Observations – Steady State • Bursty disk I/O • Mainly due to opening/closing apps • IOPS (estimated 320): • 185 average • 650 peak • Throughput (estimated 7360 KBps): • 3530 KBps average • 13733 KBps peak • Potential bottleneck if not sized correctly
Measuring In-guest Timing • AutoIT allows in-guest measurements • Helps to validate end-user experience • Shows scalability as more virtual desktops are added
Sizing Best Practices - Server • CPU recommendations • Dual socket (quad-core) offers a nice balance of cost/performance/ROI • Current max of 128 vCPUs per ESX host • Increasing in an upcoming ESX release • Memory recommendations • Maximize page sharing (same OS and application versions) • Effectively leverage over-commit, avoid swapping • Plan for virtual machine memory overhead • 84MB for each 32-bit VM with 1GB RAM
Sizing Best Practices - Storage • Base capacity formula: • (Size of vmdk) + (VM RAM) + (suspend/resume) + (100MB per VM for logs) • Does not include data drives • Understand the workload: • Relatively low I/O in normal operation • Our workload showed mostly 4KB random I/O • Plan for spikes: • OS patching - stagger, run off-hours • Manual virus scans - use on-access scans instead • Desktop search • Defrag • Cold boot, suspend/resume
Sizing Best Practices - Storage • Understand disk drive technology in VDI solutions • Larger-capacity drives typically offer lower performance • Compounded during boot storms, virus scan storms, etc. • Sizing for peaks vs. averages • SLAs influence design choices • Size for capacity (GB) and performance (IOPS, throughput) • A light I/O workload makes s/w iSCSI viable • Note: no ESX boot-from-SAN with s/w iSCSI • Engage your storage admin in the design process • Perform storage admin tasks during off-hours
Sizing Best Practices – Storage • Minimize storage footprint • Separate applications from OS (ThinApp) • nLite and vLite to streamline guest OS • Thin provision storage • Virtual machines or entire datastore • Single instancing (de-duplication) • Array-based snapshots • Desktop Composer
Sizing Best Practices – Guest OS • Remove unnecessary services, device drivers, add-ons • nLite (XP), vLite (Vista) to optimize the OS build • Disable graphical screen savers • Desktop images should be disposable • Use centralized file storage • Redirect Application Data, Cookies, Favorites, Templates • Roaming/virtualized profiles • Use the LSILogic SCSI driver … http://www.vmware.com/files/pdf/XP_guide_vdi.pdf
Sizing Best Practices - Performance • ESX Best Practices for Performance: http://www.vmware.com/pdf/vi_performance_tuning.pdf • Disable USB, COM, CD, floppy, etc. (guest and host) • Deploy SMP guests sparingly • Additional vCPUs add overhead • Memory over-commit can be effective with VDI • Avoid ESX swapping • Monitor performance through esxtop
Future Work … • Studying RDP overhead • Integration with View Manager Server • Integrating additional applications • Outlook (cached mode, online mode) • Scalable Virtual Images (SVI) • Vista
View3 Reference Architecture http://www.vmware.com/resources/wp/view_reference_architecture_register.html
The goal is to provide standard, scalable, validated components: quicker implementation, reduced costs, and minimized risk
Infrastructure • 8 VMs / core • 64 VMs / LUN • 7 LUNs / cluster
Q&A: Questions?