Improving File System Performance in a Virtual Environment
Virtualization Deep Dive Day, 2/20/2009
Bob Nolan, Raxco Software, bnolan@perfectdisk.com
Topic Background
• Hardware is getting bigger and faster
  • CPU clock speeds of 3+ GHz, multi-core
  • 4 GB RAM
  • 500 GB to 2 TB+ capacity hard drives
• The limiting factor is still disk I/O
• Anything that speeds up access to the disk improves performance
Virtualization
• Uses host resources to run virtual guests
• Multiple guests can strain host resources and impact performance
• Failure to optimize resources can be crippling
• Proactive problem management is the best approach
NTFS in a Virtual Machine
• Uses slightly more resources in a VM
• Maintains a bitmap of the virtual disk (see the inspection sketch below)
• Allocates free space
• Fragments files and free space
• Performance degrades with use
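The volume details NTFS tracks (cluster size, MFT zone bounds) can be inspected with the built-in Windows fsutil tool. A minimal sketch, assuming a Windows guest and an elevated prompt; the exact field labels in fsutil's output vary slightly between Windows versions.

```python
import subprocess

def ntfs_info(volume="C:"):
    """Query NTFS volume details (cluster size, MFT zone, etc.) using
    the built-in Windows fsutil tool. Needs an elevated prompt."""
    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", volume],
        capture_output=True, text=True, check=True,
    ).stdout
    # fsutil prints "Label : Value" pairs, one per line
    return {k.strip(): v.strip()
            for k, _, v in (line.partition(":") for line in out.splitlines())
            if v}

if __name__ == "__main__":
    info = ntfs_info("C:")
    print("Bytes per cluster:", info.get("Bytes Per Cluster"))
```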
Logical vs Physical Clusters
• Logical clusters (sketch below)
  • File system level
  • Every partition starts at logical cluster 0
  • The file system has no idea what hard drive technology is in use: IDE, SCSI, RAIDx, number of platters or read/write heads
• Physical clusters
  • Hard drive level
  • The hard drive controller translates logical clusters to physical locations and positions the heads
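A minimal sketch of the logical-cluster arithmetic. The 4 KB cluster size is the common NTFS default, an assumption rather than something the slide specifies.

```python
def logical_cluster(offset_bytes: int, cluster_size: int = 4096) -> int:
    """Map a byte offset within a partition to its logical cluster
    number (LCN). Cluster 0 is the start of the partition; the file
    system never sees the physical geometry behind it."""
    return offset_bytes // cluster_size

# A fragment at byte offset 1,048,576 on a 4 KB-cluster volume lives in
# logical cluster 256; where that lands on the platters is up to the
# drive controller.
print(logical_cluster(1_048_576))  # -> 256
```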
Cluster Size and Performance (worked example below)
• Smaller clusters
  • Less wasted space
  • Worse performance, especially for large files
• Larger clusters
  • More wasted space
  • Better performance, especially for large files
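The space side of the trade-off is easy to quantify: allocation happens in whole clusters, so the last cluster of every file carries slack. A sketch with hypothetical file sizes.

```python
def slack_bytes(file_size: int, cluster_size: int) -> int:
    """Internal fragmentation: bytes allocated beyond the file's actual
    size, because allocation happens in whole clusters."""
    clusters = -(-file_size // cluster_size)  # ceiling division
    return clusters * cluster_size - file_size

files = [300, 5_000, 70_000, 1_000_000]  # hypothetical file sizes in bytes
for cs in (4096, 65536):
    wasted = sum(slack_bytes(f, cs) for f in files)
    print(f"{cs // 1024} KB clusters: {wasted:,} wasted bytes")
```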
Fragmentation Causes
• What causes fragmentation?
  • Occurs when files are created, extended or deleted (toy allocator sketch below)
  • Happens regardless of how much free space is available (after an XP/SP2 installation: 944 files / 2,943 fragments)
  • Once a file is fragmented, more than one logical I/O request has to be made to the hard drive controller to access it
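A toy first-fit allocator illustrates how create/delete cycles splinter free space even when plenty remains. Purely illustrative; NTFS's real allocator is far more sophisticated.

```python
def free_runs(bitmap):
    """Count contiguous runs of free clusters in an allocation bitmap."""
    runs, in_run = 0, False
    for used in bitmap:
        if not used and not in_run:
            runs += 1
        in_run = not used
    return runs

disk = [False] * 64                   # 64 free clusters
for start in (0, 8, 16):              # create files A, B, C back to back
    for i in range(start, start + 8):
        disk[i] = True
for i in range(8, 16):                # delete file B
    disk[i] = False

print(free_runs(disk))                # -> 2: free space is now split,
                                      # so a large new file will fragment
```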
Fragmentation Impacts
• What does fragmentation do to my system?
  • Slows down access to files
  • Uses extra CPU, memory and disk resources
  • Some applications may not run
  • Slows system boot and shutdown
  • Audio/video record and playback drops frames or "skips"
Measuring Impact of Fragmentation
• Measuring the performance loss in reading a fragmented file
Defragmenting - Results
• What does defragmenting do?
  • Locates the logical pieces of a file and brings them together
    • The file is faster to access and takes fewer resources
    • Improves read performance
  • Consolidates free space into larger pieces
    • New files get created in one piece
    • Improves write performance
Measuring Impact of Fragmentation
• Measuring the performance difference in reading a contiguous file (timing sketch below)
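A minimal timing sketch for both measurements: run it on a badly fragmented file, defragment, and run it again. The file path is hypothetical, and OS caching will mask the difference unless the cache is cold for both runs.

```python
import time

def timed_read(path, block=1 << 20):
    """Sequentially read a file in 1 MB chunks and report throughput."""
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"{total / 1e6:.1f} MB in {elapsed:.2f} s "
          f"({total / 1e6 / elapsed:.1f} MB/s)")

# timed_read(r"C:\test\large_file.bin")  # hypothetical path
```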
Defragmenting - Issues to Consider
• Free space
  • How much is enough?
  • Where is the free space located?
    • Inside the MFT Reserved Zone
    • Outside the MFT Reserved Zone
• Consolidation of free space
Advanced Defrag Technology
• Complete Defrag of All Files
• Free Space Consolidation
• Single-Pass Defragmentation
• File Placement Strategy
• Free Space Requirement
• Minimal Resource Usage
• Large Drive Support
• Easy to Schedule and Manage
• OS Certification
• Robust, Easy Reporting
Defrag Completeness
• Data Files
• Directories
• System Files
• Pagefile
• Hibernate File
• NTFS metadata
Free Space Consolidation
• Allows new files to be created contiguously
• Maintains file system performance longer
• Requires less frequent defrag passes
• Reduces split I/Os
• Defragmenting files improves read performance; consolidating free space improves write performance
• Reduces wasted seeks by over 50% (see the sketch below)
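A back-of-the-envelope model of the write-side benefit. The free-run sizes are illustrative assumptions, not measurements from the slide.

```python
def extents_needed(file_clusters, free_run_sizes):
    """Greedy first-fit: fill free runs until the file fits; each run
    used becomes one extent of the new file (roughly one seek each)."""
    extents = 0
    for run in free_run_sizes:
        if file_clusters <= 0:
            break
        file_clusters -= run
        extents += 1
    return extents

splintered   = [8, 4, 16, 2, 30, 10, 40]  # many small free runs (assumed)
consolidated = [110]                       # one large free run
print(extents_needed(100, splintered))     # -> 7 extents / ~7 seeks
print(extents_needed(100, consolidated))   # -> 1 extent
```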
Case Study: Auto Company - Problems
• Overall poor workstation performance
• Slow boot times
• Increased help desk calls
• Increased backup time on servers
Case Study ROI
• 4,000 Windows XP workstations
• 400 servers
• $30/hr end-user cost
• $40/hr system admin/help desk cost
• Saved 20 seconds per day per workstation
• Reduced help desk load by 20% (800 hrs annually)
• Cut backup time by 65%
Case Study ROI (continued)
• Saved 22 hrs per day, 4,840 hrs annually (arithmetic reproduced below)
• $145,200 annual productivity savings
• $32,000 help desk savings
• ~$20,000 backup savings
• 66 days to recover the investment
• Proactively maintains optimal disk performance
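The arithmetic behind these figures, reproduced as a quick check. The 220 working days per year is an assumption that makes the slide's totals line up (22 hrs/day × 220 days = 4,840 hrs).

```python
workstations  = 4000
saved_seconds = 20     # per workstation per day
user_rate     = 30     # $/hr end user
helpdesk_rate = 40     # $/hr system admin/help desk

hours_per_day = workstations * saved_seconds / 3600
print(f"{hours_per_day:.0f} hrs saved per day")       # ~22 hrs/day

annual_hours = 22 * 220                               # 4,840 hrs/yr
print(f"${annual_hours * user_rate:,} productivity")  # $145,200
print(f"${800 * helpdesk_rate:,} help desk")          # $32,000
```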
Conclusion
• To improve file system and drive performance:
  • Use the appropriate disk technology
  • Use the most appropriate file system
  • Use the most appropriate cluster size
  • Align on cluster boundaries
  • Make sure free space is consolidated
  • When you defragment, make sure it is being done effectively
Resource Usage
• Runs in the background
• Low memory usage
• Low CPU usage
Volume Shadow Copy (VSS)
• VSS and defragmentation: use a cluster size that is a multiple of 16 KB (checked in the sketch below)
• The default cluster size is 4 KB because NTFS compression has not been modified to support cluster sizes greater than 4 KB
• BitLocker (Vista) is also restricted to a 4 KB cluster size
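A tiny sketch of that rule of thumb. The 16 KB threshold comes from the slide; the helper name is hypothetical.

```python
def vss_friendly(cluster_bytes: int) -> bool:
    """True if the cluster size is a multiple of 16 KB, the slide's
    guidance for defragmenting VSS-enabled volumes."""
    return cluster_bytes >= 16 * 1024 and cluster_bytes % (16 * 1024) == 0

for size in (4096, 16384, 65536):
    print(size, "->", "OK" if vss_friendly(size) else "copy-on-write risk")
```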
DiskPar/DiskPart
• You want to avoid I/Os crossing track boundaries (alignment check below)
• Align on 64 KB for best MS SQL performance
• Windows Server 2008: the default is 64 KB alignment when creating volumes
• Contact your storage vendor; EMC, for example, recommends 64 KB
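A minimal alignment check. On Windows the starting offset can be read with `wmic partition get StartingOffset`; the values below are the classic misaligned 63-sector offset and the 1 MB offset modern tools use.

```python
def aligned(starting_offset: int, boundary: int = 64 * 1024) -> bool:
    """True if the partition's starting byte offset sits on the boundary."""
    return starting_offset % boundary == 0

print(aligned(32_256))     # False: legacy 63-sector (31.5 KB) offset
print(aligned(1_048_576))  # True: 1 MB offset, a multiple of 64 KB
```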
Performance Measuring Tools
• Windows Performance Monitor (typeperf sample below)
  • Split IO/Sec (indicates fragmentation)
  • Disk Queue Length (keep it <= 2 per spindle)
• hIOMon (www.hiomon.com)
  • Device- AND file-based metrics
• SQLIO (Microsoft)
  • Stress-tests the I/O subsystem
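Perfmon counters can also be sampled from the command line with the built-in typeperf tool. A minimal sketch, Windows only, using the counter path for the C: volume.

```python
import subprocess

# Ten one-second samples of the split-I/O counter for the C: volume.
counter = r"\LogicalDisk(C:)\Split IO/Sec"
subprocess.run(["typeperf", counter, "-sc", "10"])
```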
Cluster Size Recommendations
* You can't use NTFS compression if the cluster size is greater than 4 KB