Learn how to optimize your use of Foundation Products, including Volume Manager (VxVM) and File System (VxFS), with new features, performance considerations, and monitoring techniques. This guide covers software vs. hardware RAID, Volume Manager objects, RAID configurations, file system structure, administration, and more.
Getting the Most Out of Your Foundation Products
Angela Feeney, November 1999
Agenda • Volume Manager (VxVM) • Software vs. hardware RAID • New features in VM 3.0.x • Volume Manager objects • RAID 0+1 vs. 1+0 • Dirty region logging • RAID-5 logging • VxFS • File system comparison • File system structure • File system administration • Performance considerations and monitoring • VxFS and VxVM together • Questions
Software vs. Hardware RAID
• Online reconfiguration: software (VxVM) yes; hardware (disk array) no
• Ability to stripe across multiple controllers: software yes; hardware no
• Parity calculation: software host-based; hardware array-based
• Heterogeneous disks within one disk group: software yes; hardware no
• Easily managed through a GUI: software yes; hardware no
• Offloads system resources: software no; hardware yes
New VM 3.0.x Features • RAID-5 snapshot - create a snapshot of a RAID-5 volume to perform backups • RAID 1+0 - allows continuous access to a volume even if a subdisk in each plex fails • Relayout - change the layout of plexes while volumes are online and active • Task manager - ability to monitor volume recovery. A command-level sketch follows.
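A minimal sketch of these features from the command line, assuming a hypothetical volume vol01 in disk group datadg (names and exact options vary by VxVM release):
# vxassist -g datadg snapstart vol01 (attach a snapshot mirror; supported for RAID-5 volumes in 3.0.x)
# vxassist -g datadg snapshot vol01 snapvol (split off snapshot volume snapvol for backup)
# vxassist -g datadg relayout vol01 layout=raid5 (change the layout while the volume stays online)
# vxtask list (monitor recovery and relayout tasks)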
Volume Manager Objects • VM disk - virtual disk that points to a physical disk • Subdisk - contiguous region of storage • Plex - organizes one or more subdisks; layouts include simple, striped, and RAID-5 • Volume - virtual partition • (OS) #newfs /dev/rdsk/c1t2d0s3 • (VM) #newfs /dev/vx/rdsk/rootdg/vol01 • Disk group - collection of disks that form a disk pool. The sketch below ties these objects together.
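To make the objects concrete, a hedged sketch of building a disk pool and a volume on it (the disk group datadg and the disk names are placeholders):
# vxdg init datadg disk01=c1t1d0s2 (create a disk group from a VM disk)
# vxassist -g datadg make vol01 2g (create a 2 GB volume from the pool)
# vxprint -g datadg -ht (list the disks, subdisks, plexes, and volumes)
# newfs /dev/vx/rdsk/datadg/vol01 (treat the volume as a virtual partition)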
Stripe Setup • Column - the number of physical spindles across which data can be written. • Stripe unit size - determines how much data is written to one spindle (column) before proceeding to the next one. An example follows.
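For example, a four-column stripe with a 64 KB stripe unit could be created as follows (placeholder names; attribute spellings vary slightly by VxVM release):
# vxassist -g datadg make stripevol 2g layout=stripe ncol=4 stripeunit=64k
Writes fill 64 KB on col1, then 64 KB on col2, and so on, wrapping back to col1 after col4.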
RAID 0+1 vs. RAID 1+0 • In 0+1, a stripe is created and then mirrored. If any column in a plex is lost, the whole plex is unusable; if a column in the other plex is then also lost, the volume fails. [Diagram: volume V with two plexes, each striped across col1 through col4, mounted as /f.s.]
RAID 0+1 vs. RAID 1+0 • In 1+0, each column is mirrored and then striped. If any column in a plex is lost, data is obtained from the other plex; if a column in the surviving plex is then lost, data is still obtained from the good column on the other plex. [Diagram: volume V with plexes p1 and p2, each striped across col1 through col4, mounted as /f.s.] A command-level sketch follows.
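In vxassist terms (VxVM 3.0.x layered volumes), the two layouts correspond to mirror-stripe and stripe-mirror; a sketch with placeholder names:
# vxassist -g datadg make vol01 2g layout=mirror-stripe ncol=4 (RAID 0+1: stripe first, then mirror)
# vxassist -g datadg make vol02 2g layout=stripe-mirror ncol=4 (RAID 1+0: mirror each column, then stripe)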
Dirty Region Logging • Used on mirrors for faster reboots: 1) the volume is written to; 2) p1 is updated while the disk in p2 is busy; 3) the system crashes before p2 is updated; 4) without a log, the mirrors must be fully resynced during recovery before fsck can run. A dirty region log tracks in-flight writes so only the dirty regions need resyncing. [Diagram: write request to volume V with plexes p1 and p2]
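A dirty region log can be added to an existing mirrored volume; a sketch assuming the hypothetical names above:
# vxassist -g datadg addlog vol01 logtype=drl
During recovery, only the regions marked dirty in the log are resynchronized instead of the entire mirror.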
RAID-5 Logging • Used for data integrity: the log records writes in flight so that any outstanding transactions can be verified and completed (or discarded) after a crash, keeping data and parity consistent. [Diagram: RAID-5 columns with data (D) and parity (P) blocks]
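A sketch of creating a RAID-5 volume with a log at creation time (placeholder names; a log can also be added to an existing volume with vxassist addlog):
# vxassist -g datadg make r5vol 4g layout=raid5,log ncol=5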
File System Comparison
• Inodes: VxFS dynamically allocated; standard ufs fixed, based on file system size
• Reboot on dirty file systems: VxFS fast (intent log replay); ufs fsck must examine the whole file system
• Resizing: VxFS online, shrink or grow; ufs backup and restore required
• Extended attributes: VxFS DMAPI compliant; ufs needs an additional inode table
• Online defragmentation: VxFS yes; ufs no
File System Structure • Inodes: standard ufs creates inodes based on the size of the file system. Example: for a 100 GB file system at one inode per 2 KB, 100 GB / 2 KB ≈ 50 million inodes, and 50 million × 128 bytes ≈ 6.4 GB reserved for inodes. VxFS allocates inodes dynamically, creating them only when needed.
File System Structure • Resizing • Standard ufs can only be resized by dumping the data, recreating the partition, and restoring • VxFS allows resizing while online, provided room is available on the disk or in the disk group (you can shrink too!), as sketched below
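For example, a mounted VxFS file system can be grown online with fsadm (introduced on the next slide); newsize is given in 512-byte sectors, and the device and mount point are placeholders:
# fsadm -F vxfs -b 4194304 -r /dev/vx/rdsk/datadg/vol01 /mnt
Here 4194304 sectors is 2 GB; shrinking uses the same command with a smaller size.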
File System Administration • fsadm - performs online administration such as resizing and defragmentation • vxtunefs - views and changes tunable parameters for a specified mounted file system. Both are sketched below.
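A brief sketch of both commands against a hypothetical mount point /mnt:
# fsadm -F vxfs -D -E /mnt (report directory and extent fragmentation)
# fsadm -F vxfs -d -e /mnt (defragment directories and extents online)
# vxtunefs /mnt (view the current tunable parameters)
# vxtunefs -o read_pref_io=65536 /mnt (set the preferred read I/O size; add entries to /etc/vx/tunefstab to make settings persistent)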
Performance Considerations • Access type • sequential - make the stripe unit size small relative to the typical I/O request size, so each request spans many spindles in parallel • random - make the stripe unit size large relative to the typical I/O request size, so each request is served by a single spindle; see the example below
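A worked example of these rules (placeholder names): with sequential 256 KB requests on a 4-column stripe, a 64 KB stripe unit lets a single request engage all four spindles in parallel; with random 8 KB requests, a larger unit such as 128 KB keeps each request on one spindle:
# vxassist -g datadg make seqvol 4g layout=stripe ncol=4 stripeunit=64k
# vxassist -g datadg make randvol 4g layout=stripe ncol=4 stripeunit=128k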
Performance Monitoring • Disk monitoring • balancing of I/O on each drive with respect to seek times and data transfer rates • usually involves moving subdisks to other physical disks by evacuating data • vxstat and iostat
# iostat -x 5
                  extended device statistics
device   r/s   w/s   kr/s   kw/s  wait  actv  svc_t  %w  %b
sd0      0.7   0.3    6.2    2.8   0.0   0.0   60.0   1   2
sd2      0.0   0.0    0.2    0.0   0.0   0.0   19.7   0   0
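vxstat gives the same kind of view per VxVM object; a sketch with a placeholder disk group name:
# vxstat -g datadg -i 5 -c 10 (volume statistics, 10 samples at 5-second intervals)
# vxstat -g datadg -d (per-disk statistics, useful for spotting hot disks)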
Performance Monitoring • svc_t < 50 ms • how long it takes to process a request, including queue time and disk time • %b < 35 • percentage of the time the disk is busy processing requests
Performance Monitoring • vxsd - moves data from a busy disk to a less busy one #vxsd mv old_sd_name new_sd_name Example: #vxsd mv disk01-01 disk05-01
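vxsd mv requires the destination subdisk to exist already and to be at least as large as the source; a hedged sketch with placeholder names and a placeholder length:
# vxmake -g rootdg sd disk05-01 disk05,0,204800 (create a matching subdisk on the idle disk)
# vxsd -g rootdg mv disk01-01 disk05-01 (copy the data and swap the subdisks)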
Integration: VxFS and VxVM Together • VxFS performance increases as I/O aligns with the underlying device characteristics • VxFS's mkfs automatically aligns allocation units on 64 KB boundaries • If the underlying device is a VxVM volume, mkfs queries its geometry to determine the correct alignment and set the I/O parameters, as shown below
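For instance (placeholder names), making the file system directly on a VxVM volume lets mkfs pick up the stripe geometry automatically:
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01
On a striped volume this aligns allocation units so that large writes can line up with the stripe columns.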
The VERITAS Product Family
• Foundation Products: File System & Volume Manager
• Management Tools
• High Availability & Clustering
• Backup, HSM & VML
• Editions
• Services, Support & Training