VxFS & VxVM on Linux: “Giving Linux enterprise strength” Hans van Rietschote • +1-650-318-4066 • hans@veritas.com
VERITAS • Data availability, software-only products • > $1B revenue in 2000 • > 4500 people • HQ in Silicon Valley, USA • Engineering locations across the USA, plus Pune and London (Linux kernel team) • Always looking for storage software engineers: http://www.veritas.com/jobs
VxFS and VxVM background • Developed in the late 80s / early 90s • Original platform was UNIX System V Release 4.0 • Now ported to many UNIX variants and non-UNIX environments (Solaris, HP-UX, NT, W2K, Linux, AIX) • First choice for many customers running in heterogeneous environments (VxFS/VxVM are the same everywhere!)
VxFS main features (1) • Journaling: avoids a full fsck after a crash • Small writes can be stored directly in the intent log • Fixed- and variable-sized extent-based allocation • On-line grow and shrink • On-line administration (extent and directory reorganization, defragmentation) • Snapshots • Clustered file system (direct multi-host access)
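The extent-based allocation mentioned above can be sketched as follows. This is an illustrative model, not VxFS code: a file's block map is a list of extents, each covering a contiguous run of disk blocks, so one entry can describe an arbitrarily large allocation.

```python
# Illustrative sketch (not VxFS internals): an extent maps a contiguous run
# of file blocks to a contiguous run of disk blocks in a single entry.

def block_for_offset(extents, offset):
    """Resolve a file block offset to a disk block via an extent list.

    extents: sorted list of (file_offset, disk_start, length) tuples.
    """
    for file_off, disk_start, length in extents:
        if file_off <= offset < file_off + length:
            return disk_start + (offset - file_off)
    raise ValueError("offset not allocated")

# A 3-block file held in one 2-block extent plus one 1-block extent.
extents = [(0, 100, 2), (2, 500, 1)]
print(block_for_offset(extents, 1))  # 101 (inside the first extent)
print(block_for_offset(extents, 2))  # 500 (start of the second extent)
```

A block-mapped file system would need one map entry per block; with extents, a fully contiguous file needs exactly one, which is what makes large sequential I/O cheap to map.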
VxFS main features (2) • Sysadmin control over: • allocation policies, • mount options, • caching advisories • and many other I/O policies • Raw I/O performance through Quick I/O • Better-than-raw performance through Cached Quick I/O • Intent log can be placed on a separate device (extra speed if on solid-state disks)
VxVM terminology • Subdisk: a contiguous region of a disk • Plex: a logical object built from one or more subdisks • Volume: a logical object built from one or more plexes; accessed just as you would a normal disk or other device, this is where the data lives • Disk group: a collection of VxVM-managed disks; can be deported from one machine and imported on another with no special handling
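The containment hierarchy above (subdisks inside plexes inside volumes) can be modeled in a few lines. This is a hypothetical sketch for illustration only; the class names and fields are ours, not VxVM's API.

```python
# Hypothetical model of the VxVM object hierarchy (names are ours, not
# VxVM's): subdisks build plexes, plexes build volumes.
from dataclasses import dataclass

@dataclass
class Subdisk:
    disk: str      # which physical disk the piece lives on
    offset: int    # starting sector on that disk
    length: int    # size in sectors

@dataclass
class Plex:
    subdisks: list
    def length(self):
        # Concatenated plex: total size is the sum of its subdisks.
        return sum(sd.length for sd in self.subdisks)

@dataclass
class Volume:
    name: str
    plexes: list
    def length(self):
        # Mirrored volume: usable size is that of the smallest plex.
        return min(p.length() for p in self.plexes)

# A mirrored volume: one plex spanning two disks, one plex on a third.
vol = Volume("vol01", [
    Plex([Subdisk("disk01", 0, 1024), Subdisk("disk02", 0, 1024)]),
    Plex([Subdisk("disk03", 0, 2048)]),
])
print(vol.length())  # 2048
```

Note how the two plexes have different internal layouts but the same size, which is all a mirror requires.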
VxVM volume layouts (1) • concatenated: subdisks combined in a linear (non-contiguous) fashion. No redundancy. • spanning: a concatenation across multiple disks. No redundancy. • mirror (RAID1): data is written to each plex (up to 32). Redundancy and speed. • stripe (RAID0): data is interleaved across two or more columns. No redundancy, better throughput.
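The striping layout above boils down to a simple address mapping. A minimal sketch, assuming a stripe unit of one block and round-robin placement:

```python
# Sketch of RAID0 address mapping, assuming a one-block stripe unit.
def stripe_map(logical_block, ncolumns):
    """Return (column, block_within_column) for a round-robin stripe."""
    return logical_block % ncolumns, logical_block // ncolumns

# Consecutive blocks land on different columns, so sequential I/O is
# spread across all spindles.
print([stripe_map(b, 4) for b in range(5)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)]
```

Real stripe units are larger than one block (trading per-request parallelism against seek overhead), but the modulo/divide structure of the mapping is the same.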
VxVM volume layouts (2) • RAID5: RAID5 done in software, striping with a parity column. Redundancy. • mirror-stripe: striped plexes that are mirrored. Better throughput with redundancy. As long as one valid plex remains, data is available. • stripe-mirror: each column of the stripe is mirrored. This allows a finer failure granularity than mirror-stripe.
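The parity column that gives RAID5 its redundancy is just the XOR of the data columns, so any single lost column can be rebuilt from the survivors. A conceptual sketch (not VxVM code):

```python
# Conceptual RAID5 parity sketch: parity is the byte-wise XOR of the data
# columns; XOR-ing the survivors with the parity reconstructs a lost column.
def parity(columns):
    p = bytearray(len(columns[0]))
    for col in columns:
        for i, byte in enumerate(col):
            p[i] ^= byte
    return bytes(p)

data = [b"abcd", b"efgh", b"ijkl"]
p = parity(data)

# Lose column 1, then rebuild it from the other columns plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

This also shows why the RAID5 log on the next slide matters: if the system dies after some columns of a stripe are written but before the parity is, this invariant is broken and reconstruction would return garbage.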
VxVM features (1) • All VxVM objects recognized at boot • All configuration changes are done transactionally, and can be reversed or resumed • Online relayout and resize • Hot (un)relocation: a spare pool of disks can be set up so that a failed disk is replaced automatically • Dirty region log for mirrors: a log of regions that have been written on a mirror, so that only those regions need to be resynced after a system failure
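The dirty region log idea above can be sketched as a bitmap of coarse-grained regions. The region size below is an illustrative value, not VxVM's actual default:

```python
# Sketch of dirty region logging: before a write reaches the mirror, its
# region is marked dirty; after a crash, only marked regions get resynced.
REGION_SIZE = 1024  # bytes per DRL region (illustrative value only)

dirty = set()

def log_write(offset, length):
    """Mark every region touched by a write [offset, offset+length) dirty."""
    first = offset // REGION_SIZE
    last = (offset + length - 1) // REGION_SIZE
    dirty.update(range(first, last + 1))

log_write(100, 50)     # touches region 0
log_write(2000, 3000)  # touches regions 1 through 4
print(sorted(dirty))   # [0, 1, 2, 3, 4]
```

After a crash, a resync walks only the dirty regions instead of copying the whole plex, which is why DRL turns a full-mirror resync into a near-constant-time recovery for lightly written volumes.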
VxVM features (2) • RAID5 log: a log of data and parity kept until writes actually reach the disks; prevents corruption when a disk and the system fail together • Fast mirror resync: faster resynchronization of mirrors through use of a log area • Rootability: the root disk can be placed under VxVM control so that it can be mirrored • Dynamic multi-pathing: provides a single access point for, and management of, multi-pathed disk arrays
VxVM features (3) • Clustered volume manager: provides access to shared volumes from a cluster of machines. As long as one node is up, data is available. • Java-based GUI, the same across all platforms. Server and client can run independently.
Linux port goals • Only support 2.4.x due to Linux-VM/VFS differences (important for VxFS to emulate page cache style OSs) • Retain existing platform independence (VxFS/VxVM has a common source base with platform dependent layer) • Avoid requiring changes to the Linux kernel wherever possible (allows easier migration to future releases)
Linux port status • Started with 2.3.x in October 1999 • Currently using 2.4.0-test9 • Main implementation is complete; both products considered at beta stage • No changes required to the base kernel • Encountered fragmentation problems with the kernel memory allocator, and out-of-memory problems (processes being killed)
Commercial and non-commercial: do they fit? • There is room for both: open source solutions for the low to middle end of the market, commercial solutions for the mid to high end • Many high-end customers ask for VERITAS products because they operate heterogeneous environments and want the same storage software on all machines • Customers also demand 24x7 enterprise-class support • Sharing disks between different OSes
Roadmap going forward • Currently awaiting the 2.4 Linux kernel release • VERITAS Linux-based storage appliances • Different pricing models being considered, for example a free "Lite" version as on other operating systems (e.g. HP-UX, W2K) • Other VERITAS products currently being ported: NetBackup and VCS. “Giving Linux enterprise strength”