Davie 5/18/2010 RAID
Tech Symposium • Thursday, May 20th @ 5:30pm • Ursa Minor • Co-sponsored with CSS • Guest Speakers • Dr. Craig Rich – TBA • James Schneider – Cal Poly “State of the Network” Address • Sean Taylor – Reverse Engineering for Beginners • Free food!
Fragnite • Friday, May 21st @ 5:30PM • 98P 2-007 (You better know where this is!) • Games • Starcraft, TF2, FEAR, Bad Company 2 • Linux, GotY Edition • Consoles welcome • Music • Free food
CSS Meeting • Wednesday, May 19th @ 1:00pm • Sean McAllister • Structured Exception Handling
RAID Redundant Array of Inexpensive | Independent Disks
What is RAID? • Combining multiple physical disks to achieve increased performance and/or reliability • Added benefit of a single, large device
What RAID isn’t • A backup solution. • End of story. • Stop arguing. • You’re stupid.
Key Concepts • RAID functions by combining three concepts to achieve the desired results • Striping – Splitting data across multiple disks to maximize I/O bandwidth • Mirroring – Storing a copy of the data on multiple disks to guard against drive failure • Error-correction – Parity calculations used to detect bad data and rebuild it after a failure; the parity itself is distributed across the drives
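The three concepts can be sketched in a few lines of Python. This is a toy byte-string model with an arbitrary chunk size, not how a real controller operates on blocks:

```python
# Toy illustrations of striping, mirroring, and XOR parity.

def stripe(data: bytes, ndisks: int, chunk: int = 4) -> list[bytes]:
    """Split data round-robin across ndisks in chunk-byte pieces."""
    disks = [b""] * ndisks
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % ndisks] += data[i:i + chunk]
    return disks

def mirror(data: bytes, ndisks: int) -> list[bytes]:
    """Store an identical copy on every disk."""
    return [data] * ndisks

def parity(chunks: list[bytes]) -> bytes:
    """XOR equal-length chunks together; any one chunk can be
    rebuilt by XOR-ing the survivors with the parity chunk."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

d0, d1 = b"AAAA", b"BBBB"
p = parity([d0, d1])          # parity chunk stored on a third disk
assert parity([d0, p]) == d1  # "disk 1" died? Rebuild it from d0 and p
```

The last two lines are the whole reason parity works: XOR is its own inverse, so a missing chunk falls out of the remaining ones.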
Terms • Array – collection of disks that operate as one • Degraded array – array where a component disk is missing, but the array can still function • Failed array – array where enough disks are missing to prevent all functionality • Hot spare – extra disk that will allow a degraded array to repair itself • Won’t help failed arrays though • Reshape – modify array size or level
RAID Levels • Levels 0-6 • Nested RAID • Combines two levels • Just a Bunch Of Disks (JBOD) & Spanning • Concatenates one disk to the end of the other • No performance or reliability improvements
RAID 0 • Data is striped across multiple disks • Minimum of two • No redundancy • Lose one disk, lose all data • High read/write throughput • Disks can read or write simultaneously without costly parity calculations • Results in array of size n disks • Difficult to reshape, and therefore expand
RAID 1 • Data is mirrored across multiple disks • Minimum of two • Full redundancy • Lose all but 1 disk, data still good • High read, low write throughput • Reads different data from each disk simultaneously • Writes the same data multiple times • Results in array the size of 1 disk • Can be reshaped to RAID 5
RAID 2, 3, & 4 • I don’t bother with these • Neither should you • RAID 2: Sounds like CS magic • http://en.wikipedia.org/wiki/RAID_2#RAID_2 • RAID 3 & 4: Striped with a single disk for parity • Use RAID 5 or 6 instead
RAID 5 • Data is striped across multiple disks • Minimum of three (unless you're reckless like me) • Parity is calculated and distributed • Lose any 1 disk, parity allows it to be regenerated • Increased overhead • Writes require parity calculations (reads only do when degraded) • Results in array of size n-1 disks • Can be reshaped to RAID 6
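"Distributed" parity just means the parity chunk rotates to a different disk on each stripe, so no single disk becomes a parity bottleneck. A toy layout generator (the left-rotating pattern here is one common scheme; real implementations vary):

```python
# Toy RAID 5 chunk map: for stripe s over n disks, put parity on
# disk (n - 1 - s) % n and fill the rest with data chunks in order.
def raid5_layout(nstripes: int, ndisks: int) -> list[list[str]]:
    rows, chunk = [], 0
    for s in range(nstripes):
        pdisk = (ndisks - 1 - s) % ndisks
        row = []
        for d in range(ndisks):
            if d == pdisk:
                row.append("P")        # parity chunk for this stripe
            else:
                row.append(f"D{chunk}")  # next data chunk
                chunk += 1
        rows.append(row)
    return rows

for row in raid5_layout(3, 3):
    print(row)
# Each stripe holds n-1 data chunks, which is why the array
# ends up with the capacity of n-1 disks.
```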
RAID 6 • Data is striped across multiple disks • Minimum of four • Dual parity is calculated and distributed • Lose any 2 disks, parity allows regeneration • Increased overhead • Writes require two parity calculations (reads only do when degraded) • Results in array of size n-2 disks • Can be reshaped to RAID 5
Nested RAID: 01 or 10 • Combines striping and mirroring • Minimum of four disks • May tolerate multiple failures • But the wrong combination of failures still ruins the array • Extremely inefficient space usage: only half the raw capacity is usable
Hardware RAID • Dedicated CPU & RAM • May include a battery to protect the cache through power loss • High I/O throughput • On-disk format may be vendor-specific and not portable to other controllers (Controller died? Better have an exact replacement!) • OS sees a single device from the BIOS, but may require an additional driver
Software RAID • Relies on the host CPU for all calculations • No battery-backed cache • OS-level drivers allow for maximum portability (within OS families, of course) • Native to the Linux kernel (Woooo!) • Windows, BSD, Solaris, OS X all have support
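Linux software RAID is managed with mdadm. A rough sketch of the lifecycle, device names are placeholders and you need root plus real block devices:

```shell
# Create a 3-disk RAID 5 array (sdb/sdc/sdd are example devices)
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Watch build/rebuild progress
cat /proc/mdstat

# Detailed status: clean, degraded, rebuilding, etc.
mdadm --detail /dev/md0

# Add a hot spare so a degraded array can repair itself
mdadm --add /dev/md0 /dev/sde
```

The same tool handles reshapes (e.g. `mdadm --grow`), which is how the level changes mentioned on the RAID 5/6 slides are done.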
Fake RAID • Looks and acts like hardware RAID • OS sees a single BIOS device • Requires a vendor-specific driver • Performs like software RAID • Relies on the host CPU & RAM • No cache battery
Now What? • Add filesystems OR • Use Logical Volume Management (LVM) • Then add filesystems • Create a storage server • Media • Backups
Logical Volume Management • Physical Volumes (PV) • Disks, partitions, arrays • Volume Group (VG) • Combines PVs into a single pool of space • Logical Volumes (LV) • Created inside the VG, act like partitions • Don't need to be contiguous • Can be added or resized at will
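Layering LVM on top of an md array looks roughly like this; the volume group name, LV names, and sizes are all made up for the example:

```shell
# Register the array as an LVM physical volume
pvcreate /dev/md0

# Pool it into a volume group (the name "storage" is an example)
vgcreate storage /dev/md0

# Carve out logical volumes that behave like partitions
lvcreate -L 100G -n media   storage
lvcreate -L 50G  -n backups storage

# Grow a volume later without repartitioning anything
lvextend -L +20G /dev/storage/media
```

This is the flexibility the slide is after: the LVs can be resized at will, and more PVs can be added to the VG when the pool runs low.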