Beyond the File System Designing Large Scale File Storage and Serving Cal Henderson
Big file systems? • Too vague! • What is a file system? • What constitutes big? • Some requirements would be nice
1 Scalable Looking at storage and serving infrastructures
2 Reliable Looking at redundancy, failure rates, on the fly changes
3 Cheap Looking at upfront costs, TCO and lifetimes
Four buckets • Storage • Serving • BCP • Cost
The storage stack • File protocol – NFS, CIFS, SMB • File system – ext, reiserFS, NTFS • Block protocol – SCSI, SATA, FC • RAID – Mirrors, Stripes • Hardware – Disks and stuff
Hardware overview – the storage scale
Internal storage • A disk in a computer • SCSI, IDE, SATA • 4 disks in 1U is common • 8 for half depth boxes
DAS • Direct Attached Storage • Disk shelf, connected by SCSI/SATA • HP MSA30 – 14 disks in 3U
SAN • Storage Area Network • Dumb disk shelves • Clients connect via a ‘fabric’ • Fibre Channel, iSCSI, Infiniband • Low level protocols
NAS • Network Attached Storage • Intelligent disk shelf • Clients connect via a network • NFS, SMB, CIFS • High level protocols
Meet the LUN • Logical Unit Number • A slice of storage space • Originally for addressing a single drive: • c1t2d3 • Controller, Target, Disk (Slice) • Now means a virtual partition/volume • LVM, Logical Volume Management
NAS vs SAN • With a SAN, a single host (initiator) owns a single LUN/volume • With NAS, multiple hosts own a single LUN/volume • NAS head – NAS access to a SAN
SAN Advantages Virtualization within a SAN offers some nice features: • Real-time LUN replication • Transparent backup • SAN booting for host replacement
Some Practical Examples • There are a lot of vendors • Configurations vary • Prices vary wildly • Let’s look at a couple • Ones I happen to have experience with • Not an endorsement ;)
NetApp Filers • Heads and shelves, up to 500 TB in 6 cabs • FC SAN with 1 or 2 NAS heads
Isilon IQ • 2U Nodes, 3-96 nodes/cluster, 6-600 TB • FC/InfiniBand SAN with NAS head on each node
Scaling Vertical vs Horizontal
Vertical scaling • Get a bigger box • Bigger disk(s) • More disks • Limited by current tech – size of each disk and total number in appliance
Horizontal scaling • Buy more boxes • Add more servers/appliances • Scales forever* *sort of
Storage scaling approaches • Four common models: • Huge FS • Physical nodes • Virtual nodes • Chunked space
Huge FS • Create one giant volume with growing space • Sun’s ZFS • Isilon IQ • Expandable on-the-fly? • Upper limits • Always limited somewhere
Huge FS • Pluses • Simple from the application side • Logically simple • Low administrative overhead • Minuses • All your eggs in one basket • Hard to expand • Has an upper limit
Physical nodes • Application handles distribution to multiple physical nodes • Disks, Boxes, Appliances, whatever • One ‘volume’ per node • Each node acts by itself • Expandable on-the-fly – add more nodes • Scales forever
Physical Nodes • Pluses • Limitless expansion • Easy to expand • Unlikely to all fail at once • Minuses • Many ‘mounts’ to manage • More administration
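A minimal sketch of the physical-node model above, assuming the application simply records which node each file landed on; the mount names and the `db` dict are stand-ins for whatever node list and metadata store the app already has.

```python
import random

# Hypothetical mounts for the physical storage nodes the app writes to.
NODES = ["/mnt/store01", "/mnt/store02", "/mnt/store03"]

def place_file(file_key: str, data: bytes, db: dict) -> str:
    """Write a file to one node and remember which node it went to."""
    node = random.choice(NODES)            # or round-robin / least-full
    path = f"{node}/{file_key}"
    with open(path, "wb") as f:            # illustration only
        f.write(data)
    db[file_key] = node                    # app metadata: key -> node
    return path

def fetch_file(file_key: str, db: dict) -> bytes:
    """Look up the node from the app's metadata, then read directly."""
    with open(f"{db[file_key]}/{file_key}", "rb") as f:
        return f.read()
```

Adding capacity is just appending to NODES; files already written keep pointing at the node they landed on.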
Virtual nodes • Application handles distribution to multiple virtual volumes, contained on multiple physical nodes • Multiple volumes per node • Flexible • Expandable on-the-fly – add more nodes • Scales forever
Virtual Nodes • Pluses • Limitless expansion • Easy to expand • Unlikely to all fail at once • Addressing is logical, not physical • Flexible volume sizing, consolidation • Minuses • Many ‘mounts’ to manage • More administration
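The virtual-node model adds one level of indirection: keys hash to a fixed set of virtual volumes, and a small table (hypothetical names below) says which physical node currently hosts each volume, so volumes can be moved or consolidated without rewriting per-file metadata.

```python
import hashlib

# Hypothetical volume -> node map; rebalancing means editing this table,
# not touching the files themselves.
VOLUME_TO_NODE = {
    0: "store01.example.com",
    1: "store01.example.com",
    2: "store02.example.com",
    3: "store03.example.com",
}

def volume_for(file_key: str) -> int:
    """Hash a key into the fixed set of virtual volumes."""
    digest = hashlib.sha1(file_key.encode()).hexdigest()
    return int(digest, 16) % len(VOLUME_TO_NODE)

def locate(file_key: str) -> tuple[str, int]:
    """Return (physical node, virtual volume) for a key."""
    vol = volume_for(file_key)
    return VOLUME_TO_NODE[vol], vol
```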
Chunked space • Storage layer writes parts of files to different physical nodes • A higher-level RAID striping • High performance for large files • read multiple parts simultaneously
Chunked space • Pluses • High performance • Limitless size • Minuses • Conceptually complex • Can be hard to expand on the fly • Can’t manually poke it
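A rough sketch of chunked space, assuming fixed-size chunks striped round-robin across nodes; a real system fetches the chunks in parallel, which is where the large-file read performance comes from.

```python
CHUNK_SIZE = 64 * 1024 * 1024   # assumed fixed chunk size

def stripe(data: bytes, nodes: list[str]) -> list[tuple[str, int, bytes]]:
    """Split a file into chunks and assign chunk i to nodes[i % len(nodes)]."""
    out = []
    for i in range(0, len(data), CHUNK_SIZE):
        index = i // CHUNK_SIZE
        out.append((nodes[index % len(nodes)], index, data[i:i + CHUNK_SIZE]))
    return out

def reassemble(chunks: list[tuple[str, int, bytes]]) -> bytes:
    """Reorder by chunk index and concatenate (fetched in parallel in practice)."""
    return b"".join(c for _, _, c in sorted(chunks, key=lambda t: t[1]))
```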
Real Life Case Studies
GFS – Google File System • Developed by … Google • Proprietary • Everything we know about it is based on talks they’ve given • Designed to store huge files for fast access
GFS – Google File System • Single ‘Master’ node holds metadata • SPF (single point of failure) – shadow master allows warm swap • Grid of ‘chunkservers’ • 64-bit filenames • 64 MB file chunks
GFS – Google File System [diagram: a single master node coordinating a grid of chunkservers holding replicated chunks, e.g. 1(a), 2(a), 1(b)]
GFS – Google File System • Client reads metadata from master then file parts from multiple chunkservers • Designed for big files (>100MB) • Master server allocates access leases • Replication is automatic and self repairing • Synchronously for atomicity
GFS – Google File System • Reading is fast (parallelizable) • But requires a lease • Master server is required for all reads and writes
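GFS itself is proprietary, so the sketch below only mirrors the read flow described above: ask the master where each 64 MB chunk lives, then pull the chunk data straight from a chunkserver. `master.lookup()` and `replica.read_chunk()` are made-up stand-ins for the real RPCs.

```python
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks

def gfs_style_read(master, filename: str, offset: int, length: int) -> bytes:
    """Metadata from the master, file data directly from chunkservers."""
    first = offset // CHUNK_SIZE
    last = (offset + length - 1) // CHUNK_SIZE
    pieces = []
    for index in range(first, last + 1):
        # The master maps (filename, chunk index) to a chunk handle plus
        # the chunkservers holding replicas of that chunk.
        handle, replicas = master.lookup(filename, index)
        pieces.append(replicas[0].read_chunk(handle))   # any replica serves reads
    blob = b"".join(pieces)
    skip = offset - first * CHUNK_SIZE
    return blob[skip:skip + length]
```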
MogileFS – OMG Files • Developed by Danga / SixApart • Open source • Designed for scalable web app storage
MogileFS – OMG Files • Single metadata store (MySQL) • MySQL Cluster avoids SPF • Multiple ‘tracker’ nodes locate files • Multiple ‘storage’ nodes store files
MogileFS – OMG Files [diagram: multiple tracker nodes sharing a MySQL metadata store, in front of the storage nodes]
MogileFS – OMG Files • Replication of file ‘classes’ happens transparently • Storage nodes are not mirrored – replication is piecemeal • Reading and writing go through trackers, but are performed directly upon storage nodes
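A hedged sketch of the read path described above: any tracker can be asked where a key lives, and the bytes are then fetched directly from a storage node. `tracker.get_paths()` is a hypothetical wrapper around the tracker protocol, and the storage-node locations are assumed to be plain HTTP URLs.

```python
import urllib.error
import urllib.request

def mogile_fetch(trackers, key: str) -> bytes:
    """Ask a tracker where the key lives, then read directly from a storage node."""
    for tracker in trackers:                     # any tracker can answer
        for url in tracker.get_paths(key):       # hypothetical wrapper; returns URLs
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()
            except urllib.error.URLError:
                continue                         # replica unreachable, try the next
    raise KeyError(key)
```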
Flickr File System • Developed by Flickr • Proprietary • Designed for very large scalable web app storage
Flickr File System • No metadata store • Deal with it yourself • Multiple ‘StorageMaster’ nodes • Multiple storage nodes with virtual volumes
Flickr File System [diagram: multiple StorageMaster (SM) nodes in front of the storage nodes]
Flickr File System • Metadata stored by app • Just a virtual volume number • App chooses a path • Virtual nodes are mirrored • Locally and remotely • Reading is done directly from nodes
Flickr File System • StorageMaster nodes only used for write operations • Reading and writing can scale separately
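A hypothetical illustration (not Flickr's actual scheme) of what "metadata stored by app" can look like: the application's own database row carries only a virtual volume number, and the path on that volume is derived from the photo id, so no separate metadata store is needed.

```python
def photo_path(volume: int, photo_id: int, size: str = "original") -> str:
    """Build the on-volume path from the photo id alone (made-up layout)."""
    return f"/vol{volume:04d}/{photo_id % 1000:03d}/{photo_id}_{size}.jpg"

# Writes: a StorageMaster picks a volume with free space and hands back its
# number, which the app records next to the photo row.
# Reads: the app computes the path and fetches it straight from whichever
# node currently hosts that (mirrored) volume, with no StorageMaster involved.
```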
Amazon S3 • A big disk in the sky • Multiple ‘buckets’ • Files have user-defined keys • Data + metadata
Amazon S3 [diagram: your servers talking to storage hosted by Amazon]
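A minimal sketch of the bucket/key/metadata model using boto3 (a later official SDK, not something from the talk); the bucket and key names are made up.

```python
import boto3

s3 = boto3.client("s3")

# Write: a user-defined key, the data itself, and user metadata together.
with open("1234.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-photo-bucket",
        Key="photos/1234.jpg",
        Body=f,
        Metadata={"owner": "cal", "kind": "photo"},
    )

# Read: the object comes back with both the bytes and the metadata.
obj = s3.get_object(Bucket="my-photo-bucket", Key="photos/1234.jpg")
data = obj["Body"].read()
owner = obj["Metadata"]["owner"]
```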