Distributed Systems CS 15-440 Distributed File Systems Lecture 22, Dec 2, 2015 Mohammad Hammoud
Today… • Last Session: Fault Tolerance – Part II • Today's Session: Distributed File Systems – Part I • Announcements: • P3 grades are out • P4 is due tomorrow by midnight • Final exam is on Dec 10th from 1:00PM to 4:00PM in Room 1064 (all topics are included; it will be open book, open notes) • We will conduct an overview for the final exam tomorrow during the recitation
Intended Learning Outcomes: Distributed File Systems (ILO7, ILO7.1, ILO7.2) • Considered: a reasonably critical and comprehensive perspective • Thoughtful: a fluent, flexible and efficient perspective • Masterful: a powerful and illuminating perspective
Distributed File Systems • Why File Systems? • To organize data (as files) • To provide a means for applications to store, access, and modify data • Why “Distributed” File Systems (DFSs)? • To share data across a cluster of machines • To store large-scale datasets • To provide transparency and ease of management
NAS versus SAN • Another term for DFS is network-attached storage (NAS), referring to attaching storage to network servers that provide file systems • A similar-sounding term that refers to a very different approach is storage area network (SAN) • SAN makes storage devices (not file systems) available over a network [Figure: client computers access a file server (providing NAS) over the LAN, while a database server accesses raw storage devices over a SAN]
Architectures • Client-Server Distributed File Systems • Cluster-Based Distributed File Systems • Symmetric Distributed File Systems
Network File System • Many distributed file systems are organized along the lines of client-server architectures • Sun Microsystems' Network File System (NFS) is one of the most widely deployed DFSs for Unix-based systems • NFS comes with a protocol that describes precisely how a client can access a file stored on a (remote) NFS file server • NFS allows a heterogeneous collection of processes, possibly running on different OSs and machines, to share a common file system
Remote Access Model • The model underlying NFS and similar systems is referred to as the remote access model • In this model, clients: • Are offered transparent access to a file system that is managed by a remote server • Are normally unaware of the actual location(s) of files • Are offered an interface to a file system similar to the interface offered by a conventional local file system [Figure: a client sends requests to access a remote file; the server replies, and the file itself remains at the server]
Upload/Download Model • A contrary model, referred to as the upload/download model, allows a client to access a file locally after having downloaded it from the server • An example: the Internet's FTP service [Figure: the file is moved to the client's side, all accesses are done locally, and when the client is done the (possibly new) file is returned to the server]
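As a concrete illustration of the upload/download model, the sketch below uses Python's ftplib to fetch a file, work on it locally, and push it back when done; the host name, credentials, and file name are placeholders, not part of the lecture material.

from ftplib import FTP

# Minimal sketch of the upload/download model using Python's ftplib.
# Host, credentials, and file names are hypothetical placeholders.
with FTP("ftp.example.com") as ftp:
    ftp.login("user", "password")
    # Download: the whole file is moved to the client's side
    with open("report.txt", "wb") as local:
        ftp.retrbinary("RETR report.txt", local.write)
    # ... all accesses/edits happen locally ...
    # Upload: when the client is done, the file is returned to the server
    with open("report.txt", "rb") as local:
        ftp.storbinary("STOR report.txt", local)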
The Basic NFS Architecture [Figure: a client request in NFS passes from the system call layer through the Virtual File System (VFS) layer to the NFS client, travels over the network via an RPC client stub to the server's RPC server stub, and is handled by the NFS server on top of the server's local file system interface] How is naming handled?
Structured Naming in NFS [Figure: the server exports a directory containing the subdirectory steen, which holds the file mbox; Client A and Client B each mount the exported steen subdirectory into their own name space, so the same file is now shared and is named /usr/bin/mbox at both Client A and Client B]
Architectures • Client-Server Distributed File Systems • Cluster-Based Distributed File Systems • Symmetric Distributed File Systems
Data-Intensive Applications • Today there is a deluge of large, data-intensive applications • Most data-intensive applications fall into one of two styles of computing: • Internet services (or cloud computing) • High-performance computing (HPC) • Cloud computing and HPC applications: • Typically run on hundreds or thousands of compute nodes • Process sheer volumes of data (i.e., Big Data) [Figure: visualization of entropy in the Terascale Supernova Initiative application; image from Kwan-Liu Ma's visualization team at UC Davis]
Cluster-Based Distributed File Systems • Cluster-based file systems: • Are key for providing scalable data-intensive application performance • Typically partition and distribute large-scale datasets using file striping techniques • Could be viewed as Cloud-Computing- or HPC-oriented • Examples: • Cloud-Computing-Oriented: Google File System (GFS) • HPC-Oriented: Parallel Virtual File System (PVFS)
File Striping Techniques • Server clusters are often used for distributed applications, and their associated file systems are adjusted to satisfy their requirements • One well-known technique is to deploy file striping, by which a single file is distributed across multiple servers • Hence, it becomes possible to fetch different parts of the file concurrently [Figure: a client accessing the parts of a striped file in parallel from several servers]
Round-Robin Distribution (1) • How should a file be striped over multiple machines? • Round-robin is typically a reasonable default solution [Figure: a logical file divided into striping units 0–15 with a fixed stripe size; server 1 holds units 0, 4, 8, 12, server 2 holds 1, 5, 9, 13, server 3 holds 2, 6, 10, 14, and server 4 holds 3, 7, 11, 15]
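To make the round-robin mapping concrete, here is a minimal sketch in Python; the 64 KB striping unit and the 4-server count are assumed parameters chosen to match the figure, not taken from any particular file system. It maps a logical byte offset to a server and the offset within that server's portion of the file.

# Minimal sketch of round-robin file striping (hypothetical parameters,
# not tied to any specific file system's implementation).
STRIPE_SIZE = 64 * 1024   # assumed striping unit: 64 KB
NUM_SERVERS = 4           # assumed number of data servers

def locate(offset):
    """Map a logical byte offset to (server index, offset within that server's stripes)."""
    unit = offset // STRIPE_SIZE          # which striping unit the byte falls into
    server = unit % NUM_SERVERS           # round-robin: unit i goes to server i mod N
    local_unit = unit // NUM_SERVERS      # how many earlier units landed on that server
    return server, local_unit * STRIPE_SIZE + (offset % STRIPE_SIZE)

# Example: with 64 KB units and 4 servers, unit 5 lands on server 1
print(locate(5 * STRIPE_SIZE))  # -> (1, 65536)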
Round-Robin Distribution (2) • Clients perform writes/reads of the file at various regions [Figure: Client I issues a 512 KB write at offset 0 and Client II a 512 KB write at offset 512 KB; the striping units involved (0–7 and 8–15) are spread over all four servers, so each client's write touches every server]
2D Round-Robin Distribution (1) • What happens when we have many servers (say, thousands)? • A 2D distribution can help [Figure: with a group size of 2, striping units 0–7 are distributed round-robin over servers 1 and 2, while units 8–15 are distributed over servers 3 and 4]
2D Round-Robin Distribution (2) • A 2D distribution can limit the number of servers each client touches [Figure: Client I's 512 KB write at offset 0 covers units 0–7 and reaches only servers 1 and 2, while Client II's 512 KB write at offset 512 KB covers units 8–15 and reaches only servers 3 and 4 (group size = 2)]
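A rough sketch of the group-based (2D) mapping follows; GROUP_SIZE and GROUP_DEPTH are hypothetical tuning knobs chosen to reproduce the layout in the figure, not parameters of a specific file system.

# Minimal sketch of a 2D (group-based) round-robin distribution.
STRIPE_SIZE = 64 * 1024
NUM_SERVERS = 4
GROUP_SIZE = 2     # servers per group
GROUP_DEPTH = 4    # striping units handed to each server of a group before moving on

def locate_2d(offset):
    """Map a logical byte offset to a server index under 2D round-robin."""
    unit = offset // STRIPE_SIZE
    units_per_group = GROUP_SIZE * GROUP_DEPTH
    group = (unit // units_per_group) % (NUM_SERVERS // GROUP_SIZE)
    server_in_group = unit % GROUP_SIZE     # round-robin inside the group
    return group * GROUP_SIZE + server_in_group

# Units 0..7 stay on servers 0 and 1; units 8..15 stay on servers 2 and 3,
# so a 512 KB client write touches only GROUP_SIZE servers.
print([locate_2d(u * STRIPE_SIZE) for u in range(16)])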
GFS Data Distribution Policy • The Google File System (GFS) is a scalable DFS for data-intensive applications • GFS partitions large files into multiple pieces called chunks (or blocks) and stores them on different data servers • This design is referred to as a block-based design • Each GFS chunk has a unique 64-bit identifier and is stored as a file through a local file system at a data server • GFS distributes chunks across the cluster's data servers using a random distribution policy
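The following toy sketch illustrates the block-based idea: a file is cut into 64 MB chunks, each chunk receives a 64-bit handle and is stored as its own local file on a randomly chosen server. The directory layout, naming scheme, and data structures are assumptions for illustration only, not GFS's actual implementation.

import os
import random

# Rough sketch of a block-based (GFS-like) layout: each 64 MB chunk gets a
# 64-bit handle and is written as its own file on a randomly chosen server.
CHUNK_SIZE = 64 * 1024 * 1024
SERVERS = ["server0", "server1", "server2", "server3"]  # hypothetical data servers

def store_chunks(path):
    placements = {}
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            handle = random.getrandbits(64)          # unique 64-bit chunk identifier
            server = random.choice(SERVERS)          # random distribution policy
            os.makedirs(server, exist_ok=True)
            with open(os.path.join(server, f"{handle:016x}.chunk"), "wb") as out:
                out.write(chunk)                     # each chunk stored as a local file
            placements[handle] = server
    return placements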
GFS Distribution and Replication Policies [Figure: a writer (server 0) produces a file of seven 64 MB blocks (0–6); the blocks are scattered randomly across servers 1–3, and each block is "replicated" 3 times by default]
GFS Distribution and Replication Policies [Figure: the same random placement of blocks 0–6 across servers 1–3; because placement is random, some servers end up holding more blocks than others, causing load imbalance]
GFS Architecture • The storage and compute capabilities of a cluster are usually organized in one of two ways: • Co-locate storage and compute in the same nodes • Separate storage nodes from compute nodes [Figure: structured naming is provided through the master; a GFS client sends (file name, chunk index) to the master, receives the chunk Id and a contact address, and then requests (chunk Id, byte range) directly from a chunk server, which returns the chunk data from its local Linux file system]
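The read path through the master can be sketched as follows; the dictionaries stand in for the master's metadata and the chunk servers, and none of the names or interfaces below are GFS's real APIs.

# Simplified, in-memory sketch of the GFS read path described above.
CHUNK_SIZE = 64 * 1024 * 1024

# Stand-in for master metadata: (file name, chunk index) -> (chunk handle, replica locations)
master = {("/logs/web.log", 0): ("a1b2", ["cs1", "cs2", "cs3"])}

# Stand-in for chunk servers: server -> {chunk handle -> chunk bytes}
chunk_servers = {"cs1": {"a1b2": b"GET /index.html ..."}}

def gfs_read(filename, offset, length):
    chunk_index = offset // CHUNK_SIZE                    # client computes the chunk index
    handle, locations = master[(filename, chunk_index)]   # 1) look up (name, index) at the master
    server = locations[0]                                 # 2) pick one replica's chunk server
    data = chunk_servers[server][handle]                  # 3) fetch the chunk data from that server
    start = offset % CHUNK_SIZE
    return data[start:start + length]

print(gfs_read("/logs/web.log", 0, 3))  # -> b'GET'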
PVFS Data Distribution Policy • The Parallel Virtual File System (PVFS) is a scalable DFS for (scientific) data-intensive applications • PVFS divides large files into multiple pieces called stripe units (64 KB by default) and stores them on different data servers • This design is referred to as an object-based design • Unlike the block-based design of GFS, PVFS stores an object (identified by a handle) as a single file that includes all of the stripe units held at a data server • PVFS distributes stripe units across the cluster's data servers using a round-robin policy
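To contrast with the block-based sketch shown earlier for GFS, the following toy sketch of an object-based layout appends every stripe unit that maps to a given server into one object file on that server (named after an assumed file handle); it is an illustration, not PVFS's actual code.

import os

# Sketch of an object-based (PVFS-like) layout: all of a file's stripe units
# that map to one server live in a single object file on that server.
STRIPE_UNIT = 64 * 1024   # PVFS's default stripe unit is 64 KB
SERVERS = ["io0", "io1", "io2", "io3"]  # hypothetical I/O servers

def store_object_based(path, handle):
    with open(path, "rb") as f:
        unit_index = 0
        while True:
            unit = f.read(STRIPE_UNIT)
            if not unit:
                break
            server = SERVERS[unit_index % len(SERVERS)]   # round-robin placement policy
            os.makedirs(server, exist_ok=True)
            # One object per server (named by the file's handle) accumulates
            # all of that server's stripe units for this logical file.
            with open(os.path.join(server, f"{handle}.obj"), "ab") as obj:
                obj.write(unit)
            unit_index += 1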
PVFS Distribution and Replication Policies [Figure: a writer's file of seven 64 MB blocks (0–6) is striped round-robin across servers 1–3; blocks are also replicated, for "performance" and "fault-tolerance" reasons]
PVFS Distribution and Replication Policies [Figure: the same round-robin placement of blocks 0–6 across servers 1–3; because placement is round-robin, the blocks are spread evenly across the servers, yielding load balance]
PVFS Architecture • The storage and compute capabilities of a cluster are organized in one of two ways: • Co-locate storage and compute in the same nodes • Separate storage nodes from compute nodes [Figure: compute nodes connect over the network to the PVFS metadata manager, which provides the naming service, and to the I/O nodes, which store the data]
Architectures • Client-Server Distributed File Systems • Cluster-Based Distributed File Systems • Symmetric Distributed File Systems
Ivy • Fully symmetric organizations based on peer-to-peer technology also exist • Most current proposals use a DHT-based system for distributing data, combined with a key-based lookup mechanism • As an example, Ivy is a distributed file system built on top of the Chord DHT • Data storage in Ivy is realized through a block-oriented distributed storage layer called DHash
Ivy Architecture • Ivy consists of 3 separate layers: [Figure: at the top, the file-system layer (Ivy), rooted at the node where a file system is created; in the middle, the block-oriented storage layer (DHash); at the bottom, the DHT layer (Chord), through which naming is provided]
Ivy • Ivy implements NFS-like semantics • To increase "availability" and improve "performance", Ivy: • Replicates every block B to the k immediate successors of the server responsible for storing B • Caches looked-up blocks along the route that the lookup request followed
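The placement and replication idea can be sketched with a toy consistent-hashing ring: a block's key is hashed onto the ring, the responsible node stores it, and the k immediate successors hold replicas. The node names, hash function choice, and value of k are illustrative assumptions, not Chord/DHash internals.

import hashlib
from bisect import bisect_right

# Toy sketch of DHash-style placement on a consistent-hashing ring.
NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]  # hypothetical peers
K = 2  # assumed number of successor replicas

def ring_position(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

ring = sorted((ring_position(n), n) for n in NODES)

def place_block(block_key):
    """Return the primary node and its k immediate successors for a block key."""
    pos = ring_position(block_key)
    idx = bisect_right([p for p, _ in ring], pos) % len(ring)
    return [ring[(idx + i) % len(ring)][1] for i in range(K + 1)]

print(place_block("block-42"))  # primary node plus 2 successor replicas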
Stateless Processes • Cooperating processes in DFSs are usually the storage servers and file manager(s) • The most important aspect concerning DFS processes is whether they should be stateless or stateful • Stateless Processes: • Do not require that servers maintain any client state • After a server crashes, there is no need to enter a recovery phase to bring the server to a previous state • Locking a file cannot be easily done • Example: NFSv3
Stateful Processes • Stateful Processes: • Require that a server maintains some client state • Clients can make effective use of caches, but this would require a cache consistency protocol • Provide servers with an ability to support callbacks (i.e., the ability to do RPC to a client) in order to keep track of their clients • Example: NFSv4
Communication in DFSs • Communication in DFSs is mainly based on remote procedure calls (RPCs) • The main reason for choosing RPC is to make the system independent of underlying OSs, networks, and transport protocols • GFS uses RPC and may break a read into multiple RPCs to increase parallelism • PVFS currently uses TCP for all its internal communication • In NFS, all communication between a client and a server proceeds according to the Open Network Computing RPC (ONC RPC) protocol
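As a generic sketch (not GFS's actual RPC layer), the idea of breaking one large logical read into several smaller requests issued in parallel might look like this; rpc_read is a placeholder stub standing in for a real RPC call.

from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64 * 1024 * 1024

def rpc_read(server, chunk_index, start, length):
    # Placeholder for a real RPC stub; here it just returns dummy bytes.
    return bytes(length)

def parallel_read(servers, offset, length):
    requests = []
    end = offset + length
    while offset < end:                       # split the read at chunk boundaries
        chunk_index = offset // CHUNK_SIZE
        in_chunk = offset % CHUNK_SIZE
        n = min(end - offset, CHUNK_SIZE - in_chunk)
        requests.append((servers[chunk_index % len(servers)], chunk_index, in_chunk, n))
        offset += n
    with ThreadPoolExecutor() as pool:        # issue the per-chunk requests in parallel
        parts = pool.map(lambda r: rpc_read(*r), requests)
    return b"".join(parts)

data = parallel_read(["cs1", "cs2", "cs3"], 0, 200 * 1024 * 1024)  # spans 4 chunks
print(len(data))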
RPCs in NFS • Up until NFSv4, the client was made responsible for making the server's life as easy as possible by keeping its requests simple (e.g., a LOOKUP RPC to resolve a name, followed by a separate READ RPC for the file data) • The drawback becomes apparent when considering the use of NFS in a wide-area system • In that case, the extra latency of a second RPC leads to performance degradation • To circumvent such a problem, NFSv4 supports compound procedures, in which several operations (e.g., LOOKUP, OPEN, and READ) are combined into a single request
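Purely to illustrate the batching idea behind compound procedures (this is not the NFSv4 wire protocol), the sketch below bundles several operations into one request that the server executes in order and answers together; the operation names and handlers are hypothetical.

# Toy sketch of compound-style batching: many operations, one request/reply pair.
def compound(handlers, operations):
    results = []
    for op, arg in operations:        # server side: execute the batched operations in order
        results.append(handlers[op](arg))
    return results                    # all results returned together

# Hypothetical server-side handlers
handlers = {
    "LOOKUP": lambda name: f"fh-of-{name}",
    "OPEN":   lambda fh: f"open-{fh}",
    "READ":   lambda fh: b"file data",
}

# LOOKUP + OPEN + READ cost one exchange instead of three
print(compound(handlers, [("LOOKUP", "mbox"), ("OPEN", "fh-of-mbox"), ("READ", "fh-of-mbox")]))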
Unix Semantics in Single-Processor Systems • Synchronization for file systems would not be an issue if files were never shared • When two or more users share the same file at the same time, it is necessary to define the semantics of reading and writing • In single-processor systems, a read operation after a write returns the value just written • Such a model is referred to as Unix semantics [Figure: on a single machine, the original file contains "ab"; process A writes "c", making the file "abc", and a subsequent read by process B gets "abc"]
Unix Semantics in DFSs • In a DFS, Unix semantics can be achieved easily if there is only one file server and clients do not cache files • Hence, all reads and writes go directly to the file server, which processes them strictly sequentially • This approach provides Unix semantics; however, performance might degrade because all file requests must go to a single server
Caching and Unix Semantics • The performance of a DFS with a single file server and Unix semantics can be improved by caching • If a client locally modifies a cached file, however, and shortly afterwards another client reads the file from the server, the second client will get an obsolete file [Figure: process A on client machine #1 reads "ab" from the file server and then writes "c" to its cached copy; process B on client machine #2 subsequently reads the file from the server and still gets "ab"]
Session Semantics (1) • One way to avoid obsolete files is to propagate all changes to cached files back to the server immediately • Implementing such an approach, however, is cumbersome • An alternative solution is to relax the semantics of file sharing • Session semantics: changes to an open file are initially visible only to the process that modified the file; only when the file is closed are the changes made visible to other processes
Session Semantics (2) • Using session semantics raises the question of what happens if two or more clients are simultaneously caching and modifying the same file • One solution is to say that as each file is closed in turn, its value is sent back to the server • The final result then depends on whose close request is processed by the server most recently • A less pleasant solution, but one that is easier to implement, is to say that the final result is one of the candidates and to leave the choice of the candidate unspecified
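A toy sketch of session semantics, assuming an in-memory dictionary in place of a real file server: each process works on its own copy, and changes are published only on close, so the last close wins.

# In-memory stand-in for the file server's contents.
server_files = {"notes.txt": b"ab"}

class SessionFile:
    def __init__(self, name):
        self.name = name
        self.data = bytearray(server_files[name])   # private working copy

    def write(self, extra):
        self.data += extra                          # visible only to this process

    def close(self):
        server_files[self.name] = bytes(self.data)  # changes published on close

f1, f2 = SessionFile("notes.txt"), SessionFile("notes.txt")
f1.write(b"c")
f2.write(b"d")
f1.close()
f2.close()
print(server_files["notes.txt"])  # b'abd' -- the most recently processed close wins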
Immutable Semantics • A different approach to the semantics of file sharing in DFSs is to make all files immutable • With immutable semantics there is no way to open a file for writing • What is possible is to create an entirely new file • Hence, the problem of how to deal with two processes, one writing and the other reading, just disappears
Atomic Transactions • A different approach to the semantics of file sharing in DFSs is to use atomic transactions, where all changes occur atomically • A key property is that all calls contained in a transaction will be carried out in order • 1. A process first executes some type of BEGIN_TRANSACTION primitive to signal that what follows must be executed indivisibly • 2. Then come system calls to read and write one or more files • 3. When done, an END_TRANSACTION primitive is executed
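A minimal sketch of this transactional style, with an in-memory dictionary standing in for the file store: writes issued between BEGIN_TRANSACTION and END_TRANSACTION are buffered and applied, in order, only when the transaction ends.

# In-memory stand-in for the shared file store.
store = {"acct": b"100"}

class Transaction:
    def __init__(self):                   # BEGIN_TRANSACTION
        self.ops = []                     # buffered (file, data) writes, kept in order

    def write(self, name, data):
        self.ops.append((name, data))     # not visible to others yet

    def end(self):                        # END_TRANSACTION
        for name, data in self.ops:       # all buffered calls carried out in order
            store[name] = data
        self.ops.clear()

tx = Transaction()
tx.write("acct", b"80")
tx.write("log", b"debit 20")
tx.end()                                  # both writes become visible together
print(store)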
Semantics of File Sharing: Summary • There are four ways of dealing with shared files in a DFS: • Unix semantics: every operation on a file is instantly visible to all processes • Session semantics: no changes are visible to other processes until the file is closed • Immutable files: no updates are possible; sharing and replication are simplified • Transactions: all changes occur atomically
Next Class Overview