Lecture 06 Distributed File Systems • §6.1 Architecture • §6.2 Process and Communication • §6.3 Naming • §6.4 Synchronization • §6.5 Consistency and Replication • §6.6 Fault Tolerance and Security
What's a Distributed File System? • A file system distributed across multiple nodes • To share data among processes • Persistent, secure, and reliable • Accessed through file system interfaces • A fundamental component of distributed systems • Typical systems: • NFS, AFS, etc.: earlier systems, typically with a single server • HDFS, GFS: newer systems, used in data centers • NFS (Network File System): UNIX-based, by Sun
§6.1 Architecture • Client-Server • Cluster-based Client-Server • Symmetric, fully decentralized
Cluster-based Client-Server Architecture • Basically client-server • With a cluster of file servers • For large-scale, parallel applications • Two further types: • HPC cluster • Data center cluster
DFS of HPC Clusters • Small to medium scale • File striping to enable parallel access (sketched below) • [Figure: blocks a1–a3, b1–b3, c1–c3 of three files striped across the file servers]
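As referenced above, here is a minimal sketch of the striping idea: consecutive blocks are assigned round-robin, so different blocks can be read from different servers in parallel. The function name, block size, and layout are illustrative assumptions, not any particular DFS's on-disk format.

```python
# Minimal sketch of round-robin file striping (illustrative only).
# A file is split into fixed-size blocks; block i is placed on
# server i % N, so N servers can serve different blocks in parallel.

BLOCK_SIZE = 4  # bytes; unrealistically small, for demonstration

def stripe(data: bytes, num_servers: int) -> dict[int, list[bytes]]:
    """Distribute consecutive blocks of `data` across `num_servers`."""
    servers: dict[int, list[bytes]] = {s: [] for s in range(num_servers)}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for i, block in enumerate(blocks):
        servers[i % num_servers].append(block)
    return servers

layout = stripe(b"aaaabbbbccccdddd", num_servers=3)
for server, blocks in layout.items():
    print(f"server {server}: {blocks}")
```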
DFS of Data Center Clusters • Very large scale: tens of thousands of PCs • Large files made of small data pieces, usually appended to • Frequent faults • [Figure: chunk a1 replicated on several nodes, alongside chunks a2 and a3]
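A hedged sketch of this contrasting design: files are split into large chunks, and each chunk is replicated on several nodes so that frequent node failures do not lose data. The chunk size, replication factor, and random placement below are illustrative assumptions, not GFS's or HDFS's actual placement algorithms.

```python
# Illustrative sketch of GFS/HDFS-style chunk placement (not a real API).
# Large files are split into big chunks; each chunk is replicated on
# several nodes so that frequent node failures do not lose data.

import random

CHUNK_SIZE = 64 * 1024 * 1024   # e.g. 64 MB chunks
REPLICATION_FACTOR = 3          # each chunk stored on 3 nodes

def place_chunks(file_size: int, nodes: list[str]) -> list[list[str]]:
    """Return, for each chunk, the nodes holding its replicas."""
    num_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    return [random.sample(nodes, REPLICATION_FACTOR) for _ in range(num_chunks)]

nodes = [f"node{i}" for i in range(10)]
for i, replicas in enumerate(place_chunks(200 * 1024 * 1024, nodes)):
    print(f"chunk {i}: replicas on {replicas}")
```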
Symmetric Architecture • P2P, with high availability and scalability • Ivy as an example
§6.2 Process and Communication • NFS: client and server processes • A key design issue: statefulness • From stateless (v3) to stateful (v4) • Stateless • Simple, especially in recovery • Less powerful: e.g., access authentication and file locking are not easy • Stateful • Complex • More powerful: supports WANs, efficient caching, callbacks to clients, etc.
Process Communication in NFS • Open Network Computing RPC (ONC RPC) • Makes NFS independent of the underlying OS, networks, and transport protocols • Every NFS operation can be implemented as a single RPC to a file server • The client is responsible for organizing complex operations • This keeps the server simple
Remote Procedure Calls in NFS Figure 11-7. (a) Reading data from a file in NFS version 3. (b) Reading data using a compound procedure in version 4.
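To make the difference concrete, here is a toy model of a compound procedure: a single round trip carries several operations, executed in order and stopped at the first failure, as NFSv4 compounds are. The ToyServer class and the operation handling are simplifications for illustration, not the real NFS protocol.

```python
# Toy model of NFSv4 compound procedures (illustrative, not the real protocol).
# In v3, LOOKUP and READ are separate round trips; in v4 they can be
# batched into one compound RPC that executes in order and stops on error.

class ToyServer:
    def __init__(self, files):
        self.files = files                         # path -> contents

    def execute(self, op, *args):
        if op == "LOOKUP":
            path = args[0]
            return (path in self.files, path)      # the "handle" is the path here
        if op == "READ":
            path, offset, count = args
            data = self.files.get(path, b"")[offset:offset + count]
            return (True, data)
        return (False, f"unknown op {op}")

def compound_rpc(server, operations):
    """One round trip: execute ops in order, stop at the first failure."""
    results = []
    for op, *args in operations:
        ok, value = server.execute(op, *args)
        results.append((op, value))
        if not ok:
            break
    return results

server = ToyServer({"/home/alice/notes.txt": b"hello, dfs"})

# NFSv3 style: two separate round trips
compound_rpc(server, [("LOOKUP", "/home/alice/notes.txt")])
compound_rpc(server, [("READ", "/home/alice/notes.txt", 0, 4096)])

# NFSv4 style: one round trip with a compound procedure
print(compound_rpc(server, [("LOOKUP", "/home/alice/notes.txt"),
                            ("READ", "/home/alice/notes.txt", 0, 4096)]))
```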
The RPC2 Subsystem in Coda • Figure 11-8. Side effects in Coda’s RPC2 system.
The RPC2 Subsystem in Coda • Figure 11-9. Local copy invalidation via multicasting.
§6.3 Naming • Naming in NFS • The name space is local • Figure 11-11. Mounting (part of) a remote file system in NFS.
Naming in NFS • Pros: • Transparency, simplicity • Cons: • Hard to share files among clients • Names are only locally valid • Solution: standardize the major directories
Naming in NFS Figure 11-12. Mounting nested directories from multiple servers in NFS.
Automounting • Mounts directories transparently on demand • The automounter can cause performance degradation
Automounting via Symbolic Linking • Limits the use of the automounter to a special directory
Naming in a Global Name Space • To share files within a globally unified name space • Via the FTP protocol • not convenient to use • Via a dedicated wide-area file system • requires modifications to the OS kernel: too costly and constrained • Global Name Space Service (GNS) • Integrates existing file systems into a single, global name space • Uses only user-level solutions
GNS • Merges name spaces only • Does not provide interfaces to access files • Basic idea (see the sketch after the junction list below): • a GNS client maintains a virtual tree, where • each node is either a directory or a junction • a junction is a special node indicating that name resolution is to be taken over by another process
GNS • Five types of junctions in GNS
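As referenced above, here is a minimal sketch of how a virtual tree with junctions might resolve a name: walking the tree stops at the first junction, which hands the remaining path to another resolver. The Directory and Junction classes are hypothetical, invented for illustration only.

```python
# Sketch of GNS-style name resolution over a virtual tree (hypothetical API).
# Each node is a directory or a junction; hitting a junction hands the
# rest of the path over to another resolver (e.g., an NFS mount point).

class Directory:
    def __init__(self, children=None):
        self.children = children or {}

class Junction:
    def __init__(self, resolver):
        self.resolver = resolver          # callable taking the remaining path

def resolve(root, path):
    node = root
    parts = [p for p in path.split("/") if p]
    for i, part in enumerate(parts):
        if isinstance(node, Junction):    # hand off the rest of the name
            return node.resolver("/".join(parts[i:]))
        node = node.children[part]
    if isinstance(node, Junction):
        return node.resolver("")
    return node

# A junction delegating to (say) an NFS file system:
nfs = Junction(lambda rest: f"resolved by NFS server: /{rest}")
root = Directory({"global": Directory({"nfs-data": nfs})})

print(resolve(root, "/global/nfs-data/projects/report.txt"))
# -> resolved by NFS server: /projects/report.txt
```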
§6.4 Synchronization • To control concurrent access to shared files • What results are "correct"? (semantics) • How to realize them? (usually based on "locking")
The UNIX Semantics • For a single machine • Absolute time ordering • "Read the last write": a read returns the result of the most recent write
Semantics for Distributed Systems • In distributed systems: • Multiple machines • No global time • A single file server: • Easy to achieve UNIX semantics • Multiple servers, or a single server with client caches: • Reads may return old values
Session Semantics • Changes to an open file are initially visible only to the process (or possibly the machine) that modified the file • Only when the file is closed are the changes made visible to other processes (or machines) • Adopted by most DFSs • What happens when two nodes cache and modify the same file simultaneously? • Decide based on the "close" time, or • Leave the outcome undefined
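A toy simulation of session semantics, assuming a single server and a private working copy per session: a writer's changes become visible to others only at close. This sketches the semantics, not any real DFS implementation.

```python
# Toy simulation of session semantics (illustrative only): writes by one
# client become visible to others only when the writer closes the file.

class Server:
    def __init__(self, contents=b""):
        self.contents = contents

class Session:
    """A client's open-file session with a private working copy."""
    def __init__(self, server):
        self.server = server
        self.copy = server.contents          # snapshot taken at open time

    def write(self, data):
        self.copy = data                     # visible only to this session

    def close(self):
        self.server.contents = self.copy     # changes propagate on close

server = Server(b"v1")
a, b = Session(server), Session(server)

a.write(b"v2")
print(b.copy)                 # b'v1': a's change is not yet visible
a.close()
print(Session(server).copy)   # b'v2': visible to sessions opened after close
```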
Immutable Files • A special approach • All files are immutable • There is no way to open a file for writing • The only operations on files are "create" and "read" • A file is updated by "replacing" it • Then, no simultaneous writing at all • What about "simultaneous" replacing? • Handled similarly to session semantics • What about replacing a file that is being read? • Either allow reading to continue, or disallow the replacement
Transaction-based File Sharing • File operations are grouped into transactions • Atomic and consistent
File Locking • A central lock manager, in general • Locking is complex, so as to allow concurrent access • Different kinds of locks exist, and • Different granularities are used • NFS v4 as an example
File Locking in NFS v4 • Conceptually simple • If a "lock" fails due to a conflict, an error message is returned (nonblocking), and then • the client retries later (simple), or • asks the server to put its request on a FIFO list and polls the server periodically (fairness guaranteed)
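A hedged sketch of the client side of this nonblocking scheme: a conflicting lock returns an error immediately, and the client backs off and retries. The ToyLockServer and its methods are invented for illustration; they are not the NFSv4 wire protocol.

```python
# Sketch of NFSv4-style nonblocking locking from the client's viewpoint
# (hypothetical client API). A failed lock returns an error; the client
# backs off and retries. (The alternative, not shown: ask the server to
# queue the request in a FIFO list and poll it periodically.)

import time

def acquire_with_retry(server, client_id, path, attempts=5, delay=0.1):
    """Simple strategy: try, back off, try again."""
    for _ in range(attempts):
        if server.try_lock(client_id, path):     # nonblocking request
            return True
        time.sleep(delay)                        # back off, then retry
    return False

class ToyLockServer:
    def __init__(self):
        self.locks = {}                          # path -> owner

    def try_lock(self, client_id, path):
        if path in self.locks:
            return False                         # conflict: error, not blocking
        self.locks[path] = client_id
        return True

    def unlock(self, client_id, path):
        if self.locks.get(path) == client_id:
            del self.locks[path]

server = ToyLockServer()
print(acquire_with_retry(server, "clientA", "/data/f"))  # True
print(acquire_with_retry(server, "clientB", "/data/f"))  # False: A holds it
```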
File Share Reservation in NFS • An implicit way to lock a file • Completely independent of lock management • The type of access is specified upon opening the file
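A minimal sketch of how a share-reservation check might work, assuming each open carries an access set and a deny set: a new open fails if its requested access overlaps what earlier openers denied, or if its deny set overlaps access already granted. The class and sets below are illustrative simplifications (for one thing, real reservations are released when openers close the file).

```python
# Sketch of NFSv4-style share reservations (toy check, not the real protocol).
# open() carries (access, deny) sets; a new open succeeds only if its access
# is not denied by current openers and its deny set does not clash with
# access already granted.

READ, WRITE = "read", "write"

class OpenFile:
    def __init__(self):
        self.granted_access = set()   # union of access sets of current openers
        self.denied = set()           # union of deny sets of current openers

    def open(self, access: set, deny: set) -> bool:
        if access & self.denied or deny & self.granted_access:
            return False              # share reservation conflict
        self.granted_access |= access
        self.denied |= deny
        return True

f = OpenFile()
print(f.open({READ}, {WRITE}))   # True: read, while denying others writing
print(f.open({WRITE}, set()))    # False: writing was denied
print(f.open({READ}, set()))     # True: concurrent readers are fine
```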
§6.5 Consistency and Replication • Caching, on the client side • Replication, on the server side • NFS as the example
Client-side Caching in NFS • v3: not covered by the NFS protocol itself • Handled in an implementation-dependent way • Cached data can be stale for a few seconds or even 30 seconds • No consistency guarantees
Client-side Caching in NFS • v4: caching is considered • Still in an implementation-dependent way • Session semantics are specified: • A client opens a file and caches the data obtained from the server as the result of read operations • Write operations can be carried out in the cache as well • The cache is flushed back to the server when the file is closed • Clients on the same machine can share a cache • The cache can be kept after the file is closed • It must then be revalidated upon re-opening
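A toy sketch of flush-on-close caching with revalidation on re-open, assuming the server exposes a change counter (loosely modeled on NFSv4's change attribute, and read here by direct access where a real client would issue an RPC): a kept cache is reused only if the counter is unchanged.

```python
# Toy sketch of keep-after-close caching with revalidation on re-open.
# The server's 'change' counter loosely models NFSv4's change attribute.

class ToyFileServer:
    def __init__(self, data=b""):
        self.data, self.change = data, 0

    def write(self, data):
        self.data, self.change = data, self.change + 1

class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache = None                     # (change_seen, data)

    def open_read(self):
        if self.cache and self.cache[0] == self.server.change:
            return self.cache[1]              # revalidated: cache still good
        data = self.server.data               # (re)fetch from the server
        self.cache = (self.server.change, data)
        return data

server = ToyFileServer(b"v1")
client = CachingClient(server)
print(client.open_read())   # b'v1', fetched
print(client.open_read())   # b'v1', served from the revalidated cache
server.write(b"v2")
print(client.open_read())   # b'v2', cache was stale and refreshed
```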
Open Delegation • A server may delegate some of its rights to a client when a file is opened for writing • The client machine is then allowed to locally handle open and close operations from other clients on the same machine • File locking requests can also be handled locally • The server still handles locking requests from clients on other machines, by simply denying those clients access to the file
Open Delegation • The server needs to be able to recall the delegation • Recalling a delegation is done by a callback to the client
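A toy model of delegation recall, under the assumption that the server tracks a single write delegation per file: a conflicting open triggers a callback that makes the holder write back its local state before the delegation moves. Class and method names are hypothetical.

```python
# Toy model of open delegation with recall via callback (illustrative).
# While a delegation is outstanding, the holder works locally; a
# conflicting open makes the server call the holder back.

class Client:
    def __init__(self, name):
        self.name, self.dirty = name, b""

    def recall(self):
        """Callback from the server: flush local state, drop the delegation."""
        print(f"{self.name}: delegation recalled, writing back {self.dirty!r}")
        return self.dirty

class Server:
    def __init__(self):
        self.data, self.delegate = b"", None

    def open_write(self, client):
        if self.delegate and self.delegate is not client:
            self.data = self.delegate.recall()   # callback to the holder
            self.delegate = None
        self.delegate = client                   # hand out a new delegation
        print(f"server: {client.name} now holds the write delegation")

a, b = Client("A"), Client("B")
server = Server()
server.open_write(a)
a.dirty = b"A's local updates"
server.open_write(b)     # triggers the recall callback to A
```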
Server-side Replication • Replicating files near the server or across different servers • Less popular than caching • Mostly for fault tolerance rather than performance (geo-replication across data centers is popular)
§6.6 Fault Tolerance and Security • Replication over fault-tolerant server groups is the most popular technique • Some other special issues, e.g., Byzantine failures, are handled with dedicated techniques
Security in Distributed File Systems • Centralized: the servers usually handle authentication and access control • As in NFS: • Authentication by a separate service, such as Kerberos • Authorization (access control) by the file servers themselves • Key drawback: poor scalability • Decentralized: good scalability
Security in NFS • Mainly by secure communications (RPCs).
Authentication Methods in NFS • System authentication • The client simply passes its user ID and group ID to the server, in plaintext • In effect, no authentication at all • Diffie-Hellman key exchange • a public-key cryptosystem used to establish a session key • used in older NFS versions • Kerberos • Better, and the popular choice
Kerberos-based Secure RPC in NFS • RPCSEC_GSS: a general security framework that can support different security mechanisms for setting up secure channels • LIPKEY: a public-key system that allows clients to be authenticated with a password, while servers are authenticated with a public key
Kerberos • Basic idea: shared secrets • A user proves to the KDC who he is, by logging on • The KDC generates a shared secret S between the client and the file server • [Figure: the client sends "Need to access fs" to the KDC/ticket server, which generates S and returns Kclient[S] to the client and Kfs[S] for the file server] • S is specific to the {client, fs} pair: a "short-term session key" with an expiration time (e.g., 8 hours)
Kerberos Interactions • Step 1: client → ticket server: "Need to access fs"; the ticket server generates S and replies with Kclient[S] and ticket = Kfs[use S for client] • Step 2: client → file server: the ticket Kfs[use S for client] plus the authenticator S[client, time]; the file server replies with S[time] • Why "time": guards against replay attacks • Mutual authentication: the S[time] reply proves the file server also knows S • The file server doesn't store S, which is specific to the {client, fs} pair • The client doesn't contact the ticket server every time it contacts fs
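The exchange above, rendered as a runnable toy model: encryption is simulated with a tagged tuple, and a timestamp check stands in for replay protection. Real Kerberos uses proper symmetric ciphers, nonces, and more message fields; everything here is a simplified sketch.

```python
# Toy model of the Kerberos exchange above. "Encryption" is simulated by a
# tagged tuple that can only be opened with the matching key.

import time

def enc(key, msg):
    return ("enc", key, msg)

def dec(key, box):
    tag, k, msg = box
    assert tag == "enc" and k == key, "wrong key"
    return msg

K_CLIENT, K_FS = "k-client", "k-fs"   # long-term keys shared with the KDC

# Step 1: client -> KDC: "need to access fs"; the KDC mints session key S
S = "session-key-S"
for_client = enc(K_CLIENT, S)
ticket = enc(K_FS, ("use S for client", S))

# Step 2: client -> fs: the ticket plus an authenticator S[client, time]
S_recv = dec(K_CLIENT, for_client)
authenticator = enc(S_recv, ("client", time.time()))

# File server: open the ticket with its own key, then check the authenticator
_, S_fs = dec(K_FS, ticket)
who, t = dec(S_fs, authenticator)
assert time.time() - t < 300, "stale timestamp: possible replay"

# fs -> client: S[time] proves the server also knows S (mutual authentication)
reply = enc(S_fs, t)
print("client verified server:", dec(S_recv, reply) == t)
```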
Access Control (Authorization) in NFS • The ACL file attribute • A list of access control entries • Each entry specifies the access rights for a specific user or group (e.g., read or write permission)
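A minimal sketch of ACL evaluation, assuming entries of the form (who, user-or-group, rights): access is granted if any matching entry includes the requested right. Real NFSv4 ACLs also have deny entries and much finer-grained rights; this is an illustration only.

```python
# Minimal sketch of ACL evaluation (illustrative; real NFSv4 ACLs have
# allow/deny entry types and finer-grained rights).

ACL = [
    {"who": "alice", "type": "user",  "rights": {"read", "write"}},
    {"who": "staff", "type": "group", "rights": {"read"}},
]

def check_access(acl, user, groups, wanted):
    """Grant `wanted` if any matching entry includes it."""
    for entry in acl:
        match = (entry["type"] == "user" and entry["who"] == user) or \
                (entry["type"] == "group" and entry["who"] in groups)
        if match and wanted in entry["rights"]:
            return True
    return False

print(check_access(ACL, "alice", {"staff"}, "write"))  # True: user entry
print(check_access(ACL, "bob",   {"staff"}, "write"))  # False: group lacks it
print(check_access(ACL, "bob",   {"staff"}, "read"))   # True: via the group
```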
Access Control in NFS • Figure 11-30. The various kinds of users and processes distinguished by NFS with respect to access control.
Decentralized Authentication • Secure File System (SFS) in combination with decentralized authentication servers • The basic idea is simple • Allow a user to specify that a remote user has certain privileges on his files • For example, Alice can specify that "Bob, whose details can be found at X," has certain privileges • The authentication server that handles Alice's credentials can then contact server X to get information on Bob • Users must be globally known to all authentication servers