
Lecture 06 Distributed File Systems


  1. Lecture 06 Distributed File Systems §6.1 Architecture §6.2 Process and Communication §6.3 Naming §6.4 Synchronization §6.5 Consistency and Replication §6.6 Fault Tolerance and Security

  2. What Is a Distributed File System? • A file system distributed across multiple nodes • Used to share data among processes • Persistent, secure, and reliable • Accessed through file system interfaces • A fundamental component of distributed systems • Typical systems: • NFS, AFS, etc.: earlier systems, typically with a single server • HDFS, GFS: newer systems, used in data centers • NFS, the Network File System • UNIX-based, by Sun

  3. §6.1 Architecture • Client-server • Cluster-based client-server • Symmetric, fully decentralized

  4. Client-Server Architecture

  5. Client-Server Architecture

  6. Client-Server Architecture

  7. File Operations of NFS

  8. Cluster-based Client-Server Architecture • Basically client-server • But with a cluster of file servers • For large-scale, parallel applications • Two further types • HPC clusters • Data center clusters

  9. DFS of HPC Clusters • Small to medium scale • File striping to realize parallel access (figure: blocks a1–a3, b1–b3, c1–c3 of three files striped across the file servers)
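
To make the striping idea concrete, here is a minimal sketch (not from the slides): blocks are assigned round-robin to a small set of servers so that reads of different blocks can proceed in parallel. The block size and server names are illustrative assumptions.

```python
# Toy sketch of round-robin file striping across a cluster of file servers.
# Block size and server names are illustrative, not any real DFS's defaults.

BLOCK_SIZE = 64 * 1024            # 64 KiB stripe unit (assumption)
SERVERS = ["fs0", "fs1", "fs2"]   # three data servers (assumption)

def stripe(data: bytes):
    """Split a file into blocks and place block i on server i mod N."""
    placement = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        server = SERVERS[(i // BLOCK_SIZE) % len(SERVERS)]
        placement.append((server, i // BLOCK_SIZE, block))
    return placement

def parallel_read(placement):
    """Reassemble the file; in a real DFS the per-server reads run in parallel."""
    return b"".join(block for _, _, block in sorted(placement, key=lambda t: t[1]))

if __name__ == "__main__":
    data = bytes(range(256)) * 1024
    layout = stripe(data)
    assert parallel_read(layout) == data
    print({s: sum(1 for srv, _, _ in layout if srv == s) for s in SERVERS})
```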

  10. DFS of Data Center Clusters • Very large scale: tens of thousands of PCs • Very large files made of small data pieces, usually updated by appending • Frequent faults (figure: file chunks a1, a2, a3, with chunk a1 replicated on several nodes)
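
A hedged sketch of the same idea in miniature: files are split into large chunks, each chunk is replicated on several servers, and a lost replica is re-created after a failure. The sizes, replication factor, and node names below are illustrative assumptions, not the actual GFS or HDFS API.

```python
import random

# Toy sketch of GFS/HDFS-style chunk placement and re-replication.

CHUNK_SIZE = 64 * 1024 * 1024             # order of magnitude used by GFS/HDFS
REPLICAS = 3                              # a common default replication factor
SERVERS = [f"node{i}" for i in range(10)] # illustrative cluster

chunk_table = {}                          # (file, chunk index) -> replica servers

def allocate_chunk(filename, index):
    """Pick REPLICAS distinct servers for a new chunk (real systems also
    consider racks, load, and free space)."""
    replicas = random.sample(SERVERS, REPLICAS)
    chunk_table[(filename, index)] = replicas
    return replicas

def handle_failure(dead):
    """Re-replicate every chunk that lost a copy on the failed server."""
    for replicas in chunk_table.values():
        if dead in replicas:
            replicas.remove(dead)
            candidates = [s for s in SERVERS if s != dead and s not in replicas]
            replicas.append(random.choice(candidates))

allocate_chunk("log.dat", 0)
allocate_chunk("log.dat", 1)
handle_failure("node0")                   # frequent faults are the normal case
print(chunk_table)
```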

  11. Symmetric Architecture • P2P, with high availability and scalability • Ivy as an example

  12. §6.2 Process and Communication • NFS: client and server processes • One of the key design issues: statefulness • From stateless (v3) to stateful (v4) • Stateless • Simple, especially for recovery • Less powerful: e.g., access authentication and file locking are not easy • Stateful • More complex • More powerful: supports WANs, efficient caching, callbacks to clients, etc.

  13. Process Communication in NFS • Open Network Computing RPC (ONC RPC) • Makes NFS independent of the underlying OS, network, and transport protocols • Every NFS operation can be implemented as a single RPC to a file server • The client is responsible for organizing complex operations • This keeps the server simple

  14. Remote Procedure Calls in NFS Figure 11-7. (a) Reading data from a file in NFS version 3. (b) Reading data using a compound procedure in version 4.
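
The point of v4's compound procedures is to batch several dependent operations into one round trip instead of one RPC per step. A minimal sketch of the idea, not the real NFSv4 wire protocol (the operation names only loosely mirror the protocol, and the server class is invented):

```python
# Toy model of an NFSv4-style compound: several operations travel in one
# request and are executed in order on the server, stopping at the first error.

class Server:
    def __init__(self, files):
        self.files = files                # path -> bytes

    def compound(self, ops):
        cwd, results = "/", []
        for op, arg in ops:
            if op == "LOOKUP":
                cwd = cwd.rstrip("/") + "/" + arg
                results.append(("LOOKUP", cwd))
            elif op == "OPEN":
                if cwd not in self.files:
                    results.append(("OPEN", "error: no such file"))
                    break
                results.append(("OPEN", "ok"))
            elif op == "READ":
                offset, count = arg
                results.append(("READ", self.files[cwd][offset:offset + count]))
        return results

srv = Server({"/home/alice/notes.txt": b"hello distributed file systems"})
# v3 would need one RPC per step; v4 sends them as a single compound request:
print(srv.compound([("LOOKUP", "home"), ("LOOKUP", "alice"),
                    ("LOOKUP", "notes.txt"), ("OPEN", None), ("READ", (0, 5))]))
```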

  15. The RPC2 Subsystem in Coda • Figure 11-8. Side effects in Coda’s RPC2 system.

  16. The RPC2 Subsystem in Coda • Figure 11-9. Local copy invalidation via multicasting

  17. §6.3 Naming • Naming in NFS • The name space is local • Figure 11-11. Mounting (part of) a remote file system in NFS.

  18. Naming in NFS • Pros: • Transparency, simplicity • Cons: • Hard to share files among clients • Names are only locally valid • Solution: standardize the major directories

  19. Naming in NFS Figure 11-12. Mounting nested directories from multiple servers in NFS.

  20. Automounting • Mounts directories transparently • The automounter can cause performance degradation

  21. Automounting via Symbolic Linking • Limits the use of the automounter to a special directory

  22. Naming in a Global Name Space • Goal: share files through a globally unified name space • Via the FTP protocol • not convenient to use • Via a purpose-built wide-area file system • requires OS kernel modifications; too costly and constrained • Global Name Space Service (GNS) • Integrates existing file systems into a single, global name space • Uses only user-level solutions

  23. GNS • Merges name spaces only • Does not provide interfaces to access files • Basic idea • a GNS client maintains a virtual tree, where • each node is either a directory or a junction • a junction is a special node indicating that name resolution is to be taken over by another process
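
A hedged sketch of that idea: the client keeps a virtual tree whose nodes are directories or junctions, and when resolution reaches a junction, the remaining path is handed over to whatever resolver owns that part of the name space (here just a callback). The class names and the example paths are made up for illustration.

```python
# Minimal model of a GNS-style virtual tree. Directory nodes are resolved
# locally; a junction hands the remaining path to another resolver
# (standing in for "another process takes over name resolution").

class Directory:
    def __init__(self):
        self.children = {}

class Junction:
    def __init__(self, resolver):
        self.resolver = resolver          # callable(remaining_path) -> result

def resolve(root, path):
    node = root
    parts = [p for p in path.split("/") if p]
    for i, part in enumerate(parts):
        if isinstance(node, Junction):
            return node.resolver("/".join(parts[i:]))
        node = node.children[part]
    if isinstance(node, Junction):
        return node.resolver("")
    return node

# Example: /nfs is a junction handled by a (pretend) NFS client.
root = Directory()
root.children["local"] = Directory()
root.children["nfs"] = Junction(lambda rest: f"NFS resolves '{rest}'")
print(resolve(root, "/nfs/projects/report.txt"))
```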

  24. GNS • Five types of junctions in GNS

  25. §6.4 Synchronization • Controls concurrent access to shared files • What results are “correct”? – semantics • How is it realized? – usually based on locking

  26. Semantics of File Sharing

  27. UNIX Semantics • For a single machine • Absolute time ordering • “A read returns the last write”

  28. Semantics for Distributed Systems • In distributed systems • Multiple machines • No global time • With a single file server • UNIX semantics are easy to achieve • With multiple servers, or • a single server with client caches • reads may return stale values

  29. Session Semantics • Changes to an open file are initially visible only to the process (or possibly machine) that modified the file • Only when the file is closed are the changes made visible to other processes (or machines) • Adopted by most DFSs • What happens when two nodes cache and modify the same file simultaneously? • Decide based on the “close” time (“last close wins”), or • Leave the outcome undefined
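
As a hedged illustration of session semantics (the class and variable names are mine, not any particular DFS's API): each open takes a private working copy, writes stay invisible to other sessions, and only close publishes the result, so concurrent sessions are resolved by whichever close happens last.

```python
# Toy model of session semantics: writes are visible only within a session;
# close() publishes the whole file, so "last close wins".

server_files = {"shared.txt": b"v0"}

class Session:
    def __init__(self, name):
        self.name = name
        self.copy = server_files[name]       # private copy made at open()

    def write(self, data: bytes):
        self.copy = data                     # invisible to other sessions

    def close(self):
        server_files[self.name] = self.copy  # now visible to everyone

a = Session("shared.txt")
b = Session("shared.txt")
a.write(b"written by A")
print(server_files["shared.txt"])   # still b"v0": A's change not yet visible
b.write(b"written by B")
a.close()
b.close()                           # B closed last, so B's version wins
print(server_files["shared.txt"])   # b"written by B"
```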

  30. Immutable Files • A special approach • All files are immutable • There is no way to open a file for writing • The only operations on files are “create” and “read” • A file is updated by “replacing” it • Then there is no simultaneous writing at all • What about “simultaneous” replacing? • Handled similarly to session semantics • What about replacing a file that is being read? • Either allow the read to continue, or disallow the replacement

  31. Transaction-based File Sharing • File operations are grouped into transactions • Executed atomically and consistently

  32. File Locking • In general, a central lock manager is used • Locking is complex, to allow as much concurrent access as possible • Different lock types exist, and • different granularities are used • NFS v4 as an example

  33. File Locking in NFS v4 • Conceptually simple • If a lock fails due to a conflict, an error message is returned (nonblocking), and then • the client retries later (simple), or • asks the server to put its request on a FIFO list and polls the server periodically (fairness guaranteed)
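
A sketch, with invented class and method names, of the second strategy on this slide: the lock manager answers nonblocking lock attempts, keeps a FIFO wait list, and the client polls until it reaches the head of the list and the lock is free. This is a toy model of the behaviour, not the NFSv4 LOCK operation itself.

```python
import time
import threading

class LockManager:
    """Toy server-side lock manager: nonblocking grants plus a FIFO wait list."""
    def __init__(self):
        self.owner = None
        self.queue = []                   # FIFO of waiting client ids
        self.mutex = threading.Lock()

    def try_lock(self, client):
        with self.mutex:
            if self.owner is None and (not self.queue or self.queue[0] == client):
                if self.queue and self.queue[0] == client:
                    self.queue.pop(0)
                self.owner = client
                return True
            return False                  # "conflict": client must retry or poll

    def enqueue(self, client):
        with self.mutex:
            if client not in self.queue:
                self.queue.append(client)

    def unlock(self, client):
        with self.mutex:
            if self.owner == client:
                self.owner = None

def acquire_with_polling(mgr, client, interval=0.01, attempts=100):
    """Client side: join the FIFO list and poll until the lock is granted."""
    mgr.enqueue(client)
    for _ in range(attempts):
        if mgr.try_lock(client):
            return True
        time.sleep(interval)
    return False

mgr = LockManager()
assert mgr.try_lock("A")                  # A gets the lock immediately
t = threading.Thread(target=lambda: (time.sleep(0.05), mgr.unlock("A")))
t.start()
print(acquire_with_polling(mgr, "B"))     # B polls and eventually succeeds
t.join()
```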

  34. File Share Reservation in NFS • An implicit way to lock a file • Completely independent of lock management • The type of access is specified when the file is opened

  35. §6.5 Consistency and Replication • Caching on the client side • Replication on the server side • NFS as the example

  36. Client-side Caching in NFS • v3: caching is not part of the NFS protocol itself • Handled in an implementation-dependent way • Cached data can be stale for a few seconds, or even 30 seconds • No consistency is guaranteed

  37. Client-side Caching in NFS • v4: caching is taken into account • Still handled in an implementation-dependent way • Session semantics are specified • A client opens a file and caches the data it obtains from the server as the result of read operations • Write operations can be carried out in the cache as well • The cache is flushed back to the server when the file is closed • Clients on the same machine can share the cache • The cache may be kept after the file is closed • but must be revalidated when the file is re-opened
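
A hedged sketch of that close-to-open behaviour: the client caches on open, flushes dirty data on close, keeps the cache, and revalidates it against a server-side change counter on the next open. The classes and the "change" attribute check are simplified stand-ins, not the real NFSv4 client code.

```python
# Toy client cache with close-to-open semantics: flush on close, keep the
# cache, revalidate against the server's change counter on the next open.

class Server:
    def __init__(self):
        self.data, self.change = b"v1", 1        # change counter bumps on writes

    def read(self):
        return self.data, self.change

    def write(self, data):
        self.data, self.change = data, self.change + 1

class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache = None                        # (data, change), kept across close/open

    def open(self):
        if self.cache is not None:
            _, change = self.server.read()       # cheap attribute-style check
            if change == self.cache[1]:
                return                           # cache still valid
        self.cache = self.server.read()          # (re)load from the server

    def close(self, dirty=None):
        if dirty is not None:
            self.server.write(dirty)             # flush on close
            self.cache = self.server.read()

srv = Server()
c = CachingClient(srv)
c.open(); c.close(dirty=b"v2")
srv.write(b"v3")                                 # someone else updates the file
c.open()                                         # revalidation notices the change
print(c.cache[0])                                # b"v3"
```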

  38. Open Delegation • A server may delegate some of its rights to a client when a file is opened for writing • The client machine is then allowed to locally handle open and close operations from other clients on the same machine • File locking requests can also be handled locally • The server still handles locking requests from clients on other machines, by simply denying those clients access to the file

  39. Open Delegation • The server needs to be able to recall the delegation • Recalling a delegation is done by a callback to the client
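
A toy model of the recall, with invented names: the server hands a write delegation to one client machine, which serves local opens itself; when another machine wants the file, the server first issues a callback so the holder flushes its local state and gives the delegation back.

```python
# Toy model of open delegation and recall via callback. Class and method
# names are illustrative, not the NFSv4 callback protocol.

class Client:
    def __init__(self, name):
        self.name = name
        self.delegated = False
        self.dirty = b""

    def recall(self):
        """Callback from the server: flush local state and give the delegation up."""
        self.delegated = False
        flushed, self.dirty = self.dirty, b""
        return flushed

class Server:
    def __init__(self):
        self.holder = None
        self.data = b""

    def open_for_write(self, client):
        if self.holder is not None and self.holder is not client:
            self.data += self.holder.recall()    # callback to the current holder
        self.holder = client
        client.delegated = True

a, b, srv = Client("A"), Client("B"), Server()
srv.open_for_write(a)
a.dirty = b"local writes on A"        # handled entirely on machine A
srv.open_for_write(b)                 # server recalls A's delegation first
print(srv.data)                       # b"local writes on A"
```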

  40. Server-side Replication • Replicating files near the server or across different servers • Less popular than caching • Mostly for fault tolerance rather than performance (geo-replication across data centers is popular)

  41. §6.6 Fault Tolerance and Security • Replication over fault-tolerant server groups is the most popular technique • Some other special issues, e.g., Byzantine failures, are handled with dedicated techniques

  42. Security in Distributed File Systems • Centralized: the servers usually handle authentication and access control • As in NFS • Authentication by a separate service, such as Kerberos • Authorization (access control) by the file servers • Key drawback: poor scalability • Decentralized: better scalability

  43. Security in NFS • Mainly through secure communication (secure RPCs)

  44. Authentication Methods in NFS • System authentication • The client simply passes its user ID and group ID to the server, in plaintext • Effectively no authentication at all • Diffie-Hellman key exchange • a public-key cryptosystem used to establish a session key • used in older NFS versions • Kerberos • Better, and popular

  45. Kerberos-based Secure RPC in NFS • RPCSEC_GSS: a general security framework that can support different security mechanisms for setting up secure channels • LIPKEY: a public-key system that allows clients to be authenticated using a password, while servers are authenticated using a public key

  46. Kerberos • Basic idea: shared secrets • The user proves to the KDC who he is by logging on • The KDC generates a shared secret S between the client and the file server • (figure: the client tells the KDC/ticket server “Need to access fs”; the ticket server generates S and returns Kclient[S] to the client and Kfs[S] for the file server) • S is specific to the {client, fs} pair; a “short-term session key” with an expiration time (e.g., 8 hours)

  47. Kerberos Interactions • 1. Client → KDC: “Need to access fs”; the ticket server generates S and replies with Kclient[S] and ticket = Kfs[use S for client] • 2. Client → file server: ticket = Kfs[use S for client] plus an authenticator S[client, time]; the file server replies with S{time} • Why “time”: guards against replay attacks • This gives mutual authentication • The file server does not store S, which is specific to the {client, fs} pair • The client does not contact the ticket server every time it contacts fs
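
A toy walk-through of the two steps above, under heavy assumptions: "encryption" is simulated by a sealed-box helper that only checks which key is needed (no real cryptography), and the key and field names are illustrative, not the actual Kerberos message formats. The point is only who can open which message.

```python
import secrets, time

def enc(key, msg):  return ("sealed", key, msg)
def dec(key, box):
    assert box[0] == "sealed" and box[1] == key, "wrong key"
    return box[2]

k_client = "K_client"      # long-term key shared by the client and the KDC
k_fs     = "K_fs"          # long-term key shared by the file server and the KDC

# 1. Client -> KDC: "need to access fs". The KDC invents session key S and
#    returns it sealed for the client, plus a ticket sealed for the server.
S = secrets.token_hex(8)
for_client = enc(k_client, S)
ticket     = enc(k_fs, {"use_key": S, "for": "client"})

# 2. Client -> file server: the ticket plus an authenticator S[client, time].
S_client = dec(k_client, for_client)
authenticator = enc(S_client, {"who": "client", "time": time.time()})

# The file server opens the ticket with K_fs, learns S, checks the timestamp
# (replay protection), and answers with S{time} to prove it also knows S.
S_server = dec(k_fs, ticket)["use_key"]
auth = dec(S_server, authenticator)
assert time.time() - auth["time"] < 300          # reject stale/replayed requests
reply = enc(S_server, {"time": auth["time"]})    # mutual authentication
print(dec(S_client, reply))
```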

  48. Access Control (Authorization) in NFS • ACL file attribute • A list of access control entries • Each entry specifies the access rights for a specific user or group
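
A hedged sketch of what such a list and a permission check look like; the field names are simplified relative to real NFSv4 ACL entries (which also carry entry types, flags, and access masks), and the users and rights are invented for illustration.

```python
# Toy access-control list: each entry names a user or group and the rights
# it is allowed.

acl = [
    {"who": "user:alice",  "rights": {"read", "write"}},
    {"who": "group:staff", "rights": {"read"}},
    {"who": "everyone",    "rights": set()},
]

def allowed(user, groups, wanted, acl):
    """Grant access if any entry matching the caller carries the requested right."""
    identities = {f"user:{user}", *(f"group:{g}" for g in groups), "everyone"}
    return any(wanted in e["rights"] for e in acl if e["who"] in identities)

print(allowed("alice", ["staff"], "write", acl))  # True
print(allowed("bob",   ["staff"], "write", acl))  # False (group only has read)
```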

  49. Access Control in NFS • Figure 11-30. The various kinds of users and processes distinguished by NFS with respect to access control.

  50. Decentralized Authentication • Secure File System (SFS) in combination with decentralized authentication servers • The basic idea is simple • Allow a user to specify that a remote user has certain privileges on his files • For example, Alice can specify that “Bob, whose details can be found at X,” has certain privileges • The authentication server that handles Alice’s credentials can then contact server X to get information on Bob • Users must be globally known to all authentication servers
