Network File Systems NFS AFS SMB
NFS General http://en.wikipedia.org/wiki/Network_file_system
Network File System (NFS) • Any computer file system that supports sharing of files and other resources as persistent storage over a computer network • The first file servers were developed in the 1970s • In 1985 Sun Microsystems created the Network File System (NFS) • Became the first widely used distributed file system • Other notable distributed file systems • Andrew File System (AFS) • Server Message Block (SMB) • Also known as CIFS
Distribution • Distributed File System (DFS) • Network file system with: • Clients, servers, and storage devices • Spread throughout the machines of a distributed system or intranet • Service activity occurs across the network • System has multiple and independent storage devices • Some DFS servers run on dedicated machines • On others, a machine can be both a server and a client • A DFS can be implemented: • As part of a distributed operating system • By a software layer • Managing the communication between conventional operating systems and file systems
Distribution • Distinctive feature of a DFS: • System with many and autonomous clients and servers
Transparency • A DFS should appear to its users to be a conventional, centralized file system • Multiplicity and dispersion of its servers and storage devices should be invisible • Client interface used by programs should not distinguish between local and remote files • The DFS locates the files and arranges the transport of data
Performance • Most important performance measurement of a DFS: • Amount of time needed to satisfy service requests • Typical performance constraints: • Conventional systems: • Disk-access time • Small amount of CPU-processing time • DFS: • Additional overhead due to the distributed structure • Includes: • Time to deliver the request to a server • Time to deliver the response to the client • For each direction, a CPU overhead of running the communication protocol software
Performance • The performance of a DFS can be viewed as one dimension of its transparency • Ideally, it would be comparable to that of a conventional file system
Concurrent file updates • A DFS should allow multiple client processes on multiple machines to access and update the same files • Updates to the file from one client should not interfere with access and updates from other clients • Concurrency control or locking may be: • Built into the file system • Provided by an add-on protocol
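Concurrency control from the client side can be as simple as taking an advisory lock before touching a shared file. A minimal sketch on a Linux client, assuming hypothetical mount point and file names; how flock() locks are propagated to the server depends on the NFS version and client implementation:

  # Take an exclusive advisory lock, run the update, then release the lock.
  flock -x /mnt/shared/report.lock -c 'cat new-rows.csv >> /mnt/shared/report.csv'

NFSv2 and v3 clients traditionally rely on the separate NLM side-band protocol for byte-range locks, while NFSv4 builds locking into the core protocol.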
Sample Network File Systems • Client-server file systems • 9P • Amazon S3 • Coda • Secure File System (SFS) • Self-certifying File System (SFS)
Sample Network File Systems • Distributed file systems • Ceph • Andrew File System (AFS), OpenAFS • DCE Distributed File System • Remote file access protocol used with the Distributed Computing Environment • Google File System • File system that is optimized for Google's core storage needs • Hadoop • Java software framework that supports distributed applications running on large clusters of commodity computers • Kosmos FS • High-performance distributed file system, also integrated with Hadoop • Lustre • Distributed file system for large-scale cluster computing • Microsoft Distributed File System • Set of client and server services for organizing many distributed SMB file shares into a distributed file system • GlusterFS • Platform for developing clustering applications • Parallel Virtual File System • Parallel file system that focuses on high-performance access to large data sets
NFS Protocol http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29
Network File System Protocol • Network file system protocol: • Allows a user on a client computer to access files over a network • As if the network devices were attached to its local disks • Originally developed by Sun Microsystems in 1984 • NFS builds on the Open Network Computing Remote Procedure Call (ONC RPC) system • The Network File System protocol is specified in • RFC 1094, RFC 1813, and RFC 3530 (which obsoletes RFC 3010)
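Because NFS is layered on ONC RPC, the services an NFS server exposes can be listed through its RPC portmapper. A small check from any client, using a hypothetical server name:

  # List the ONC RPC programs registered on the server; an NFS server
  # typically shows nfs, mountd, and (for v3) nlockmgr among them.
  rpcinfo -p nfsserver.example.com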
Versions and variations • Version 1 • Sun used only for in-house experimental purposes • Did not release it to the public
Versions and variations • Version 2 of the protocol • Originally operated entirely over UDP • Defined in RFC 1094 • March 1989 • Designers meant to keep the protocol stateless • Locking (for example) implemented outside of the core protocol
Versions and variations • Version 3 (RFC 1813, June 1995) added: • Support for 64-bit file sizes and offsets • Handle files larger than 4 gigabytes (GB) • Support for asynchronous writes on the server • Improve write performance • Additional file attributes in many replies • Avoid the need to re-fetch them • READDIRPLUS operation • Get file handles and attributes along with file names when scanning a directory • Assorted other improvements
Versions and variations • At the introduction of Version 3 • Vendor support for TCP as a transport-layer protocol began increasing • Several vendors had already added support for NFS Version 2 with TCP as a transport • Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3 • Using TCP as a transport made using NFS over a WAN more feasible
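On most modern clients the protocol version and transport are simply mount options; a sketch with hypothetical host and path names:

  # Mount an NFSv3 export explicitly over TCP (UDP was the original transport).
  mount -t nfs -o vers=3,proto=tcp nfsserver.example.com:/export/data /mnt/data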
Versions and variations • Version 4 became the first version developed with the IETF • After Sun Microsystems handed over the development of the NFS protocols
Versions and variations • Version 4 includes: • Performance improvements • Mandates strong security • Introduces a stateful protocol • Defined by • RFC 3010, December 2000 • Revised in RFC 3530, April 2003
Versions and variations • Various side-band protocols have become associated with NFS, including: • The byte-range advisory Network Lock Manager (NLM) protocol • Added to support UNIX System V file-locking APIs • The remote quota reporting (RQUOTAD) protocol • Allow NFS-users to view their data-storage quotas on NFS servers
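Both side-band services register with the same portmapper as NFS itself, so a client can check for them directly; a sketch with a hypothetical server name:

  # nlockmgr is the NLM lock manager, rquotad the remote quota daemon.
  rpcinfo -p nfsserver.example.com | grep -E 'nlockmgr|rquotad'
  # On the client, quota(1) consults rquotad for NFS-mounted filesystems.
  quota -s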
Versions and variations • WebNFS • Allows NFS to • Integrate more easily into Web-browsers • Enable operation through firewalls • An extension to Version 2 and Version 3 • Sun Microsystems open sourced their WebNFS implementation • via https://yanfs.dev.java.net/
Platforms • NFS is most commonly used with UNIX systems • Other software platforms can also use the protocol • Mac OS, Microsoft Windows, Novell NetWare, and IBM AS/400
Platforms • Alternative remote file access protocols include • Server Message Block (SMB, also known as CIFS) protocol • Apple Filing Protocol (AFP) • NetWare Core Protocol (NCP) • OS/400 File Server file system (QFileSvr.400) • SMB and NetWare Core Protocol (NCP) are most common on Microsoft Windows • AFP is most common on Macintosh systems • QFileSvr.400 is most common on AS/400 systems
Typical implementation • Assume a Unix-style scenario: one machine (client) requires access to data stored on another machine (NFS server) • Server implements NFS daemon processes (running by default as nfsd) in order to make its data generically available to clients • Server administrator determines what to make available • Exporting the names and parameters of directories • Typically using the /etc/exports configuration file and the exportfs command • Server security administration recognizes and approves validated clients • Server network configuration ensures that appropriate clients can negotiate with it through any firewall system • Client machine requests access to exported data • Issue a mount command • Users on the client machine then view and interact with mounted filesystems on the server within the parameters permitted • Note: automating the NFS mounting process is typical • E.g. using /etc/fstab and/or automounting facilities
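A minimal sketch of this workflow on a typical Linux server and client, using hypothetical host names, paths, and subnet:

  # Server: /etc/exports entry granting one subnet read-write access
  /srv/projects  192.168.1.0/24(rw,sync,no_subtree_check)

  # Server: (re-)export everything listed in /etc/exports
  exportfs -ra

  # Client: one-off mount of the exported directory
  mount -t nfs nfsserver.example.com:/srv/projects /mnt/projects

  # Client: /etc/fstab entry that automates the mount at boot
  nfsserver.example.com:/srv/projects  /mnt/projects  nfs  defaults,_netdev  0 0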
Protocol development versus competing protocols • 1980s • NFS and ONC figured prominently in the network-computing war between Sun Microsystems and Apollo Computer, and later in the UNIX wars (ca. 1987–1996) between AT&T and Sun on one side, and Digital Equipment, HP, and IBM on the other. • During the development of the ONC protocol (called SunRPC at the time), only Apollo's Network Computing System (NCS) offered comparable functionality. Two competing camps formed over fundamental differences in the two remote procedure call systems. Arguments focused on the method of data encoding: ONC's External Data Representation (XDR) always rendered integers in big-endian order, even if both peers of the connection had little-endian machine architectures, whereas NCS's method attempted to avoid byte-swapping whenever two peers shared a common endianness. An industry group called the Network Computing Forum formed (March 1987) in an ultimately unsuccessful attempt to reconcile the two network-computing environments. • Later, Sun and AT&T announced that the two firms would jointly develop AT&T's next version of UNIX: System V Release 4. This caused many of AT&T's other UNIX System V licensees to become concerned that it would put Sun in an advantaged position, and it ultimately led to Digital Equipment, HP, IBM, and others forming the Open Software Foundation (OSF) in 1988. Ironically, Sun and AT&T had previously competed over Sun's NFS versus AT&T's Remote File System (RFS), and the quick adoption of NFS over RFS by Digital Equipment, HP, IBM, and many other computer vendors tipped the majority of users in favor of NFS. • OSF solicited proposals for various technologies, including the remote procedure call (RPC) system and the remote file access protocol. In the end, the proposals chosen for these two requirements, called respectively the Distributed Computing Environment (DCE) and the Distributed File System (DFS), won over Sun's proposed ONC and NFS. DCE derived from a suite of technologies, including NCS and Kerberos; DFS used DCE RPC and derived from AFS.
Protocol development versus competing protocols • 1990s • Sun Microsystems and the Internet Society (ISOC) reached an agreement to cede "change control" of ONC RPC so that ISOC's engineering-standards body, the Internet Engineering Task Force (IETF), could publish standards documents (RFCs) documenting the ONC RPC protocols and could extend ONC RPC. OSF attempted to make DCE RPC an IETF standard, but ultimately proved unwilling to give up change control. Later, the IETF chose to extend ONC RPC by adding a new authentication flavor, RPCSEC_GSS, in order to meet its requirement that protocol standards have adequate security. • Later, Sun and ISOC reached a similar agreement giving ISOC change control over NFS, although the contract was written carefully to exclude NFS version 2 and version 3. Instead, ISOC gained the right to add new versions to the NFS protocol, which resulted in the IETF specifying NFS version 4 in 2003. • 2000s • By the 21st century, neither DFS nor AFS had achieved any major commercial success compared to CIFS or NFS. IBM, which had previously acquired Transarc, the primary commercial vendor of DFS and AFS, donated most of the AFS source code to the free software community in 2000. The OpenAFS project lives on. In early 2005, IBM announced the end of sales for AFS and DFS.
Present • NFSv4.1 presents new opportunities for the NAS community • Primary addition is Parallel NFS (pNFS) • A method of introducing data access parallelism • Protocol defines a method of separating the meta-data (names and attributes) of a filesystem from the location of the file data • Goes beyond simple name/data separation • Defines method of striping the data amongst a set of data servers • Very different from the traditional NFS server • Holds the names of files and their data under the single umbrella of the server • Products exist that are multi-node NFS servers • Limited participation by the client in separation of meta-data and data • NFSv4.1 client can be enabled to be a direct participant in the exact location of file data • Avoid sole interaction with the single NFS server when moving data
Present • NFSv4.1 pNFS server: • A collection or community of server resources or components • Community members are controlled by the meta-data server • pNFS client still accesses a single meta-data server for traversal or interaction with the namespace • Client moves data to and from the server • Directly interact with the set of data servers belonging to the pNFS server community
Present • NFSv4.1 provides: • Sessions • Directory Delegation and Notifications • Multi-server Namespace • ACL/SACL/DACL • Retention Attributes • SECINFO_NO_NAME
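From the client side, requesting the new minor version is usually just a mount option; whether pNFS layouts are then used depends on the server. A sketch with hypothetical names:

  # Mount with NFSv4.1; a pNFS-capable server can hand the client layouts
  # that let it move file data to and from the data servers directly.
  mount -t nfs -o vers=4.1 nfsserver.example.com:/export/data /mnt/data

  # Show the negotiated version and options for mounted NFS filesystems.
  nfsstat -m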
AFS http://en.wikipedia.org/wiki/Andrew_File_System
Andrew File System (AFS) • Distributed networked file system • Developed by Carnegie Mellon University • Part of the Andrew Project • Named for • Andrew Carnegie • Andrew Mellon • Primarily used in distributed computing
Features • AFS has several benefits over traditional networked file systems • Particularly in the areas of security and scalability • Not uncommon for enterprise AFS cells to exceed fifty thousand clients • Uses Kerberos for authentication • Implements access control lists on directories for users and groups • Each client caches files on the local filesystem for increased speed on subsequent requests for the same file • Allows limited filesystem access in the event of a server crash or a network outage
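A sketch of the usual OpenAFS client workflow for authentication and directory ACLs, with hypothetical realm, cell, user, and path names:

  # Obtain a Kerberos ticket, then convert it into an AFS token.
  kinit alice@EXAMPLE.COM
  aklog

  # Show current tokens and the ACL on a directory.
  tokens
  fs listacl /afs/example.com/project

  # Grant a user full rights: read, lookup, insert, delete, write, lock, administer.
  fs setacl /afs/example.com/project bob rlidwka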
Features • Read and write operations on an open file are directed only to the locally cached copy • When a modified file is closed • Only the changed portions copied back to the file server • Cache consistency maintained by callback • When a file is cached • Server makes a note of this • Will inform the client if the file is updated by someone else • After any client, server, or network failure (including a time-out) • Callbacks are discarded and must be re-established • Re-establishing a callback involves: • Status check • Does not require re-reading the file itself
Features • Consequence of the whole-file locking strategy • AFS does not support large shared databases or record updating within files shared between client systems • Deliberate design decision • Based on the perceived needs of the university computing environment • This led to the use of a single file per message in the original email system for the Andrew Project, the Andrew Message System, rather than the more customary single file per mailbox
Features • A significant feature of AFS is the volume • Tree of files • Sub-directories • AFS mountpoints • Links to other AFS volumes • Volumes are created by administrators • Linked at a specific named path in an AFS cell • Once created, users of the filesystem may create directories and files as usual without concern for the physical location of the volume • A volume may have a quota assigned to it in order to limit the amount of space consumed • As needed, AFS administrators can move that volume to another server and disk location without the need to notify users • Moves can occur even while files in that volume are being used
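Volume administration is done with the vos and fs command suites; a sketch with hypothetical server, partition, volume, and path names:

  # Create a volume on a file server partition and mount it in the namespace.
  vos create fs1.example.com /vicepa proj.web
  fs mkmount /afs/example.com/project/web proj.web

  # Assign a quota (in kilobytes) via the volume's mount point.
  fs setquota /afs/example.com/project/web -max 500000

  # Move the volume to another server and partition, transparently to users.
  vos move proj.web fs1.example.com /vicepa fs2.example.com /vicepb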
Features • AFS volumes can be replicated to read-only cloned copies • When accessing files in a read-only volume, a client system will retrieve data from a particular read-only copy • If a copy becomes unavailable, clients will look for any remaining copies • Users of that data are unaware of the location of the read-only copy • Administrators can create and relocate copies as needed • AFS command suite guarantees that all read-only volumes contain exact copies of the original read-write volume at the time the read-only copy was created
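Replication follows the same pattern: define read-only sites, then release the read-write volume to them (hypothetical names again):

  # Register read-only sites for the volume, then push a consistent
  # read-only snapshot of the read-write volume to every site.
  vos addsite fs2.example.com /vicepa proj.web
  vos addsite fs3.example.com /vicepa proj.web
  vos release proj.web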
Features • The file name space on an Andrew workstation is partitioned into a shared and local name space • Shared name space identical on all workstations • Usually mounted as /afs on the Unix filesystem • Local name space unique to each workstation • Contains only temporary files needed for workstation initialization and symbolic links to files in the shared name space
Features • Andrew File System • Heavily influenced Version 4 of Sun Microsystems' NFS • A variant of AFS was adopted by the Open Software Foundation • Distributed File System (DFS) • Part of their Distributed Computing Environment
Implementations • There are three major implementations • Transarc (IBM) • OpenAFS • Arla • The Transarc software is losing support and is deprecated • AFS (version two) is also the predecessor of the Coda file system • A fourth implementation has existed in the Linux kernel source code since at least version 2.6.10 • Committed by Red Hat • A fairly simple implementation • Still in its early stages of development • Incomplete