Enhancements to NFS 王信富 R88725039 2000/11/6
Introduction • File system modules • Directory module • File module • Access control module • File access module • Block module
Introduction • Distributed file system requirements • Transparency • Access transparency • Location transparency • Scaling transparency • Consistency • Security
Mobility enhancement • Mobile file system (MFS)
Mobile file system (MFS) • Client modules
Mobile file system (MFS) • Proxy modules • Source: Maria-Teresa Segarra, IRISA Research Institute, Campus de Beaulieu
Mobility enhancement cont. • NFS/M • Enables the mobile user to access information regardless of • the location of the user • the state of the communication channel • the state of the data server
NFS/M modules • Cache Manager (CM) • All file system operations on cached objects in the local disk cache are managed by the CM • It functions only in the connected phase
NFS/M modules • Proxy Server (PS) • Emulates the functionalities of the remote NFS server by using the cached file system objects in the local disk cache • It functions in the disconnected phase
NFS/M modules • Reintegrator (RI) • Propagates the changes made to data objects in the local disk cache during the disconnected period back to the NFS server • Three tasks for the RI • Conflict detection • Update propagation • Conflict resolution
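The reintegration step can be pictured with a minimal sketch in Python; the function name, the `FakeServer` stand-in, and the per-file version numbers are assumptions made for illustration, not part of NFS/M. Updates logged while disconnected are replayed against the server, a version mismatch signals a conflict, and a resolution hook decides what to do with it.

```python
# Illustrative sketch only: replay updates logged while disconnected,
# detect write/write conflicts by comparing the version the client cached
# against the server's current version, and propagate the rest.
def reintegrate(log, server, resolve):
    """log: list of (path, cached_version, new_data) recorded while disconnected."""
    for path, cached_version, new_data in log:
        if server.get_version(path) != cached_version:   # someone else changed the file
            resolve(path, new_data, server)               # conflict resolution hook
        else:
            server.write(path, new_data)                  # update propagation

class FakeServer:                                         # trivial stand-in for the NFS server
    def __init__(self):
        self.files, self.versions = {}, {}
    def get_version(self, path):
        return self.versions.get(path, 0)
    def write(self, path, data):
        self.files[path] = data
        self.versions[path] = self.get_version(path) + 1

srv = FakeServer()
srv.write("/doc.txt", "v1")                               # server copy is at version 1
log = [("/doc.txt", 1, "edited while offline")]           # client cached it at version 1
reintegrate(log, srv, resolve=lambda p, d, s: print("conflict on", p))
print(srv.files["/doc.txt"])                              # -> edited while offline
```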
NFS/M modules • Data Prefetcher (DP) • Improves data access performance • Data prefetching techniques can be classified into two categories • Informed prefetching • Predictive prefetching
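As a rough illustration of the difference, predictive prefetching can be sketched as a tiny successor predictor that learns which file tends to be opened after which; informed prefetching would instead rely on hints supplied by the application. The `SuccessorPredictor` class below is invented for this sketch and is not a component of NFS/M.

```python
# Illustrative sketch of *predictive* prefetching: learn which file tends to
# follow which, then prefetch the most likely successor of the file just read.
from collections import defaultdict, Counter

class SuccessorPredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)   # path -> counts of what came next
        self.last = None

    def record(self, path):
        if self.last is not None:
            self.followers[self.last][path] += 1
        self.last = path

    def predict(self):
        if self.last and self.followers[self.last]:
            return self.followers[self.last].most_common(1)[0][0]
        return None

p = SuccessorPredictor()
for path in ["a.c", "a.h", "a.c", "a.h", "main.c", "a.c"]:
    p.record(path)
print(p.predict())   # -> "a.h": worth prefetching into the local disk cache
```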
Phases of the NFS/M • The NFS/M client maintains an internal state, termed the phase, which indicates how file system service is provided under different conditions of network connectivity • Three phases: • Connected phase • Disconnected phase • Reintegration phase
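A minimal sketch of how the three phases could drive the modules described above; the module names (CM, PS, RI) follow the slides, but the `NfsMClient` class and its methods are invented for illustration. Connected reads go through the Cache Manager, disconnected reads are emulated by the Proxy Server from the local disk cache, and reconnecting triggers the Reintegrator before returning to the connected phase.

```python
# Illustrative sketch of phase-based dispatch in an NFS/M-style client.
CONNECTED, DISCONNECTED, REINTEGRATION = "connected", "disconnected", "reintegration"

class NfsMClient:
    def __init__(self, cache_manager, proxy_server, reintegrator):
        self.cm, self.ps, self.ri = cache_manager, proxy_server, reintegrator
        self.phase = CONNECTED

    def read(self, path):
        if self.phase == CONNECTED:
            return self.cm.read(path)   # cache-aware read against the NFS server
        return self.ps.read(path)       # emulated entirely from the local disk cache

    def on_disconnect(self):
        self.phase = DISCONNECTED

    def on_reconnect(self):
        self.phase = REINTEGRATION
        self.ri.reintegrate()           # conflict detection, propagation, resolution
        self.phase = CONNECTED
```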
Phases of the NFS/M • Source: John C.S. Lui, Oldfield K.Y. So, T.S. Tam, Department of Computer Science & Engineering
Case: Wireless Andrew • It builds on the university's wired network infrastructure, which currently provides 10/100 Mb/s Ethernet service • To supply high-speed wireless service to the campus, Lucent WaveLAN equipment has been installed • For wireless access off campus, or otherwise out of range of the WaveLAN network, cellular digital packet data (CDPD) is used
Case: Wireless Andrew • Source: "Wireless Andrew [Mobile Computing for University Campus]," IEEE, 1999
Scalability of NFS Student: 朱漢農 R88725032 2000/11/6
NFS - Scalability • AFS - Scalability • NFS Enhancement - Spritely NFS, NQNFS, WebNFS, NFS Version 4 • AFS Enhancement - RAID, LFS, xFS • Frangipani
NFS - Scalability • The performance of a single server can be increased by the addition of processors, disks and controllers. • When the limits of that process are reached, additional servers must be installed and the filesystems must be reallocated between them.
NFS - Scalability (cont’d) • The effectiveness of that strategy is limited by the existence of ‘hot spot’ files. • When loads exceed the maximum performance, a distributed file system that supports replication of updatable files, or one that reduces the protocol traffic by the caching of whole files, may offer a better solution.
AFS - Scalability • The differences between AFS and NFS are attributable to the identification of scalability as the most important design goal. • The key strategy is the caching of whole files in client nodes.
AFS - Scalability (cont’d) • Whole-file serving: The entire contents of directories and files are transmitted to client computers by AFS servers. • Whole-file caching: Once a copy of a file has been transferred to a client computer, it is stored in a cache on the local disk. • The cache is permanent, surviving reboots of the client computer.
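A minimal sketch of whole-file caching, assuming a cache directory on the client's local disk and a `fetch_whole_file` callback standing in for the server's fetch operation (both are assumptions of this sketch; callback-based cache validation is omitted):

```python
# Illustrative sketch of AFS-style whole-file caching: the first open fetches
# the entire file and stores it under a cache directory on the local disk, so
# the copy survives client reboots.
import hashlib, os

CACHE_DIR = "/tmp/afs_cache_sketch"          # assumed location of the local cache

def cached_open(path, fetch_whole_file):
    os.makedirs(CACHE_DIR, exist_ok=True)
    local = os.path.join(CACHE_DIR, hashlib.sha1(path.encode()).hexdigest())
    if not os.path.exists(local):            # miss: transfer the whole file once
        with open(local, "wb") as f:
            f.write(fetch_whole_file(path))
    return open(local, "rb")                 # hit: served from the local disk copy

f = cached_open("/afs/cs/user/readme", lambda p: b"hello from the file server")
print(f.read())                              # later opens reuse the on-disk copy
```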
NFS enhancement - Spritely NFS • is an implementation of the NFS protocol with the addition of open and close calls. • The parameters of the Sprite open operation specify a mode and include counts of the number of local processes that currently have the file open for reading and for writing. • Spritely NFS implements a recovery protocol that interrogates a list of clients to recover the full open files table.
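The extra state Spritely NFS keeps on the server can be sketched roughly as a per-file table of open counts; the `OpenTable` class and its cacheability rule below are simplifications invented for illustration, not the actual protocol:

```python
# Illustrative sketch: per-file counts of clients holding the file open for
# reading and writing, used to decide whether client caching is currently safe.
from collections import defaultdict

class OpenTable:
    def __init__(self):
        self.readers = defaultdict(int)
        self.writers = defaultdict(int)

    def open(self, path, n_read, n_write):
        """Returns True if the opening client may cache the file."""
        self.readers[path] += n_read
        self.writers[path] += n_write
        w, r = self.writers[path], self.readers[path]
        return w == 0 or (w == 1 and r == 0)   # no writers, or one lone writer

    def close(self, path, n_read, n_write):
        self.readers[path] -= n_read
        self.writers[path] -= n_write

table = OpenTable()
print(table.open("/a", n_read=1, n_write=0))   # True: read-only, caching allowed
print(table.open("/a", n_read=0, n_write=1))   # False: writer appears, caching disabled
```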
NFS enhancement - NQNFS • maintains similar client-related state concerning open files, but it uses leases to aid recovery after a server crash. • Callbacks are used in a similar manner to Spritely NFS to request clients to flush their caches when a write request occurs.
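Lease-based consistency can be sketched as a server that grants time-limited leases which clients must renew; the duration and the `LeaseServer` interface below are assumptions of the sketch rather than NQNFS's actual message formats:

```python
# Illustrative sketch: the server grants a time-limited lease per file; the
# client may cache only while the lease is valid, and after a server crash no
# per-client state needs recovery because every outstanding lease simply expires.
import time

LEASE_SECONDS = 30                                # made-up lease duration

class LeaseServer:
    def __init__(self):
        self.leases = {}                          # path -> (client, expiry time)

    def acquire(self, client, path):
        holder = self.leases.get(path)
        if holder and holder[0] != client and holder[1] > time.time():
            return None                           # another client holds a valid lease
        expiry = time.time() + LEASE_SECONDS
        self.leases[path] = (client, expiry)
        return expiry                             # client caches until this time, then renews

srv = LeaseServer()
print(srv.acquire("clientA", "/f"))               # granted: an expiry timestamp
print(srv.acquire("clientB", "/f"))               # None until clientA's lease lapses
```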
NFS enhancement - WebNFS • makes it possible for application programs to become clients of NFS servers anywhere in the Internet (using the NFS protocol directly) • enables implementing Internet applications that share data directly, such as multi-user games or clients of large dynamic databases.
NFS enhancement - NFS version 4 • will include the features of WebNFS • the use of callbacks or leases to maintain consistency • on-the-fly recovery • Scalability will be improved by using proxy servers in a manner analogous to their use in the Web.
AFS enhancement • RAID • Log-structured file storage • xFS: implements a software RAID storage system, striping file data across disks on multiple computers, together with a log-structuring technique.
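The striping idea can be sketched in a few lines: a file's blocks are placed round-robin across the disks of several machines. Parity blocks and the log-structured layout are omitted, and the block size and server names are made up for the example.

```python
# Illustrative sketch of striping file data across several machines,
# in the spirit of xFS's software RAID.
BLOCK_SIZE = 4096

def stripe(data, servers):
    """Return (server, block_index, chunk) placements for one file's data."""
    placements = []
    for i in range(0, len(data), BLOCK_SIZE):
        block_no = i // BLOCK_SIZE
        server = servers[block_no % len(servers)]   # round-robin over machines
        placements.append((server, block_no, data[i:i + BLOCK_SIZE]))
    return placements

for server, block_no, chunk in stripe(b"x" * 10000, ["node1", "node2", "node3"]):
    print(server, block_no, len(chunk))             # blocks alternate across nodes
```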
Frangipani • A highly scalable distributed file system developed and deployed at the Digital Systems Research Center.
Frangipani (cont’d) • The responsibility for managing files and associated tasks is assigned to hosts dynamically. • All machines see a unified file name space with coherent access to shared updatable files.
Frangipani - System Structure • Two totally independent layers: 1. Petal distributed virtual disk system - Data is stored in a log-structured and striped format in the virtual disk store - Providing a storage repository - Providing highly available storage that can scale in throughput and capacity as resources are added to it - Petal implements data replication for high availability, obviating the need for Frangipani to do so.
Frangipani - System Structure (cont’d) 2. Frangipani server modules. - Providing names, directories, and files - Providing a file system layer that makes Petal useful to applications while retaining and extending its good properties.
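A rough sketch of the two-layer split: a Petal-like virtual disk that knows only about replicated block reads and writes, and a Frangipani-like layer that maps file names onto blocks. The interfaces are invented for this sketch and are much simpler than Petal's real RPC interface.

```python
# Illustrative sketch of the two independent layers.
class VirtualDisk:
    def __init__(self, replicas=2):
        self.stores = [dict() for _ in range(replicas)]   # one dict per replica

    def write_block(self, block_no, data):
        for store in self.stores:                         # replicate every write
            store[block_no] = data

    def read_block(self, block_no):
        for store in self.stores:                         # any surviving replica will do
            if block_no in store:
                return store[block_no]
        raise KeyError(block_no)

class FileLayer:
    def __init__(self, disk):
        self.disk, self.index, self.next_block = disk, {}, 0

    def create(self, name, data):
        self.disk.write_block(self.next_block, data)      # file contents go to the disk layer
        self.index[name] = self.next_block                # the "directory" maps name -> block
        self.next_block += 1

    def read(self, name):
        return self.disk.read_block(self.index[name])

fs = FileLayer(VirtualDisk())
fs.create("/etc/motd", b"welcome")
print(fs.read("/etc/motd"))
```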
Frangipani - Logging and Recovery • uses write-ahead redo logging of metadata to simplify failure recovery and improve performance. • User data is not logged. • Each Frangipani server has its own private log in Petal. • As long as the underlying Petal volume remains available, the system tolerates an unlimited number of Frangipani server failures.
Frangipani - Logging and Recovery • Frangipani’s locking protocol ensures that updates requested to the same data by different servers are serialized. • Frangipani ensures that recovery applies only updates that were logged since the server acquired the locks that cover them, and for which it still holds the locks.
Frangipani - Logging and Recovery • Recovery never replays a log describing an update that has already been completed. • For each block that a log record updates, the record contains a description of the changes and the new version number. • During recovery, the changes to a block are applied only if the block version number is less than the record version number.
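The logging and recovery rules on these slides can be condensed into a small sketch, assuming an in-memory log and block store (the record format and names are illustrative): each redo record carries the new version of the block it updates, and recovery re-applies a record only when the block is still at an older version, so completed updates are never replayed.

```python
# Illustrative sketch of redo logging with per-block version numbers.
def log_update(log, block_no, new_data, new_version):
    log.append({"block": block_no, "data": new_data, "version": new_version})

def recover(log, blocks, versions):
    for rec in log:                                   # replay in log order
        if versions.get(rec["block"], 0) < rec["version"]:
            blocks[rec["block"]] = rec["data"]        # update was lost: redo it
            versions[rec["block"]] = rec["version"]

log, blocks, versions = [], {}, {}
log_update(log, 7, b"new dir entry", new_version=2)
blocks[7], versions[7] = b"new dir entry", 2          # this update reached disk
log_update(log, 9, b"new inode", new_version=5)       # this one never made it
recover(log, blocks, versions)
print(blocks[9], versions[7])                         # b'new inode' 2 (block 7 untouched)
```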
Frangipani - Logging and Recovery • Frangipani reuses freed metadata blocks only to hold new metadata. • At any time, only one recovery demon is trying to replay the log region of a specific server. • If a sector is damaged such that reading it returns a CRC error, Petal’s built-in replication can recover it.
Frangipani - Logging and Recovery • In both local UNIX file systems and Frangipani, a user can get better consistency semantics by calling fsync at suitable checkpoints.
Frangipani - Synchronization and Cache Coherence • Frangipani uses multiple-reader/single-writer locks to implement the necessary synchronization. • When the lock service detects conflicting lock requests, the current holder of the lock is asked to release or downgrade it to remove the conflict.
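A minimal sketch of this callback-driven multiple-reader/single-writer scheme; the `Lock` class and its `revoke_callback` are invented for illustration, whereas the real lock service is distributed across lock servers and clerks.

```python
# Illustrative sketch: a conflicting request makes the lock service call back
# the current writer, which writes dirty data back and releases its lock.
class Lock:
    def __init__(self):
        self.readers = set()
        self.writer = None                       # (owner, revoke_callback)

    def acquire_read(self, owner):
        if self.writer and self.writer[0] != owner:
            self.writer[1]()                     # ask the writer to flush and release
            self.writer = None
        self.readers.add(owner)

    def acquire_write(self, owner, revoke_callback):
        if self.writer and self.writer[0] != owner:
            self.writer[1]()
        self.readers.clear()                     # readers drop their cached copies
        self.writer = (owner, revoke_callback)

lock = Lock()
lock.acquire_write("serverA", revoke_callback=lambda: print("serverA: write back, release lock"))
lock.acquire_read("serverB")                     # triggers serverA's callback first
```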
Frangipani - Synchronization and Cache Coherence • When a Frangipani server crashes, the locks that it owns cannot be released until appropriate recovery actions have been performed. • When a Frangipani server's lease expires, the lock service will ask the clerk on another machine to perform recovery and release all locks belonging to the crashed server.
Frangipani - Synchronization and Cache Coherence • Petal can continue operation in the face of network partitions, as long as a majority of the Petal servers remain up and in communication. • The lock service continues operation as long as a majority of lock servers are up and in communication.
Frangipani - Synchronization and Cache Coherence • If a Frangipani server is partitioned away from the lock service, it will be unable to renew its lease. • If a Frangipani server is partitioned away from Petal, it will be unable to read or write the virtual disk.
Frangipani - Adding Servers • The new server needs to be told which Petal virtual disk to use and where to find the lock service. • The new server contacts the lock service to obtain a lease, and determines which portion of the log space to use from the lease identifier.
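The start-up step can be sketched as deriving a disjoint per-server log region from the small integer lease identifier; the region size and the mapping below are assumptions of the sketch, not Frangipani's actual layout constants.

```python
# Illustrative sketch: map a lease identifier to a private log region in Petal.
LOG_REGION_SIZE = 64 * 1024 * 1024        # assumed size of each per-server log

def log_region_for(lease_id):
    """Map a small integer lease identifier to a disjoint log region."""
    start = lease_id * LOG_REGION_SIZE
    return (start, start + LOG_REGION_SIZE)

lease_id = 3                              # as returned by the lock service
print(log_region_for(lease_id))           # -> (201326592, 268435456)
```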