
Enhancements to NFS


  1. Enhancements to NFS 王信富 R88725039 2000/11/6

  2. Introduction • File system modules • Directory module • File module • Access control module • File access module • Block module

  3. Introduction • Distributed file system requirements • Transparency • Access transparency • Location transparency • Scaling transparency • Consistency • Security

  4. Sun NFS architecture

  5. Andrew file system architecture

  6. Mobility enhancement • Mobile file system (MFS)

  7. Mobile file system (MFS) • Client modules

  8. Mobile file system (MFS) • Proxy modules • Source: Maria-Teresa Segarra, IRISA Research Institute, Campus de Beaulieu

  9. Mobility enhancement cont. • NFS/M • Enables the mobile user to access information regardless of • the location of the user • the state of the communication channel • the state of the data server

  10. NFS/M architecture

  11. NFS/M modules

  12. NFS/M modules • Cache Manager (CM) • All file system operations on objects held in the local disk cache are managed by the CM • It functions only in the connected phase

  13. NFS/M modules • Proxy Server (PS) • Emulates the functionality of the remote NFS server using the file system objects held in the local disk cache • It functions in the disconnected phase

  14. NFS/M modules • Reintegrator (RI) • Propagates the changes made to data objects in the local disk cache during the disconnected period back to the NFS server • Three tasks for the RI • Conflict detection • Update propagation • Conflict resolution
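
As a rough illustration of the three tasks, the sketch below (Python, with hypothetical names such as DirtyEntry and server.get_version; not the actual NFS/M code) treats a cached object as conflicting when the server copy has moved past the version the cached copy was based on, and otherwise propagates the local change:

    from dataclasses import dataclass

    @dataclass
    class DirtyEntry:
        path: str
        base_version: int    # server version the cached copy was based on
        local_data: bytes    # data modified while disconnected

    def reintegrate(dirty_entries, server):
        # Replay disconnected updates; return the entries that conflict.
        conflicts = []
        for entry in dirty_entries:
            if server.get_version(entry.path) != entry.base_version:
                conflicts.append(entry)                     # conflict detection
            else:
                server.write(entry.path, entry.local_data)  # update propagation
        return conflicts   # conflict resolution is left to a separate policy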

  15. NFS/M modules • Data Prefetcher (DP) • Improves data access performance • Data prefetching techniques can be classified into two categories • Informed prefetching • Predictive prefetching
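
The two categories can be contrasted with a minimal sketch (Python; cache, fetch and the hint/history lists are assumed placeholders, not NFS/M interfaces):

    def informed_prefetch(cache, fetch, hints):
        # Informed prefetching: the application declares the blocks it will need.
        for block_id in hints:
            if block_id not in cache:
                cache[block_id] = fetch(block_id)

    def predictive_prefetch(cache, fetch, recent):
        # Predictive prefetching: guess from the observed access pattern; here
        # the guess is simply that sequential reads continue with the next block.
        if len(recent) >= 2 and recent[-1] == recent[-2] + 1:
            nxt = recent[-1] + 1
            if nxt not in cache:
                cache[nxt] = fetch(nxt)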

  16. NFS/M modules

  17. Phase of the NFS/M • The NFS/M client maintains an internal state, termed the phase, which indicates how file system service is provided under different conditions of network connectivity • Three phases: • Connected phase • Disconnected phase • Reintegration phase
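
A toy state machine makes the phase transitions concrete (illustrative Python only; the event names are assumptions, not NFS/M APIs):

    class NfsmClientPhase:
        def __init__(self):
            self.phase = "connected"

        def on_link_lost(self):
            if self.phase == "connected":
                self.phase = "disconnected"    # Proxy Server serves from the local cache

        def on_link_restored(self):
            if self.phase == "disconnected":
                self.phase = "reintegration"   # Reintegrator replays logged updates

        def on_reintegration_done(self):
            if self.phase == "reintegration":
                self.phase = "connected"       # Cache Manager resumes normal service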

  18. Phase of the NFS/M • John C.S. Lui, Oldfield K.Y. So, T.S. Tam, Department of Computer Science & Engineering

  19. Case: Wireless Andrew • It builds on the university’s wired network infrastructure, which currently provides 10/100 Mb/s Ethernet service • To supply high-speed wireless service to the campus, Lucent WaveLAN equipment has been installed • For wireless access off campus, or otherwise out of range of the WaveLAN network, cellular digital packet data (CDPD) is used

  20. Case: Wireless Andrew • Wireless Andrew [mobile computing for university campus], IEEE, 1999

  21. Reference URL • http://csep1.phy.ornl.gov/nw/node3.html • http://www.dqc.org/~chris/tcpip_ill/nfs_netw.htm • http://www.coda.cs.cmu.edu/ • http://www.uwsg.iu.edu/usail/network/nfs/overview.html • http://www.netapp.com/tech_library/nfsbook.html

  22. Scalability of NFS Student: 朱漢農 R88725032 2000/11/6

  23. NFS - Scalability • AFS - Scalability • NFS Enhancement - Spritely NFS, NQNFS, WebNFS, NFS version 4 • AFS Enhancement - RAID, LFS, xFS • Frangipani

  24. NFS - Scalability • The performance of a single server can be increased by the addition of processors, disks and controllers. • When the limits of that process are reached, additional servers must be installed and the filesystems must be reallocated between them.

  25. NFS - Scalability (cont’d) • The effectiveness of that strategy is limited by the existence of ‘hot spot’ files. • When loads exceed the maximum performance, a distributed file system that supports replication of updatable files, or one that reduces the protocol traffic by the caching of whole files, may offer a better solution.

  26. AFS - Scalability • The differences between AFS and NFS are attributable to the identification of scalability as the most important design goal. • The key strategy is the caching of whole files in client nodes.

  27. AFS - Scalability (cont’d) • Whole-file serving: The entire contents of directories and files are transmitted to client computers by AFS servers. • Whole-file caching: Once a copy of a file has been transferred to a client computer, it is stored in a cache on the local disk. • The cache is permanent, surviving reboots of the client computer.
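
The effect of whole-file serving and whole-file caching can be sketched as follows (Python; CACHE_DIR, fetch_whole_file and is_valid are assumed placeholders, not the AFS client interface):

    import os

    CACHE_DIR = "/var/afs_cache"   # on local disk, so the cache survives reboots

    def open_cached(path, fetch_whole_file, is_valid):
        local = os.path.join(CACHE_DIR, path.strip("/").replace("/", "_"))
        if not (os.path.exists(local) and is_valid(path, local)):
            data = fetch_whole_file(path)      # whole-file serving by the server
            with open(local, "wb") as f:
                f.write(data)                  # whole-file caching on local disk
        return open(local, "rb")               # subsequent reads are purely local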

  28. NFS enhancement - Spritely NFS • is an implementation of the NFS protocol with the addition of open and close calls. • The parameters of the Sprite open operation specify a mode and include counts of the number of local processes that currently have the file open for reading and for writing. • Spritely NFS implements a recovery protocol that interrogates a list of clients to recover the full open files table.
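
A sketch of the per-file open state such a server might keep (illustrative Python; the names and the notify_clients callback are assumptions, not the Spritely NFS protocol):

    class OpenFileTable:
        def __init__(self):
            self.readers = {}   # path -> count of clients with the file open for reading
            self.writers = {}   # path -> count of clients with the file open for writing

        def open(self, path, mode, notify_clients):
            if mode == "r":
                self.readers[path] = self.readers.get(path, 0) + 1
            else:
                self.writers[path] = self.writers.get(path, 0) + 1
                notify_clients(path)   # a writer appeared: caching clients must flush/invalidate

        def close(self, path, mode):
            table = self.readers if mode == "r" else self.writers
            table[path] = max(0, table.get(path, 0) - 1)

The recovery protocol mentioned above can then rebuild such a table after a server crash by interrogating the list of clients.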

  29. NFS enhancement - NQNFS • maintains similar client-related state concerning open files, but it uses leases to aid recovery after a server crash. • Callbacks are used in a similar manner to Spritely NFS to request clients to flush their caches when a write request occurs.
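
The lease idea in miniature (Python; LEASE_SECONDS is an illustrative constant, not an NQNFS parameter):

    import time

    LEASE_SECONDS = 30   # illustrative duration

    class Lease:
        def __init__(self):
            self.expires = time.time() + LEASE_SECONDS

        def valid(self):
            # The client may trust its cached data only while the lease holds;
            # after a crash the server simply waits out the longest outstanding
            # lease instead of interrogating every client.
            return time.time() < self.expires

        def renew(self):
            self.expires = time.time() + LEASE_SECONDS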

  30. NFS enhancement - WebNFS • makes it possible for application programs to become clients of NFS servers anywhere in the Internet (using the NFS protocol directly) • enables Internet applications that share data directly, such as multi-user games or clients of large dynamic databases.

  31. NFS enhancement - NFS version 4 • will include the features of WebNFS • the use of callbacks or leases to maintain consistency • on-the-fly recovery • Scalability will be improved by using proxy servers in a manner analogous to their use in the Web.

  32. AFS enhancement • RAID • Log-structured file storage • xFS • implements a software RAID storage system, striping file data across disks on multiple computers together with a log-structuring technique.
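
The striping-with-parity idea behind such a software RAID layer can be sketched as follows (Python; this is not xFS’s actual layout, just the XOR-parity principle):

    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, byte in enumerate(blk):
                out[i] ^= byte
        return bytes(out)

    def stripe_with_parity(data, n_disks, chunk=4096):
        # Spread each stripe over n_disks - 1 data disks plus one XOR parity
        # chunk, so any single lost chunk can be rebuilt from the others.
        step = chunk * (n_disks - 1)
        stripes = []
        for off in range(0, len(data), step):
            chunks = [data[off + i*chunk : off + (i+1)*chunk].ljust(chunk, b"\0")
                      for i in range(n_disks - 1)]
            stripes.append(chunks + [xor_blocks(chunks)])
        return stripes   # stripes[k][d] is the chunk written to disk d for stripe k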

  33. Frangipani • A highly scalable distributed file system developed and deployed at the Digital Systems Research Center.

  34. Frangipani (cont’d) • The responsibility for managing files and associated tasks is assigned to hosts dynamically. • All machines see a unified file name space with coherent access to shared updatable files.

  35. Frangipani - System Structure • Two totally independent layers: 1. Petal distributed virtual disk system - Data is stored in a log-structured and striped format in the virtual disk store - Provides a storage repository - Provides highly available storage that can scale in throughput and capacity as resources are added to it - Petal implements data replication for high availability, obviating the need for Frangipani to do so.

  36. Frangipani - System Structure (cont’d) 2. Frangipani server modules - Provide names, directories, and files - Provide a file system layer that makes Petal useful to applications while retaining and extending its good properties.
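
A minimal picture of the two layers (Python sketch; PetalDisk and FrangipaniServer are stand-ins with assumed interfaces, not the real systems):

    class PetalDisk:
        # Stand-in for Petal: a large virtual disk of fixed-size blocks
        # (replication, striping and log-structuring are hidden behind it).
        BLOCK = 4096
        def __init__(self):
            self.blocks = {}
        def read(self, n):
            return self.blocks.get(n, b"\0" * self.BLOCK)
        def write(self, n, data):
            self.blocks[n] = data

    class FrangipaniServer:
        # Stand-in for the file-system layer: maps names and files onto Petal blocks.
        def __init__(self, petal):
            self.petal = petal          # the only storage it ever talks to
            self.names = {}             # toy directory: name -> block number
        def write_file(self, name, data):
            block = abs(hash(name)) % (1 << 20)   # toy allocation policy
            self.names[name] = block
            self.petal.write(block, data)
        def read_file(self, name):
            return self.petal.read(self.names[name])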

  37. Frangipani

  38. Frangipani

  39. Frangipani

  40. Frangipani

  41. Frangipani - Logging and Recovery • uses write-ahead redo logging of metadata to simplify failure recovery and improve performance. • User data is not logged. • Each Frangipani server has its own private log in Petal. • As long as the underlying Petal volume remains available, the system tolerates an unlimited number of Frangipani server failures.

  42. Frangipani - Logging and Recovery • Frangipani’s locking protocol ensures that updates requested to the same data by different servers are serialized. • Frangipani ensures that recovery applies only updates that were logged since the server acquired the locks that cover them, and for which it still holds the locks.

  43. Frangipani - Logging and Recovery • Recovery never replays a log describing an update that has already been completed. • For each block that a log record updates, the record contains a description of the changes and the new version number. • During recovery, the changes to a block are applied only if the block version number is less than the record version number.
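
In sketch form (Python; the log-record and disk interfaces are assumptions, not Frangipani’s actual structures):

    def replay_log(log_records, disk):
        # Redo recovery: apply a logged metadata update to a block only if the
        # block on disk is still older than the version stored in the record.
        for rec in log_records:                    # replayed in log order
            for block_no, new_data, new_version in rec.updates:
                if disk.version(block_no) < new_version:
                    disk.write(block_no, new_data, new_version)
                # otherwise the update already reached the disk before the crash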

  44. Frangipani - Logging and Recovery • Frangipani reuses freed metadata blocks only to hold new metadata. • At any time, only one recovery daemon is trying to replay the log region of a specific server. • If a sector is damaged such that reading it returns a CRC error, Petal’s built-in replication can recover it.

  45. Frangipani - Logging and Recovery • In both local UNIX and Frangipani, a user can get better consistency semantics by calling fsync at suitable checkpoints.

  46. Frangipani - Synchronization and Cache Coherence • Frangipani uses multiple-reader/single-writer locks to implement the necessary synchronization. • When the lock service detects conflicting lock requests, the current holder of the lock is asked to release or downgrade it to remove the conflict.
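
A sketch of one lock entry and the callback on conflict (Python; the ask_release/ask_downgrade callbacks are assumed placeholders for the clerk protocol):

    class LockEntry:
        def __init__(self):
            self.readers = set()   # servers holding the lock in shared mode
            self.writer = None     # server holding the lock in exclusive mode

        def acquire_read(self, server, ask_downgrade):
            if self.writer and self.writer != server:
                ask_downgrade(self.writer)      # writer flushes and keeps only a read lock
                self.readers.add(self.writer)
                self.writer = None
            self.readers.add(server)

        def acquire_write(self, server, ask_release):
            # All conflicting holders are asked to release first (a writer
            # flushes its dirty blocks, readers just drop cached copies).
            for holder in list(self.readers) + ([self.writer] if self.writer else []):
                if holder != server:
                    ask_release(holder)
            self.readers.clear()
            self.writer = server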

  47. Frangipani - Synchronization and Cache Coherence • When a Frangipani server crashes, the locks that it owns cannot be released until appropriate recovery actions have been performed. • When a Frangipani server’s lease expires, the lock service will ask the clerk on another machine to perform recovery and then release all locks belonging to the crashed server.

  48. Frangipani - Synchronization and Cache Coherence • Petal can continue operation in the face of network partitions, as long as a majority of the Petal servers remain up and in communication. • The lock service continues operation as long as a majority of lock servers are up and in communication.

  49. Frangipani - Synchronization and Cache Coherence • If a Frangipani server is partitioned away from the lock service, it will be unable to renew its lease. • If a Frangipani server is partitioned away from Petal, it will be unable to read or write the virtual disk.

  50. Frangipani - Adding Servers • The new server need only be told which Petal virtual disk to use and where to find the lock service. • The new server contacts the lock service to obtain a lease, then determines from the lease identifier which portion of the log space to use.
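
For instance, if the shared log space is divided into fixed-size regions, the new server’s region can be computed directly from its lease identifier (illustrative Python; the region size is an assumption, not Frangipani’s actual layout):

    LOG_REGION_BLOCKS = 1024   # illustrative size of one private log region

    def log_region_for(lease_id):
        # Each server's private log occupies its own region of the shared
        # log space, selected by the lease identifier it was granted.
        start = lease_id * LOG_REGION_BLOCKS
        return range(start, start + LOG_REGION_BLOCKS)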
