Self-Certifying File Systems (SFS) Presented by Vishal Kher January 29, 2004
References • Self-certifying file system (SFS) • David Mazières and M. Frans Kaashoek. Escaping the evils of centralized control with self-certifying pathnames. In Proceedings of the 8th ACM SIGOPS European Workshop, September 1998 • D. Mazières, M. Kaminsky, M. F. Kaashoek, and E. Witchel. Separating key management from file system security. SOSP, December 1999 • SFS-based read-only file system • K. Fu, M. F. Kaashoek, and D. Mazières. Fast and secure distributed read-only file system. OSDI, October 2000
Introduction (1) • File systems like NFS and AFS do span the Internet • But they do not provide seamless file access • Why is global file sharing difficult? • Files are shared across administrative realms • Can you seamlessly access files on the CS file server from a machine outside the CS administration? • The scale of the Internet makes management a nightmare • Every realm might follow its own policy
Introduction (2) • Is there anything else that hinders global file sharing? Centralized control • No single party controls the name space • Users cannot trust "a" centralized server • Further, who will manage the keys? • A centralized authority cannot manage all the keys, given the scale of the Internet • No single key-management mechanism satisfies everyone, and centralized key management is expensive
SFS Goals • Provide a global file system image • The FS looks the same from every client machine • No notion of administrative realm • Servers grant access to users, not to client machines • Separate key management from file system security • Various key management policies can co-exist • Key management does not hinder setting up new servers • Security • Authentication • Confidentiality and integrity of client-server communication • Versatility and modularity
SFS Overview (1) • Key idea: self-certifying pathnames • Every SFS file system is accessible as /sfs/Location:HostID • Location is the location of the file server, e.g., a DNS name or IP address • HostID = H("HostInfo", Location, PublicKey), a hash of the server's public key • Every pathname therefore embeds the information needed to verify the server's public key, which makes it self-certifying • Example: /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35/foo accesses file foo located on sfs.umn.cs.edu (a sketch of the HostID derivation follows)
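A minimal sketch of how a HostID-like value and a self-certifying pathname could be derived, assuming SHA-1 over a plain concatenation of the string "HostInfo", the location, and the public key; the real SFS implementation uses a structured field encoding and its own base-32 alphabet, so the exact output differs.

```python
import hashlib
import base64

def host_id(location: str, public_key: bytes) -> str:
    """Hash ("HostInfo", Location, PublicKey) into a pathname-safe HostID.

    Plain concatenation and standard base32 are illustrative stand-ins for
    SFS's actual field encoding and alphabet.
    """
    digest = hashlib.sha1(b"HostInfo" + location.encode() + public_key).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()

# A self-certifying pathname embeds both the location and the key hash.
location = "sfs.umn.cs.edu"                       # hypothetical server name
pathname = "/sfs/%s:%s/foo" % (location, host_id(location, b"example public key"))
print(pathname)
```

Any client that later learns the server's claimed public key can check it against the HostID embedded in the pathname without consulting a third party.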
SFS Overview (2) • Starting a file server is quite easy • All that is needed is a host address, a public/private key pair (from which the HostID is derived), and the SFS server software • Automatic mounting • If a user references a non-existent pathname under /sfs, the SFS client automatically mounts the corresponding remote file system • Symbolic links give human-readable names, e.g., /umn -> /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35 • Authentication • SFS provides both server and user authentication (details later)
Key Management (1) • SFS itself does not mandate any particular key management policy • But it provides some useful key management techniques • Manual key distribution • An admin installs the self-certifying pathname (as a symbolic link) on the local disk • Certification authorities (CAs) • CAs are simply SFS servers that serve symbolic links, e.g., /verisign/umn -> /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35 • A user who trusts Verisign's public key can therefore trust the path to the umn.cs.edu file server (see the sketch below)
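Both techniques above boil down to publishing symbolic links whose targets are self-certifying pathnames. A small sketch of that idea, reusing the hypothetical pathname from the slides; the temporary directory stands in for / or a CA's exported SFS tree.

```python
import os
import tempfile

# Hypothetical self-certifying pathname from the slides (illustrative only).
target = "/sfs/sfs.umn.cs.edu:vefsdfa345474sfs35"

root = tempfile.mkdtemp()                  # stand-in for / or a CA's SFS export
os.symlink(target, os.path.join(root, "umn"))

# Resolving the link yields the full self-certifying pathname, so whoever
# trusts the publisher of the link (a local admin, or a CA such as Verisign
# serving /verisign/umn) implicitly trusts the embedded HostID.
print(os.readlink(os.path.join(root, "umn")))
```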
Key Management (2) • Using passwords • A user stores a hash of the password with the UMN auth server • The server authenticates the user based on the password, and the self-certifying pathname is then downloaded into the user's local /sfs directory • After that, file access does not involve any central administrator
System Components • Agents and the auth server interact for user authentication • Both are modular and can be replaced • The client program handles server authentication and other activities • Revocation, automounting, etc. • [Diagram: on the client machine, user programs call through the in-kernel NFS client into the SFS client, which consults a per-user agent; on the server machine, the SFS server fronts an NFS server and an auth server; SFS client and SFS server communicate over a MACed, encrypted TCP connection]
Protocols: Key Negotiation (1) • The SFS client initiates the following exchange every time it sees a new self-certifying pathname (Pc, Ps denote the public keys of the client and server, respectively):
1. Client -> Server: Location, HostID
2. Server -> Client: Ps (the client checks that Ps is consistent with the HostID in the pathname)
3. Client -> Server: Pc, Ps(Kc1, Kc2) (client key halves, encrypted with Ps)
4. Server -> Client: Pc(Ks1, Ks2) (server key halves, encrypted with Pc)
• Session keys: Kcs = H("KCS", Ps, Ks1, Pc, Kc1) and Ksc = H("KSC", Ps, Ks2, Pc, Kc2)
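A sketch of the session-key derivation above, assuming SHA-1 and plain concatenation of the fields; real SFS hashes a structured encoding of these values, so this only illustrates the shape of the construction.

```python
import hashlib
import os

def session_key(label: bytes, ps: bytes, ks_half: bytes,
                pc: bytes, kc_half: bytes) -> bytes:
    """K = H(label, Ps, Ks_i, Pc, Kc_i); concatenation stands in for SFS's
    structured encoding of the fields."""
    return hashlib.sha1(label + ps + ks_half + pc + kc_half).digest()

# Each side contributes two random key halves; Kc1/Kc2 travel encrypted under
# Ps and Ks1/Ks2 under Pc (the public-key encryption itself is not shown).
kc1, kc2, ks1, ks2 = (os.urandom(16) for _ in range(4))
Ps, Pc = b"server public key", b"client public key"   # placeholders

Kcs = session_key(b"KCS", Ps, ks1, Pc, kc1)   # protects client-to-server traffic
Ksc = session_key(b"KSC", Ps, ks2, Pc, kc2)   # protects server-to-client traffic
```

Because each key mixes halves from both parties, and Kc1/Kc2 can only be recovered by decrypting with the server's private key, no one without that key can compute Kcs or Ksc.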
Key Negotiation (2) • Only the server can recover Kc1, Kc2 and hence generate Ksc, Kcs • The server possesses the private key corresponding to Ps • Ksc and Kcs are used to encrypt and MAC all subsequent communication • Pc is changed frequently (every hour) to provide forward secrecy • Question: is this exchange susceptible to replay?
User Authentication • Performed on the user's first access to a new file system • The SFS server has a database mapping user public keys to credentials • Message flow (all communication is secured with the session keys):
1. SFS client -> Agent: AuthInfo, SeqNo
2. Agent -> SFS client: AuthMsg
3. SFS client -> SFS server: SeqNo, AuthMsg
4. SFS server -> Auth server: SeqNo, AuthMsg
5. Auth server -> SFS server: AuthID, SeqNo, Credentials
6. SFS server -> SFS client: AuthNo
• AuthInfo = {Location, HostID, SessionID}, where SessionID = H(Ksc, Kcs) • Req = {H(AuthInfo), SeqNo} • AuthMsg = PU, SigU(Req)
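A sketch of the agent's side of this exchange, i.e., producing AuthMsg for a given AuthInfo and SeqNo. The sign parameter stands in for a signature with the user's private key, and the field encodings (JSON, fixed-width SeqNo) are illustrative rather than SFS's wire format.

```python
import hashlib
import json

def H(*parts: bytes) -> bytes:
    return hashlib.sha1(b"".join(parts)).digest()

def build_auth_msg(user_pub: bytes, sign, ksc: bytes, kcs: bytes,
                   location: str, host_id: str, seqno: int) -> dict:
    """Return AuthMsg = (PU, Sig_U(Req)) with Req = {H(AuthInfo), SeqNo}."""
    session_id = H(ksc, kcs)                        # SessionID = H(Ksc, Kcs)
    auth_info = json.dumps({"location": location,
                            "hostid": host_id,
                            "session": session_id.hex()}).encode()
    req = H(auth_info) + seqno.to_bytes(4, "big")   # Req = {H(AuthInfo), SeqNo}
    return {"user_pub": user_pub, "sig": sign(req)}
```

Because AuthInfo includes the SessionID derived from the session keys, a signed AuthMsg is bound to this particular connection and cannot be replayed on another session.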
Revocation • How to revoke a server's public key? • Revocation certificate: CR = SigK(Location, PublicKey), signed with the server's own private key • CAs store these certificates under /verisign/revocation/HostID • The file named by HostID contains the revocation certificate for that HostID • Revocation certificates are self-authenticating, so a CA need not check the identity of submitters (see the sketch below)
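A sketch of why revocation certificates are self-authenticating: the verifier re-derives the HostID from the location and public key inside the certificate and checks the signature with that same key, so nothing about the submitter has to be trusted. The certificate layout and the verify callback are hypothetical, and host_id() repeats the simplified derivation from the earlier sketch.

```python
import hashlib
import base64

def host_id(location: str, public_key: bytes) -> str:
    digest = hashlib.sha1(b"HostInfo" + location.encode() + public_key).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()

def revocation_valid(cert: dict, revoked_host_id: str, verify) -> bool:
    """cert is assumed to hold {"location", "public_key", "signature"};
    verify(key, sig, msg) stands in for public-key signature verification."""
    # The certificate must actually be about the HostID it claims to revoke ...
    if host_id(cert["location"], cert["public_key"]) != revoked_host_id:
        return False
    # ... and must be signed by the matching private key: CR = SigK(Location, PK).
    message = cert["location"].encode() + cert["public_key"]
    return verify(cert["public_key"], cert["signature"], message)
```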
Performance • End-to-end performance • SFS is roughly 11 to 16% slower than NFS 3 over UDP • Sprite LFS microbenchmarks • Small file create, read, and unlink: read is about 3 times slower • Large file (~40 MB) • Sequential write is 44% slower than NFS 3 over UDP • Sequential read is 145% slower than NFS 3 over UDP
Summary • SFS separates key management from file system security • It provides a secure global file system by using self-certifying pathnames • Any user can start a file server without going through a central authority • The implementation is quite modular • The main cost is a significant performance overhead
Motivation • Internet users rely heavily on publicly available data • Software installation • Secure data distribution is hard • Replication and mirror sites are not necessarily secure • Security is expensive to verify • Security holes • Poor or no revocation support
Solution • Consider a subset of the problem: read-only data distribution • Apply the SFS approach • Result: a secure, high-performance, scalable read-only file system
Assumptions • Untrusted distribution servers • Trusted clients • Public, read-only data
System Components • sfsrodb: database generator (database creation and updates) • sfsrosd: server that distributes the data; runs on the server, which is a self-certifying file system • sfsrocd: client that fetches and verifies data; runs on the client machine
System Overview • [Diagram: a publisher uses sfsrodb and a private key to produce a signed database; the database is replicated to multiple sfsrosd servers; the client's sfsrocd connects to a replica over a TCP connection and exports the file system to user applications through the NFS client]
Recursive Hashing (1) • Each data block is hashed • The fixed-size hash of a block is its handle, used to look up the block in the database • Handles are stored in the file's inode • Directories store <name, handle> pairs • Directories and inodes are themselves hashed • rootfh is the hash of the root directory's inode (see the sketch after the diagram)
Recursive Hashing (2) • [Diagram: file blocks B0 ... B8 are each hashed; their handles H(B0), H(B1), ... are stored in the file's inode, with indirect blocks of handles hashed again, e.g., H(H(B7) ...); hashing the inode and its metadata yields the file handle; a directory's metadata and <name, handle> entries are hashed the same way, and the root directory's hash is signed]
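A toy sketch of the recursive hashing scheme above, assuming SHA-1 and ad-hoc encodings of inodes and directory entries; real SFSRO inodes carry full metadata and indirect blocks, which are omitted here.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def file_handle(blocks: list) -> bytes:
    """Hash each data block to a handle, store the handles in a simplified
    inode, and hash the inode to get the file's handle."""
    handles = [H(b) for b in blocks]            # per-block handles (database keys)
    inode = b"file-metadata" + b"".join(handles)
    return H(inode)

def directory_handle(entries: dict) -> bytes:
    """A directory holds <name, handle> pairs in lexicographic order and is
    hashed the same way; the root directory's handle becomes rootfh."""
    body = b"".join(name.encode() + handle
                    for name, handle in sorted(entries.items()))
    return H(b"dir-metadata" + body)

rootfh = directory_handle({
    "bar": file_handle([b"only block"]),
    "foo": file_handle([b"block 0", b"block 1"]),
})
print(rootfh.hex())   # in SFSRO the publisher signs this value together with the date/duration
```

Changing any block changes its handle, and therefore every inode and directory hash up to rootfh, which is why updates require re-computing handles along the whole path (a limitation noted later).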
Features • Data verification by default • Data has an expiry date: struct FSINFO stores {date, duration, rootfh} • Directory entries are sorted lexicographically, which reduces search time (see the sketch below) • Opaque directories
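A small sketch of why lexicographically sorted directory entries reduce search time: a client can binary-search a directory block instead of scanning it linearly. The names and handles here are illustrative.

```python
import bisect

def dir_lookup(names: list, handles: list, wanted: str):
    """names is sorted lexicographically and handles[i] belongs to names[i]."""
    i = bisect.bisect_left(names, wanted)
    if i < len(names) and names[i] == wanted:
        return handles[i]
    return None       # the name is absent from this directory block

names = ["bar", "baz", "foo"]
handles = [b"\x01" * 20, b"\x02" * 20, b"\x03" * 20]
print(dir_lookup(names, handles, "foo"))
```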
Limitations • Database updates are inefficient • Handles must be re-computed along the path to the root • Clients must keep up with updates • Verification requires walking the hash tree all the way to the root
Conclusions • Read-only data integrity • Content verification costs offloaded to clients • No confidentiality promise! • High availability, performance, scalability