Topics • System structure • Use and Observations
Readings • Mike Burrows, "The Chubby Lock Service for Loosely-Coupled Distributed Systems" (OSDI 2006)
Introduction • Chubby is a lock service • Primary Usage • Synchronize access to shared resources • Other usage • Primary election, meta-data storage, name service • Lock service should be reliable and available
Examples of Use • Help developers deal with coarse-grained synchronization • GFS uses a Chubby lock to appoint a GFS master server • Bigtable uses Chubby to • Discover the servers it controls • Permit clients to find the master • Store starting location of Bigtable data • Store access control lists • GFS, Bigtable use Chubby to store a small amount of meta-data
Chubby: A Distributed Lock Service • Chubby exports a filesystem-like namespace • Clients can lock any file or directory, or populate/query the contents • When a client loses access to the Chubby service it loses its session lease after the lease expiration • When a client loses its session lease, it loses any Chubby locks and open Chubby file handles it may have had
System Structure • Two main components: • server (Chubby cell) • client library • Communicate via RPC • Master elected using Paxos • Master lease
Operation • Clients find the master by sending master location requests to the replicas in the DNS • All requests sent directly to master • Write requests are propagated via the consensus protocol to all replicas • An acknowledgement is sent to the client when the write has reached a majority of servers in the Chubby cell • Read requests are satisfied by the master alone
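The majority-acknowledgement rule above can be sketched as a one-line quorum check (helper name is illustrative, not Chubby's API):

```python
def write_committed(ack_count: int, cell_size: int) -> bool:
    """A write is acknowledged to the client only once a strict
    majority of the replicas in the Chubby cell have logged it."""
    return ack_count > cell_size // 2

# A typical Chubby cell has five replicas:
assert write_committed(3, 5)        # 3 of 5 is a majority
assert not write_committed(2, 5)    # 2 of 5 is not
```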
Chubby File System • Looks like a simple UNIX FS: /ls/foo/wombat • All names start with /ls (stands for "lock service") • Second component is the cell name (foo) • Rest of the path is anything you want • No inter-directory move operation • Permissions use ACLs, non-inherited • No symbolic links
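The naming scheme above is mechanical enough to sketch; the parser below is a hypothetical helper, not part of the Chubby client library:

```python
def parse_chubby_name(name: str):
    """Split a Chubby name such as /ls/foo/wombat into its parts:
    /ls is the fixed lock-service prefix, the second component
    names the cell, and the remainder is chosen by the client."""
    parts = name.split("/")
    # parts == ['', 'ls', cell, ...]
    if len(parts) < 3 or parts[0] != "" or parts[1] != "ls":
        raise ValueError("not a Chubby name: " + name)
    cell, rest = parts[2], "/".join(parts[3:])
    return cell, rest

assert parse_chubby_name("/ls/foo/wombat") == ("foo", "wombat")
```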
Files • Files have version numbers attached • Client opening a file receives handle to file • Clients cache all file data including file-not-found • Locks are advisory – not required to open file • Why not mandatory locks? • Locks represent client-controlled resources; how can Chubby enforce this?
Files • Two modes: • Exclusive (writer) • Shared (reader) • Clients talk to servers periodically or lose their locks
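The two lock modes behave like a reader-writer lock. A toy, single-process model (real Chubby locks are distributed, advisory, and held per-session):

```python
class AdvisoryLock:
    """Toy model of Chubby's two lock modes: many shared (reader)
    holders, or one exclusive (writer) holder. Advisory: nothing
    stops a client from touching the file without the lock."""
    def __init__(self):
        self.readers = set()
        self.writer = None

    def acquire(self, client: str, exclusive: bool) -> bool:
        if exclusive:
            if self.writer is None and not self.readers:
                self.writer = client
                return True
            return False
        if self.writer is None:
            self.readers.add(client)
            return True
        return False

lock = AdvisoryLock()
assert lock.acquire("a", exclusive=False)
assert lock.acquire("b", exclusive=False)      # shared mode admits many
assert not lock.acquire("c", exclusive=True)   # blocked by the readers
```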
Callbacks • Master notifies clients if files modified, created, deleted, lock status changes • Push-style notifications decrease bandwidth from constant polling
Cache Consistency • Clients cache all file content • Must respond to Keep-Alive (KA) messages from the server at frequent intervals • KA messages include invalidation requests • Responding to a KA implies acknowledgement of the cache invalidation • A modification proceeds only after all caches are invalidated or the KA times out
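The piggybacked-invalidation idea can be sketched as below; the message shapes are illustrative, not Chubby's wire format:

```python
class CachingClient:
    """Toy client cache: a keep-alive (KA) reply may piggyback
    invalidations, and acknowledging the KA implicitly acknowledges
    that the named entries have been dropped from the cache."""
    def __init__(self):
        self.cache = {}

    def on_keep_alive(self, invalidations):
        for name in invalidations:
            self.cache.pop(name, None)    # drop any stale entries
        return "ack"                      # reply implies invalidation done

c = CachingClient()
c.cache = {"/ls/foo/x": b"old", "/ls/foo/y": b"ok"}
assert c.on_keep_alive(["/ls/foo/x"]) == "ack"
assert "/ls/foo/x" not in c.cache and "/ls/foo/y" in c.cache
```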
Client Sessions • Sessions maintained between client and server • Keep-alive messages required every few seconds to maintain the session • If the session is lost, the server releases any client-held handles • What if the master is late with the next keep-alive? • Client has its own (longer) timeout to detect server failure
Master Failure • If client does not hear back about keep-alive in local lease timeout, session is in jeopardy • Clear local cache • Wait for “grace period” (about 45 seconds) • Continue attempt to contact master • Successful attempt => ok; jeopardy over • Failed attempt => session assumed lost
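The jeopardy/grace-period behavior is a small state machine. A sketch (states and transitions follow the slide; the driving conditions are simplified):

```python
# Toy state machine for a client session during a master outage.
VALID, JEOPARDY, EXPIRED = "valid", "jeopardy", "expired"

def next_state(state, keep_alive_seen, grace_expired):
    if state == VALID and not keep_alive_seen:
        return JEOPARDY          # local lease timed out: clear cache, wait
    if state == JEOPARDY:
        if keep_alive_seen:
            return VALID         # reached a (possibly new) master in time
        if grace_expired:
            return EXPIRED       # ~45 s grace period over: session lost
    return state

assert next_state(VALID, False, False) == JEOPARDY
assert next_state(JEOPARDY, True, False) == VALID
assert next_state(JEOPARDY, False, True) == EXPIRED
```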
Master Failure (2) • If replicas lose contact with master, they wait for grace period (shorter: 4—6 secs) • On timeout, hold new election
Reliability • Started out using replicated Berkeley DB • Now uses a custom write-through logging DB • Entire database periodically snapshotted to GFS • In a different data center • Chubby replicas span multiple racks
Scalability • 90K+ clients communicate with a single Chubby master (2 CPUs) • System increases lease times from 12 sec up to 60 secs under heavy load • Clients cache virtually everything • Data is small – all held in RAM (as well as disk)
Sequencers • Use sequence numbers in interactions using locks • A lock holder may request a sequencer • Sequencer • Describes state of lock immediately after acquisition • Passed by client to servers, servers validate
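A sequencer can be modeled as a small token that servers check; the fields and validation rule below are a sketch of the idea, not Chubby's exact format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sequencer:
    """Snapshot of a lock's state right after acquisition: the lock's
    name, the mode it was taken in, and a generation counter that is
    bumped each time the lock is freshly acquired."""
    lock_name: str
    mode: str
    generation: int

def server_validate(seq: Sequencer, current_generation: int) -> bool:
    # A server rejects a request carrying a stale sequencer, i.e. one
    # issued for an earlier holding of the lock.
    return seq.generation == current_generation

s = Sequencer("/ls/foo/master", "exclusive", generation=7)
assert server_validate(s, 7)        # lock still held by the issuer
assert not server_validate(s, 8)    # lock changed hands: stale token
```

This guards against a delayed request from an old lock holder being applied after the lock has moved to a new holder.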
Discussion • Why not have a library that embodies Paxos? • Planning for high availability is an afterthought • A lock server makes it easier to maintain program structure and communication patterns • Distributed algorithms use quorums to make decisions so they use several replicas • Managing replicas is encapsulated as a service
Summary • Distributed lock service • coarse-grained synchronization for Google's distributed systems • Design based on well-known ideas • distributed consensus, caching, notifications, file-system interface • Primary internal name service • Repository for files requiring high availability