Cloud Data Management
Inexpensive Scalable Information Access • Many Internet applications need to access data for millions of concurrent users • Relational DBMS technology cannot scale to these workloads using commodity hardware • The need for low-cost, scalable DBMSs led to the advent of key-value stores (e.g., Google’s Bigtable, Yahoo!’s PNUTS, and Amazon’s Dynamo)
Key-value Stores • Scalability and availability are more important than rich functionality • Scalability: Scale out to thousands of commodity servers • Availability: Data replicated across data centers to ensure high availability of user data in the presence of failures
Key-value Data Model • Primary abstraction is a table of rows, or key-value pairs • Each row is identified by a unique key, and the value can vary in its structure • Keys are arbitrary strings of up to 64 KB • Arbitrary number of columns per row • Arbitrary data type for each column (i.e., data validation is done by applications) • The value can be an uninterpreted binary string (a blob) or a set of columns with their own attributes, as in relational DBMSs • Multiple versions of each row can be maintained and accessed through timestamps
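A minimal sketch of this data model (hypothetical Python, not any particular store’s API): a row holds a unique key and a flexible set of columns, each cell keeping multiple timestamped versions of an uninterpreted byte value.

```python
import time
from collections import defaultdict

class Row:
    """One row of a key-value table: a unique key plus a flexible set of
    columns, each holding multiple timestamped versions of a value (a blob)."""

    def __init__(self, key: str):
        assert len(key.encode()) <= 64 * 1024, "keys are limited to 64 KB"
        self.key = key
        # column name -> {timestamp -> value (uninterpreted bytes)}
        self.cells: dict[str, dict[float, bytes]] = defaultdict(dict)

    def put(self, column: str, value: bytes, timestamp: float | None = None):
        ts = timestamp if timestamp is not None else time.time()
        self.cells[column][ts] = value

    def get(self, column: str, timestamp: float | None = None) -> bytes | None:
        versions = self.cells.get(column, {})
        if not versions:
            return None
        if timestamp is None:                      # latest version by default
            return versions[max(versions)]
        # most recent version at or before the requested timestamp
        eligible = [ts for ts in versions if ts <= timestamp]
        return versions[max(eligible)] if eligible else None
```

For example, `row.put("profile:name", b"Alice")` adds a new version of that cell, and `row.get("profile:name")` returns the most recent one.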
From Needs to Constraints • Retrieval: (row, column, timestamp) lookups only; some systems also support simple relational operations such as selection and projection • Update: updates and deletes must specify the primary key • Atomicity: atomic reads and writes are only possible at the row level
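These constraints translate into a deliberately narrow interface. Below is a sketch (hypothetical class and method names, building on the `Row` sketch above) of a store that supports only keyed lookups and row-level atomic operations via a per-row lock:

```python
import threading

class KeyValueStore:
    """Sketch of the restricted interface implied by the constraints above.
    All operations address a single row; a per-row lock gives row-level atomicity."""

    def __init__(self):
        self._rows: dict[str, Row] = {}            # Row from the previous sketch
        self._locks: dict[str, threading.Lock] = {}

    def _lock(self, key: str) -> threading.Lock:
        return self._locks.setdefault(key, threading.Lock())

    def read(self, key, column, timestamp=None):
        # (row, column, timestamp) lookup only -- no joins, no secondary indexes
        with self._lock(key):
            row = self._rows.get(key)
            return row.get(column, timestamp) if row else None

    def write(self, key, column, value: bytes):
        # updates must name the primary key; atomic within the single row
        with self._lock(key):
            self._rows.setdefault(key, Row(key)).put(column, value)

    def delete(self, key):
        # deletes must also name the primary key
        with self._lock(key):
            self._rows.pop(key, None)
```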
Scalability & Fault Tolerance Consideration • Logical entity can be effectively represented as a single row • Each row typically resides in a single server, and data access is restricted to a single key • Application-level data manipulation is restricted to a single computer obviating the need for multi-server coordination and synchronization • Rationale: (1) requests generally distributed throughout the data set, (2) impact of failure limited to the rows served by the failed server
Cluster Management – Master-based • A centralized master server keeps track of all data servers using a highly fault-tolerant (FT) service • The FT service keeps track of the data stored at the different servers • When a data server fails, the FT service reports the failure and the master reassigns its data to other servers • If the master fails, a new master is elected to take over
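A rough sketch of the master’s role under these assumptions (hypothetical class and callback names; a real master would also persist and replicate this state): the FT service reports a failed data server and the master reassigns the data it served.

```python
import random

class Master:
    """Sketch of master-based cluster management: the master learns of data-server
    failures from the fault-tolerant (FT) membership service and reassigns data."""

    def __init__(self, servers: set[str]):
        self.live_servers = set(servers)
        self.assignment: dict[str, str] = {}      # data partition -> serving server

    def on_server_failed(self, server: str):
        # called when the FT service reports a failed data server
        self.live_servers.discard(server)
        orphaned = [p for p, s in self.assignment.items() if s == server]
        for partition in orphaned:
            # reassign each orphaned partition to some live server
            self.assignment[partition] = random.choice(sorted(self.live_servers))
```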
Cluster Management – Decentralized • Typically based on gossip messages continuously exchanged among the servers • These messages contain relevant performance measurements • The failure of a server is detected when gossip messages from that server stop arriving • This approach is more fault tolerant, but it incurs message overhead
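A minimal sketch of gossip-style failure detection (hypothetical names; the timeout value is illustrative): each server timestamps the last gossip message heard from every peer and suspects peers whose messages stop arriving.

```python
import time

class GossipFailureDetector:
    """Sketch of decentralized failure detection: each server records the time of
    the last gossip message heard from every peer and suspects peers that go quiet."""

    def __init__(self, timeout_s: float = 10.0):
        self.timeout_s = timeout_s
        self.last_heard: dict[str, float] = {}   # peer -> time of last gossip message

    def on_gossip(self, peer: str, load_stats: dict):
        # gossip messages carry performance measurements along with liveness
        self.last_heard[peer] = time.time()

    def suspected_failed(self) -> list[str]:
        now = time.time()
        return [p for p, t in self.last_heard.items() if now - t > self.timeout_s]
```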
Google’s Bigtable • A table is a set of tablets • A master server allocates tablets among tablet servers and is responsible for load balancing • A tablet, logically represented as a key range, is the unit of distribution and load balancing • A tablet is stored as a collection of SSTable files in the GFS distributed file system [Figure: Bigtable architecture – the master and a Chubby node manage tablet servers holding the tablets (logical view); the tablets’ SSTable files and their replicas reside on GFS chunk servers (physical layout)]
Tablets • A logical table is divided into multiple tablets, each holding an interval of table rows • Each tablet is stored in one or more SSTable files • When a tablet grows beyond a certain size, it is split into two new tablets
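A simplified sketch of tablet splitting (hypothetical in-memory representation; the size threshold and the median-key split point are illustrative choices, not Bigtable’s exact policy):

```python
from dataclasses import dataclass, field

SPLIT_THRESHOLD_BYTES = 128 * 1024 * 1024      # illustrative size limit per tablet

@dataclass
class Tablet:
    start_key: str                              # inclusive lower bound of the row range
    end_key: str                                # exclusive upper bound
    rows: dict[str, bytes] = field(default_factory=dict)

    def size(self) -> int:
        return sum(len(k) + len(v) for k, v in self.rows.items())

    def maybe_split(self) -> list["Tablet"]:
        """Split into two tablets at the median row key once the tablet grows too large."""
        if self.size() <= SPLIT_THRESHOLD_BYTES:
            return [self]
        keys = sorted(self.rows)
        mid = keys[len(keys) // 2]
        left = Tablet(self.start_key, mid, {k: v for k, v in self.rows.items() if k < mid})
        right = Tablet(mid, self.end_key, {k: v for k, v in self.rows.items() if k >= mid})
        return [left, right]
```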
Google’s Bigtable - Chubby • Highly fault tolerant: consists of five active replicas, and the service is live when a majority of the replicas are running • Used for managing the tablet servers and determining which server holds each tablet • Replication of the SSTable files is handled by GFS
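The liveness rule is a simple majority check; a minimal sketch:

```python
def chubby_is_live(running_replicas: int, total_replicas: int = 5) -> bool:
    """The service is live only while a majority of its replicas are running."""
    return running_replicas > total_replicas // 2

assert chubby_is_live(3)        # 3 of 5 -> live
assert not chubby_is_live(2)    # 2 of 5 -> not live
```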
Google’s Bigtable - Column Families • Related columns are stored in a fixed number of column families (the unit of data colocation and access at the storage layer) • Permissions can be applied at the family level to grant access to different applications
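A sketch of family-level access control (hypothetical class; Bigtable addresses columns as family:qualifier): permissions are recorded per family and checked against the family prefix of the requested column.

```python
class ColumnFamilyACL:
    """Sketch of family-level permissions: access is granted per column family,
    and columns are addressed as 'family:qualifier'."""

    def __init__(self):
        # family -> set of applications allowed to access it
        self.grants: dict[str, set[str]] = {}

    def grant(self, family: str, app: str):
        self.grants.setdefault(family, set()).add(app)

    def check(self, app: str, column: str) -> bool:
        family, _, _ = column.partition(":")
        return app in self.grants.get(family, set())

acl = ColumnFamilyACL()
acl.grant("profile", "frontend")
assert acl.check("frontend", "profile:name")
assert not acl.check("frontend", "billing:card")
```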
Google’s Bigtable - Chubby • The master and every tablet server obtain a timed lease from Chubby that must be periodically renewed • A server can carry out its responsibilities only if it holds an active lease • Every tablet server periodically reports to the master using heartbeat messages (which also contain load statistics) • The master detects failures from missing heartbeat messages and uses the statistics for load balancing
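A sketch of the master’s bookkeeping under these rules (hypothetical names; the lease duration is illustrative): each heartbeat renews the sender’s lease and records its load statistics, and a lapsed lease marks the server as failed.

```python
import time

LEASE_DURATION_S = 10.0                      # illustrative lease / heartbeat setting

class TabletServerTracker:
    """Sketch of the master's view of tablet servers: each heartbeat renews the
    server's lease and reports load statistics; a missed renewal marks it failed."""

    def __init__(self):
        self.lease_expiry: dict[str, float] = {}   # server -> lease expiration time
        self.load_stats: dict[str, dict] = {}      # server -> last reported statistics

    def on_heartbeat(self, server: str, stats: dict):
        self.lease_expiry[server] = time.time() + LEASE_DURATION_S
        self.load_stats[server] = stats            # used for load balancing decisions

    def failed_servers(self) -> list[str]:
        now = time.time()
        return [s for s, exp in self.lease_expiry.items() if exp < now]
```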
Google’s Bigtable – Server Failure • When a tablet server fails, the master informs another tablet server to take over the failed server’s tablets (e.g., Tablet Server i takes over Tablet 4 from the failed Tablet Server j) • The new server recovers the tablet from its SSTable files and their replicas stored on the GFS chunk servers [Figure: failure of Tablet Server j – the master, coordinating with the Chubby node, reassigns Tablet 4 to Tablet Server i, which rebuilds it from the SSTable 4 replica on a GFS chunk server]
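Putting the pieces together, a sketch (hypothetical function and server names) of what reassignment might look like from the master’s side: every tablet served by the failed server is handed to a live server, which can rebuild it from the SSTables kept in GFS.

```python
def handle_server_failure(master_assignment: dict[str, str],
                          failed_server: str,
                          live_servers: list[str]) -> dict[str, str]:
    """Reassign every tablet served by the failed server to a live server.
    The new server recovers the tablet's state from its SSTable files in GFS,
    so no tablet data is lost when a tablet server dies."""
    reassigned = dict(master_assignment)
    for tablet, server in master_assignment.items():
        if server == failed_server:
            # hash-based choice among the surviving servers
            reassigned[tablet] = live_servers[hash(tablet) % len(live_servers)]
    return reassigned

# e.g., Tablet 4 moves from Server j to Server i after Server j fails
assignment = {"tablet-4": "server-j", "tablet-5": "server-j", "tablet-1": "server-i"}
print(handle_server_failure(assignment, "server-j", ["server-i"]))
```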