Replication Management
Motivations for Replication • Performance enhancement • Increased availability • Fault tolerance
General Requirements • Replication transparency • Consistency
An Architecture for Replication Management Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Phases of Request Processing • Issuance: unicast or multicast (from the front end to replica managers) • Coordination • Execution • Agreement • Response * The ordering varies for different systems.
Services for Process Groups Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
View-Synchronous Group Communications Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Sequential Consistency • The one-copy semantics of the replicated objects is respected. • The order of operations is preserved for each client.
The Primary-Backup Model Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Active Replication Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
The Gossip Architecture • A framework for providing high availability of service through lazy replication • A request is normally executed at one replica • Replicas are updated by lazy exchange of gossip messages (containing the most recent updates).
Operations in a Gossip Service Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Timestamps • Each front end keeps a vector timestamp reflecting the latest version accessed. • The timestamp is attached to every request sent to a replica. • Two front ends may exchange messages directly; these messages also carry timestamps. • Timestamps are merged in the usual way, by taking the element-wise maximum.
Timestamps (cont’d) • Each replica keeps a replica timestamp representing those updates it has received. • It also keeps a value timestamp, reflecting the updates in the replicated value. • The replica timestamp is attached to the reply to an update, while the value timestamp is attached to the reply to a query.
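The timestamp bookkeeping above can be sketched in a few lines (the function names are ours, not from the text): merging takes the element-wise maximum, and `covered` tests the component-wise ≤ order between two stamps.

```python
def merge(ts_a, ts_b):
    """Element-wise maximum of two equal-length vector timestamps."""
    return [max(a, b) for a, b in zip(ts_a, ts_b)]

def covered(ts_a, ts_b):
    """True if ts_a <= ts_b component-wise (ts_b reflects all of ts_a)."""
    return all(a <= b for a, b in zip(ts_a, ts_b))

# A front end folds a replica's reply timestamp into its own stamp:
front_end_ts = merge([2, 0, 1], [1, 3, 1])   # -> [2, 3, 1]
```

The same `merge` serves for replicaTS and valueTS at the replicas, and `covered` is exactly the ≤ test used in the stability conditions for queries and updates.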
Timestamp Propagations Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
The Update Log • Every update, when received by a replica, is recorded in the update log of the replica. • Two reasons for keeping a log: * The update cannot be applied yet; it is held back. * It is uncertain if the update has been received by all replicas. • The entries are sorted by timestamps.
The Executed Operation Table • The same update may arrive at a replica from a front end and in a gossip message from another replica. • To prevent an update from being applied twice, the replica keeps a list of identifiers of the updates that have been applied so far.
A Gossip Replica Manager Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Processing Query Requests • A query request q carries a timestamp q.prev, reflecting the latest version of the value that the front end has seen. • Request q can be applied (i.e., it is stable) if q.prev ≤ valueTS (the value timestamp of the replica that received q). • Once q is applied, the replica returns the current valueTS along with the reply.
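The stability test is just a component-wise comparison; a minimal sketch (helper name is ours):

```python
def is_stable(q_prev, value_ts):
    """True if q.prev <= valueTS component-wise, i.e. the replica's value
    already reflects every update the issuing front end has seen."""
    return all(p <= v for p, v in zip(q_prev, value_ts))

is_stable([1, 2, 0], [2, 2, 0])   # True: the query can be answered now
is_stable([1, 3, 0], [2, 2, 0])   # False: hold back until gossip catches up
```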
Processing Update Requests • For an update u (not a duplicate), replica i * increments the i-th element of its replica timestamp replicaTS by one, * adds an entry to the log with a timestamp ts derived from u.prev by replacing the i-th element with that of replicaTS, and * returns ts to the front end immediately. • When the stability condition u.prev ≤ valueTS holds, update u is applied and ts is merged into valueTS.
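The steps above can be sketched as follows (data layout and names are our own illustration: a log entry keeps the derived timestamp ts, the front end's u.prev, and an operation identifier used for duplicate detection):

```python
def receive_update(i, replica_ts, log, executed, u_prev, op_id):
    """Replica i accepts update u; returns u's unique timestamp ts,
    or None if u is a duplicate."""
    if op_id in executed or any(entry[2] == op_id for entry in log):
        return None                          # already received this update
    replica_ts[i] += 1                       # one more update received here
    ts = list(u_prev)
    ts[i] = replica_ts[i]                    # unique stamp derived from u.prev
    log.append((ts, list(u_prev), op_id))    # hold u in the update log
    return ts                                # returned to the front end at once
```

Once u.prev ≤ valueTS holds, the replica applies u and merges ts (the update's unique timestamp) into valueTS.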
Processing Gossip Messages • For every gossip message received, a replica does the following: * Merge the arriving log with its own; duplicated updates are discarded. * Apply updates that have become stable. • A gossip message need not contain the entire log, if it is certain that some of the updates have been seen by the receiving replica.
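The merge step can be sketched as below (names are ours; log entries are (ts, u_prev, op_id) triples as in the earlier sketch):

```python
def receive_gossip(own_log, own_replica_ts, gossip_log, gossip_replica_ts):
    """Fold an arriving gossip message into this replica's state and
    return the updated replica timestamp."""
    have = {op_id for _, _, op_id in own_log}
    for entry in gossip_log:
        if entry[2] not in have:                 # discard duplicated updates
            own_log.append(entry)
    # this replica has now received everything both logs describe
    return [max(a, b) for a, b in zip(own_replica_ts, gossip_replica_ts)]
```

After merging, the replica scans the log for updates whose u.prev is now covered by valueTS and applies them in timestamp order.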
Updates in Bayou Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
About Bayou • Consistency guarantees • Merging of updates • Dependency checks • Merge procedures
Coda vs. AFS • More general replication • Greater tolerance of server crashes • Support for disconnected operation
Transactions with Replicated Data • A replicated transactional service should appear the same as one without replicated data. • The effects of transactions performed by various clients on replicated data are the same as if they had been performed one at a time on single data items; this property is called one-copy serializability.
Transactions with Replicated Data (cont’d) • Failures should be serialized with respect to transactions. • Any failure observed by a transaction must appear to have happened before the transaction started.
Schemes for One-Copy Serializability • Read one/write all • Available copies replication • Schemes that also tolerate network partitioning: * available copies with validation * quorum consensus * virtual partition
Transactions on Replicated Data (figure: transactions T and U, each issued through a client front end — U performs deposit(B,3), T performs getBalance(A) — against replica managers holding copies of the accounts A and B) Source: Instructor's guide for G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Available Copies Replication • A client's read request on a logical data item may be performed by any available replica, but a client's update request must be performed by all available replicas. • A local validation procedure is required to ensure that any failure or recovery does not appear to happen during the progress of a transaction.
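The read-one/write-all-available rule can be sketched as follows (the `Replica` class and its failure flag are our own illustration, not from the text):

```python
class Replica:
    """One physical copy of a logical data item."""
    def __init__(self, value=0):
        self.value = value
        self.up = True          # False once this replica manager has crashed

def read(replicas):
    """A read may be served by any single available copy."""
    for r in replicas:
        if r.up:
            return r.value
    raise RuntimeError("no available copies")

def write(replicas, value):
    """An update must be performed by every available copy."""
    available = [r for r in replicas if r.up]
    if not available:
        raise RuntimeError("no available copies")
    for r in available:
        r.value = value
```

A crashed copy misses updates, which is why the validation procedure at recovery time must reconcile it before it serves reads again.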
Available Copies Replication (cont’d) (figure: transactions T and U issued through client front ends — T performs getBalance(B) and deposit(A,3), U performs getBalance(A) and deposit(B,3) — against replica managers X, M, N, P and Y holding copies of A and B) Source: Instructor's guide for G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Network Partition Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Available Copies with Validation • The available copies algorithm is applied within each partition. • When a partition is repaired, the possibly conflicting transactions that took place in the separate partitions are validated. • If the validation fails, some of the transactions have to be aborted.
Quorum Consensus Methods • One way to ensure consistency across different partitions is to make a rule that operations can only be carried out within one of the partitions. • A quorum is a subgroup of replicas whose size gives it the right to execute operations. • Version numbers or timestamps may be used to determine whether copies of the data item are up to date.
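In the classic weighted-voting formulation (all copies given weight 1 here for simplicity), a read quorum of size R and a write quorum of size W over N copies must satisfy R + W > N, so every read quorum intersects every write quorum, and W > N/2, so two write quorums cannot form in disjoint partitions. A quick check:

```python
def valid_quorums(n, r, w):
    """True if read/write quorum sizes preserve one-copy behaviour."""
    return r + w > n and 2 * w > n

valid_quorums(5, 3, 3)   # True: majority quorums on both sides
valid_quorums(5, 2, 3)   # False: r + w = n, a read may miss the last write
```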
An Example for Quorum Consensus Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Two Network Partitions Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Virtual Partition Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Overlapping Virtual Partitions Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.
Creating Virtual Partitions Source: G. Coulouris et al., Distributed Systems: Concepts and Design, Third Edition.