Causal Consistency Without Dependency Check Messages Willy Zwaenepoel
Geo-replicated data stores • Geo-replicated data centers • Full replication between data centers • Data in each data center partitioned
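For concreteness only, a toy sketch (not from the talk) of the layout just described: every data center holds a full replica, and within a data center the key space is split across partitions. The constants and the hash-based mapping are assumptions.

```python
# Toy layout: full replication across data centers,
# key space partitioned within each data center.
DATACENTERS = ["us", "europe", "asia"]
PARTITIONS_PER_DC = 4

def replicas_of(key):
    """Every data center stores every key (full geo-replication)."""
    return list(DATACENTERS)

def partition_of(key):
    """Within a data center, a key lives on exactly one partition."""
    return hash(key) % PARTITIONS_PER_DC

print(replicas_of("x"), partition_of("x"))
```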
The Consistency Dilemma • Strong Consistency: synchronous replication; all replicas share the same consistent view; sacrifices availability • Causal Consistency: asynchronous replication; all replicas eventually converge; sacrifices consistency, but replication respects causality • Eventual Consistency: asynchronous replication; all replicas eventually converge; sacrifices consistency
Can we close the throughput gap? The answer is: yes, but there is a price
How is causality enforced? • Each update has associated dependencies • What are dependencies? • metadata to establish causality relations between operations • used only for data replication • Internal dependencies • previous updates of the same client session • External dependencies • updates read from other client sessions
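A minimal sketch, not the talk's implementation, of how a client library might collect these dependencies: reads add external dependencies, and a write collapses them into a single internal dependency on that write. The `store` interface is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dependency:
    """Identifies an update: the key it wrote and the version it produced."""
    key: str
    version: int

class ClientSession:
    def __init__(self, store):
        self.store = store      # assumed interface: read(key) -> (value, version),
                                #                    write(key, value, deps) -> version
        self.deps = set()       # dependencies of the session's next write

    def read(self, key):
        value, version = self.store.read(key)
        # External dependency: an update produced by some other session.
        self.deps.add(Dependency(key, version))
        return value

    def write(self, key, value):
        version = self.store.write(key, value, deps=frozenset(self.deps))
        # Internal dependency: later operations of this session depend on this write.
        self.deps = {Dependency(key, version)}
        return version
```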
Internal dependencies • [Figure] Example of 2 users performing operations at different datacenters on the same partition: Alice issues W(x = 1) and then W(y = 2) at the US datacenter; Bob at the Europe datacenter issues two reads of y, observing y = 0 and y = 2
External dependencies • [Figure] Example of 3 users performing operations at different datacenters on the same partition: Alice issues W(x = 1) at the US datacenter; Bob reads x = 1 and issues W(y = x + 1); Charlie at the Europe datacenter issues two reads of y, observing y = 0 and y = 2
How dependencies are tracked & checked • [Figure] A client reads A and B and then writes C = A + B at the US datacenter; when the write is replicated to the Europe datacenter, the receiving partition sends DepCheck(A) and DepCheck(B) to the partitions storing A and B • In current implementations: COPS [SOSP ’11], ChainReaction [Eurosys ’13], Eiger [NSDI ’13], Orbe [SOCC ’13] • DepCheck(A) – “Do you have A installed yet?”
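A hedged sketch of the receiving side of this workflow: before installing a replicated update, a partition asks its local sibling partitions whether each dependency is installed. The helpers `partition_of` and `wait_a_bit` and the stub map are illustrative assumptions, not the protocol of any of the cited systems.

```python
import time

def partition_of(key, num_partitions=8):
    # Hypothetical key-to-partition mapping within a data center.
    return hash(key) % num_partitions

def wait_a_bit():
    time.sleep(0.01)

class Partition:
    def __init__(self, peers):
        self.peers = peers          # partition id -> local stub of that partition
        self.installed = {}         # key -> highest installed version

    def dep_check(self, key, version):
        """Answer a DepCheck message: 'Do you have <key, version> installed yet?'"""
        return self.installed.get(key, -1) >= version

    def on_replicated_update(self, update):
        # Block installation until every dependency is installed locally.
        for dep in update.deps:
            owner = self.peers[partition_of(dep.key)]
            while not owner.dep_check(dep.key, dep.version):
                wait_a_bit()
        self.installed[update.key] = update.version
```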
Encoding of dependencies • COPS [SOSP ’11], ChainReaction [Eurosys ’13], Eiger [NSDI ’13] • “direct” dependencies • Worst case: O( reads before a write ) • Orbe [SOCC ’13] • Dependency matrix • Worst case: O( partitions )
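For illustration only, the two metadata shapes might look as follows; the sizes and field layouts are assumptions.

```python
# "Direct" dependencies (COPS / ChainReaction / Eiger):
# one (key, version) pair per item read since the last write,
# so metadata grows with the number of reads before a write.
direct_deps = [("A", 12), ("B", 7)]

# Dependency matrix (Orbe): one sequence number per (data center, partition),
# so metadata grows with the number of partitions.
NUM_DATACENTERS, NUM_PARTITIONS = 3, 8
dep_matrix = [[0] * NUM_PARTITIONS for _ in range(NUM_DATACENTERS)]
dep_matrix[0][3] = 12   # depends on the 12th update of partition 3 in data center 0
```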
The main issues • Metadata size is considerable • for both storage and communication • Remote dependency checks are expensive • multiple partitions are queried for each update
Getting rid of external dependencies • Partitions serve only fully replicated updates • Replication Confirmation messages are broadcast periodically • External dependencies are removed • replication information implies dependency installation • Internal dependencies are minimized • we only track the previous write • requires at most one remote check • zero if the write is local, one if it is remote
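A minimal sketch, under the slide's claim that full replication implies dependency installation, of a partition that buffers remote updates and exposes them only once every data center has confirmed them. All names and the confirmation format are assumptions.

```python
class Partition:
    def __init__(self, datacenters):
        self.datacenters = set(datacenters)
        self.pending = []            # remote updates not yet confirmed everywhere
        self.visible = {}            # key -> value served to other clients
        self.confirmed_by = {}       # update id -> data centers that confirmed it

    def on_remote_update(self, update):
        # No dependency-check messages: just buffer until fully replicated.
        self.pending.append(update)
        self.confirmed_by.setdefault(update.uid, set()).add(update.origin)

    def on_replication_confirmation(self, datacenter, confirmed_uids):
        # Periodic broadcast: "these updates are installed at <datacenter>".
        for uid in confirmed_uids:
            self.confirmed_by.setdefault(uid, set()).add(datacenter)
        still_pending = []
        for update in self.pending:
            if self.confirmed_by.get(update.uid, set()) >= self.datacenters:
                # Fully replicated: safe to expose, since its (external)
                # dependencies are installed everywhere as well.
                self.visible[update.key] = update.value
            else:
                still_pending.append(update)
        self.pending = still_pending
```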
The new replication workflow • [Figure] Example of 2 users performing operations at different datacenters on the same partition: Alice issues W(x = 1) at the US datacenter; the update is replicated to the Europe and Asia datacenters, which periodically broadcast replication confirmations; Bob issues two reads of x, observing x = 0 before the update is fully replicated and x = 1 afterwards
Reading your own writes • Clients need not wait for the replication confirmation • they can see their own updates immediately • other clients’ updates are visible once they are fully replicated • Multiple logical update spaces: replication update space (not yet visible), Alice’s update space (visible to Alice), Bob’s update space (visible to Bob), …, global update space (fully visible)
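A hedged sketch of the read path implied by this slide: a client's own pending writes overlay the globally visible state, so the client reads its own writes immediately while other clients see them only after full replication. `partition.submit` and `partition.visible` are assumed interfaces.

```python
class Session:
    def __init__(self, partition):
        self.partition = partition
        self.own_writes = {}        # this session's updates, visible only to it

    def write(self, key, value):
        self.own_writes[key] = value          # immediately visible to this client
        self.partition.submit(key, value)     # globally visible once fully replicated

    def read(self, key):
        # Prefer the client's own (possibly not yet replicated) update,
        # otherwise fall back to the fully replicated, globally visible value.
        if key in self.own_writes:
            return self.own_writes[key]
        return self.partition.visible.get(key)
```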
The price paid: update visibility latency increased • With the new implementation: ~ max ( network latency from origin to furthest replica + network latency from furthest replica to destination + interval of replication information broadcast ) • With the conventional implementation: ~ network latency from origin to destination
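As a purely hypothetical illustration of the formula above: with 100 ms from the origin to the furthest replica, 50 ms from that replica to the destination, and a 10 ms broadcast interval, an update may take on the order of 160 ms to become visible at the destination, whereas a conventional implementation needs only the direct origin-to-destination latency, say 80 ms.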
Is it possible? • Only make an update visible when one can locally determine that no causally preceding update will become visible later at another partition
How to do that? • Encode causality by means of a Lamport clock • Each partition maintains its own Lamport clock • Each update is timestamped with the partition’s Lamport clock • An update is visible when • update.timestamp ≤ min( Lamport clocks ) • Periodically compute the minimum
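A minimal sketch of this visibility rule, assuming the minimum over all partitions' Lamport clocks is computed elsewhere and passed in; names are illustrative.

```python
class LamportPartition:
    def __init__(self):
        self.clock = 0              # this partition's Lamport clock
        self.pending = []           # (timestamp, key, value) awaiting visibility
        self.visible = {}

    def local_write(self, key, value):
        self.clock += 1
        update = (self.clock, key, value)
        self.pending.append(update)
        return update

    def on_remote_update(self, update):
        ts, _, _ = update
        # Lamport rule: never fall behind a timestamp we have seen.
        self.clock = max(self.clock, ts)
        self.pending.append(update)

    def apply_minimum(self, minimum_clock):
        # minimum_clock = min over the Lamport clocks of all partitions,
        # recomputed periodically.  Updates with timestamp <= minimum_clock
        # are safe to expose: causally preceding updates carry smaller
        # timestamps, so none of them can surface later at another partition.
        still_pending = []
        for ts, key, value in self.pending:
            if ts <= minimum_clock:
                self.visible[key] = value
            else:
                still_pending.append((ts, key, value))
        self.pending = still_pending
```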
The price paid: update visibility latency increased • With the new implementation: ~ max ( network latency from origin to furthest replica + network latency from furthest replica to destination + interval of minimum computation ) • With the conventional implementation: ~ network latency from origin to destination
How to deal with a “stagnant” Lamport clock? • The Lamport clock stagnates if there are no updates in a partition • Combine the Lamport clock with a loosely synchronized physical clock • (easy to do)
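One way to combine the two, sketched under assumptions (microsecond granularity, NTP-style loose synchronization): the clock value is the maximum of physical time and the Lamport rule, so it keeps advancing even when a partition receives no updates.

```python
import time

class HybridClock:
    """Lamport clock that never falls behind a loosely synchronized physical clock."""
    def __init__(self):
        self.latest = 0

    def _physical_now(self):
        return time.time_ns() // 1_000      # microseconds

    def now(self):
        # Advances with physical time even if the partition sees no updates,
        # so the global minimum does not stagnate.
        self.latest = max(self.latest, self._physical_now())
        return self.latest

    def timestamp_update(self, remote_timestamp=0):
        # Lamport rule plus physical time: strictly larger than anything seen.
        self.latest = max(self.latest + 1, self._physical_now(), remote_timestamp + 1)
        return self.latest
```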
More on loosely synchronized physical clocks • Periodically broadcast clock • Reduces update visibility latency to • Network latency from furthest replica to destination + maximum clock skew + clock broadcast interval
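Purely as a hypothetical illustration of this formula: with 50 ms from the furthest replica to the destination, 1 ms of maximum clock skew, and a 10 ms clock broadcast interval, update visibility latency is on the order of 61 ms.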
Can we close the throughput gap? The answer is: yes, but there is a price The price is increased update visibility latency