Principles of Reliable Distributed Systems, Lecture 11: Atomic Shared Memory Objects & Shared Memory Emulations. Idit Keidar
Material • Attiya and Welch, Distributed Computing • Ch. 9 & 10 • Nancy Lynch, Distributed Algorithms • Ch. 13 & 17 • Linearizability slides adapted from Maurice Herlihy
Shared Memory Model • All communication through shared memory! • No message-passing. • Shared memory registers/objects. • Accessed by processes with ids 1,2,… • Note: we have two types of entities: objects and processes
Motivation • Multiprocessors with shared memory • Multi-threaded programs • Distributed shared memory (DSM) • Abstraction for message passing systems – we will see how to: • Emulate shared memory in message passing systems • Use shared memory for consensus and state machine replication
FIFO Queue: Enqueue Method • (figure: a process invokes q.enq( ) on the shared queue)
FIFO Queue: Dequeue Method • (figure: a process invokes q.deq(), which returns an item)
Sequential Objects • Each object has a state • Usually given by a set of fields • Queue example: sequence of items • Each object has a set of methods • Only way to manipulate state • Queue example: enq and deq methods
Methods Take Time • (figure: timeline of a method call q.enq(...): invocation at 12:00, void response at 12:01)
Split Method Calls into Two Events • Invocation • Method name & args • q.enq(x) • Response • Result or exception • q.enq(x) returns void • q.deq() returns x • q.deq() throws empty
A Single Process (Thread) • Sequence of events • First event is an invocation • Alternates matching invocations and responses • This is called a well-formed interaction
Concurrent Methods Take Overlapping Time • (figure: three method-call intervals overlapping on a timeline)
Concurrent Objects • What does it mean for a concurrent object to be correct? • What is a concurrent FIFO queue? • FIFO means strict temporal order • Concurrent means ambiguous temporal order • Help!
Sequential Specifications • Precondition, say for q.deq(…) • Queue is non-empty • Postcondition: • Returns & removes first item in queue • You got a problem with that?
Concurrent Specifications • Naïve approach • Object has n methods • Must specify O(n²) possible interactions • Maybe more: "If the queue is empty and then enq begins and deq begins after enq(x) begins but before enq(x) ends and then enq returns before deq then…" • Linearizability: same as it ever was
Linearizability • Each method should – • "Take effect" • Effect defined by the sequential specification • Instantaneously • Take 0 time • Between its invocation and response events • Real-time order • A pending method (invocation with no response) either takes effect after its invocation or not at all
Linearization • A linearization of a concurrent execution E is • A sequential execution S • Each invocation in S is immediately followed by its response • S satisfies the object's sequential specification • S "looks like" E • Responses to all invocations in S are the same as in E • Responses to pending invocations in E may be added • S preserves real-time order • Each invocation-response pair in S occurs between the corresponding invocation and response in E
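The definition above can be made operational with a brute-force check: a complete history is linearizable iff some permutation of its operations respects real-time order and is legal for the sequential specification. A minimal sketch for FIFO-queue histories with no pending invocations (the tuple encoding and function names are illustrative, not from the lecture):

```python
from itertools import permutations

def legal_queue(seq):
    """Replay a sequential order against a FIFO queue, checking results."""
    q = []
    for method, value in seq:
        if method == "enq":
            q.append(value)
        elif not q or q.pop(0) != value:      # deq must return the head
            return False
    return True

def linearizable(history):
    """history: list of (method, value, inv_time, res_time), all complete.
    Brute force: try every total order consistent with real-time order."""
    n = len(history)
    for order in permutations(range(n)):
        pos = {op: i for i, op in enumerate(order)}
        # real-time order: if a responded before b was invoked,
        # a must precede b in the linearization
        if any(history[a][3] < history[b][2] and pos[a] > pos[b]
               for a in range(n) for b in range(n)):
            continue
        if legal_queue([(history[i][0], history[i][1]) for i in order]):
            return True
    return False
```

For instance, the overlapping enq(x)/deq(y)/enq(y)/deq(x) pattern from the examples below is linearizable (linearize enq(y) before enq(x)), while a deq(y) that starts after a lone enq(x) responded is not.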
Linearizability and Atomicity • A concurrent execution that has a linearization is linearizable • An object that has only linearizable executions is atomic
Why Linearizability? • "Religion", not science • Scientific justification: • Facilitates reasoning • Nice mathematical properties • Common-sense justification • Preserves real-time order • Matches my intuition (sorry about yours)
Example • (figure: timeline with overlapping calls q.enq(x), q.deq(y), q.enq(y), q.deq(x))
Example • (figure: timeline with overlapping calls q.enq(x), q.deq(y), q.enq(y))
Example • (figure: timeline with overlapping calls q.enq(x), q.deq(x))
Read/Write Variable Example • (figure: timeline with calls write(0), read(1), read(0), write(1); annotation: write(1) happened after write(0))
Read/Write Variable Example • (figure: timeline with calls write(0), read(1), write(2), read(1), write(1); annotation: write(1) already happened)
Read/Write Variable Example • (figure: timeline with calls write(0), read(1), write(2), read(2), write(1))
Concurrency • How much concurrency does linearizability allow? • When must a method invocation block? • Focus on total methods • Defined in every state • Why?
Concurrency • Question: when does linearizability require a method invocation to block? • Answer: never! • Linearizability is non-blocking
Non-Blocking Theorem • If a method invocation ⟨A q.invoc()⟩ is pending in a linearizable history H, then there exists a matching response ⟨A q:resp()⟩ such that the extended history H · ⟨A q:resp()⟩ is legal
Note on Non-Blocking • A given implementation of linearizability may be blocking • The property itself does not mandate blocking • For every pending invocation, there is always a possible return value that does not violate linearizability • The implementation may not always know it…
Atomic Objects • An object is atomic if all of its concurrent executions are linearizable • What if we want an atomic operation on multiple objects?
Serializability • A transaction is a finite sequence of method calls • A history is serializable if transactions appear to execute serially • It is strictly serializable if this order is also compatible with real time • Used in databases and, more recently, in transactional memory
Serializability is Blocking • (figure: Transaction 1: x.read(0), y.write(1); Transaction 2: y.read(0), x.write(1); the two transactions deadlock)
Comparison • Serializability appropriate for • Fault-tolerance • Multi-step transactions • Linearizability appropriate for • Single objects • Multiprocessor synchronization
Critical Sections • Easy way to implement linearizability • Take sequential object • Make each method a critical section • Like synchronized methods in Java™ • Problems? • Blocking • No concurrency
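The critical-section approach on this slide can be sketched in Python, with a lock standing in for Java's synchronized methods (class and method names are illustrative):

```python
import threading
from collections import deque

class CoarseLockQueue:
    """A linearizable FIFO queue: each method is one critical section.
    Simple and correct, but blocking, with no concurrency."""
    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()

    def enq(self, x):
        with self._lock:               # linearization point inside the lock
            self._items.append(x)

    def deq(self):
        with self._lock:
            if not self._items:
                raise IndexError("empty")   # the 'empty' exception
            return self._items.popleft()
```

Because every call holds the same lock, calls never truly overlap, so the order in which the lock is acquired is itself a valid linearization.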
Linearizability Summary • Linearizability • Operation takes effect instantaneously between invocation and response • Uses sequential specification • No O(n²) interactions • Non-Blocking • Never required to pause a method call • Granularity matters
Atomic Register Emulation in a Message-Passing System [Attiya, Bar-Noy, Dolev]
Distributed Shared Memory (DSM) • Can we provide the illusion of atomic shared-memory registers in a message-passing system? • In an asynchronous system? • Where processes can fail?
Liveness Requirement • Wait-freedom: every operation by a correct process p eventually completes • In a finite number of p's steps • Regardless of steps taken by other processes • In particular, the other processes may fail or take any number of steps between p's steps • But p must be given a chance to take as many steps as it needs (fairness)
Register • Holds a value • Can be read • Can be written • Interface: • int read(); /* returns a value */ • void write(int v); /* returns ack */
Take I: Failure-Free Case • Each process keeps a local copy of the register • Let's try state machine replication • Step 1: Implement atomic broadcast • How? • Recall: atomic broadcast service interface: • broadcast(m) • deliver(m)
Emulation with Atomic Broadcast (Failure-Free) • Upon client request (read/write) • Broadcast (abcast) the request • Upon deliver write request • Write to local copy of register • If from local client, return ack to client • Upon deliver read request • If from local client, return local register value to client • Homework questions: • Show that the emulated register is atomic • Is broadcasting reads required for atomicity?
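The failure-free emulation above can be sketched with a toy atomic-broadcast stub that delivers every message to every replica synchronously, in one global order (all class and method names here are my own, not from the lecture):

```python
class AtomicBroadcast:
    """Toy abcast: delivers each message to every replica, synchronously,
    in a single total order (stands in for a real abcast service)."""
    def __init__(self):
        self.replicas = []
    def broadcast(self, msg):
        for r in self.replicas:
            r.deliver(msg)

class RegisterReplica:
    """State-machine-replicated register (failure-free Take I)."""
    def __init__(self, pid, abcast):
        self.pid, self.abcast = pid, abcast
        self.value, self.result = 0, None
        abcast.replicas.append(self)
    def write(self, v):                # client request: abcast, then ack
        self.abcast.broadcast(("write", v, self.pid))
        return self.result             # "ack", set during delivery
    def read(self):                    # reads are broadcast too
        self.abcast.broadcast(("read", None, self.pid))
        return self.result
    def deliver(self, msg):
        op, v, sender = msg
        if op == "write":
            self.value = v             # apply to the local copy
            if sender == self.pid:
                self.result = "ack"    # ack the local client
        elif sender == self.pid:       # answer reads from the local copy
            self.result = self.value
```

Since every replica applies writes in the same total order, all local copies agree, which is why a read can be answered locally once its own broadcast is delivered.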
What If Processes Can Crash? • Does the same solution work?
ABD: Fault-Tolerant Emulation [Attiya, Bar-Noy, Dolev] • Assumes up to f < n/2 processes can fail • Main ideas: • Store the value at a majority of processes before the write completes • Read from a majority • Every read majority intersects every write majority, hence the read sees the latest value
Take II: 1-Reader 1-Writer (SRSW) • Single-reader – there is only one process that can read from the register • Single-writer – there is only one process that can write to the register • The reader and writer are just 2 processes • The other n-2 processes are there to help
Trivial Solution? • Writer simply sends message to reader • When does it return ack? • What about failures? • We want a wait-free solution: • If the reader (writer) fails, the writer (reader) should be able to continue writing (reading)
SRSW Algorithm: Variables • At each process: • x, a copy of the register • t, initially 0, unique tag associated with latest write
SRSW Algorithm: Emulating Write • To perform write(x,v): • choose tag > t • set x ← v; t ← tag • send ("write", v, t) to all • Upon receive ("write", v, tag): • if tag > t then set x ← v; t ← tag fi • send ("ack", v, tag) to writer • When the writer receives ("ack", v, t) from a majority (counting an ack from itself too): • return ack to client
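The write phase can be sketched in a single-threaded form, with message passing modelled as direct calls and no failures (the Replica class and helper names are illustrative):

```python
class Replica:
    """Per-process state: a copy x of the register and a tag t."""
    def __init__(self):
        self.x, self.t = 0, 0
    def on_write(self, v, tag):
        if tag > self.t:               # store only if the tag is newer
            self.x, self.t = v, tag
        return ("ack", v, tag)

def srsw_write(replicas, writer, v):
    """The (unique) writer picks a fresh tag, updates its own copy,
    'sends' the write to all, and returns once a majority has acked."""
    writer.t += 1                      # choose tag > t: safe, single writer
    writer.x = v
    acks = 1                           # the writer counts its own copy
    for r in replicas:
        r.on_write(v, writer.t)        # in reality: asynchronous message
        acks += 1
        if 2 * acks > len(replicas) + 1:   # majority of all n processes
            return "ack"
    return "ack"                       # all replies arrived (no failures here)
```

The early return is the point of the protocol: the writer does not wait for everyone, only for a majority, which is what makes the write wait-free when up to f < n/2 processes fail.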
SRSW Algorithm: Emulating Read • To perform read(x): • send ("read") to all • Upon receive ("read"): • send ("read-ack", x, t) to reader • When the reader receives ("read-ack", v, tag) from a majority (including local values of x and t): • choose the value v associated with the largest tag • store these values in x, t • return x
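The read phase, sketched in the same single-threaded style as the write (direct calls, no failures; names are illustrative). For simplicity the reader here hears from everyone, which in particular includes a majority:

```python
class Replica:
    """Per-process state: a copy x of the register and a tag t."""
    def __init__(self):
        self.x, self.t = 0, 0
    def on_read(self):
        return (self.x, self.t)        # reply ("read-ack", x, t)

def srsw_read(replicas, reader):
    """The reader collects (value, tag) pairs, adopts the pair with the
    largest tag, stores it locally, and returns the value."""
    replies = [(reader.x, reader.t)] + [r.on_read() for r in replicas]
    v, tag = max(replies, key=lambda p: p[1])
    reader.x, reader.t = v, tag        # keeps the reader's copy up to date
    return v
```

Because the read quorum intersects the write quorum, at least one reply carries the tag of the latest completed write, and the max-tag rule selects it.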
Does This Work? • Only possible overlap is between read and write • why? • When a read does not overlap any write – • It reads at least one copy that was written by the latest write (why?) • This copy has the highest tag (why?) • What is the linearization order when there is overlap between read and write? • What if 2 reads overlap the same write?