Logical Concurrency Control From Sequential Proofs By: Deshmukh, Ramalingam, Ranganath and Vaswani Presented by: Omer Toledano
Overview • Using a sequential proof to develop a locking scheme for concurrency control. • Improving the scheme to achieve linearizability.
Example – Compute with Cache • Assume we have a function f that we want to compute. • f is a computationally intensive function, so we decide to cache the result of the last call.
Example – Compute with Cache • Specification: • We want a function Compute(num) that returns f(num). • The implementation of Compute will cache the last result to improve performance.
Example - Code

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
  int res;
  if (lastNum == num) {
    res = lastRes;
  } else {
    res = f(num);
    lastNum = num;
    lastRes = res;
  }
  return res;
}
Proving the Specification – True Branch

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
    // lastRes == f(lastNum) && lastNum == num && res == lastRes
  } else {
    ...
  }
  // res == f(num) && lastRes == f(lastNum)
  return res;
}
Proving the Specification – False Branch

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    ...
  } else {
    // lastRes == f(lastNum) && lastNum != num
    res = f(num);
    // res == f(num)
    lastNum = num;
    // res == f(num) && lastNum == num
    lastRes = res;
    // res == f(num) && lastRes == res && lastNum == num
  }
  // res == f(num) && lastRes == f(lastNum)
  return res;
}
Is this function thread safe?

• No! Consider two threads: one calls Compute(5) twice, while the other calls Compute(7).

int lastNum = 0;
int lastRes = f(0);

/* @return f(num) */
int Compute(int num) {
  int res;
  if (lastNum == num) {
    res = lastRes;
  } else {
    res = f(num);
    lastNum = num;
    lastRes = res;
  }
  return res;
}
Consider: one thread calls Compute(5) twice, while another calls Compute(7).

The second Compute(5) evaluates lastNum == num to true, but Compute(7) interleaves before res = lastRes executes:

int Compute(5) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
  } else {
    // the interleaved Compute(7) runs this branch:
    res = f(num);
    // res == f(num)
    lastNum = num;
    // res == f(num) && lastNum == num
    lastRes = res;
    // res == f(7)
  }
  return res;
}

In this scenario the result of the second Compute(5) would be wrong!
How would you fix that?

int Compute(int num) {
  int res;
  // acquire(l)
  if (lastNum == num) {
    res = lastRes;
  } else {
    // release(l)
    res = f(num);
    // acquire(l)
    lastNum = num;
    lastRes = res;
  }
  // release(l)
  return res;
}
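For concreteness, here is one way the acquire/release pseudocode above could be realized with a pthread mutex. This is a sketch: f is the assumed expensive function, and lastRes is assumed to be initialized to f(0) before Compute is first called.

#include <pthread.h>

extern int f(int num);               /* the expensive function (assumed given) */

int lastNum = 0;
int lastRes;                         /* assumed initialized to f(0) at startup */
pthread_mutex_t l = PTHREAD_MUTEX_INITIALIZER;

int Compute(int num) {
  int res;
  pthread_mutex_lock(&l);            /* acquire(l) */
  if (lastNum == num) {
    res = lastRes;
  } else {
    pthread_mutex_unlock(&l);        /* release(l): f(num) runs outside the critical section */
    res = f(num);
    pthread_mutex_lock(&l);          /* acquire(l) again before updating the cache */
    lastNum = num;
    lastRes = res;
  }
  pthread_mutex_unlock(&l);          /* release(l) */
  return res;
}

Note that the expensive call f(num) is deliberately kept outside the lock, which is exactly the kind of small critical section the approach aims for.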
What changed in the concurrent setting? • At every step, the proof asserts a set of predicates based on the precondition and the current command. • In the concurrent setting we saw that some of these predicates were invalidated by another thread while the command executed, which yielded a wrong answer.
Goals • We want to find a way to transform sequentially correct code to concurrently correct code using the same proof.
Motivation • It is much easier to write a correct sequential program than a correct concurrent one, so we would like to automate the "thread proofing" process. • Sequential proofs can shed light on the "true" critical sections and on what makes them "critical" (predicate invalidation), and hopefully lead to smaller critical sections.
Algorithm - Idea • Define a set of locks corresponding to the predicates generated by the sequential proof. • Think of the program as a graph where each vertex is labeled with the conjunction of predicates required at that point of the program and the edges are program commands.
Algorithm – Idea Cont. • Suppose we are at some point of the program, with two vertices u, v and an edge e = (u, v). • Before executing e, we acquire all the locks corresponding to predicates that are new on v. • We release every lock whose predicate is no longer needed on v.
Algorithm - Example

int Compute(int num) {
  int res;
  // lastRes == f(lastNum)
  if (lastNum == num) {
    (u) // lastRes == f(lastNum) && lastNum == num
    (e) res = lastRes;
    (v) // lastRes == f(lastNum) && lastNum == num && res == lastRes

On v we only add one predicate (res == lastRes), so we have to take its lock before executing the command e:

(u) // lastRes == f(lastNum) && lastNum == num
(e) /* acquire (l: res == lastRes) */ res = lastRes;
(v) // lastRes == f(lastNum) && lastNum == num && res == lastRes
Correctness of Algorithm • Input: a library L with embedded assertions satisfied by all sequential executions of L. • Output: a library L' obtained by augmenting L with concurrency control such that every execution of L' is "safe".
Is that enough? No! What about deadlocks? A deadlock can happen when: • one thread, while holding the lock on p, tries to acquire the lock on q, and • another thread, while holding the lock on q, tries to acquire the lock on p. Neither can proceed, since each is waiting for a lock the other already holds. To solve this we define an equivalence relation that merges all such locks into one merged lock, as sketched below.
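A minimal sketch of the problem and of the merged-lock fix, using pthread mutexes; the predicates p and q and the procedure names are illustrative.

#include <pthread.h>

pthread_mutex_t lock_p = PTHREAD_MUTEX_INITIALIZER;   /* lock for predicate p */
pthread_mutex_t lock_q = PTHREAD_MUTEX_INITIALIZER;   /* lock for predicate q */

void op1(void) {
  pthread_mutex_lock(&lock_p);       /* holds p ...            */
  pthread_mutex_lock(&lock_q);       /* ... then asks for q    */
  /* ... */
  pthread_mutex_unlock(&lock_q);
  pthread_mutex_unlock(&lock_p);
}

void op2(void) {
  pthread_mutex_lock(&lock_q);       /* holds q ...                               */
  pthread_mutex_lock(&lock_p);       /* ... then asks for p: can deadlock with op1 */
  /* ... */
  pthread_mutex_unlock(&lock_p);
  pthread_mutex_unlock(&lock_q);
}

/* The fix: p and q fall into the same equivalence class, so both procedures
   use a single merged lock and the circular wait can no longer arise. */
pthread_mutex_t lock_pq = PTHREAD_MUTEX_INITIALIZER;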
Algorithm – are all locks necessary?

int Compute(int num) {
  int res;
  // acquire l: lastRes == f(lastNum)
  // lastRes == f(lastNum)
  if (lastNum == num) {
    // acquire l: lastNum == num
    // lastRes == f(lastNum) && lastNum == num
    res = lastRes;
  } else {
    ...

The lock for lastNum == num is redundant: it is always acquired while another lock is already held, and released when that other lock is released.
Optimizations • As the last slide showed, the algorithm can introduce redundant locking, e.g., generate a lock l that is always held whenever another lock q is held; such a lock can be eliminated. • Also, if a predicate is never invalidated, we do not need to acquire its lock before executing commands.
Optimizations – Cont. • Use read-write locks: • When a thread only wants to "preserve" a predicate, it can acquire a read lock (which can be shared with other threads). • If it wants to invalidate the predicate, it needs to acquire a "write" lock, as sketched below.
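A sketch of how a read-write lock could be used for a single predicate. The pthread read-write lock is used for illustration; which code preserves or invalidates the predicate comes from the proof.

#include <pthread.h>

pthread_rwlock_t l = PTHREAD_RWLOCK_INITIALIZER;   /* lock for one predicate */

void needs_predicate(void) {
  /* Only relies on the predicate staying true: a shared (read) lock,
     so several such threads can hold it at the same time. */
  pthread_rwlock_rdlock(&l);
  /* ... code that depends on the predicate ... */
  pthread_rwlock_unlock(&l);
}

void falsifies_predicate(void) {
  /* About to invalidate the predicate: an exclusive (write) lock. */
  pthread_rwlock_wrlock(&l);
  /* ... code that may falsify the predicate ... */
  pthread_rwlock_unlock(&l);
}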
Another problem?

int x = 0;

int Increment() {
  int tmp;
  // x == x_in
  tmp = x;
  tmp = tmp + 1;
  // going to invalidate x == x_in
  x = tmp;
  return tmp;
}
Another problem?

int x = 0;

int Increment() {
  int tmp;
  // acquire(l)
  tmp = x;
  // release(l)
  tmp = tmp + 1;
  // acquire(l)
  x = tmp;
  // release(l)
  return tmp;
}
What can happen? • Two concurrent calls to Increment() can both read x == 0 between their acquire/release pairs; each then writes x = 1. • After both increments x equals one, so one update is lost. • In general we can get "dirty reads" and "lost updates".
Improvement • We change the locking scheme to solve the problem in the previous example: if an execution path starting from a program point will eventually falsify a predicate, we acquire that predicate's lock at that point as well.

int x = 0;

int Increment() {
  int tmp;
  // acquire(l)
  tmp = x;
  tmp = tmp + 1;
  x = tmp;
  // release(l)
  return tmp;
}
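The improved scheme rendered with a pthread mutex (a sketch): the lock is now held across the whole read-modify-write, because the path starting at tmp = x eventually falsifies x == x_in.

#include <pthread.h>

int x = 0;
pthread_mutex_t l = PTHREAD_MUTEX_INITIALIZER;   /* lock for x == x_in */

int Increment(void) {
  int tmp;
  pthread_mutex_lock(&l);    /* acquired already at the read, since this path */
  tmp = x;                   /* will falsify x == x_in via x = tmp            */
  tmp = tmp + 1;
  x = tmp;
  pthread_mutex_unlock(&l);
  return tmp;
}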
Is that enough?

• What about return values?

int x = 0, y = 0;

IncX() {
  // acquire l: x == x_in
  x = x + 1;
  (ret11, ret12) = (x, y);
  // release l: x == x_in
}

IncY() {
  // acquire l: y == y_in
  y = y + 1;
  (ret21, ret22) = (x, y);
  // release l: y == y_in
}

A possible concurrent execution: IncX() returns (1, 1) and IncY() returns (1, 1).
This is not linearizable

int x = 0, y = 0;

IncX() {
  // acquire l: x == x_in
  x = x + 1;
  (ret11, ret12) = (x, y);
  // release l: x == x_in
}

IncY() {
  // acquire l: y == y_in
  y = y + 1;
  (ret21, ret22) = (x, y);
  // release l: y == y_in
}

In any sequential order of the two calls, the later one would observe the earlier increment, so the two return pairs should differ; both calls returning (1, 1) is therefore not linearizable.
Solution • We have to determine whether the execution of a statement s can potentially affect the return value of another procedure invocation. • We do so by computing whether a statement s can invalidate a predicate describing some procedure's return value, and add locking accordingly; a sketch on the IncX/IncY example follows.
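A sketch of what this idea could look like on the IncX/IncY example. It assumes the analysis finds that y = y + 1 can affect IncX's return value (and x = x + 1 can affect IncY's), so each procedure would need the other's lock while computing its return pair; the deadlock rule from earlier then merges l_x and l_y into a single lock. The names and the pair type are illustrative.

#include <pthread.h>

int x = 0, y = 0;
/* l_x and l_y end up merged, since each procedure would otherwise need both. */
pthread_mutex_t l_xy = PTHREAD_MUTEX_INITIALIZER;

typedef struct { int fst, snd; } pair;

pair IncX(void) {
  pair ret;
  pthread_mutex_lock(&l_xy);
  x = x + 1;
  ret.fst = x;               /* the pair (x, y) is now read atomically */
  ret.snd = y;               /* with respect to IncY                   */
  pthread_mutex_unlock(&l_xy);
  return ret;
}

pair IncY(void) {
  pair ret;
  pthread_mutex_lock(&l_xy);
  y = y + 1;
  ret.fst = x;
  ret.snd = y;
  pthread_mutex_unlock(&l_xy);
  return ret;
}

For this tiny example the scheme degenerates to a single lock; in general the analysis keeps locks separate whenever statements cannot affect another invocation's return value.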
Results • On real-world examples and benchmarks, the generated programs achieved the same or better results than hand-written synchronization. • The improvement came from introducing more locks, which helped minimize the critical sections and protect them with separate locks.
Results • In the last section they present an extension that guarantees linearizability with respect to a sequential specification, a weaker requirement than notions of atomicity that permits more concurrency. • This achieves linearizability without two-phase locking.
Conclusions • This algorithm helps us automate the "thread proofing" process and achieves good results. • It gives a better understanding of the root cause of each critical section and lets us protect different critical sections with different locks for more concurrency.
Conclusions • The logical point of view also helps us understand which invariants need to be preserved.