Program Verification using Probabilistic Techniques
Sumit Gulwani, Microsoft Research
Invited Talk: VSTTE Workshop, August 2006
Joint work with George Necula and Nebojsa Jojic
Probabilistic Techniques
• Used successfully in several areas of computer science.
• Yield more efficient, more precise, and often simpler algorithms.
• Technique 1: Random Interpretation
  - Discovers program invariants.
  - Monte Carlo algorithm: may generate invalid invariants with a small probability; running time is bounded.
  - "Random Testing" + "Abstract Interpretation"
• Technique 2: Simulated Annealing
  - Discovers a proof of validity/invalidity of a Hoare triple.
  - Las Vegas algorithm: generates a correct proof; running time is probabilistic.
  - "Forward Analysis" + "Backward Analysis"
Random Interpretation = Random Testing + Abstract Interpretation
Random Testing:
• Test the program on random inputs.
• Simple and efficient, but unsound (cannot prove the absence of bugs).
Abstract Interpretation:
• Class of deterministic program analyses.
• Interpret (analyze) an abstraction (approximation) of the program.
• Sound, but usually complicated and expensive.
Random Interpretation:
• Class of randomized program analyses.
• Almost as simple and efficient as random testing.
• Almost as sound as abstract interpretation.
Example 1

    if (*) { a := 0;      b := i; }
    else   { a := i - 2;  b := 2; }
    if (*) { c := b - a;      d := i - 2b; }
    else   { c := 2a + b;     d := b - 2i; }
    assert (c + d = 0);
    assert (c = a + i);
Example 1: Random Testing
• (Program from Example 1 above.)
• To falsify the second assertion, testing must exercise one particular path (take a := i - 2; b := 2, then c := b - a; d := i - 2b), and even then only for i ≠ 2.
• The chance of choosing that path from the set of all 4 paths is small.
• Hence, random testing is unsound.
Example 1: Abstract Interpretation
• Computes an invariant at each program point.
• Operations are usually complicated and expensive.
• On the Example 1 program, the analysis infers:
  - after the branches of the first conditional: a = 0, b = i  and  a = i - 2, b = 2
  - at the first join: a + b = i
  - after the branches of the second conditional: a + b = i, c = b - a, d = i - 2b  and  a + b = i, c = 2a + b, d = b - 2i
  - at the final join: a + b = i, c = -d
• This proves the first assertion (c + d = 0); the second assertion (c = a + i) is not implied.
Example 1: Random Interpretation
• Choose random values for the input variables.
• Execute both branches of each conditional.
• Combine the values of variables at join points.
• Test the assertions.
• (Program from Example 1 above.)
Random Interpretation: Outline
• Random Interpretation
  - Linear arithmetic (POPL 2003)
  - Uninterpreted functions (POPL 2004)
  - Inter-procedural analysis (POPL 2005)
Linear relationships in programs with linear assignments
• Linear relationships (e.g., x = 2y + 5) are useful for:
  - Program correctness (e.g., buffer overflows)
  - Compiler optimizations (e.g., constant and copy propagation, CSE, induction variable elimination)
• "Programs with linear assignments" does not mean inapplicability to "real" programs:
  - "Abstract" other program statements as non-deterministic assignments (standard practice in program analysis).
Basic idea in random interpretation
Generic algorithm:
• Choose random values for the input variables.
• Execute both branches of each conditional.
• Combine the values of variables at join points.
• Test the assertion.
Idea #1: The Affine Join operation
• Affine join of v1 and v2 w.r.t. weight w:  φw(v1, v2) ≜ w·v1 + (1 - w)·v2
• Example: joining the states (a = 2, b = 3) and (a = 4, b = 1)
  - with w = 7:  a = φ7(2, 4) = -10,  b = φ7(3, 1) = 15
  - with w = 5:  a = φ5(2, 4) = -6,   b = φ5(3, 1) = 11
• The affine join preserves common linear relationships (e.g., a + b = 5).
• It does not introduce false relationships w.h.p.
• Unfortunately, non-linear relationships are not preserved (e.g., a·(1 + b) = 8).
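To make the affine join concrete, here is a minimal Python sketch (the helper names affine_join and join_states are mine, not from the talk) that joins the two states from the slide and checks which relationships survive:

    # A minimal sketch of the affine join, using the states from the slide.

    def affine_join(w, v1, v2):
        """phi_w(v1, v2) = w*v1 + (1 - w)*v2"""
        return w * v1 + (1 - w) * v2

    def join_states(w, s1, s2):
        """Join two program states (dicts from variable to value) component-wise."""
        return {x: affine_join(w, s1[x], s2[x]) for x in s1}

    s1 = {"a": 2, "b": 3}   # satisfies a + b = 5 and a*(1+b) = 8
    s2 = {"a": 4, "b": 1}   # satisfies a + b = 5 and a*(1+b) = 8

    s = join_states(7, s1, s2)
    print(s)                          # {'a': -10, 'b': 15}
    print(s["a"] + s["b"])            # 5    -> linear relation a + b = 5 is preserved
    print(s["a"] * (1 + s["b"]))      # -160 -> nonlinear relation a*(1+b) = 8 is lost

Any weight preserves the common linear relationship; only a few unlucky weights could make a false relationship appear to hold, which is where the "with high probability" comes from.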
Geometric Interpretation of the Affine Join
• The joined state satisfies all the affine relationships that are satisfied by both input states (e.g., a + b = 5).
• Any relationship that is not satisfied by both input states (e.g., b = 2) is, with high probability, also not satisfied by the joined state.
• (Figure: in the (a, b)-plane, the states before the join, (a = 2, b = 3) and (a = 4, b = 1), lie on the line a + b = 5; the state after the join is a random point on the line through them, so it stays on a + b = 5 but almost surely misses any other line such as b = 2.)
Example 1: a random interpretation run with i = 3
• Choose a random weight for each join independently (here w1 = 5 and w2 = 2).
• After the first conditional: the branches give (a = 0, b = 3) and (a = 1, b = 2); the affine join with w1 = 5 gives i = 3, a = -4, b = 7.
• After the second conditional: the branches give (c = 11, d = -11) and (c = -1, d = 1); the affine join with w2 = 2 gives c = 23, d = -23.
• All choices of random weights verify the first assertion (c + d = 0).
• Almost all choices contradict the second assertion (c = a + i).
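The run above can be replayed in a few lines of Python; the sketch below (with the same illustrative helpers as before) reproduces the slide's numbers for i = 3, w1 = 5, w2 = 2:

    # Sketch of the random-interpretation run from the slide (i = 3, w1 = 5, w2 = 2).

    def affine_join(w, v1, v2):
        return w * v1 + (1 - w) * v2

    def join(w, s1, s2):
        return {x: affine_join(w, s1[x], s2[x]) for x in s1}

    i = 3                                        # random value for the input variable

    # First conditional: execute both branches, then join with w1 = 5.
    t = {"i": i, "a": 0,     "b": i}             # branch: a := 0;     b := i
    f = {"i": i, "a": i - 2, "b": 2}             # branch: a := i - 2; b := 2
    s = join(5, t, f)                            # {'i': 3, 'a': -4, 'b': 7}

    # Second conditional: again execute both branches, then join with w2 = 2.
    t = dict(s, c=s["b"] - s["a"],     d=s["i"] - 2 * s["b"])   # c = 11, d = -11
    f = dict(s, c=2 * s["a"] + s["b"], d=s["b"] - 2 * s["i"])   # c = -1, d = 1
    s = join(2, t, f)                            # c = 23, d = -23

    print(s["c"] + s["d"] == 0)                  # True  -> first assertion holds
    print(s["c"] == s["a"] + s["i"])             # False -> second assertion refuted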
Correctness of the Random Interpreter R
• Completeness: if e1 = e2, then R ⇒ e1 = e2 (assuming non-deterministic conditionals).
• Soundness: if e1 ≠ e2, then R ⇒ e1 = e2 only with a small error probability, bounded in terms of:
  - j: number of joins
  - d: size of the set from which random values are chosen
  - k: number of points in the sample
• For j = 10, k = 4, and d ≈ 2^32, the error probability is negligible.
Proof Methodology
Proving correctness was the most complicated part in this work. We used the following methodology:
• Design an appropriate deterministic algorithm (need not be efficient).
• Prove (by induction) that the randomized algorithm simulates each step of the deterministic algorithm with high probability.
Random Interpretation: Outline
• Random Interpretation
  - Linear arithmetic (POPL 2003)
  - Uninterpreted functions (POPL 2004)
  - Inter-procedural analysis (POPL 2005)
Problem: Global value numbering
• Goal: detect expression equivalence when program operators are abstracted using "uninterpreted functions".
• Axiom: if x1 = y1 and x2 = y2, then F(x1, x2) = F(y1, y2).
• Concrete program:  a := 5; x := a*b; y := 5*b; z := b*a;
  - Here x = y and x = z hold, but reasoning about multiplication is undecidable.
• Abstraction:  a := 5; x := F(a,b); y := F(5,b); z := F(b,a);
  - Here only x = y holds; reasoning is decidable, but tricky in the presence of joins.
• Applications: compiler optimizations, translation validation.
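The POPL 2004 technique gives uninterpreted functions random *linear* interpretations (as the conclusion slide also notes). The sketch below is a deliberately simplified illustration of that idea, not the scheme from the paper (which is more elaborate, e.g. it works over a finite field and treats nested applications carefully): interpret F(v1, v2) as r1·v1 + r2·v2 for random coefficients and evaluate the abstracted program on a random input.

    import random

    # Simplified sketch of "random linear interpretation" of an uninterpreted
    # binary function F: interpret F(v1, v2) as r1*v1 + r2*v2 for random r1, r2.

    r1, r2 = random.randrange(1, 2**16), random.randrange(1, 2**16)

    def F(v1, v2):
        return r1 * v1 + r2 * v2

    b = random.randrange(1, 2**16)   # random value for the input variable b
    a = 5                            # a := 5
    x = F(a, b)                      # x := F(a, b)
    y = F(5, b)                      # y := F(5, b)
    z = F(b, a)                      # z := F(b, a)

    print(x == y)   # True: the equivalence holds under the abstraction
    print(x == z)   # False w.h.p.: F(a,b) = F(b,a) is not an uninterpreted-function fact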
Random Interpretation: Outline
• Random Interpretation
  - Linear arithmetic (POPL 2003)
  - Uninterpreted functions (POPL 2004)
  - Inter-procedural analysis (POPL 2005)
Example 1, revisited (as a procedure with input i)
• (Program from Example 1 above, ending in the two assertions.)
• The second assertion (c = a + i) is true in the context i = 2.
• Inter-procedural analysis requires computing procedure summaries.
Idea: Keep input variables symbolic
• Do not choose random values for the input variables (so the result can later be instantiated in any context).
• The resulting program state at the end is a random procedure summary.
• On Example 1 (weights w1 = 5, w2 = 2):
  - after the first join:  a = 8 - 4i, b = 5i - 8
  - after the branches of the second conditional:  c = 9i - 16, d = 16 - 9i  and  c = 8 - 3i, d = 3i - 8
  - after the second join:  c = 21i - 40, d = 40 - 21i
  - instantiating the summary at i = 2:  a = 0, b = 2, c = 2, d = -2
• Hence c + d = 0 holds in every context, and c = a + i holds in the context i = 2.
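A rough way to picture such a summary in code: represent every value as a linear expression c0 + c1·i in the symbolic input i and apply the affine join to the coefficients. The coefficient-pair representation and helper names below are mine; the sketch reproduces the summary on the slide and instantiates it at i = 2.

    # Sketch of a "random procedure summary": keep i symbolic by representing
    # every value as a pair (c0, c1) meaning c0 + c1*i.

    def lin(c0, c1=0):  return (c0, c1)            # c0 + c1*i
    def add(u, v):      return (u[0] + v[0], u[1] + v[1])
    def scale(k, u):    return (k * u[0], k * u[1])
    def sub(u, v):      return add(u, scale(-1, v))
    def join(w, u, v):  return add(scale(w, u), scale(1 - w, v))
    def at(u, i):       return u[0] + u[1] * i     # instantiate the summary at i

    I = lin(0, 1)                                  # the symbolic input i

    # First conditional, joined with w1 = 5.
    a = join(5, lin(0), sub(I, lin(2)))            # a = 8 - 4i
    b = join(5, I, lin(2))                         # b = 5i - 8

    # Second conditional, joined with w2 = 2.
    c1, d1 = sub(b, a), sub(I, scale(2, b))        # c = 9i - 16, d = 16 - 9i
    c2, d2 = add(scale(2, a), b), sub(b, scale(2, I))   # c = 8 - 3i, d = 3i - 8
    c, d = join(2, c1, c2), join(2, d1, d2)        # c = 21i - 40, d = 40 - 21i

    print(add(c, d))                 # (0, 0): c + d = 0 holds in every context
    i = 2
    print(at(c, i) == at(a, i) + i)  # True: c = a + i holds in the context i = 2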
Experimental measure of error
The percentage of incorrect relationships decreases with an increase in:
• S = size of the set from which random values are chosen.
• N = number of random summaries used.
The experimental results are better than what is predicted by theory.
(The original slide plotted the error rate against S and N.)
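The trend can be illustrated even on the toy Example 1: draw i and the two join weights from a set of size S, check whether the false relationship c = a + i happens to hold, and report it only if it holds in all N independent samples. The experiment below is my own toy setup, not the benchmark measurement from the talk.

    import random

    # Toy illustration of the error trend on Example 1.
    # For this program, c - (a + i) = 3*w2*(1 - w1)*(2 - i), so a sample is
    # misleading exactly when w2 = 0, w1 = 1, or i = 2.

    def one_sample(S):
        i, w1, w2 = (random.randrange(S) for _ in range(3))
        a = w1 * 0 + (1 - w1) * (i - 2)          # join of a := 0 and a := i - 2
        b = w1 * i + (1 - w1) * 2                # join of b := i and b := 2
        c = w2 * (b - a) + (1 - w2) * (2 * a + b)
        return c == a + i                        # does the false relationship hold?

    def error_rate(S, N, trials=20000):
        wrong = sum(all(one_sample(S) for _ in range(N)) for _ in range(trials))
        return wrong / trials

    for S in (4, 16, 64):
        for N in (1, 2):
            print(f"S = {S:3d}, N = {N}: observed error rate = {error_rate(S, N):.4f}")

The observed error drops roughly like (3/S)^N, matching the slide's point that both a larger value set and more samples reduce the chance of reporting an incorrect relationship.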
Simulated Annealing
Problem: Given a program with pre/post conditions, discover a proof of its validity or invalidity.
• The proof is in the form of an invariant at each program point that can be locally verified.
Key idea:
• Initialize the invariants at all program points to anything.
• Pick a random program point whose invariant is not locally consistent, and update it to make it less inconsistent.
Simulated Annealing: Outline
• Simulated Annealing
  - Inconsistency Measure & Penalty Function
  - Algorithm
  - Experiments
Inconsistency Measure for an Abstract Domain
• Let A be an abstract domain with ⇒ as the partial order and γ as the concretization function.
• An inconsistency measure IM: A × A → [0, 1] satisfies:
  - IM(φ1, φ2) = 0 iff φ1 ⇒ φ2
  - IM is monotonically decreasing in its first argument.
  - IM is monotonically increasing in its second argument.
• IM is a monotonically increasing measure of γ(φ1) \ γ(φ2), the set of states that violate φ1 ⇒ φ2.
• The more strictly monotonic IM is, the smoother it is.
Example of a Smooth Inconsistency Measure
• Let A be the abstract domain of Boolean formulas (with the usual implication as the partial order).
• Let φ1 ≡ a1 ∨ … ∨ an be in DNF and φ2 ≡ b1 ∧ … ∧ bm be in CNF.
• IM(φ1, φ2) = ( Σi,j IM(ai, bj) ) / (n·m), where
  IM(ai, bj) = 0 if ai ⇒ bj, and 1 otherwise.
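A small Python sketch of this measure, with conjuncts and clauses represented as sets of literals; the normalization by n·m is an assumption made here so that the result stays in [0, 1]:

    # Sketch of the smooth inconsistency measure on Boolean formulas.
    # phi1 is in DNF (list of conjuncts, each a set of literals), phi2 is in CNF
    # (list of clauses, each a set of literals); a literal is a string like "x" or "!x".

    def implies(conjunct, clause):
        # A satisfiable conjunction of literals implies a clause iff they share a literal.
        return bool(conjunct & clause)

    def IM(phi1_dnf, phi2_cnf):
        violations = sum(0 if implies(a, b) else 1
                         for a in phi1_dnf for b in phi2_cnf)
        return violations / (len(phi1_dnf) * len(phi2_cnf))   # normalization: my assumption

    phi1 = [{"x", "y"}, {"x", "!y"}]        # (x & y) | (x & !y)
    phi2 = [{"x"}, {"y", "!y"}]             # x & (y | !y)

    print(IM(phi1, phi2))                   # 0.0 : phi1 => phi2
    print(IM(phi1, [{"y"}]))                # 0.5 : one of the two disjuncts violates y

The key property for the search is the 0.5 in the last line: the measure reports "half wrong" rather than just "wrong", giving the local updates a gradient to follow.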
Penalty Function
• Penalty(I, π) is a measure of how inconsistent I is with respect to the invariants at the neighbors of program point π.
• Penalty(I, π) = IM(Post(π), I) + IM(I, Pre(π))
  - Post(π) is the strongest postcondition, at π, of the invariants at the predecessors of π.
  - Pre(π) is the weakest precondition, at π, of the invariants at the successors of π.
Example of the Penalty Function
• Consider a program point π2 with invariant I, a predecessor π1 with invariant P reached via statement s, and successors π3 and π4 with invariants Q (on the branch where c holds) and R (on the branch where ¬c holds).
• Penalty(I, π2) = IM(Post(π2), I) + IM(I, Pre(π2))
  - Post(π2) = StrongestPost(P, s)
  - Pre(π2) = (c ⇒ Q) ∧ (¬c ⇒ R)
• Since Post(π) and Pre(π) may not belong to A, we define:
  - IM(Post(π), I) = Min { IM(I1, I) | I1 ∈ A, I1 overapproximates Post(π) }
  - IM(I, Pre(π)) = Min { IM(I, I2) | I2 ∈ A, I2 underapproximates Pre(π) }
Simulated Annealing: Outline
• Simulated Annealing
  - Inconsistency Measure & Penalty Function
  - Algorithm
  - Experiments
Algorithm
• Search for a proof of validity and a proof of invalidity in parallel.
• Same algorithm, with different boundary conditions:
  - Proof of validity:  I_entry = Pre,  I_exit = Post
  - Proof of invalidity:  I_entry ∧ Pre is satisfiable,  I_exit = ¬Post
    (This assumes that the program terminates on all inputs.)
Algorithm (Continued)
• Initialize the invariant Ij at each program point j to anything.
• While the penalty at some program point is not 0:
  - Choose j randomly such that Penalty(Ij, j) ≠ 0.
  - Update Ij so that Penalty(Ij, j) is minimized.
  - More precisely, the new Ij is chosen randomly, with probability inversely proportional to Penalty(Ij, j).
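The sketch below runs this loop end to end in a deliberately tiny setting of my own choosing: one variable x over {0,...,4}, the powerset of that set as the abstract domain, IM(S1, S2) = |S1 \ S2| / 5, and the program "x := 0; while (x < 3) x := x + 1; assert (x = 3)", with invariants kept at the loop head (p1) and at the end of the loop body (p2). The +0.01 smoothing keeps the "inversely proportional" weights finite and is not from the talk.

    import itertools
    import random

    V = frozenset(range(5))
    I_exit = frozenset({3})                                   # postcondition x = 3

    def IM(s1, s2):
        return len(s1 - s2) / len(V)                          # 0 iff s1 is a subset of s2

    # Per-edge strongest-postcondition / weakest-precondition transfer functions.
    def sp_body(s):  return frozenset(x + 1 for x in s if x < 3)        # assume x<3; x := x+1
    def wp_body(s):  return frozenset(x for x in V if x >= 3 or x + 1 in s)
    def wp_exit(s):  return frozenset(x for x in V if x < 3 or x in s)  # assume x>=3

    def post(p, inv):   # join over incoming edges ({0} is sp of "x := 0" from the entry)
        return frozenset({0}) | inv["p2"] if p == "p1" else sp_body(inv["p1"])

    def pre(p, inv):    # meet over outgoing edges
        return wp_body(inv["p2"]) & wp_exit(I_exit) if p == "p1" else inv["p1"]

    def penalty(I, p, inv):
        return IM(post(p, inv), I) + IM(I, pre(p, inv))

    candidates = [frozenset(c) for k in range(len(V) + 1)
                  for c in itertools.combinations(V, k)]      # all 32 abstract values

    inv = {"p1": random.choice(candidates), "p2": random.choice(candidates)}

    for _ in range(10000):
        bad = [p for p in inv if penalty(inv[p], p, inv) > 0]
        if not bad:
            break                                             # locally consistent everywhere
        p = random.choice(bad)                                # random inconsistent point
        weights = [1.0 / (penalty(c, p, inv) + 0.01) for c in candidates]
        inv[p] = random.choices(candidates, weights=weights)[0]

    ok = all(penalty(inv[p], p, inv) == 0 for p in inv)
    print(ok, sorted(inv["p1"]), sorted(inv["p2"]))   # e.g. True [0, 1, 2, 3] [1, 2, 3]

When the loop exits with every penalty at 0, the two sets form an inductive invariant that establishes the assertion (for instance {0, 1, 2, 3} at the loop head).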
Interesting Aspects of the Algorithm
• Combination of forward & backward analysis:
  - No distinction between forward & backward information.
• Random choices:
  - Which program point to update.
  - Which invariant to choose.
Simulated Annealing: Outline
• Simulated Annealing
  - Inconsistency Measure & Penalty Function
  - Algorithm
  - Experiments
Example 2: Proof of Validity
• Precondition: x = 0
• Program:

    y := 50;
    while (x < 100) {
      if (x < 50)  x := x + 1;
      else         { x := x + 1; y := y + 1; }
    }

• Postcondition: y = 100
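A proof of validity here amounts to a loop invariant at the head of the while loop. The candidate invariant below is my own guess at a suitable one (the extracted slide does not show the invariant the search actually finds); the sketch merely checks it dynamically along the single execution from x = 0, which is evidence rather than a proof.

    # Dynamic sanity check of one candidate loop invariant for Example 2:
    #     Inv(x, y)  =  (0 <= x < 50 and y == 50)  or  (50 <= x <= 100 and y == x)
    # The invariant is an assumption of this sketch, not taken from the talk.

    def Inv(x, y):
        return (0 <= x < 50 and y == 50) or (50 <= x <= 100 and y == x)

    x = 0                      # precondition: x = 0
    y = 50                     # y := 50
    assert Inv(x, y)           # invariant holds on loop entry
    while x < 100:
        if x < 50:
            x = x + 1
        else:
            x = x + 1
            y = y + 1
        assert Inv(x, y)       # invariant is preserved by every iteration
    assert y == 100            # postcondition follows from Inv and not(x < 100)
    print("invariant held on this run; exit state:", x, y)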
Stats: Proof vs. Incremental Proof of Validity
• (Bar chart) Black: proof of validity; grey: incremental proof of validity.
• An incremental proof requires fewer updates.
Stats: Different Sizes of Boolean Formulas
• (Bar chart) Grey: 5*3, black: 4*3, white: 3*2, where n*m denotes n conjuncts and m disjuncts.
• Larger sizes require fewer updates.
Example 3: Proof of Validity
• Precondition: true
• Program:

    x := 0; m := 0;
    while (x < n) {
      if (*)  m := x;
      x := x + 1;
    }

• Postcondition: n ≤ 0 ∨ 0 ≤ m < n
Stats: Proof of Validity
• Example 2 is "easier" than Example 1.
• The easier example requires fewer updates.
Example 2 with Modified Precondition: Proof of Invalidity
• Precondition: true (instead of x = 0)
• Program: the same as in Example 2:

    y := 50;
    while (x < 100) {
      if (x < 50)  x := x + 1;
      else         { x := x + 1; y := y + 1; }
    }

• Postcondition: y = 100
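With the precondition weakened to true, the triple is no longer valid. The algorithm would establish this with invariants satisfying "I_entry ∧ Pre satisfiable" and "I_exit = ¬Post"; the quick check below only exhibits concrete counterexample runs, which is weaker than the proof object the algorithm constructs but shows why invalidity is the right verdict.

    # Concrete witnesses of invalidity for the modified triple {true} program {y = 100}:
    # run the program from a few initial values of x and report violations of the post.

    def run(x):
        y = 50
        while x < 100:
            if x < 50:
                x = x + 1
            else:
                x = x + 1
                y = y + 1
        return y

    for x0 in [0, 42, 99, 100, 250, -7]:
        y = run(x0)
        status = "holds" if y == 100 else "fails"
        print(f"x0 = {x0:4d}: final y = {y:3d} -> postcondition y = 100 {status}")

Initial values up to 50 still satisfy the postcondition, but any initial x above 50 (e.g. 99, 100, 250) ends with y ≠ 100, so the weakened triple is invalid.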
Conclusion
• Summary
  - Random Interpretation:
    • Linear arithmetic: affine joins
    • Uninterpreted functions: random linear interpretations
    • Interprocedural analysis: symbolic input variables
  - Simulated Annealing:
    • Smooth inconsistency measure for an abstract domain
• Lessons learned
  - Randomization buys efficiency and simplicity.
  - Randomization suggests ideas for deterministic algorithms.
  - Combining randomized and symbolic techniques is powerful.