
Topics in OO, Design Patterns, Reasoning About Program Behavior ... (part 3)

Learn about Guarded Commands (GC), a notation by Dijkstra used in reasoning systems, with examples on selection, repetition, non-determinism, fairness, and reasoning rules. Explore implementation issues and reasoning system concepts in a simple manner.




Presentation Transcript


  1. Topics in OO, Design Patterns, Reasoning About Program Behavior ...(part 3) Neelam Soundarajan Computer Sc. & Eng. e-mail: neelam@cse

  2. Guarded Commands
GC is a simple notation introduced by Dijkstra; widely used in reasoning-related work, especially in distributed systems.
1. skip
2. Assignment: x := e
3. Sequential Composition: S1; S2

  3. Guarded Commands (contd.)
4. Selection: [ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] where b1, ..., bn are boolean expressions (“guards”).
To execute: pick any guard bi that evaluates to true and execute the corresponding Si; if no guard evaluates to true, abort!
Example: set z to min(x, y): [ x ≤ y → z := x │ y ≤ x → z := y ]
The following may abort (no guard is true when x = y): [ x < y → z := x │ y < x → z := y ]
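A minimal executable sketch of the selection construct in Go; the guard type, the random choice among enabled guards, and the concrete values are illustrative assumptions, not part of the slides:

package main

import (
	"fmt"
	"math/rand"
)

// guard pairs a boolean condition with the command to run if it is chosen.
type guard struct {
	cond func() bool
	cmd  func()
}

// selection mimics [ b1 -> S1 | ... | bn -> Sn ]: it picks one enabled guard
// at random and runs its command, and "aborts" (panics) if no guard is true.
func selection(gs []guard) {
	var enabled []guard
	for _, g := range gs {
		if g.cond() {
			enabled = append(enabled, g)
		}
	}
	if len(enabled) == 0 {
		panic("selection aborts: no guard evaluates to true")
	}
	enabled[rand.Intn(len(enabled))].cmd()
}

func main() {
	x, y, z := 3, 3, 0
	// [ x <= y -> z := x | y <= x -> z := y ]
	selection([]guard{
		{func() bool { return x <= y }, func() { z = x }},
		{func() bool { return y <= x }, func() { z = y }},
	})
	fmt.Println("min =", z) // both guards are enabled when x == y; either choice gives 3
}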

  4. Guarded Commands (contd.)
5. Repetition: *[ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ]
To execute: pick any bi that evaluates to true and execute the corresponding Si; repeat until, after some number of iterations, all guards evaluate to false.
Example: sort x1, x2, x3, x4:
*[ x1 > x2 → t := x1; x1 := x2; x2 := t;
 │ x2 > x3 → t := x2; x2 := x3; x3 := t;
 │ x3 > x4 → t := x3; x3 := x4; x4 := t; ]
A loop can run forever but cannot abort (or can it?)
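The same idea run as code: a Go sketch of the repetition construct on the four-element sort, again assuming a random choice among enabled guards (the slice representation of x1..x4 is an illustrative liberty):

package main

import (
	"fmt"
	"math/rand"
)

// Repeatedly pick, at random, one out-of-order adjacent pair (an enabled
// guard) and swap it; stop only when every guard (every comparison) is false.
func main() {
	x := []int{4, 1, 3, 2}
	for {
		var enabled []int // indices i for which the guard x[i] > x[i+1] holds
		for i := 0; i+1 < len(x); i++ {
			if x[i] > x[i+1] {
				enabled = append(enabled, i)
			}
		}
		if len(enabled) == 0 {
			break // all guards false: the loop terminates with x sorted
		}
		i := enabled[rand.Intn(len(enabled))]
		x[i], x[i+1] = x[i+1], x[i] // body of the chosen guarded command
	}
	fmt.Println(x) // [1 2 3 4]
}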

  5. Guarded Commands (contd.)
Why non-determinism?
Ans: Programs are goal-directed; if bi is satisfied, Si will achieve the goal; in that case, no reason to worry about bj.
Or: If two actions can achieve a goal, there is no reason to prefer one over the other.
Also: One can always write a deterministic program.
Most important: Non-determinism arises naturally. Dijkstra presents a whole series of algorithms to solve a range of problems; most of them exhibit non-determinism in a very natural manner.

  6. GC: Implementation Issues
Possible implementations (choice of guards): in-order, round robin, coin-toss?
Example:
y := 0; x := 0;
*[ y==0 → x := x + 1 │ y==0 → y := 1 ]
Does this program necessarily terminate?
What about:
y := random(); x := 0;
*[ even(y) → x := x + 1; y := random(); ]
What is the final value of x (if it terminates)?
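A Go sketch contrasting two of these implementation choices on the first example (the step bound and the helper name are illustrative assumptions): an in-order scheduler that always takes the first enabled guard never terminates here, while a coin-toss scheduler terminates with probability 1.

package main

import (
	"fmt"
	"math/rand"
)

// run executes  y := 0; x := 0; *[ y==0 -> x := x+1 | y==0 -> y := 1 ]
// under either an in-order or a coin-toss choice of enabled guards,
// giving up after maxSteps iterations.
func run(pickFirst bool, maxSteps int) (x int, terminated bool) {
	y := 0
	for steps := 0; steps < maxSteps; steps++ {
		if y != 0 {
			return x, true // both guards false: the loop exits
		}
		choice := 0
		if !pickFirst {
			choice = rand.Intn(2) // coin toss between the two enabled guards
		}
		if choice == 0 {
			x = x + 1
		} else {
			y = 1
		}
	}
	return x, false // still running after maxSteps iterations
}

func main() {
	x1, t1 := run(true, 1000)
	x2, t2 := run(false, 1000)
	fmt.Println("in-order:", x1, t1)  // in-order: 1000 false
	fmt.Println("coin-toss:", x2, t2) // terminates quickly; x2 counts the heads before the first tail
}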

  7. GC: Implementation Issues
Fairness (intuitive): No guard should be ignored forever.
Problem: What if the guard is enabled only in every other iteration? Can it then be ignored forever?
Bounded vs. unbounded fairness.
Unbounded fairness leads to problems in the theory. We will assume no fairness.
(But see work on UNITY. Fairness plays a fundamental role in UNITY.)

  8. GC: Reasoning System
(This is a continuation of material from CSE 755.)
Notations: Partial correctness: {p} S {q}; Total correctness: <p| S |q>
A partial correctness system consists of an axiom/rule corresponding to each command in the language.
skip axiom: {p} skip {p}
Assignment: {p[x/e]} x := e {p}
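For instance, to establish the post-condition x > 0 for x := x+1, substitute x+1 for x in it: the assignment axiom gives {x+1 > 0} x := x+1 {x > 0}.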

  9. GC: Reasoning System (contd.)
Sequential Composition:
{p} S1 {q}, {q} S2 {r}
-----------------------
{p} S1; S2 {r}
Consequence (“logical rule”):
p → p’, {p’} S {q’}, q’ → q
-----------------------------
{p} S {q}
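For instance, the assignment axiom gives {x ≥ 0} x := x+1 {x ≥ 1} and {x ≥ 1} x := x+2 {x ≥ 3}; composition then yields {x ≥ 0} x := x+1; x := x+2 {x ≥ 3}, and the rule of consequence gives, e.g., {x = 0} x := x+1; x := x+2 {x > 0}, since x = 0 → x ≥ 0 and x ≥ 3 → x > 0.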

  10. GC: Reasoning System (contd.)
Important intuition: {p} S {q} means: [diagram: starting from any state in P, executing S (if it terminates) ends in a state in Q]
What does <p| S |q> mean? What does the rule of consequence mean? What do consistency/completeness mean?

  11. GC: Reasoning System (contd.)
Selection:
{p && b1} S1 {q}, {p && b2} S2 {q}, ..., {p && bn} Sn {q}
------------------------------------------------------------
{p} [ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] {q}
Repetition:
{p && b1} S1 {p}, {p && b2} S2 {p}, ..., {p && bn} Sn {p}
------------------------------------------------------------
{p} *[ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] {p && ¬b1 && ¬b2 && ... && ¬bn}
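For instance, for the min program take q to be z = min(x, y) and p to be true: {true && x ≤ y} z := x {z = min(x, y)} and {true && y ≤ x} z := y {z = min(x, y)} both follow from the assignment axiom plus consequence (since x ≤ y → x = min(x, y)), so the selection rule gives {true} [ x ≤ y → z := x │ y ≤ x → z := y ] {z = min(x, y)}.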

  12. Notation/Terminology
Operational Model: For each S: SM = {(σ, σ’) | execution of S starting in state σ can lead to final state σ’ }
Valid/Operationally valid: ╞M {p} S {q} if [ (σ, σ’) \in SM && σ \in P ] → [ σ’ \in Q ] (P, Q being the sets of states satisfying p, q)
Important pitfalls:
1. [ ╞M {p} S {q} && σ \in P && σ’ \in Q ] does not imply that starting in σ and executing S can get you to σ’.
2. {p} S {q} is a partial correctness notation: it says nothing about termination.
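For pitfall 1: ╞M {true} x := 0 {x ≥ 0} holds, and a state σ’ with x = 5 is in Q, but no execution of x := 0 ends in that σ’; validity only says every reachable final state lies in Q, not that every state in Q is reachable.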

  13. GC: Reasoning System (contd.)
Important intuition: {p} S {q} means: [diagram: starting from any state in P, executing S (if it terminates) ends in a state in Q]
What does <p| S |q> mean?

  14. Soundness/Completeness of Reasoning Systems
Soundness (also “Consistency”): Reasoning system R is sound with respect to a given operational model M if every result derivable using R is valid in the model M.
Completeness: R is complete with respect to M if every result valid in M is derivable using R.
├R {p} S {q} → ╞M {p} S {q} (soundness)
╞M {p} S {q} → ├R {p} S {q} (completeness)

  15. Soundness/Completeness (contd.)
• Soundness in math logic is different; (note: there is no such thing as ¬{p} S {q}; try it!)
• Completeness is somewhat similar;
• Gödel’s incompleteness from math logic has an impact on completeness of our system, but we will work around it: relative completeness.

  16. Soundness
Approach:
1. Define operational model M: for each S, define the set SM of pairs (σ, σ’) such that execution of S starting in σ can lead to σ’;
2. Consider each axiom A in R. Argue that if {p} S {q} is derivable using A, and σ \in P and (σ, σ’) \in SM, then σ’ \in Q.
3. For rules: ...
(Example of an unsound axiom?)
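For instance, an “assignment axiom” of the form {p} x := e {p} (forgetting the substitution) would be unsound: it derives {x = 0} x := 1 {x = 0}, which is not valid in M since the only reachable final state has x = 1.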

  17. Operational Model (Defn.)
• skip = {(σ, σ) | σ \in Σ }
• x := e = {(σ, σ’) | σ \in Σ && σ’ = σ[x/e(σ)] }
• S1; S2 = {(σ, σ’’) | \exists σ’. [(σ, σ’) \in S1 && (σ’, σ’’) \in S2] }
• [ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] =
  {(σ, σ’) | [b1(σ) && (σ, σ’) \in S1] OR ... OR [bn(σ) && (σ, σ’) \in Sn] }
• *[ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] =
  {(σ, σ’) | \exists σ0, ..., σm. [ (σ = σ0) && (σm = σ’) && ¬b1(σ’) && ... && ¬bn(σ’) &&
    \forall k<m. [\exists k’. bk’(σk) && (σk, σk+1) \in Sk’] ] }

  18. Proof of Soundness
• Need to show: ├R {p} S {q} → ╞M {p} S {q}, i.e.:
  ├R {p} S {q} → [ [ (σ \in P) && ((σ, σ’) \in SM) ] → σ’ \in Q ]
• Rule of consequence:
p → p’, {p’} S {q’}, q’ → q
-----------------------------
{p} S {q}
Suppose σ \in P; hence σ \in P’ (why?); suppose (σ, σ’) \in SM; hence σ’ \in Q’ (why?); hence σ’ \in Q (why?); hence the rule is sound.

  19. Proof of Soundness (contd.)
Consider selection:
{p && b1} S1 {q}, {p && b2} S2 {q}, ..., {p && bn} Sn {q}
------------------------------------------------------------
{p} [ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] {q}
Suppose σ \in P; suppose (σ, σ’) \in SM; then there must be a k such that bk(σ) && (σ, σ’) \in SkM (why?); hence σ’ \in Q (why?). Hence the rule is sound.
Others are similar. (Except the assignment axiom: a bit messy!)

  20. Completeness
Definitions:
1. sp(p, S): strongest post-cond. corresponding to p, S;
2. wp(q, S): weakest pre-cond. corresponding to q, S (sometimes called weakest liberal pre-cond.).
Approach:
1. For each S, define sp(p, S) (based on SM).
2. For each S, show {p} S {sp(p, S)} is derivable using the axiom/rule for S.
3. In doing (2), assume completeness of the rest of the system.
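For example, sp(x = 1, x := x+1) is x = 2, while wp(x > 0, x := x+1) is x+1 > 0, i.e. x ≥ 0 over the integers.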

  21. GC: Reasoning System (contd.)
What does sp(p, S) mean? What does wp(q, S) mean? [diagram: P, S, Q as on slide 10]

  22. Completeness (contd.)
Question: If we show {p} S {sp(p,S)} is derivable, how does that show completeness? How about {p} S {q} where q is not sp(p,S)?
Answer: Rule of consequence.
Question: What about Gödel incompleteness?
Ans: Rule of consequence!

  23. Strongest Post-Conditions
• sp(p, skip) = {σ’ | \exists σ. [(σ \in P) && (σ, σ’) \in skip] } = p
• sp(p, x := e) = {σ’ | \exists σ. [(σ \in P) && σ’ = σ[x/e(σ)]] }
• sp(p, S1; S2) = {σ’’ | \exists σ, σ’. [(σ \in P) && (σ, σ’) \in S1 && (σ’, σ’’) \in S2] }
• sp(p, [ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ]) =
  {σ’ | \exists σ, k. [(σ \in P) && bk(σ) && (σ, σ’) \in Sk] }
• sp(p, *[ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ]) =
  {σ’ | \exists σ0, ..., σm. [ (σ0 \in P) && (σm = σ’) && ¬b1(σ’) && ... && ¬bn(σ’) &&
    \forall k<m. [\exists k’. bk’(σk) && (σk, σk+1) \in Sk’] ] }
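A toy, finite-state sketch of these definitions in Go: a statement is modeled as a set of (pre-state, post-state) pairs, sp(p, S) is the image of P under that set, and S1; S2 is relational composition. The tiny state space and the names are illustrative only.

package main

import "fmt"

type state = int
type pair struct{ from, to state }

// sp returns the image of the set P under the relation S.
func sp(P map[state]bool, S []pair) map[state]bool {
	Q := map[state]bool{}
	for _, pr := range S {
		if P[pr.from] {
			Q[pr.to] = true // σ' is reachable from some σ in P
		}
	}
	return Q
}

// compose builds the relation for S1; S2: (σ, σ'') with a witness σ'.
func compose(S1, S2 []pair) []pair {
	var S []pair
	for _, a := range S1 {
		for _, b := range S2 {
			if a.to == b.from {
				S = append(S, pair{a.from, b.to})
			}
		}
	}
	return S
}

func main() {
	inc := []pair{{0, 1}, {1, 2}, {2, 3}}  // "x := x+1" on the states 0..3
	P := map[state]bool{0: true, 1: true}  // p: x is 0 or 1
	fmt.Println(sp(P, inc))                // map[1:true 2:true]
	fmt.Println(sp(P, compose(inc, inc)))  // map[2:true 3:true]
}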

  24. Completeness (contd.)
Consider Sequential Composition:
{p} S1 {q}, {q} S2 {r}
-----------------------
{p} S1; S2 {r}
Need to show {p} S1; S2 {sp(p, S1;S2)} is derivable.
Claim: sp(p, S1;S2) = sp(sp(p,S1), S2). Proof: follows from the definition of S1;S2.
Hence take q to be sp(p, S1) and r to be sp(q, S2). Others are similar.

  25. Completeness (contd.)
Repetition: Introduce a new statement:
**[ b1 → S1 │ ... │ bn → Sn ] = {(σ, σ’) | \exists σ0, ..., σm. [ (σ = σ0) && (σm = σ’) &&
  \forall k<m. [\exists k’. bk’(σk) && (σk, σk+1) \in Sk’] ] }
Claim: *[ b1 → S1 │ ... │ bn → Sn ] =
  {(σ, σ’) | (σ, σ’) \in **[ b1 → S1 │ ... │ bn → Sn ] && ¬b1(σ’) && ... && ¬bn(σ’) }

  26. Completeness (contd.)
Rule for **:
{p && b1} S1 {p}, {p && b2} S2 {p}, ..., {p && bn} Sn {p}
------------------------------------------------------------
{p} **[ b1 → S1 │ b2 → S2 │ ... │ bn → Sn ] {p}
Need to show {q} **[...] {sp(q, **[...])} is derivable.
Loop invariant p: {σ’ | \exists σ0, ..., σm. [ (σ0 \in Q) && (σm = σ’) &&
  \forall k<m. [\exists k’. bk’(σk) && (σk, σk+1) \in Sk’] ] }
Claims: p = sp(q, **[...]); {q} **[...] {p} is derivable from the rule.
Hence the rule for repetition is complete.

  27. Alternative Approach Dijkstra: • “Weakest pre-condition” (for total correctness) for guarded commands; • Has general “healthiness” rules (applicable to all WP definitions); • Main problem: Can’t ignore things you are not interested in;

  28. Communicating Seq. Processes (CSP)
CSP (Hoare, CACM, 1978):
• Based on GC;
• A program consists of a bunch of communicating processes: [P1 // P2 // ... // Pn]
• Each Pi has its own set of variables; no shared variables;
• Only means of interaction between processes: communication;
• No special synchronization mechanisms;

  29. CSP (contd.)
The statements in a process Pi:
• skip;
• Assignment: x := e (all variables local to Pi);
• Sequential Composition: S1; S2
• Output: Pj!e : evaluate e, send the value to Pj (e must contain only local variables of Pi);
• Input: Pj?x : receive (wait for) a value from Pj, assign it to x, then proceed.
• Input/output are executed synchronously (no other synch. mechanisms).

  30. CSP (contd.)
• Selection: [ g1 → S1 | g2 → S2 | ... | gm → Sm ]
Three types of guards:
a. Purely boolean, e.g.: x == y
b. Input guard: b; Pj?x (where b is a boolean exp)
c. Output guard: b; Pj!e (where b is a boolean exp)
• To execute: pick a guard gk that “succeeds”, execute the i/o portion (if any), and then execute Sk. If no guard succeeds, wait until one or more does.
• Original CSP: only input guards.
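Go’s select statement is a close analogue of this selection over i/o guards: it commits to whichever communication can proceed and allows both send and receive cases (a purely boolean part can be simulated by disabling a case via a nil channel). A sketch, with a hypothetical stand-in peer that first offers to receive; the channel names and values are assumptions for illustration:

package main

import "fmt"

// Selection with one input guard and one output guard, viewed from P1:
//   [ P2?x -> P2!1  |  P2!2 -> P2?x ]
func main() {
	toP2 := make(chan int)
	fromP2 := make(chan int)
	done := make(chan struct{})

	go func() { // stand-in for P2 :: P1?x; P1!3
		x := <-toP2
		fromP2 <- 3
		fmt.Println("P2 received", x)
		close(done)
	}()

	var x int
	select {
	case x = <-fromP2: // input guard P2?x succeeds
		toP2 <- 1
		fmt.Println("took the input guard, x =", x)
	case toP2 <- 2: // output guard P2!2 succeeds
		x = <-fromP2
		fmt.Println("took the output guard, x =", x)
	}
	<-done // keep main alive until the peer has finished
}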

  31. CSP (contd.)
Examples: [P1 // P2] where:
(1) P1 :: P2?x; P2?y; u := x+y; P2!u
    P2 :: P1!3; P1!4; P1?z    (see the Go sketch below)
(2) P1 :: [P2?x → P2!1 | P2!2 → P2?x]
    P2 :: P1?x; P1!3
(3) P1 :: as in (2)
    P2 :: [P1?z → P1!3 | P1!4 → P1?z]
(4) P1 :: P2?x; [x==3; P2?y → P2!1 | x==4; P2!2 → P2?x]
    P2 :: [true → P1!3 | true → P1!4]; [P1?z → P1!5 | P1!6 → P1?z]
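Example (1) can be run almost literally with Go goroutines; unbuffered channels give the same synchronous rendezvous as CSP’s Pj!e / Pj?x (the two named channels, one per direction, are an encoding choice, not part of CSP):

package main

import (
	"fmt"
	"sync"
)

func main() {
	p2toP1 := make(chan int) // carries P2!… / P1?… messages
	p1toP2 := make(chan int) // carries P1!… / P2?… messages
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // P1 :: P2?x; P2?y; u := x+y; P2!u
		defer wg.Done()
		x := <-p2toP1
		y := <-p2toP1
		u := x + y
		p1toP2 <- u
	}()

	go func() { // P2 :: P1!3; P1!4; P1?z
		defer wg.Done()
		p2toP1 <- 3
		p2toP1 <- 4
		z := <-p1toP2
		fmt.Println("z =", z) // 7
	}()

	wg.Wait()
}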

  32. CSP (contd.)
• P1 :: [true → P2?x; P2!1 | true → P2!4; P2?z]
  P2 :: P1?x; P1!3; // may deadlock
• P1 :: as above
  P2 :: [P1?z → P1!3 | P1!4 → P1?z] // can’t deadlock
• P1 :: as above
  P2 :: [true → P1?z; P1!3 | true → P1!4; P1?z] // ??
• [P1 // P2 // P3] where:
  P1 :: [P2!1 → P3?x | P2?x → P3!6 ]
  P2 :: [P1?y → P3!2 | P1!3 → P3?y ]
  P3 :: [P2?z → P1!5 | P2!4 → P1?z ]

  33. CSP (contd.)
• Repetition: *[ g1 → S1 | g2 → S2 | ... | gm → Sm ]
• To execute: pick any guard gk that “succeeds”, execute its i/o portion (if any), and then execute Sk.
• Repeat until all guards fail.
• But ... what does “fail” mean? Need to avoid race conditions.
• A boolean guard fails if it evaluates to false; an i/o guard fails if its boolean evaluates to false, or if the other process has terminated.

  34. CSP (contd.)
• P1 :: P2!1; P2!3; P2!4;
  P2 :: y := 0; *[ P1?z → y := y+z; ]    (sketched in Go below)
• P1 :: x := 1; *[ x<10; P2!x → x := x+1; ]
  P2 :: as above
• P1 :: x := 1; *[ P2!x → x := x+1; ]
  P2 :: as above
• P1 :: as above
  P2 :: u := 1; y := 0; *[ u<10; P1?z → u := u+1; y := y+z; ]
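A Go sketch of the first example on this slide: closing the channel when P1 ends plays the role of the input guard “failing because P1 has terminated”, which is what lets P2’s repetition exit. The single shared channel is an encoding assumption.

package main

import "fmt"

func main() {
	ch := make(chan int)

	go func() { // P1 :: P2!1; P2!3; P2!4
		ch <- 1
		ch <- 3
		ch <- 4
		close(ch) // P1 terminates; P2's guard P1?z now "fails"
	}()

	// P2 :: y := 0; *[ P1?z -> y := y+z ]
	y := 0
	for z := range ch { // runs while P1?z can still succeed
		y += z
	}
	fmt.Println("y =", y) // 8
}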

  35. CSP (contd.) Implementation questions: • How to implement i/o guards? (The original CSP paper had only input guards.) • How to implement distributed termination? • Fairness-related ... • Variations on CSP: a guard fails if it is not currently ready to go; ... • See papers on the course web site.

  36. Three Approaches to Reasoning about CSP Programs
Apt, Francez, deRoever’s approach (based on Owicki/Gries):
• When reasoning about each process, allow assumptions about the behavior of other processes, and check these assumptions during parallel composition.
• Input axiom: { p } P2?x { q } (!)
• Output axiom: { p } P2!e { q } (!!)
• Parallel composition:
{p1} P1 {q1}, {p2} P2 {q2}, the proofs of the above cooperate with each other
------------------------------------------------------------------------------
{p1 && p2} [P1 // P2] {q1 && q2}

  37. AFd Approach to CSP Reasoning
Example: P1 :: P2?x; P2!(x+10)    P2 :: P1!10; P1?z
Can derive: {true} [P1//P2] {(x=10) && (z=20)}
Problem: P1 :: P2?x; P2?y    P2 :: P1!10; P1!20
Can’t derive: {true} [P1 // P2] {(x=10) && (y=20)}
Solution: Auxiliary variables (and invariants). The aux. var. (and inv.) can be used to keep track of which output commands match which input commands.
Detail: Need to introduce bracketed sections inside which the invariant may not hold.

  38. AFd Approach (contd.)
Questions:
• What would happen if we omit the cooperation requirement from the par. comp. rule?
• Is the AFd system complete? How do we show this? (Distributed termination of loops causes some tricky problems; but I don’t recall the details ...)
Paper: By Apt, Francez, deRoever (see course site).

  39. Trace-Based Approach • Consider Pi in isolation without making assumptions about Pj. • Associate a communication trace hi with each Pi;record all of Pi’s communications on hi. • During parallel composition, require that communications between Pi and Pj, as recorded in hi, are consistent with the corresponding record in hj. (Details: See my paper on the course web site.)

  40. Trace-Based Approach (contd.)
Input axiom: Pj?x ≡ ( hi := hi + (Pj, Pi, k); x := k ) for some k;
{ \forall k. q[x/k, hi/hi+(Pj,Pi,k)] } Pj?x { q }
Output axiom: Pj!e ≡ hi := hi + (Pi, Pj, e);
{ q[hi/hi+(Pi,Pj,e)] } Pj!e { q }
Parallel composition:
{ pi && (hi = ε) } Pi { qi }, i = 1, ..., n
-------------------------------------------------------------------
{ p1 && ... && pn } [P1 // ... // Pn] { q1 && ... && qn && Consistent(h1, ..., hn) }

  41. Trace-Based Approach (contd.)
Example: P1 :: P2?x; P2!(x+10)    P2 :: P1!10; P1?z
How to derive: {true} [P1//P2] {(x=10) && (z=20)} ?
Problem: P1 :: P2?x; P2?y    P2 :: P1!10; P1!20
How to derive: {true} [P1 // P2] {(x=10) && (y=20)} ?
Auxiliary variables ...? Invariant ...?

  42. Trace-Based Approach (contd.)
Selection: (gi ≡ bi; ci where ci is skip or Pj?x or Pj!e)
{ p && bi } ci; Si { q }, i = 1, ..., n
--------------------------------------------
{ p } [ g1 → S1 | ... | gn → Sn ] { q }
But this means: (bi; ci → Si) ≡ (bi → ci; Si) ... ?
Problem with consistency/completeness? The operational model is very different from the axiomatic view.
Solution: Define a second operational model M’ that is similar in spirit to the axiomatic semantics; show consistency/completeness with respect to M’; show equivalence of M’ and the standard model.

  43. Trace-Based Approach (contd.)
Problem: Can’t handle:
P1 :: *[ P2?x → skip; ]    P2 :: *[ P1!0 → skip; ]
Can’t derive: { true } [P1//P2] { false }
Problem: Also can’t handle:
P1 :: n1 := 0; *[ P2?x → n1 := n1+1; ] n2 := 0; *[ P2?x → n2 := n2+1; ]
P2 :: P1!1; P1!2
Can’t derive: { true } [P1//P2] { (n1 = 2) && (n2 = 0) }

  44. Solution:
• Record, in h1, observation by P1 of P2’s termination.
• Redefine the Consistent() relation appropriately.
{h1 = ε} P1 {\exists k, k’. [ h1[k] = h1[k’] = (P1, {P2}, σ) && \forall k” ≠ k, k’. [ h1[k”] = (P2, P1, ..) ] && (n1 = k-1) && (n2 = k’-k-1) ] }
{“h1 has two termination elements; n1 equals the no. of elements up to the first; n2 equals the no. of elements between the first and second termination elements”}
{h2 = ε} P2 { h2 = <(P2,P1,1), (P2,P1,2)> }
Hence, by parallel comp. (using Consistent()):
{true} [P1//P2] { (n1=2) && (n2=0) }

  45. Trace-Based Approach (contd.) In general: • Record, in each trace, all externally observable events of the particular process. • Consistent() has to be defined in such a way as to account for all such events. • With that, system is consistent and complete (have to define M’ appropriately). But: • The assertions can be quite complex -- • Much of the work is postponed to the time of par. comp.

  46. Misra/Chandy’s Trace-Based Appr. Idea: Combine the two approaches • Use traces to record the events of a process.(Uses channel traces so Consistent() is easy to define.) • When reasoning about a process use assumptions about other processes, expressed in terms of the traces. • During parallel comp., need to validate the assumptions. But: • The nature of the assumptions is such that no need for a test of cooperation; no need to look into the proofs of the individual processes. Reference: Misra/Chandy paper on course web site; difficult to read but worth the effort.

  47. M/C Approach (contd.)
Focus on:
• Non-terminating processes and their invariants (over their communication traces);
• Hierarchical process composition;
Key notation: r | P | s (where P is a process; r, s are assertions over the traces) means:
• s is satisfied at the start of P;
• if r is satisfied after k communications (of P), s is satisfied following (k+1) communications (of P);

  48. M/C Approach (contd.)
Example: Three processes P1, P2, P3 that merge four sorted input sequences to produce a sorted output sequence of integers.
• P1 reads (sorted sequences) from channels c1, c2, merges the values, and outputs (a sorted seq.) on d1;
• P2 reads (sorted sequences) from c3, c4, merges the values, and outputs (a sorted seq.) on d2;
• P3 reads from d1, d2, merges the values, and outputs on channel d.
Problem: Show that the output on d is a sorted merge of the inputs on c1, c2, c3, c4 (if the inputs on those channels are sorted).

  49. Misra/Chandy Approach (contd.)
Example (contd.):
P1 :: c1?x; c2?y;
*[ x < y → d1!x; c1?x;
 | y < x → d1!y; c2?y;
 | y == x → d1!y; c1?x; c2?y; ]
P2 :: ... replace c1/c2/d1 in P1 by c3/c4/d2
P3 :: ... replace c1/c2/d1 in P1 by d1/d2/d
[diagram: c1, c2 → P1 → d1; c3, c4 → P2 → d2; d1, d2 → P3 → d]
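A Go sketch of this network, illustrative only: finite input streams and channel close stand in for the non-terminating processes of the M/C setting, and this merge keeps duplicates rather than collapsing them as the y == x branch above does.

package main

import "fmt"

// merge2 plays the role of P1, P2, and P3: it reads two sorted streams and
// writes their sorted merge, closing its output when both inputs are done.
func merge2(in1, in2 <-chan int, out chan<- int) {
	defer close(out)
	x, ok1 := <-in1
	y, ok2 := <-in2
	for ok1 && ok2 {
		if x <= y {
			out <- x
			x, ok1 = <-in1
		} else {
			out <- y
			y, ok2 = <-in2
		}
	}
	for ; ok1; x, ok1 = <-in1 { // drain whichever input is left
		out <- x
	}
	for ; ok2; y, ok2 = <-in2 {
		out <- y
	}
}

// feed turns a sorted slice into a channel, standing in for c1..c4.
func feed(vals []int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for _, v := range vals {
			ch <- v
		}
	}()
	return ch
}

func main() {
	c1, c2 := feed([]int{1, 4, 9}), feed([]int{2, 3})
	c3, c4 := feed([]int{0, 5}), feed([]int{6, 7, 8})
	d1, d2, d := make(chan int), make(chan int), make(chan int)
	go merge2(c1, c2, d1) // P1
	go merge2(c3, c4, d2) // P2
	go merge2(d1, d2, d)  // P3
	for v := range d {
		fmt.Print(v, " ") // 0 1 2 3 4 5 6 7 8 9
	}
	fmt.Println()
}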

  50. M/C Approach (contd.)
Specs of P1, P2, P3:
(mi(c1) & mi(c2)) | P1 | (mi(d1) & (Z(d1) ≤ Z(c1) U Z(c2)))
(mi(c3) & mi(c4)) | P2 | (mi(d2) & (Z(d2) ≤ Z(c3) U Z(c4)))
(mi(d1) & mi(d2)) | P3 | (mi(d) & (Z(d) ≤ Z(d1) U Z(d2)))
Need to show:
(mi(c1) & mi(c2) & mi(c3) & mi(c4)) | [P1//P2//P3] | (mi(d) & (Z(d) ≤ Z(c1) U Z(c2) U Z(c3) U Z(c4)))
