
Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Presentation Transcript


1. Predicate Learning and Selective Theory Deduction for Solving Difference Logic
Chao Wang, Aarti Gupta, Malay Ganai
NEC Laboratories America, Princeton, New Jersey, USA
August 21, 2006
Presentation-only: for more information, please see [Wang et al. LPAR’05] and [Wang et al. DAC’06].

2. Difference Logic
• Logic to model systems at the “word level”
• A subset of quantifier-free first-order logic: Boolean connectives + predicates of the form (x – y ≤ c)
• Formal verification applications: pipelined processors, timed systems, embedded software; e.g., the back-end of the UCLID verifier
• Existing solvers:
• Eager approach: [Strichman et al. 02], [Talupur et al. 04], UCLID
• Lazy approach: TSAT++, MathSAT, DPLL(T), Saten, SLICE, Yices, HTP, …
• Hybrid approach: [Seshia et al. 03], UCLID, SD-SAT

3. Our contribution
• Lessons learned from previous works:
• Incremental conflict detection and zero-cost theory backtracking [Wang et al. LPAR’05]
• Exhaustive theory deduction [Nieuwenhuis & Oliveras CAV’05]
• Eager chordal transitivity constraints [Strichman et al. FMCAD’02]
• What’s new?
• Incremental conflict detection PLUS selective theory deduction, at little additional cost
• Dynamic predicate learning to combat exponential blow-up

4. Outline
• Preliminaries
• Selective theory (implication) deduction
• Dynamic predicate learning
• Experiments
• Conclusions

5. Preliminaries
• A difference logic formula = a Boolean skeleton over difference predicates.
• Running example: A: (x – y ≤ 2), C: (y – z ≤ 3), D: (w – y ≤ 10); the negated literal ¬B induces the constraint (z – x ≤ -7).
• Constraint graph for the assignment (A, ¬B, C, D): each induced constraint becomes a weighted edge over the vertices {x, y, z, w}, giving edges A:2, ¬B:-7, C:3, and D:10.
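
A minimal sketch of this graph construction, under the convention that a constraint t - s <= c becomes an edge s -> t with weight c, and assuming integer-valued variables; the names are illustrative, not the SLICE data structures:

  #include <vector>

  struct DiffPred { int s, t, c; };     // predicate: t - s <= c
  struct WEdge    { int from, to, w; };

  // value[i]: 1 = predicate i assigned true, 0 = false, -1 = unassigned
  std::vector<WEdge> build_graph(const std::vector<DiffPred>& preds,
                                 const std::vector<int>& value) {
      std::vector<WEdge> edges;
      for (size_t i = 0; i < preds.size(); ++i) {
          const DiffPred& p = preds[i];
          if (value[i] == 1)
              edges.push_back({p.s, p.t, p.c});        // t - s <= c
          else if (value[i] == 0)
              edges.push_back({p.t, p.s, -p.c - 1});   // ~(t - s <= c) over the
                                                       // integers: s - t <= -c - 1
      }
      return edges;
  }

For instance, if B were (x – z ≤ 6), assigning it false would yield the edge for (z – x ≤ -7), the weight -7 edge in the example graph.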

6. Theory conflict: infeasible Boolean assignment
• A negative-weight cycle in the constraint graph ⇒ theory conflict.
• Theory conflict ⇒ lemma (blocking clause) ⇒ Boolean conflict.
• Example: under (A, ¬B, C), the edges A:2, ¬B:-7, and C:3 form a cycle of weight 2 - 7 + 3 = -2 < 0.
• Lemma learned: (¬A + B + ¬C); under the current assignment it is the conflicting clause (false + false + false).
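
Turning a detected cycle into a blocking clause is just literal negation; a minimal sketch, using an illustrative DIMACS-style encoding (positive int = positive literal, negative int = negated literal):

  #include <vector>

  // Given the literals whose edges form a negative cycle, the lemma is
  // the clause forbidding that combination of assignments.
  std::vector<int> make_lemma(const std::vector<int>& cycle_lits) {
      std::vector<int> lemma;
      lemma.reserve(cycle_lits.size());
      for (int lit : cycle_lits) lemma.push_back(-lit);  // negate each literal
      return lemma;
  }

With the cycle literals {A, ¬B, C} encoded as {1, -2, 3}, this returns {-1, 2, -3}, i.e., the clause (¬A + B + ¬C) from the slide.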

7. Theory implication: implied Boolean assignment
• If adding an edge would create a negative cycle, the negated edge is implied.
• Example: A ∧ ¬B → (¬C), since adding C:3 to the edges A:2 and ¬B:-7 would close a cycle of weight -2.
• Theory implication ⇒ variable assignment ⇒ Boolean implication: each implied Boolean assignment triggers a series of BCP.
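
The principle can be phrased as a path query. Here is a deliberately naive sketch that certifies an implication with a plain Bellman-Ford pass; it is conceptual only (quadratic work per query), whereas the papers obtain this information far more cheaply from the incremental relaxation itself:

  #include <vector>
  #include <limits>

  struct WEdge { int from, to, w; };
  const long long INF = std::numeric_limits<long long>::max() / 4;

  // Weight of the shortest path v ~> u in the current graph, or INF if none.
  long long shortest_path(int n, const std::vector<WEdge>& edges, int v, int u) {
      std::vector<long long> dist(n, INF);
      dist[v] = 0;
      for (int round = 0; round + 1 < n; ++round)      // n - 1 relaxation rounds
          for (const WEdge& e : edges)
              if (dist[e.from] < INF && dist[e.from] + e.w < dist[e.to])
                  dist[e.to] = dist[e.from] + e.w;
      return dist[u];
  }

  // True iff asserting the edge (u -> v, w) would close a negative cycle,
  // in which case the negation of its literal is a theory implication.
  bool implies_negation(int n, const std::vector<WEdge>& edges,
                        int u, int v, int w) {
      long long back = shortest_path(n, edges, v, u);
      return back < INF && back + w < 0;
  }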

8. Negative cycle detection
• Called repeatedly to solve many similar subproblems:
• for conflict detection (incremental, efficient)
• for implication deduction (often expensive)
• Incremental detection versus exhaustive deduction:
• SLICE: incremental cycle detection – O(n log n)
• DPLL(T): exhaustive theory deduction – O(n · m)

9. Data from [LPAR’05]: scatter plots comparing the SLICE solver (SMT benchmarks repository, as of 08-2005) against UCLID, MathSAT, ICS 2.0, DPLL(T) – Barcelogic (also on a linear scale), and TSAT++. Points above the diagonals are wins for the SLICE solver.

10. From the previous results
• We have learned that:
• incremental conflict detection ⇒ more scalable
• exhaustive theory deduction ⇒ also helpful
• Can we combine their relative strengths?
• Our new solution:
• incremental conflict detection (SLICE)
• zero-cost theory backtracking (SLICE)
• PLUS selective theory deduction with O(n) cost

11. Outline
• Preliminaries
• Selective theory (implication) deduction
• Dynamic predicate learning
• Experiments
• Conclusions

12. Constraint propagation: Boolean constraint propagation (BCP) interleaved with theory constraint propagation

Deduce() {
  while (!implications.empty()) {
    set_var_value(implications.pop());
    if (detect_conflict()) return CONFLICT;            // Boolean CP (BCP)
    add_new_implications();
    if (ready_for_theory_propagation()) {
      if (theory_detect_conflict()) return CONFLICT;   // theory CP
      theory_add_new_implications();
    }
  }
}

13. Incremental conflict detection [Ramalingam 1999] [Bozzano et al. 2005] [Cotton 2005] [Wang et al. LPAR’05]

Relax edge (u,v): if (d[v] > d[u] + w[u,v]) { d[v] = d[u] + w[u,v]; pi[v] = u; }

• Add an edge ⇒ relax, relax, relax, …
• Remove an edge ⇒ do nothing (zero-cost backtracking in SLICE)
• Example trace on the running example, with edge weights w(z,y) = 3, w(y,w) = 10, w(y,x) = 2, w(x,z) = -7, labels d[x] = d[y] = d[w] = 0, d[z] = -7, and C’s edge (z,y) asserted last:
relax (z,y) ⇒ d[y] = -4, pi[y] = z
relax (y,w) ⇒ d[w] = 6, pi[w] = y
relax (y,x) ⇒ d[x] = -2, pi[x] = y
relax (x,z) ⇒ d[z] = -9, pi[z] = x
relax (z,y) ⇒ CONFLICT !!! (the relaxation wraps back around a negative cycle)
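
A minimal sketch of the relax-based procedure in the spirit of [Cotton 2005] and SLICE (not the actual implementation): asserting an edge triggers relaxations, a conflict is reported exactly when the propagation comes back to relax the new edge’s source, and backtracking just drops the edge while keeping the labels:

  #include <vector>
  #include <queue>

  struct Edge { int to, w; };

  struct DiffGraph {
      std::vector<std::vector<Edge>> adj;  // adj[u] = outgoing edges of u
      std::vector<long long> d;            // distance labels (valid potential)
      std::vector<int> pi;                 // predecessors, for cycle extraction

      explicit DiffGraph(int n) : adj(n), d(n, 0), pi(n, -1) {}

      // Assert edge u -> v with weight w. Returns false iff the edge closes
      // a negative cycle (theory conflict). On backtracking, just remove the
      // edge: d and pi stay valid (zero-cost theory backtracking).
      bool add_edge(int u, int v, int w) {
          adj[u].push_back({v, w});
          if (d[u] + w >= d[v]) return true;       // labels still valid
          d[v] = d[u] + w; pi[v] = u;
          std::queue<int> q; q.push(v);
          while (!q.empty()) {
              int x = q.front(); q.pop();
              for (const Edge& e : adj[x])
                  if (d[x] + e.w < d[e.to]) {
                      if (e.to == u) return false; // would relax the new edge's
                                                   // source: negative cycle
                      d[e.to] = d[x] + e.w; pi[e.to] = x;
                      q.push(e.to);
                  }
          }
          return true;
      }
  };

Any negative cycle created by the insertion must pass through (u, v), so the propagation necessarily reaches u; if it never does, the updated labels are again a valid potential and the constraint set is consistent.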

14. Selective theory deduction
• Suppose edge (y, x) was just asserted and relaxed. Post(x) = {x, z, …} collects the nodes whose labels the relaxation just updated; Pre(y) collects the nodes with a known path to y (e.g., w, recorded through relax as pi[y] = w).
• For a node z ∈ Post(x), d[z] - d[y] is the weight of an actual path y ⇝ z through the new edge, so d[z] - d[y] <= w[y,z] ⇒ edge (y,z) is an implied assignment.
• Fwd-only: Pre(y) = {y}, Post(x) = {x, z, …}
• Both directions: Pre(y) = {y, w, …}, Post(x) = {x, z, …}
• Significantly cheaper than exhaustive theory deduction
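
A sketch of the Fwd-only variant under those assumptions, where the relaxation pass recorded which labels it changed; names and encoding are illustrative, see [Wang et al. DAC’06] for the actual procedure:

  #include <vector>

  struct Pred { int from, to, w, lit; };  // predicate: (to - from <= w)

  // y: source of the just-asserted edge; changed[z] is true iff the
  // relaxation pass updated d[z] (i.e., z is in Post). For those z,
  // d[z] - d[y] is the weight of a real path y ~> z, so comparing it
  // against an unassigned predicate's bound is a sound implication test.
  std::vector<int> selective_deduce(int y,
                                    const std::vector<long long>& d,
                                    const std::vector<bool>& changed,
                                    const std::vector<Pred>& unassigned) {
      std::vector<int> implied;
      for (const Pred& p : unassigned)
          if (p.from == y && changed[p.to] && d[p.to] - d[y] <= p.w)
              implied.push_back(p.lit);   // entailed: queue for BCP
      return implied;
  }

One linear scan over the unassigned predicates per relaxation pass, matching the O(n) cost claimed on slide 10.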

15. Outline
• Preliminaries
• Selective theory (implication) deduction
• Dynamic predicate learning
• Experiments
• Conclusions

16. Diamonds: graphs with O(2^n) negative cycles
• Each diamond can be traversed through either e1 or e2, so a chain of diamonds closed by an edge e0 of weight -1 contains exponentially many negative cycles.
• Observations:
• with the existing predicates (e1, e2, …), blocking all cycles requires an exponential number of lemmas
• adding new predicates (E1, E2, E3) and dummy clauses (E1 + !E1) & (E2 + !E2) & … cuts this to an almost linear number of lemmas
• Related: the eager chordal transitivity constraints of [Strichman et al. FMCAD’02]

17. Add new predicates to reduce lemmas
• New short-cut predicate, e.g., E3: x – y <= (d[x] - d[y])
• Heuristics to choose GOOD predicates (short-cuts):
• nodes that show up frequently in negative cycles
• nodes that are re-convergence points of the graph
• (Conceptually) adding a dummy constraint (E3 + !E3)
• Example from the figure: predicates E1: x – y < 5 and E2: y – x < 5, with the single lemma (!E1 + !E2)
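
A hypothetical sketch of the learning step, with the short-cut bound taken from the current distance labels as on the slide; the names and clause encoding are illustrative:

  #include <vector>

  struct ShortcutPred { int x, y; long long c; int lit; };

  // Introduce E: (x - y <= c) between two nodes that keep re-appearing in
  // negative cycles, and add the dummy clause (E + !E) so the SAT engine
  // branches on E; future lemmas can then mention E instead of enumerating
  // exponentially many cycle combinations.
  ShortcutPred learn_shortcut(int x, int y,
                              const std::vector<long long>& d,
                              int fresh_lit,
                              std::vector<std::vector<int>>& clauses) {
      long long c = d[x] - d[y];                   // e.g., E3: x - y <= d[x] - d[y]
      clauses.push_back({fresh_lit, -fresh_lit});  // dummy constraint (E + !E)
      return ShortcutPred{x, y, c, fresh_lit};
  }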

18. Experiments with SLICE+
• Implemented on top of SLICE, i.e., [Wang et al. LPAR’05]
• Controlled experiments:
• flexible theory propagation invocation: per predicate assignment, per BCP, or per full assignment
• selective theory deduction: no deduction, Fwd-only, or Both-directions
• dynamic predicate learning: with, or without

19. When to call the theory solver? (DTP benchmark suite)
Scatter plots: per BCP versus per predicate assignment, and per BCP versus per full assignment. Points above the diagonals are wins for per BCP.

20. Comparing theory deduction schemes (DTP benchmark suite)
Fwd-only deduction vs. no deduction (total 660 seconds), and Both-directions vs. no deduction (total 1138 seconds). Points above the diagonals are wins for no deduction.

21. Comparing dynamic predicate learning (diamonds benchmark suite)

22. Comparing dynamic predicate learning (DTP benchmark suite)
Dynamic predicate learning vs. no predicate learning. Points above the diagonals are wins for no predicate learning.

23. Lessons learned
• Timing of theory solver invocation: “after every BCP finishes” gives the best performance.
• Selective implication deduction: little added cost, yet improves performance significantly.
• Dynamic predicate learning: reduces the exponential blow-up in certain examples; in the spirit of “predicate abstraction”.

Questions?
