Compositional Methods and Symbolic Model Checking
Ken McMillan
Cadence Berkeley Labs
Compositional methods
• Reduce large verification problems to small ones by:
  • decomposition
  • abstraction
  • specialization
  • etc.
• Based on symbolic model checking
• System-level verification
We will consider the implications of such an approach for symbolic model checking.
Example -- Cache coherence (Eiriksson 98)
[Figure: distributed cache coherence protocol; hosts (with processors P, memory M, and an I/O interface to the net) connected by a S/F network, each running the protocol]
• Nondeterministic abstract model
• Atomic actions
• Single address abstraction
• Verified coherence, etc.
Refinement to RTL level
[Figure: refinement relations connect the abstract model (host, other hosts, S/F network) to the RTL implementation of the protocol (TABLES, CAM, TAGS); ~30K lines of Verilog]
Contrast to block-level verification
• Block verification approach to the capacity problem:
  • isolate small blocks
  • place ad hoc constraints on inputs
• This is falsification, because:
  • the constraints are not verified
  • block interactions are not exposed to verification
Result: FV does not replace any simulation activity.
What are the implications for SMC?
• Verification and falsification have different needs
• A proof is as strong as its weakest link
  • hence, approximation methods are not attractive
• Importance of predictability and metrics
• Must have reliable decomposition strategies
• Implications of using linear vs. branching time
Predictability
• Require metrics that predict model checking hardness
• Most important is the number of state variables
[Plot: verification probability vs. # state bits; reductions move the original system from the falsification regime into the verification regime]
• Powerful MC can save steps, but is not essential
• Predictability is more important than capacity
Example -- simple pipeline
[Figure: pipelined datapath with a 32-entry register file of 32-bit words, an adder, bypass and control logic]
• Goal: prove equivalence to an unpipelined model (modulo delay)
Direct approach by model checking
[Figure: the same ops stream drives the pipeline and a delayed reference model; outputs are compared for equality]
• Model checking is completely intractable due to the large number of state variables (> 2048)
Compositional refinement verification
[Figure: translations relate the abstract model to the system]
Localized verification
[Figure: to check one translation, assume the other translations correct and prove the component against the abstract model]
Circular inference rule

    φ1 up to t-1 implies φ2 up to t
    φ2 up to t-1 implies φ1 up to t
    -------------------------------
    always (φ1 and φ2)

(related: AL 95, AH 96)
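In temporal-logic notation (the rendering is ours; read □_{<t} φ as "φ holds at all times before t"), the rule is a mutual induction:

\[
\frac{\Box_{<t}\,\varphi_1 \;\Rightarrow\; \Box_{\le t}\,\varphi_2
      \qquad
      \Box_{<t}\,\varphi_2 \;\Rightarrow\; \Box_{\le t}\,\varphi_1}
     {\Box\,(\varphi_1 \wedge \varphi_2)}
\]

Soundness follows by induction on t: if φ1 ∧ φ2 holds at all times before t, the first premise extends φ2 to time t and the second extends φ1.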
Decomposition for simple pipeline
[Figure: the pipeline datapath is split into two proof obligations, operand correctness and result correctness, each checked against correct values from the reference model]
Lemmas in SMV
• Operand correctness:

    layer L1:                           -- refinement lemma L1
      if (stage2.valid) {
        stage2.opra := stage2.aux.opra;   -- operands must equal the
        stage2.oprb := stage2.aux.oprb;   -- values computed by the
        stage2.res  := stage2.aux.res;    -- reference model (aux)
      }
Effect of decomposition
[Figure: with operand values assumed correct (taken from the reference model), only a 32-bit slice of the datapath remains to be proved; the 32-entry register file and control drop out]
• Bit slicing results from "cone of influence reduction" (similarly in the reference model), as sketched below
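The reduction behind the bit slicing is simple to state: keep only the state variables on which the property transitively depends. A minimal Python sketch (the dependency graph and variable names are hypothetical, not from the SMV source):

    # Cone-of-influence reduction: keep only the state variables that the
    # property's support set depends on, transitively.
    def cone_of_influence(deps, support):
        """deps maps each state variable to the variables read by its
        next-state function; support is the set of variables mentioned
        by the property. Returns the reduced variable set."""
        cone, frontier = set(), set(support)
        while frontier:
            v = frontier.pop()
            if v not in cone:
                cone.add(v)
                frontier |= deps.get(v, set())
        return cone

    # Toy model: once the operands are assumed correct, the result no
    # longer depends on the register file, which falls out of the cone.
    deps = {
        "res":     {"opra", "oprb"},
        "opra":    set(),                 # driven by the assumed lemma
        "oprb":    set(),
        "regfile": {"res", "regfile"},    # not reached from the property
    }
    print(sorted(cone_of_influence(deps, {"res"})))
    # ['opra', 'oprb', 'res']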
Resulting MC performance
• Operand correctness property:
  • 80 state variables
  • [Plot: runtime growth, with a 3rd-order polynomial fit]
• Result correctness property:
  • easy: comparison of 32-bit adders
NOT!
• The previous slide showed a hand-picked variable order
• In reality, BDDs blow up due to bad variable ordering
  • the default ordering is based on topological distance
Problem with topological ordering
[Figure: the reference register file and the implementation register file (with bypass logic) feed an equality check on results]
The register files should be interleaved, but this is not evident from the topology (see the sketch below).
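To see why interleaving matters, compare OBDD sizes for bitwise equality of two n-bit words under the two orders. This self-contained Python sketch (ours, not from the talk) counts distinct subfunctions per level of the Shannon expansion, i.e. the node count of the quasi-reduced OBDD: roughly 3n nodes when x and y bits are interleaved, roughly 2^n when all x bits precede all y bits.

    from itertools import product

    def obdd_size(order, fn, nvars):
        """Distinct non-constant subfunctions over all levels of the
        Shannon expansion of fn under `order` -- the node count of the
        quasi-reduced OBDD, which shows the linear/exponential gap."""
        # Full truth table; the first variable in `order` is the MSB.
        table = tuple(bool(fn(dict(zip(order, bits))))
                      for bits in product((0, 1), repeat=nvars))
        level, total = {table}, 0
        for _ in range(nvars):
            total += sum(1 for t in level if any(t) and not all(t))
            nxt = set()
            for t in level:
                h = len(t) // 2
                nxt.update((t[:h], t[h:]))   # cofactors: var = 0 / var = 1
            level = nxt
        return total

    n = 6
    eq = lambda env: all(env[f"x{i}"] == env[f"y{i}"] for i in range(n))
    interleaved = [v for i in range(n) for v in (f"x{i}", f"y{i}")]
    blocked = [f"x{i}" for i in range(n)] + [f"y{i}" for i in range(n)]
    print("interleaved:", obdd_size(interleaved, eq, 2 * n))  # 18  (~3n)
    print("blocked:    ", obdd_size(blocked, eq, 2 * n))      # 189 (~2^n)

Topological distance puts the two register files far apart (the blocked order); the good order interleaves corresponding bits.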
Sifting to the rescue (?)
[Plot: runtimes with dynamic variable reordering (sifting); note the log scale and the high variance]
• Lessons (?):
  • cannot expect to solve PSPACE-complete problems reliably
  • need a strategy to deal with heuristic failure
Predictability and metrics
• Reducing the number of state variables
[Plot: verification probability vs. # state bits; decomposition takes the design from 2048 bits to 80 bits, ~600 orders of magnitude in state space size]
• If heuristics fail, other reductions are available
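The "~600 orders of magnitude" is just the ratio of the two state-space sizes:

\[
\frac{2^{2048}}{2^{80}} \;=\; 2^{1968} \;\approx\; 10^{592}
\]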
Big structures and path splitting
[Figure lost in extraction: a property p of a big structure in SPEC is split into per-path cases indexed by i]
Temporal case splitting
[Figure: path splitting; record the register index v along each data path to the property p]
• Prove separately that p holds at all times when v = i
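Stated as an identity (the rendering is ours), the split is sound because v always holds some value in REG, so the cases are exhaustive:

\[
\Box\, p \;\equiv\; \bigwedge_{i \in \mathrm{REG}} \Box\big((v = i) \rightarrow p\big)
\]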
Case split for simple pipeline
• Show correctness only for operands fetched from register i:

    forall (i in REG)
      subcase L1[i] of stage2.opra//L1
        for stage2.aux.srca = i;

• Abstract the remaining registers to "bottom"
• Result:
  • 23 state bits in the model
  • checking one case takes ~1 sec
What about the 32 cases?
Exploiting symmetry
• Symmetric types:
  • semantics invariant under permutations of the type
  • enforced by type-checking rules
• Symmetry reduction rule: choose a set of representative cases under symmetry
• Type REG is symmetric, so one representative case is sufficient (~1 sec)
• Estimated time savings from the case split: 5 orders of magnitude
But wait, there's more...
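One way to state the reduction rule (notation ours): if every permutation π of the symmetric type leaves the model M invariant, the per-index cases collapse to a single representative i_0:

\[
\big(\forall \pi \in \mathrm{Sym}(\mathrm{REG}).\; \pi(M) = M\big)
\;\Longrightarrow\;
\Big(M \models \bigwedge_{i \in \mathrm{REG}} \varphi(i)
\;\Leftrightarrow\; M \models \varphi(i_0)\Big)
\]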
Data type reductions
• Problem: types with large ranges
• Solution: reduce large (or infinite) types to a small abstract domain, e.g. T -> { i, T\i }, where T\i represents all the values in T except i
• Abstract interpretation
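A minimal Python sketch of the idea (a hypothetical model, not SMV's semantics): track register i precisely and collapse every other index to a single abstract "unknown" value.

    # Data type reduction REG -> {i, OTHER}: an abstract register file
    # that tracks only index i; all other registers share one abstract
    # value standing for "any value".
    BOTTOM = "unknown"

    class AbstractRegFile:
        def __init__(self, i):
            self.i = i
            self.val = BOTTOM      # contents of register i (initially unknown)

        def write(self, idx, value):
            if idx == self.i:
                self.val = value   # precise update for the tracked index
            # writes to any other index are absorbed by BOTTOM

        def read(self, idx):
            return self.val if idx == self.i else BOTTOM

    rf = AbstractRegFile(i=7)
    rf.write(7, 42); rf.write(13, 99)
    print(rf.read(7), rf.read(13))   # 42 unknown

Since the property for case i mentions only register i, the abstract state is independent of the register-file size, which is why the bit count and runtime below no longer grow with it.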
Type reduction for simple pipeline
• Only register i is relevant
• Reduce type REG to the two values i and REG\i:

    using REG->{i} prove stage2.opra//L1[i];

• Number of state bits is now 11
• Verification time is now independent of the register file size
Note: we can also abstract out the arithmetic using uninterpreted functions...
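A sketch of that last note (ours, not the talk's formulation): model the ALU as an uninterpreted function symbol, so results are compared as terms and the 32-bit arithmetic drops out of the proof.

    # Uninterpreted-function abstraction: the ALU builds symbolic terms
    # with no arithmetic meaning, so datapath width is irrelevant.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Term:
        fn: str
        args: tuple

    def alu(op, a, b):
        return Term(op, (a, b))    # e.g. add(r1, r2), never evaluated

    # If pipeline and reference model fetch the same operands, they build
    # syntactically equal terms: result correctness for ANY adder.
    assert alu("add", "r1", "r2") == alu("add", "r1", "r2")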
Effect of reduction
[Plot: verification probability vs. # state bits; reductions take the original system from 2048 bits to 84, then 11]
• Manual decomposition produces order-of-magnitude reductions in the number of state bits
• The inflexion point of the curve is crossed very rapidly
Desiderata for model checking methods
• Importance of predictability and metrics:
  • proof strategy based on a reliable metric (# state bits)
  • prefer reliable performance in a given range to occasional success on large problems*
    (e.g., stabilize the variable ordering)
  • methods that diverge unpredictably on small problems are less useful (e.g., infinite state, widening)
• Moderate performance improvements are not that important:
  • reduction steps gain multiple orders of magnitude
• Approximations are not appropriate
* given PSPACE completeness
Linear vs. branching time
• Model checking vs. compositional verification
• Verification complexity (in formula size):

                                      CTL       LTL
    model checking (fixed model)      linear    PSPACE
    compositional (for all models)    EXP       PSPACE

In practice, with LTL, we can mostly recover linear complexity...
Avoiding "tableau variables"
• Problem: added state variables for LTL operators
• Eliminating tableau variables:
  • push path quantifiers inward (LTL to CTL*)
  • transition formulas (CTL+)
  • extract transition and fairness constraints
Translating LTL to CTL*
• Rewrite rules [formulas lost in extraction]
• In addition, if p is boolean, no rule is needed (A p is just p)
By adding path quantifiers, we eliminate tableau variables.
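The slide's rule formulas did not survive extraction, but the standard sound rewrites distribute the path quantifier A over conjunction, X, and G, and drop it on boolean formulas. A toy Python rewriter (ours) illustrating the scheme:

    # Formulas: atoms are strings; operators are tuples, e.g.
    #   ('G', f), ('F', f), ('X', f), ('U', f, g), ('and', f, g)

    def translate(f):
        """Return a CTL*-style formula equivalent to the LTL property
        'A f', pushing the path quantifier inward where sound."""
        if isinstance(f, str):                 # boolean p:  A p == p
            return f
        op, *args = f
        if op == 'and':                        # A(f & g) == A f & A g
            return ('and', translate(args[0]), translate(args[1]))
        if op == 'X':                          # A X f == AX (A f)
            return ('AX', translate(args[0]))
        if op == 'G':                          # A G f == AG (A f)
            return ('AG', translate(args[0]))
        if op == 'F' and isinstance(args[0], str):
            return ('AF', args[0])             # sound for a boolean arg only
        # F and U with temporal arguments do not distribute
        # (A F f is NOT equivalent to AF A f), so A stays put.
        return ('A', f)

    print(translate(('G', ('and', 'p', ('X', 'q')))))
    # ('AG', ('and', 'p', ('AX', 'q')))  -- pure CTL, no tableau variables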
Rewrites that don't work
[Formulas lost in extraction; the failing cases involve F and U with temporal arguments, e.g. A F G p is not equivalent to AF AG p, and A(p U q) does not reduce to A(A p U A q)]
Examples
• LTL formulas that translate to pure CTL formulas [examples lost in extraction] (note the singly nested fixed point)
• Incomplete rewriting (to CTL*) [example lost]; note: 3 tableau variables reduced to 1
Conjecture: all resulting formulas are forward checkable.
Transition modalities
• Transition formulas [definition lost in extraction]
• CTL+ state modalities, where p is a transition formula [formulas lost]
• Example CTL+ formulas [lost]
CTL+ is still checkable in linear time.
Constraint extraction
• Extracting path constraints, where p is a transition formula [formulas lost in extraction]
• Using the rewriting above... [formula lost] (with fairness constraints)
• Circular compositional reasoning: if Γ, Δ, Θ and φ are transition formulas, the proof obligation is in CTL+, hence complexity is linear
Note: typically Γ, Δ, Θ are very large, and φ is small.
Effect of reducing LTL to CTL+
• In practice, tableau variables are rarely needed
  • thus complexity is exponential only in the # of state variables
  • an important metric for the proof strategy
• Doubly nested fixed points are used only where needed
  • i.e., when fairness constraints apply
• Forward and backward traversal are both possible
  • curious point: backward is commonly faster in refinement verification
SMC for compositional verification
• Cannot expect to solve PSPACE-complete problems reliably
  • user reductions provide a fallback when heuristics fail
  • robust metrics are important to the proof strategy
• Each user reduction gains many orders of magnitude
  • modest performance improvements are not very important
• Exact verification is important
• Must be able to handle linear time efficiently
BDDs are great fun, but...