Constraint-Based Random Verification by Mutation Analysis
Speaker: Lin Hsiu-Yi
Advisor: Wang Chun-Yao
Date: 2011.01.04
Outline
• Introduction
  • Functional Verification
  • Verification Quality Measuring
  • Functional Qualification
• Mutation Analysis
  • Background
  • Fundamental Hypotheses
  • Terminology
• Proposed Approach
  • The Verification Environment Improvement
  • The Problems of Mutation Analysis and Related Solutions
  • Random Constraint Method
  • Check Point Insertion
  • The CBT Flow
• Experimental Results
• Future Work
Functional Verification • Functional verification is the step that ensures the specifications and/or implementations of the design, at various abstraction levels, are in accord with the design intent. (Diagram: Specification, transformed by HDL coding into an implementation, checked by Functional Verification.)
Functional Verification • Simulation-based verification • While formal approaches suffer from high execution complexity, simulation still plays the dominant role in verifying the functional correctness of electronic and embedded systems.
Verification Quality Measuring (1/2) • Completeness Problem • “When can one claim that the verification is complete?” This remains a perpetual, unanswered question. • Exhaustive simulation is impossible • Consider a logic block with 64 inputs: its input space contains 2^64, or about 1.8 x 10^19, combinations. It is impossible to simulate all of them within a reasonable time.
Verification Quality Measuring (2/2) • Coverage Metrics • To avoid simulating all tests exhaustively, coverage metrics are developed to guide the selection of tests for simulation. • Coverage metrics can be classified into two categories • Structural coverage • Functional coverage
Structural Coverage & Functional Coverage • Structural Coverage (code coverage) • Most structural metrics only check that the design code is activated; they do not guarantee that design bugs will be propagated and detected. • Functional Coverage • It usually involves interpreting the functionality and deriving the related measurements from the specification. However, the extra effort to write the test plan is a major drawback. In addition, the implementation is subjective, and there is no way to define completion.
Functional Qualification • The limitation of coverage metrics • Because of the drawbacks of structural and functional coverage, we need a more adequate metric to track the progress of verification. • A new verification technique known as functional qualification addresses this problem. • Fault-based verification technique • To apply functional qualification, we can use a fault-based verification technique and consider all three portions of the verification process: activation, propagation, and detection.
The Relationship between Design, Verification and Qualification • (Diagram: functional qualification encompasses functional verification, which in turn encompasses the design.)
Fault-based Verification (1/2) • In the manufacturing stage • The stuck-at fault model at gate level is used to guide the selection of production test data for exposing defects introduced during the manufacturing process. • In the design stage • Theoretically, we could translate higher-level faults to gate level and then apply ATPG methods to generate the test data. However, this mapping imposes high complexity and inefficiency, especially for complex designs.
Fault-based Verification (2/2) • OCCOM (Observability-Based Code Coverage Metric) • F. Fallah et al., DAC 1998 • Computes the probability that an effect of the fault is propagated. • Inserts tags into the design during simulation and observes the propagation results. • Exposes the weakness of plain code coverage by taking propagation ability into account.
Mutation Analysis (1/3) • Mutation Analysis (MA) is a fault-based test data selection technique. • History • Mutation Analysis can be used for testing software. The seminal paper introducing mutation analysis was written by R. A. DeMillo, R. J. Lipton, and F. G. Sayward in 1978.
Mutation Analysis (2/3) • Implementation • Introduce small behavioral changes into a software program, intending to find weaknesses in the functional testing of the program. • Similar to manufacturing test • Looks for a change in the values seen at an output • Defines a fault model that is analogous to designer errors.
Mutation Analysis (3/3) • Problem • The number of such potential faults for a given program is enormous; it is impossible to generate mutants representing all of them.
Fundamental Hypotheses of MA • Mutation Analysis targets only a subset of these faults, with the hope that these will be sufficient to simulate all faults. This theory is based on two hypotheses: • Competent Programmer Hypothesis (CPH) • The programmers or engineers write code that is close to being correct. • Coupling Effect • A test that distinguishes the good version from all its mutants is also sensitive to more complex errors.
Terminology • Mutant • The “small change” we make in the code (analogous to a “fault” in ATPG). • Kill • If we detect the error in the simulation result, we say we kill the mutant. • Living Mutants • After all verification data have been applied, the mutants that still survive are living mutants. • Living mutants indicate weaknesses in our test patterns.
Terminology • Non-Activated Mutants (NAM) • No stimulus reaches the mutation. • Non-Propagated Mutants (NPM) • The mutants are activated and generate error events, but these error events cannot be propagated to the observation point (e.g., assertion checkers) in the testbench. • Non-Detected Mutants (NDM) • The error events have been propagated to the observation point but cannot be detected by the checker.
Mutation Example • Original program code: a = b or c • Mutated program: a = b and c (using “and” to replace “or” is a mutant) • Check whether we catch the difference in the simulation result
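The slide’s example can be sketched in executable form; Python stands in for the HDL here, and `kills` is a made-up helper that checks whether a test distinguishes the original from the mutant:

```python
def original(b, c):
    return b or c

def mutant(b, c):
    return b and c  # the mutation: "or" replaced by "and"

def kills(test, good, bad):
    """A test kills a mutant when the two versions disagree on it."""
    b, c = test
    return good(b, c) != bad(b, c)

# (True, False) activates the difference: "or" gives True, "and" gives False.
print(kills((True, False), original, mutant))  # True: the mutant is killed
# (True, True) does not: both versions return True, so the mutant survives.
print(kills((True, True), original, mutant))   # False
```

Only the second test pattern above would appear as a weakness in the test set: the mutant survives it.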
The Verification Environment Improvement by Mutation Analysis • Final goal • Identify the weaknesses in the verification environment, then generate test data and improve the checker to fix them. • Experimental goal • Kill as many mutants as possible.
The Problem of Mutation Analysis (1/2) • High computational cost of executing the enormous number of mutants against a test set. • Consider a design under verification with L lines and M mutation operators, giving O(M x L) mutants. Assuming the simulation cost is linear in the design size, simulating all mutants for one test case takes O(M x L) x O(L) = O(L^2), treating M as a constant. With T test cases, the total cost is O(T x L^2).
The Problem of Mutation Analysis (2/2) • The second problem is the amount of human effort involved in using Mutation Analysis. • Analyzing the results • To clarify the reason each living mutant survives • Equivalent mutants • Similar to the “redundant faults” of the manufacturing fault model.
Cost Reducing Techniques • Many cost reduction techniques have been proposed. These techniques are classified into two types • Reduction of the generated mutants • Reduction of execution cost
Mutant Reduction Techniques • Mutant Sampling (1980) • Choose a small random subset of mutants from the entire set. • Mutant Clustering (2008) • Choose a subset of mutants using a clustering algorithm. • Selective Mutation (1990) • Seek a small set of mutation operators that generates a representative subset of all possible mutants. • Higher Order Mutation (2008) • First-order mutants (FOMs) vs. higher-order mutants (HOMs)
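As a rough sketch of the first technique, mutant sampling can be illustrated with a Python toy that represents mutants abstractly as (line, operator) pairs; the function name, operators, and the 10% rate are illustrative, not taken from the cited papers:

```python
import random

def sample_mutants(num_lines, operators, rate, seed=0):
    """Keep only a random fraction `rate` of all generated mutants."""
    all_mutants = [(line, op) for line in range(num_lines) for op in operators]
    rng = random.Random(seed)
    k = max(1, int(len(all_mutants) * rate))
    return rng.sample(all_mutants, k)

# 100 lines x 2 operators = 200 mutants; a 10% sample keeps 20 of them.
subset = sample_mutants(100, ["or->and", ">-><="], rate=0.1)
print(len(subset))  # 20
```

Only the sampled mutants are then simulated, trading some fault coverage for a proportional cut in the O(T x L^2) cost.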
Execution Cost Reduction Techniques (1/2) • Strong Mutation vs. Weak and Firm Mutation • Strong: for a given program p, a mutant m of p is said to be killed only if m gives a different final output from p. • Weak: a program p is assumed to be constructed from a set of components C = {c1, ..., cn}. Suppose mutant m is made by changing component cm; m is said to be killed if any execution of the mutated cm yields a state different from that of the original cm.
Execution Cost Reduction Techniques (2/2) • Firm Mutation: the “compare state” of Firm Mutation lies between the intermediate states after execution (Weak Mutation) and the final output (Strong Mutation). • Besides these mutation variants, there are other important reduction techniques such as compiler-based techniques and mutant schema generation. Since they concern run-time optimization, we do not consider them in this work.
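The strong/weak distinction can be illustrated with a small Python toy (not the papers’ formalism): a two-component program in which the mutated component’s error is visible immediately after that component, yet can be masked before the final output:

```python
# comp1 selects the maximum; comp2 multiplies it; only comp1 is mutated.
def comp1_good(a, b):
    return a if a > b else b

def comp1_mut(a, b):
    return a  # mutant: the branch condition replaced by "true"

def comp2(m, c):
    return m * c  # c == 0 masks any error coming from comp1

def weak_kill(a, b):
    # Weak mutation compares states right after the mutated component.
    return comp1_good(a, b) != comp1_mut(a, b)

def strong_kill(a, b, c):
    # Strong mutation compares only the final outputs.
    return comp2(comp1_good(a, b), c) != comp2(comp1_mut(a, b), c)

print(weak_kill(1, 2))       # True: 2 vs 1 right after comp1
print(strong_kill(1, 2, 0))  # False: multiplying by 0 masks the error
print(strong_kill(1, 2, 3))  # True: 6 vs 3 at the output
```

Weak mutation is cheaper because execution can stop right after the mutated component, at the price of counting kills whose effect might never reach the output.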
Our Improvement Method • Constrained Random Test Generation • Generate the test data quickly without human effort. • Dynamically adjust the potential constraints according to the range of interest. • Check point insertion • Increase the observability of the design by adding observation points, and identify the faulty mutated program as early as possible. • Define the scope of the constraints (similar to firm mutation). • Implemented by testability analysis
Constraint-Based Test Data Generation (CBT) • CBT is based on the observation that a test case that kills a mutant must satisfy three conditions: • Reachability condition: for activation • Necessity condition: for weak propagation • Sufficiency condition: for propagation to an observation point
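One simple way to realize such constrained generation, shown here as a Python sketch rather than the actual implementation, is rejection sampling over the input space: draw random input vectors and keep only those satisfying every constraint.

```python
import random

def constrained_random(constraints, n, seed=0):
    """Draw random 4-bit input vectors; keep those satisfying every constraint."""
    rng = random.Random(seed)
    tests = []
    while len(tests) < n:
        t = {name: rng.randrange(16) for name in ("in_A", "in_B", "in_C")}
        if all(c(t) for c in constraints):
            tests.append(t)
    return tests

# A necessity-style constraint (as in the later example): in_A < in_B.
tests = constrained_random([lambda t: t["in_A"] < t["in_B"]], n=5)
print(all(t["in_A"] < t["in_B"] for t in tests))  # True
```

Rejection sampling stays simple but becomes slow for very tight constraints, which is one motivation for bounding the constraint search space with check points.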
Necessity Constraint
• Original design:
  module max_sel (in_A, in_B, in_C, Max);  // module name omitted on the slide
    input in_A, in_B, in_C;
    output Max;
    reg [3:0] Max;
    reg [3:0] in_A;
    reg [3:0] in_B;
    assign Max = Max * in_C;
    always @* begin
      if (in_A > in_B)
        Max = in_A;
      else
        Max = in_B;
    end
  endmodule
• Mutant: the condition “if (in_A > in_B)” is replaced by “if (true)”.
• Necessity constraint: (in_A < in_B), so the mutated branch decision differs from the original one.
Predicate Constraint • Satisfies the sufficiency condition. • Brings the incorrect states to the check point.
Predicate Constraint Example
• Original design:
  module max_sel (in_A, in_B, in_C, Ans);  // module name omitted on the slide
    input in_A, in_B, in_C;
    output Ans;
    reg [3:0] Max;
    reg [3:0] in_A;
    reg [3:0] in_B;
    assign Max = Max * in_C;
    assign Ans = task(Max);
    always @* begin
      if (in_A > in_B)
        Max = in_A;
      else
        Max = in_B;
    end
  endmodule
• Mutant: “if (in_A > in_B)” is replaced by “if (true)”.
• Necessity constraint: (in_A <= in_B).
• Predicate constraint: (in_C != 0), so the faulty value of Max is not masked by the multiplication and reaches the output Ans.
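The same example can be rendered as a runnable Python toy (hypothetical: 4-bit truncation stands in for the reg [3:0] width, and the task() stage is dropped). It shows that without the predicate constraint in_C != 0, the multiplication masks the mutant’s error:

```python
def ans_good(in_A, in_B, in_C):
    Max = in_A if in_A > in_B else in_B
    return (Max * in_C) & 0xF  # keep 4 bits, like reg [3:0]

def ans_mut(in_A, in_B, in_C):
    Max = in_A  # mutant: "if (in_A > in_B)" replaced by "if (true)"
    return (Max * in_C) & 0xF

# in_A <= in_B activates the mutant, but in_C == 0 masks it at the output:
print(ans_good(1, 2, 0) != ans_mut(1, 2, 0))  # False: not propagated
# Adding the predicate constraint in_C != 0 lets the error reach the output:
print(ans_good(1, 2, 3) != ans_mut(1, 2, 3))  # True: 6 vs 3
```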
Necessity Constraint & Predicate Constraints • The necessity constraint can be defined by the type of the mutant. • Finding all predicate constraints is hard, but we can bound the search space with check points.
Check Point Insertion • Define the domain for adding constraints. • Apply testability analysis techniques to identify locations with low testability.
Check Point Insertion Example
• The design, mutant, and constraints ((in_A <= in_B), (in_C != 0)) are the same as in the predicate constraint example.
• Testability analysis finds the low-testability point at “Max”, so we insert the check point at “Max”: the faulty value can then be observed there directly instead of having to propagate through “assign Ans = task(Max)”.
The CBT Flow for Mutation Analysis
1. Fully random test data generation
2. Mutation analysis: kill the easy-to-kill mutants
3. Testability analysis (TA)
4. Define the locations for check points according to the TA result
5. Constraint-based random test data generation
6. Mutation analysis: kill the hard-to-kill mutants
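The flow can be sketched end to end as a Python toy (all names and numbers are illustrative, not the actual tool): random tests kill the easy mutant, while a narrowly activated one almost surely survives until constraint-based generation targets its input region:

```python
import random

def good(a, b):
    return a if a > b else b

# Two toy mutants over 8-bit inputs: an easy one and a narrowly activated one.
mutants = {
    "easy": lambda a, b: a,  # "if (true)": dies on any test with a <= b
    "hard": lambda a, b: a + 1 if (a, b) == (3, 7) else good(a, b),
}

def survivors(tests):
    return [n for n, m in mutants.items()
            if all(m(a, b) == good(a, b) for a, b in tests)]

rng = random.Random(0)
def gen(ranges):
    return tuple(rng.choice(r) for r in ranges)

full = range(256)
random_tests = [gen((full, full)) for _ in range(200)]   # steps 1-2
alive = survivors(random_tests)   # "hard" almost surely survives this phase
# Steps 3-5: suppose analysis narrows the ranges for the surviving mutant:
constrained_tests = [gen((range(3, 4), range(7, 8))) for _ in range(5)]
print(survivors(random_tests + constrained_tests))  # []: all mutants killed
```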
Design Under Verification • Router • 16 x 16 crosspoint switch • Primary Inputs • Din [15:0] • Frame_n [15:0] • Valid_n [15:0] • Reset_n • clock • Primary Outputs • Dout [15:0] • Frameo_n [15:0] • Valido_n [15:0]
Testbench • Random input pattern • Language: SystemVerilog • Packet count: 3000 • Packet structure
Future Work • Combine the testability analysis technique with the verification environment. • Write the constraint-based random generation program.