
Fault-Based Testing Techniques Overview

Understand fault-based testing principles, including mutation analysis and fault-based adequacy criteria. Explore test execution methods, scaffolding, and test oracles. Learn how mutation testing applies fault-based testing principles.



Presentation Transcript


  1. UNIT - 7: FAULT-BASED TESTING, TEST EXECUTION

  2. Topics covered: • Overview • Assumptions in fault-based testing • Mutation analysis • Fault-based adequacy criteria • Variations on mutation analysis • Test execution • Overview • From test case specifications to test cases • Scaffolding • Generic versus specific scaffolding • Test oracles • Capture and replay

  3. Overview Understand the basic ideas of fault-based testing • How knowledge of a fault model can be used to create useful tests and judge the quality of test cases • Understand the rationale of fault-based testing well enough to distinguish between valid and invalid uses • Understand mutation testing as one application of fault-based testing principles

  4. Now, instead of a bowl of marbles, I have a program with bugs • I add 100 new bugs • Assume they are exactly like real bugs in every way • I make 100 copies of my program, each with one of my 100 new bugs • I run my test suite on the programs with seeded bugs ... • ... and the tests reveal 20 of the bugs • (the other 80 program copies do not fail) • What can I infer about my test suite? • If the seeded bugs really are representative of real bugs, the suite can be expected to reveal roughly the same fraction, about 20%, of the real bugs (see the sketch below)
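A rough illustration of the estimate implied by the marble analogy above, assuming the seeded bugs really are representative of real bugs. The figure of 6 real bugs found is invented for this sketch and does not come from the slides.

// Hedged sketch of the seeded-fault ("marble") estimate described above.
public class SeededFaultEstimate {
    public static void main(String[] args) {
        int seeded = 100;        // seeded bugs added to the program
        int seededFound = 20;    // seeded bugs revealed by the test suite

        // If seeded bugs behave like real bugs, the suite is estimated to
        // reveal about the same fraction of the real bugs.
        double detectionRate = (double) seededFound / seeded;   // 0.20

        // If the same suite also revealed, say, 6 real (non-seeded) bugs,
        // the total number of real bugs would be estimated at 6 / 0.20 = 30.
        int realFound = 6;       // hypothetical count, not from the slides
        double estimatedRealTotal = realFound / detectionRate;

        System.out.printf("estimated detection rate: %.0f%%%n", detectionRate * 100);
        System.out.printf("estimated total real bugs: %.0f%n", estimatedRealTotal);
    }
}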

  5. Assumptions in fault-based testing • We'd like to judge the effectiveness of a test suite in finding real faults by measuring how well it finds seeded fake faults • Valid to the extent that the seeded bugs are representative of real bugs • Not necessarily identical (e.g., black marbles are not identical to clear marbles), but the differences should not affect the selection • E.g., if I mix metal ball bearings into the marbles and pull them out with a magnet, I don't learn anything about how many marbles were in the bowl

  6. Mutation testing • A mutant is a copy of a program with a mutation • A mutation is a syntactic change (a seeded bug) • Example: change (i < 0) to (i <= 0) • Run test suite on all the mutant programs • A mutant is killed if it fails on at least one test case • If many mutants are killed, infer that the test suite is also effective at finding real bugs
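A minimal Java sketch of the kill check described above, using the (i < 0) to (i <= 0) example; the method names are invented for illustration.

// Original program fragment and one mutant of it.
public class MutantKillSketch {
    // Original: true only for strictly negative values
    static boolean isNegative(int i) { return i < 0; }

    // Mutant: the seeded syntactic change (i < 0) -> (i <= 0)
    static boolean isNegativeMutant(int i) { return i <= 0; }

    public static void main(String[] args) {
        // The boundary input i = 0 makes the mutant behave differently from
        // the original, so this test case kills the mutant.
        int i = 0;
        boolean killed = (isNegativeMutant(i) != isNegative(i));
        System.out.println("mutant killed by i = 0: " + killed);
    }
}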


  8. Fault-based adequacy criteria • Competent programmer hypothesis: programs are nearly correct, and real faults are small variations from the correct program => mutants are reasonable models of real buggy programs • Coupling effect hypothesis: tests that find simple faults also find more complex faults, so even if mutants are not perfect representatives of real faults, a test suite that kills mutants is also good at finding real faults • Mutation operators • A syntactic change from a legal program to a legal program • So: specific to each programming language; C++ mutations don't work for Java, Java mutations don't work for Python • Examples (two are illustrated in the sketch below): • crp: constant for constant replacement • for instance: from (x < 5) to (x < 12) • select from constants found somewhere in the program text • ror: relational operator replacement • for instance: from (x <= 5) to (x < 5) • vie: variable initialization elimination • change int x = 5; to int x;
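A hedged sketch of two of the operators listed above (ror and crp) applied to a tiny Java predicate; the method names and test input are illustrative, not taken from any particular tool.

public class MutationOperatorExamples {
    // Original predicate
    static boolean originalCheck(int x) { return x <= 5; }

    // ror (relational operator replacement): <= becomes <
    static boolean rorMutant(int x) { return x < 5; }

    // crp (constant for constant replacement): 5 becomes 12
    static boolean crpMutant(int x) { return x <= 12; }

    // vie (variable initialization elimination) would turn "int x = 5;" into
    // "int x;", which is easier to show in a larger, stateful example.

    public static void main(String[] args) {
        // The boundary input x = 5 kills the ror mutant but not the crp mutant,
        // so a suite containing only this test is not adequate for crp.
        int x = 5;
        System.out.println("ror mutant killed: " + (originalCheck(x) != rorMutant(x)));
        System.out.println("crp mutant killed: " + (originalCheck(x) != crpMutant(x)));
    }
}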

  9. Variations on mutation: weak mutation • Problem: there are lots of mutants, and running each test case to completion on every mutant is expensive • The number of mutants grows with the square of program size • Approach: • Execute a meta-mutant (with many seeded faults) together with the original program • Mark a seeded fault as "killed" as soon as a difference in intermediate state is found • Without waiting for program completion • Restart with a new mutant selection after each "kill"
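A simplified sketch of the weak-mutation idea: the original and the mutated sub-expression are evaluated side by side, and the mutant is marked killed as soon as their intermediate states differ, without running the rest of the program. All names and values here are illustrative.

public class WeakMutationSketch {
    public static void main(String[] args) {
        int x = 3;   // intermediate program state at the mutated point

        // Original sub-expression vs. its ror mutant (< replaced by <=).
        boolean original = (x < 3);
        boolean mutant = (x <= 3);

        // A difference in intermediate state is enough to count as a kill;
        // a meta-mutant would make this check for many seeded faults in one run.
        boolean killed = (original != mutant);
        System.out.println("ror mutant of (x < 3) killed at x = 3: " + killed);
    }
}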

  10. Statistical mutation • Problem: there are lots of mutants, and running each test case on every mutant is expensive • It is just too expensive to create N² mutants for a program of N lines (even if we don't run each test case separately to completion) • Approach: just create a random sample of mutants • May be just as good for assessing a test suite • Provided we don't design test cases to kill particular mutants (which would be like selectively picking out black marbles)
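A minimal sketch of the sampling step, assuming mutants are represented by plain identifiers; the sizes and sampling method are illustrative only.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class StatisticalMutationSketch {
    public static void main(String[] args) {
        // Stand-in identifiers for a large pool of generated mutants.
        List<Integer> allMutants = new ArrayList<>();
        for (int id = 0; id < 10_000; id++) {
            allMutants.add(id);
        }

        // Assess the test suite only on a random sample; the kill ratio on the
        // sample estimates the score on the full set, provided the tests were
        // not written to target particular mutants.
        Collections.shuffle(allMutants);
        List<Integer> sample = allMutants.subList(0, 200);
        System.out.println("assessing the suite on " + sample.size()
                + " of " + allMutants.size() + " mutants");
    }
}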

  11. If bugs were marbles ... • We could get some nice black marbles to judge the quality of test suites • Since bugs aren’t marbles ... • Mutation testing rests on some troubling assumptions about seeded faults, which may not be statistically representative of real faults • Nonetheless ... • A model of typical or important faults is invaluable information for designing and assessing test suites

  12. Test Execution • Test execution is the key to testing • Execution matrices and sequencing can help to improve the efficiency of execution, as does test planning as a whole • Test environments need to be planned and managed • Test data is part of the test environment and may be fictitious or real-world • Changes in environment, data, and procedures need to be understood in order to manage their impact on test results • Test metrics provide information to manage the testing activities • Test reports communicate the outcome of a testing activity • Regression testing is used to verify new releases of software • Stopping testing means accepting a level of risk, and the decision to stop should be based on an estimate of that level of risk • Defect management involves reporting, investigating, correcting, and re-verifying the correction

  13. SCAFFOLDING DEFINITION • The term is borrowed from construction, where scaffolding means a temporary elevated platform (supported or suspended) and its supporting structure, used for supporting workers or materials or both while a building is worked on • In testing, scaffolding plays the analogous role: temporary code built to support test execution rather than to ship with the product, including test drivers, stubs, and harnesses

  14. GENERIC VERSUS SPECIFIC SCAFFOLDING • How general should scaffolding be? – We could build a driver and stubs for each test case – ... or at least factor out some common code of the driver and test management (e.g., JUnit) – ... or further factor out some common support code, to drive a large number of test cases from data (as in DDSteps; see the sketch below) – ... or further, generate the data automatically from a more abstract model (e.g., a network traffic model) • A question of costs and re-use – just as for other kinds of software
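A small sketch of the data-driven option above: one generic driver runs many test cases from a data table, instead of a hand-written driver per case. The program under test (square) and the table are stand-ins, not part of the slides.

public class DataDrivenDriver {
    // Stand-in for the program under test.
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        // Each row is {input, expected output}; adding a test case means
        // adding a row, not writing new driver code.
        int[][] cases = { {0, 0}, {3, 9}, {-4, 16} };
        int failures = 0;
        for (int[] c : cases) {
            int actual = square(c[0]);
            if (actual != c[1]) {
                failures++;
                System.out.println("FAIL: square(" + c[0] + ") = " + actual
                        + ", expected " + c[1]);
            }
        }
        System.out.println(failures == 0 ? "all cases passed" : failures + " case(s) failed");
    }
}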

  15. From test specification to test case • Test design often yields test case specifications, rather than concrete data – Ex: “a large positive number”, not 420023 – Ex: “a sorted sequence, length > 2”, not “Alpha, Beta, Chi, Omega” • Other details for execution may be omitted • Generation creates concrete, executable test cases from test case specifications
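A hedged sketch of turning the specification "a sorted sequence, length > 2" into concrete test data; the generator is illustrative and is not a specific tool named in the slides.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SpecToTestCase {
    // Produces a non-decreasing (sorted) list of the requested length.
    static List<Integer> sortedSequence(int length, Random rnd) {
        List<Integer> seq = new ArrayList<>();
        int value = rnd.nextInt(10);
        for (int i = 0; i < length; i++) {
            seq.add(value);
            value += rnd.nextInt(5);   // never decreases, so the list stays sorted
        }
        return seq;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);   // fixed seed so the concrete case is repeatable
        List<Integer> testInput = sortedSequence(3 + rnd.nextInt(5), rnd);   // length > 2
        System.out.println("concrete test input: " + testInput);
    }
}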

  16. Scaffolding • Test driver – A “main” program for running a test • May be produced before a “real” main program • Provides more control than the “real” main program – To drive the program under test through test cases • Test stubs – Substitute for called functions/methods/objects • Test harness – Substitutes for other parts of the deployed environment • Ex: software simulation of a hardware device
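A minimal sketch of a driver and a stub, assuming a hypothetical unit under test that normally depends on an external credit-check service; all class and method names are invented for this illustration.

// Dependency of the unit under test, normally backed by a real service.
interface CreditCheck {
    boolean approved(String customerId);
}

// Unit under test (illustrative).
class OrderHandler {
    private final CreditCheck creditCheck;
    OrderHandler(CreditCheck creditCheck) { this.creditCheck = creditCheck; }
    boolean accept(String customerId) { return creditCheck.approved(customerId); }
}

public class OrderHandlerDriver {
    public static void main(String[] args) {
        // Test stub: stands in for the real service and returns canned answers.
        CreditCheck stub = customerId -> customerId.startsWith("GOOD");

        // Test driver: a "main" that drives the unit under test through test cases.
        OrderHandler handler = new OrderHandler(stub);
        System.out.println("accepts GOOD-1: " + handler.accept("GOOD-1"));   // expect true
        System.out.println("accepts BAD-9: " + handler.accept("BAD-9"));     // expect false
    }
}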

  17. TEST ORACLES • A test oracle is the mechanism that judges whether the observed behavior of the program under test is correct • In principle an oracle can be regarded as a “black box” whose verdict is simply relied upon, so its implementation need not be specified • In practice, however, the implementation must be considered • Criteria for a good oracle implementation: speed, generality, feasibility

  18. Comparison and self oracles • Comparison-based oracle • With a comparison-based oracle, we need a predicted output for each input – Oracle compares actual to predicted output, and reports failure if they differ • Self-checking code as oracle • An oracle can also be written as self-checks – Often possible to judge correctness without predicting results • Advantages and limits: a comparison-based oracle is fine for a small number of hand-generated test cases, while self-checks are usable with large, automatically generated test suites (both styles are sketched below)
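Both oracle styles sketched with sorting as a stand-in program under test; the example and the property checked are illustrative only.

import java.util.Arrays;

public class OracleSketch {
    public static void main(String[] args) {
        int[] input = {5, 2, 9, 2};
        int[] actual = input.clone();
        Arrays.sort(actual);   // stand-in for the program under test

        // Comparison-based oracle: compare the actual output to a predicted output.
        int[] predicted = {2, 2, 5, 9};
        System.out.println("comparison oracle passes: " + Arrays.equals(actual, predicted));

        // Self-checking oracle: judge a property of the result instead of predicting
        // the exact value; here only the ordering property is checked (a fuller check
        // would also verify the result is a permutation of the input).
        boolean nonDecreasing = true;
        for (int i = 1; i < actual.length; i++) {
            if (actual[i - 1] > actual[i]) nonDecreasing = false;
        }
        System.out.println("self-check oracle passes: " + nonDecreasing);
    }
}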

  19. CAPTURE AND REPLAY • Sometimes there is no alternative to human input and observation – Even if we separate testing program functionality from the GUI, some testing of the GUI is required • We can at least cut repetition of human testing • Capture a manually run test case, then replay it automatically – with a comparison-based test oracle: behavior must be the same as the previously accepted behavior • Reusable only until a program change invalidates it • Lifetime depends on the abstraction level of input and output
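A toy sketch of capture and replay with a comparison-based oracle: responses observed during a captured session are stored and later compared against a rerun. The "system under test" and its inputs are stand-ins for a real GUI session.

import java.util.LinkedHashMap;
import java.util.Map;

public class CaptureReplaySketch {
    // Stand-in for the application being exercised through its interface.
    static String systemUnderTest(String input) {
        return input.toUpperCase();
    }

    public static void main(String[] args) {
        // Capture phase: record each input with the observed (accepted) output.
        Map<String, String> recorded = new LinkedHashMap<>();
        for (String input : new String[] {"add item", "checkout"}) {
            recorded.put(input, systemUnderTest(input));
        }

        // Replay phase: rerun the same inputs; any difference from the recorded
        // output is a failure. The recording stays valid only until a program
        // change legitimately alters the behavior.
        for (Map.Entry<String, String> step : recorded.entrySet()) {
            String now = systemUnderTest(step.getKey());
            System.out.println(step.getKey() + ": "
                    + (now.equals(step.getValue()) ? "same as recorded" : "DIFFERS"));
        }
    }
}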
