
15. Regression testing


Presentation Transcript


  1. 15. Regression Testing Tom Verheyen, Jelle Slowack, Bart Smets, Glenn Van Loon

  2. Outline • Introduction • What, why, when, how • Regression faults • Test automation • Test suite maintenance • Reducing a test suite • Patterns • Retest All • Retest Risky Use Cases • Retest by Profile • Retest Changed Code • Retest Within Firewall

  3. What is regression testing? • Baseline version: a component/system version that has passed a test suite • Delta version: a changed version that has not yet passed a regression test • Delta build: an executable configuration of the SUT containing the delta version and baseline components • Regression test case: a test case that the baseline passed and that is expected to pass on the delta build • Regression test suite: a composition of regression test cases • Regression fault: a fault revealed by a test case that no longer passes

  4. Why should we regression test? Ariane 5 • First test flight: crash! • Unchecked data conversion from 64-bit floating point to 16-bit signed integer (sketched below) • Software (Ada) reused from Ariane 4 • No regression tests done • Estimated cost: $350 million to $2.5 billion
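
The fault class is easy to see in code. Below is a minimal Python sketch (illustrative only, not the actual Ada flight software): an unchecked store of a 64-bit float into a 16-bit signed integer wraps around silently, while a checked conversion fails loudly and is exactly the kind of contract a regression test can pin down. The name horizontal_bias and the value are hypothetical.

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(value: float) -> int:
    # Blind conversion: truncates, then wraps into 16 bits the way a
    # raw hardware store would -- no error, just a wrong number.
    raw = int(value) & 0xFFFF
    return raw - 0x10000 if raw > INT16_MAX else raw

def to_int16_checked(value: float) -> int:
    # A regression-testable contract: reject out-of-range input loudly.
    if not INT16_MIN <= value <= INT16_MAX:
        raise OverflowError(f"{value} does not fit in a signed 16-bit integer")
    return int(value)

# Ariane 4 trajectories kept this reading small; Ariane 5's did not.
horizontal_bias = 50000.0   # hypothetical out-of-range sensor value
print(to_int16_unchecked(horizontal_bias))   # -15536: silent nonsense
try:
    to_int16_checked(horizontal_bias)
except OverflowError as exc:
    print("caught:", exc)                    # fails fast instead
```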

  5. When can we use regression testing? • What kind of errors can be found? • Delta side effects • Delta-baseline incompatibilities • Undesirable feature interactions between baseline and delta • Bad fixes • IBM: bad-fix injection rates of 2%-20%

  6. When should we run a regression test? • When a new subclass has been developed • When a superclass is changed • When a server class is changed • When a bug fix has been completed • When a new system build has been generated • When a new increment is generated for system-scope integration testing • When a system has stabilized and the final release build has been generated

  7. How to run a regression test? • The basic procedure is the same in all situations • Remove broken test cases • Choose a regression test suite (full or reduced: see the regression patterns) • Set up the test configuration • Run the regression test suite • Act on the results

  8. Regression faults • A combination of baseline B and delta D fails • D has a side effect on B • D has a side effect on C, a client/server of B • D is incompatible with B (Ariane 5!) • D's and B's contract specifications differ • ... (extended list of errors: Binder p. 761-762) • Faults occur at intercomponent, subsystem and system scope

  9. Test automation • Manual regression testing does not scale • A person must re-enter each test and judge its result • If the number of tests and the time to rerun them increase: • Rerun fewer baseline test cases and focus on tests that validate final requirements • Add more people to enter test cases and judge results (usually not possible) • As time to ship decreases, the number of regression tests run approaches zero

  10. Test automation: requirements • Effective automated regression testing requires certain capabilities: • Version control, comparing baseline and delta results, a smart comparator (sketched below), ... • Want to know more? Binder p. 764 • The test environment should be controllable • Some variable factors can result in less-than-identical test configurations • The test environment itself, a generator with the clock as seed, ... • Extended list of factors: Binder p. 765
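
As a concrete illustration of the "smart comparator" capability, here is a minimal Python sketch that masks fields expected to vary between runs before comparing baseline and delta output. The log format and the volatile-field patterns are assumptions for the example, not part of Binder's text.

```python
import re

# Fields that legitimately differ between identical runs (assumed
# formats): timestamps and process ids get replaced by placeholders.
VOLATILE = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIME>"),
    (re.compile(r"pid=\d+"), "pid=<PID>"),
]

def normalize(line: str) -> str:
    for pattern, placeholder in VOLATILE:
        line = pattern.sub(placeholder, line)
    return line

def logs_match(baseline: str, delta: str) -> bool:
    # Compare line by line after masking volatile fields.
    base_lines = [normalize(l) for l in baseline.splitlines()]
    delta_lines = [normalize(l) for l in delta.splitlines()]
    return base_lines == delta_lines

print(logs_match("2023-01-01 10:00:00 OK pid=41",
                 "2024-06-30 09:12:55 OK pid=77"))  # True: only volatile fields differ
```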

  11. Test suite maintenance • A regression test suite can rapidly grow large • Test decay is inevitable • Broken test cases, redundant test cases, ... • Example: an aerospace manufacturer • Test suite of 165,000 test cases • About 90% of the test cases were redundant • Documentation was inadequate • Some parts were not tested: 3,000 new test cases were added, and only 18,000 test cases remained

  12. Test suite maintenance: procedure • Run the baseline test suite on the delta build. Remove broken test cases. • Correct relevant bugs. • Merge delta component-scope tests with the baseline test suite. • Use a coverage analyzer. • Rerun the test suite (to measure coverage) • Analyze tests with the same entry-exit paths: still necessary? (see the sketch below) • If code is not covered: develop a new test case. • Rerun the test suite again. • Check in the revised baseline test suite as the new baseline test suite.
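
The redundancy-analysis step can be sketched in a few lines: if the coverage analyzer reports which segments each test exercises, tests with identical footprints are candidates for removal. This is a heuristic only (identical coverage does not prove identical fault-detection ability), and the data below is hypothetical.

```python
# Hypothetical coverage-analyzer output: test id -> covered segments.
coverage = {
    "tc_01": {"seg_a", "seg_b"},
    "tc_02": {"seg_a", "seg_b"},  # same footprint as tc_01
    "tc_03": {"seg_c"},
}

kept, redundant = {}, []
for test_id, segments in sorted(coverage.items()):
    footprint = frozenset(segments)
    if footprint in kept:
        # Same entry-exit/coverage footprint as an already-kept test:
        # flag for manual review rather than blind deletion.
        redundant.append((test_id, kept[footprint]))
    else:
        kept[footprint] = test_id

print("keep:", sorted(kept.values()))     # ['tc_01', 'tc_03']
print("review as redundant:", redundant)  # [('tc_02', 'tc_01')]
```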

  13. Reducing a test suite • Even with maintenance, a test suite can become too large • Use a reduced regression test suite with selected tests • How can we "safely" reduce a regression test suite? • Safe: the reduced suite keeps all tests that could possibly exhibit different output when run on the delta build

  14. Reducing a test suite • 4 criteria • Inclusiveness: the percentage of baseline tests that may reveal regression faults and are selected into the reduced RT suite • A safe RT suite is 100% inclusive • Precision: the percentage of baseline tests that cannot reveal regression faults and are excluded from the reduced RT suite • Example: of 100 test cases, 1 cannot reach the changed code; a suite that omits exactly that test is 100% precise • A precise RT suite selects no tests that cannot produce different output • Efficiency: the cost of identifying the reduced RT suite • Generality: the range of application of the selection strategy • (The first two criteria are computed below)
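
The first two criteria reduce to simple set arithmetic. A minimal sketch, assuming we can label which baseline tests are modification-revealing for a given delta (in practice, this is exactly what a selection strategy tries to approximate):

```python
def inclusiveness(revealing: set, selected: set) -> float:
    # % of fault-revealing baseline tests that made it into the suite.
    return 100.0 * len(revealing & selected) / len(revealing)

def precision(all_tests: set, revealing: set, selected: set) -> float:
    # % of non-revealing baseline tests that were correctly left out.
    non_revealing = all_tests - revealing
    return 100.0 * len(non_revealing - selected) / len(non_revealing)

tests = {f"tc_{i:02d}" for i in range(100)}     # hypothetical suite
revealing = {"tc_01", "tc_02"}                  # hypothetical analysis result
selected = {"tc_01", "tc_02", "tc_03"}          # reduced RT suite
print(inclusiveness(revealing, selected))       # 100.0 -> a safe selection
print(precision(tests, revealing, selected))    # ~99.0 -> one needless test kept
```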

  15. Test patterns • Retest All: the default • Retest Risky Use Cases • Retest by Profile • Retest Changed Code • Retest Within Firewall

  16. 1. Retest All • Rerun the entire baseline test suite • Context • Any scope • Fault model • Failures caused by incompatibilities, side effects, undesirable feature interactions

  17. 1. Retest All • Strategy • Test model: the baseline test suite is reused • Test procedure: rerun after removing broken tests • Oracle: results should match the previous run • Automation: see slides 9-10 • Entry criteria • Delta components pass component-scope testing • A suitable baseline test suite exists • The test environment has the same configuration as in previous runs (see slide 10)

  18. 1. Retest All • Exit criteria • Any no-pass test cases reveal only bugs deemed acceptable • All remaining test cases pass • Consequences • Inclusiveness, precision, efficiency, generality • See the comparison slides (47-50)

  19. 2. Retest Risky Use Cases • Use risk-based criteria to select a partial RT suite • Context • Limited time, personnel or equipment • Fault model • Failures caused by incompatibilities, side effects, undesirable feature interactions

  20. 2. Retest Risky Use Cases • Strategy • Test model: risk criteria select the test cases • Suspicious use cases: unstable, complex use cases • Critical use cases: necessary for safe, effective operation • Test procedure: identify, develop and run the test suite • Oracle: results should match the previous run • Automation: see slides 9-10 • Entry criteria • Same as the "Retest All" pattern

  21. 2. Retest Risky Use Cases • Exit criteria • Same as the "Retest All" pattern • Consequences • Inclusiveness, precision, efficiency, generality • See the comparison slides (47-50)

  22. 3. Retest by Profile Intent • Partial regression test • Budget-constrained Context • Limited time, personnel or equipment • How to get the greatest dependability within budget? • Applicable at any scope

  23. 3. Retest by Profile Fault model • Allocate tests by operational profile: frequency-based testing to maximize reliability Strategy: Test model • How many test cases for each use case? Derived from the total budget

  24. 3. Retest by Profile Example • Total budget: 6,000 minutes • Running 1 test case takes 5 minutes • Chance that a test reveals a bug: 0.5% • A bug fix requires 200 minutes • Baseline test suite: 20,000 test cases How many test cases (T) can we run? 6000 = 5T + (0.005 × 200)T = 6T ⇒ T = 1,000 (worked below)
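
The same budget arithmetic, spelled out as a sketch: the expected cost per test is its run time plus the expected fix time it triggers, and the budget divided by that cost gives the number of tests.

```python
budget = 6000     # minutes available for regression testing
run_cost = 5      # minutes to run one test case
p_fail = 0.005    # chance a test reveals a bug (0.5%)
fix_cost = 200    # minutes to fix a revealed bug

# Expected cost per test = run time + expected fix time.
per_test = run_cost + p_fail * fix_cost   # 5 + 1 = 6 minutes
print(int(budget // per_test))            # 1000 test cases fit the budget
```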

  25. 3. Retest by Profile [Figure: allocation of regression test time by use-case frequency]

  26. 3. Retest by Profile Strategy: Test procedure • Identify, develop and run the reduced test suite • Verify that critical use cases and use-case variants are included Strategy: Oracle • Results should match the previous run

  27. 3. Retest by Profile Strategy: Automation • See slides 9-10 • Automate selection in the previous example: • 1,000 of 20,000 test cases ⇒ a 1-in-20 random selector (each test has a 5% chance of being selected; sketched below) Entry & exit criteria • Same as the "Retest All" pattern
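
A minimal sketch of such a random selector; the 5% rate follows from the corrected budget example on slide 24 (1,000 of 20,000 tests), and the test names are hypothetical.

```python
import random

random.seed(42)   # fixed seed so the selection is reproducible

baseline = [f"tc_{i:05d}" for i in range(20000)]   # hypothetical suite
# Keep each baseline test with probability equal to the budgeted fraction.
reduced = [t for t in baseline if random.random() < 0.05]
print(len(reduced))   # close to the budgeted 1,000 tests
```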

  28. 3. Retest by Profile Consequences • Inclusiveness, precision, efficiency, generality • See the comparison slides (47-50)

  29. 4. Retest Changed Code Intent • Partial regression test • Code-change analysis Context • Limited time, personnel or equipment • Applicable at class, cluster or subsystem scope

  30. 4. Retest Changed Code Fault model • See the "Retest All" pattern Strategy: Test model • Select all baseline tests that have reached: • A changed code segment • Or a deleted code segment → the "graph walk" technique (Rothermel and Harrold)

  31. 4. Retest Changed Code Strategy: Test model (2) • The basic model does NOT consider: • Inheritance • Dynamic binding • Data flow • Control flow • Other dependencies arising from state-based behaviour, iteration or recursion

  32. 4. Retest Changed Code Strategy: Test procedure • Obtain a report from the coverage analyzer that lists code segments by test case

  33. 4. Retest Changed Code Strategy: Test procedure • Extract (segment, test case) pairs from the report

  34. 4. Retest Changed Code Strategy: Test procedure • Use a version control tool to generate a report on the changes between baseline & delta

  35. 4. Retest Changed Code Strategy: Test procedure • Concatenate the test pairs (step 2) and the change report (step 3). Sort by segment!

  36. 4. Retest Changed Code Strategy: Test procedure • Selection rules for tests (sketched below): • Tests under an unchanged segment: skip • Tests under a deleted segment: include • Tests under a changed segment: include • Tests under a new segment: skip (new code needs new tests)
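
Steps 2-5 amount to a join between the coverage pairs and the change report, with the four rules applied to the result. A minimal sketch with hypothetical segment and test names; a real implementation would parse the coverage and version-control reports instead of using literals.

```python
# (segment, test case) pairs from the coverage report (step 2).
segment_tests = [
    ("seg_a", "tc_01"), ("seg_b", "tc_01"),
    ("seg_b", "tc_02"), ("seg_c", "tc_03"),
]
# Change report from version control (step 3): segment -> status.
changes = {"seg_a": "same", "seg_b": "changed",
           "seg_c": "deleted", "seg_d": "new"}

selected = set()
for segment, test in segment_tests:   # the join of steps 4-5
    if changes.get(segment) in ("changed", "deleted"):
        selected.add(test)            # rules: include
    # "same" segments are skipped; "new" code needs new tests, not old ones

print(sorted(selected))               # ['tc_01', 'tc_02', 'tc_03']
```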

  37. 4. Retest Changed Code Strategy: Oracle • Same as the "Retest All" pattern Strategy: Automation • See slides 9-10 Entry & exit criteria • Same as the "Retest All" pattern Consequences • See the comparison slides (47-50)

  38. 5. Retest Within Firewall Intent • Partial regression test • Code-dependency analysis Context • Limited time, personnel or equipment • Applicable at class, cluster or subsystem scope

  39. 5. Retest Within Firewall Fault model • See the "Retest All" pattern Strategy: Test Model • Firewall = the set of components whose test cases will be included in the regression test • The firewall set is identified by analyzing the changes to each component in the SUT and its dependencies.

  40. 5. Retest Within Firewall Strategy: Test Model (2) • Each pair of components (A, B) is analyzed • Either or both may be changed: • Contract change: alters the external interface and/or the externally visible contract (e.g. alteration of public methods, ...) • Implementation change: other changes, not visible to clients

  41. 5. Retest Within Firewall Strategy: Test Model (3) • The dependency relationship between each pair is used to select test cases • Four relationships: • B uses A • B is a subclass of A • B overrides A • B is a server of A

  42. 5. Retest Within Firewall Strategy: Test Model (4) • For each relationship, baseline test cases that apply to A, B or (A, B) may be reused at one of four levels: • Level 0: no test cases can be rerun • Level 1: the state setup and test messages can be rerun; expected results must be redeveloped; the sequence of test cases may need to be reworked • Level 2: the state setup, test messages and expected results can be rerun; the sequence of test cases may need to be reworked • Level 3: the test cases can be rerun as is

  43. 5. Retest Within Firewall Strategy: Test Model (5) • Example • Class Account (A) and class Money (B) • Account uses a variable amount of type Money • Implementation change to Account • Relationship: Money is a server of Account • ...

  44. 5. Retest Within Firewall • Level 2 reuse of the test cases for Account • (decision table for selecting regression tests: Table 15.7, Binder p. 791)

  45. 5. Retest Within Firewall Strategy: Test procedure • Develop a dependency matrix • Apply the decision rules (Table 15.7) • Rerun tests according to the reuse levels (membership computation sketched below)
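
A minimal sketch of computing firewall membership, under the simplifying assumption that every component depending (directly or transitively) on a changed component enters the firewall; Binder's full pattern additionally applies the decision table (15.7) per relationship type to pick a reuse level. Component names are hypothetical.

```python
from collections import defaultdict

# Dependency matrix as a map: component -> components it uses/inherits from.
depends_on = {
    "Account": ["Money"],
    "Statement": ["Account"],
    "AuditLog": [],
}

# Invert to a clients map: component -> components that depend on it.
clients = defaultdict(list)
for component, servers in depends_on.items():
    for server in servers:
        clients[server].append(component)

def firewall(changed: set) -> set:
    # Start from the changed components and walk dependents transitively.
    members, frontier = set(changed), list(changed)
    while frontier:
        for client in clients[frontier.pop()]:
            if client not in members:
                members.add(client)
                frontier.append(client)
    return members

print(sorted(firewall({"Money"})))   # ['Account', 'Money', 'Statement']
```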

  46. 5. Retest Within Firewall Strategy: Oracle • Same as the "Retest All" pattern Strategy: Automation • See slides 9-10 Entry & exit criteria • Same as the "Retest All" pattern Consequences • See the next slides!

  47. Pattern Comparison: Inclusiveness • Retest All: safe; all tests are selected • Retest Risky Use Cases: unsafe; test cases are not selected by code or dependency analysis • Retest by Profile: unsafe; same as Retest Risky Use Cases • Retest Changed Code: safe; all baseline tests that can produce a different result are selected • Retest Within Firewall: safe

  48. Pattern Comparison: Precision • Retest All: no tests are skipped; least precise • Retest Risky Use Cases: selects some tests that could be skipped • Retest by Profile: ditto • Retest Changed Code: selects few tests that could be skipped; the most precise of the white-box partial regression strategies • Retest Within Firewall: selects few tests that could be skipped

  49. Pattern Comparison: Efficiency • Retest All: lowest analysis and setup cost, high run cost • Retest Risky Use Cases: time and cost are constrained by the budget; selection is based on use cases rather than the implementation, so the analysis can be done without code analyzers or technical knowledge of the SUT • Retest by Profile: same as Retest Risky Use Cases, but once the operational profile is established, the cost of selection analysis is low

  50. Pattern Comparison: Efficiency (2) • Retest Changed Code: high setup cost; run cost proportional to the size of the deltas; total cost can exceed Retest All because of the reselection analysis • Retest Within Firewall: highest setup cost; run cost proportional to the size of the firewall
