Software Engineering

Presentation Transcript


  1. Software Engineering Testing (Concepts and Principles) James Gain (jgain@cs.uct.ac.za) http://people.cs.uct.ac.za/~jgain/courses/SoftEng/

2. Objectives
• Introduce the concepts and principles of testing
• Summarize the testing process
• Consider a variety of testing and debugging methods: White Box, Black Box, Debugging
[Figure: testing activities span analysis, design, code and test]

3. Software Testing
• Narrow view (unit level):
  • Exercise a program with the specific intent of finding errors prior to delivery to the end user
  • A good test case is one that has a high probability of finding an as-yet-undiscovered error
  • A successful test is one that uncovers an as-yet-undiscovered error
• Broad view (acceptance level):
  • The process used to ensure that the software conforms to its specification and meets the user requirements
  • Validation: “Are we building the right product?”
  • Verification: “Are we building the product right?”
  • Takes place at all stages of software engineering

4. What Testing Shows
[Figure: testing reveals errors, requirements conformance, performance and an indication of quality]

5. Testing Principles
• All tests should be traceable to customer requirements
• Tests should be planned long before testing begins
• Pareto Principle: 80% of errors occur in 20% of classes
• Testing should begin “in the small” and progress toward testing “in the large”
• Exhaustive testing is not possible
• To be most effective, testing should be conducted by an independent third party

6. Who Tests the Software?
• Developer: understands the system, but will test “gently” and is driven by delivery
• Independent tester: must learn about the system, but will attempt to break it and is driven by quality

7. Features of Testable Software
• Operability: “The better it works, the more efficiently it can be tested”
  • Bugs are easier to find in software which at least executes
• Observability: “What you see is what you test”
  • The results of each test case should be readily observed
• Controllability: “The better we can control the software, the more testing can be automated and optimized”
  • Easier to set up test cases
• Decomposability: “By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting”
  • Testing can be targeted

8. More Testability Features
• Simplicity: “The less there is to test, the more quickly we can test it”
  • Reduce complex architecture and logic to simplify tests
• Stability: “The fewer the changes, the fewer the disruptions to testing”
  • Changes disrupt test cases
• Understandability: “The more information we have, the smarter we will test”

9. Test Case Design
• A test case is a controlled experiment that tests the system
• Process:
  • Objective: to uncover errors
  • Criteria: in a complete manner
  • Constraints: with a minimum of effort and time
• Test cases are often designed badly, in an ad hoc fashion
• “Bugs lurk in corners and congregate at boundaries.” Good test case design applies this maxim

10. Exhaustive Testing (infeasible)
• Consider two nested loops containing four if..then..else statements, where each loop can execute up to 20 times
• There are 10^14 possible paths!
• If we execute one test per millisecond, it would take 3170 years to test this program (10^14 ms = 10^11 s ≈ 3170 years)

11. Selective Testing (feasible)
• Test a carefully selected execution path
• Cannot be comprehensive

12. Testing Methods
• Black Box: examines fundamental interface aspects without regard to internal structure
• White (Glass) Box: closely examines the internal procedural detail of system components
• Debugging: fixing errors identified during testing
[Figure: white-box and black-box methods sit beneath broader testing strategies]

13. [1] White-Box Testing
• Goal:
  • Ensure that all statements and conditions have been executed at least once
• Derive test cases that (see the sketch below):
  • Exercise all independent execution paths
  • Exercise all logical decisions on both true and false sides
  • Execute all loops at their boundaries and within operational bounds
  • Exercise internal data structures to ensure validity
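
A minimal sketch of these goals in practice, using a hypothetical function (cappedSum and all values are invented for illustration): the test cases take the decision on both its true and false sides and drive the loop through zero, one and many passes.

    #include <cassert>

    // Hypothetical example: sum of the first n values, clamped at cap.
    double cappedSum(const double values[], int n, double cap)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)   // loop: exercise 0, 1 and many passes
        {
            sum += values[i];
            if (sum > cap)            // decision: exercise true and false
                sum = cap;
        }
        return sum;
    }

    int main()
    {
        double v[] = {1.0, 2.0, 3.0};
        assert(cappedSum(v, 0, 10.0) == 0.0);  // loop skipped entirely
        assert(cappedSum(v, 1, 10.0) == 1.0);  // one pass, decision false
        assert(cappedSum(v, 3, 2.5) == 2.5);   // many passes, decision true
        return 0;
    }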

14. Why Cover all Paths?
• Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed
• We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis
• Typographical errors are random; it is likely that untested paths will contain some

15. Basis Path Testing
• Provides a measure of the logical complexity of a method and a guide for defining a basis set of execution paths
• Represent control flow using flow graph notation
• Nodes represent processing, arrows represent control flow
[Figure: flow graph notation for sequence, while and if constructs]

16. Flow Graphs: Compound Conditions
• Separate nodes are created for each arm of a compound condition (e.g. a and b are separate nodes in the condition if(a && b))
• Example:

    if (a || b)
    {
        x();
    }
    else
    {
        y();
    }
    z();

[Figure: flow graph with separate predicate nodes a and b leading to x, y and z]

17. Cyclomatic Complexity
• Compute the cyclomatic complexity V(G) of a flow graph G as any of:
  • the number of simple predicates (decisions) + 1, or
  • V(G) = E - N + 2 (where E is the number of edges and N the number of nodes), or
  • the number of enclosed areas + 1
• In this case (the flow graph shown on the slide) V(G) = 4
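
To make the three calculations concrete, here is a hypothetical fragment (classify is invented for illustration) whose flow graph has 3 simple predicates, 7 nodes, 9 edges and 3 enclosed areas, so all three formulas agree on V(G) = 4:

    // Three simple predicates, so V(G) = 3 + 1 = 4.
    // The flow graph has E = 9 edges and N = 7 nodes: V(G) = 9 - 7 + 2 = 4.
    // It encloses 3 areas (two if regions and one loop): V(G) = 3 + 1 = 4.
    int classify(int x, int y)
    {
        int result = 0;
        if (x > 0)           // predicate 1
            result += 1;
        if (y > 0)           // predicate 2
            result += 2;
        while (result > 3)   // predicate 3
            result -= 1;
        return result;
    }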

18. Cyclomatic Complexity and Errors
• A number of industry studies have indicated that the higher V(G), the higher the probability of errors
[Figure: plot of number of modules against V(G); modules in the high-V(G) range are more error prone]

19. Basis Path Testing
• V(G) is the number of linearly independent paths through the program (each has at least one edge not covered by any other path)
• Derive a basis set of V(G) independent paths:
  • Path 1: 1-2-3-8
  • Path 2: 1-2-3-8-1-2-3-8
  • Path 3: 1-2-4-5-7-8
  • Path 4: 1-2-4-6-7-8
• Prepare test cases that will force the execution of each path in the basis set
[Figure: flow graph with nodes 1-8]

20. Basis Path Tips
• You don’t need a flow graph, but it helps in tracing program paths
• Count each simple logical test once; compound tests (e.g. switch statements) count as 2 or more
• Basis path testing should be applied to critical modules only
• When preparing test cases, use boundary values for the conditions

21. Exercise: Basis Path Testing
• Exam Question 2001: Draw the flow graph, calculate the cyclomatic complexity, list the basis paths and prepare a test case for one path using the following C++ code fragment:

    while (value[i] != -999.0 && totinputs < 100)
    {
        totinputs++;
        if (value[i] >= min && value[i] <= max)
        {
            totvalid++;
            sum = sum + value[i];
        }
        i++;
    }

22. Solution: Flow Graph
• Each simple condition of the compound predicates gets its own node:

    while (value[i] != -999.0       // node 1
           && totinputs < 100)      // node 2
    {
        totinputs++;
        if (value[i] >= min         // node 3
            && value[i] <= max)     // node 4
        {
            totvalid++;             // node 5
            sum = sum + value[i];
        }
        i++;                        // node 6
    }
                                    // node 7: exit

[Figure: corresponding flow graph with nodes 1-7]

23. Solution: Cyclomatic Complexity
• V(G) = number of enclosed areas + 1 = 5
• V(G) = number of simple predicates + 1 = 4 + 1 = 5
• V(G) = edges - nodes + 2 = 10 - 7 + 2 = 5

24. Solution: Basis Paths
• 1-7 (value[i] = -999.0)
• 1-2-7 (value[i] = 0, totinputs = 100)
• 1-2-3-6-1-7
• 1-2-3-4-6-1-7
• 1-2-3-4-5-6-1-7 (a test case for this path is sketched below)
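
The exam question also asks for a test case for one path. A sketch for the last path, 1-2-3-4-5-6-1-7 (the harness and input values are assumptions): a single in-range value followed by the -999.0 sentinel forces one full pass through the loop body and then termination.

    #include <cassert>

    int main()
    {
        // Inputs chosen to force path 1-2-3-4-5-6-1-7:
        // one in-range value, then the sentinel.
        double value[] = {5.0, -999.0};
        double min = 0.0, max = 10.0, sum = 0.0;
        int i = 0, totinputs = 0, totvalid = 0;

        while (value[i] != -999.0 && totinputs < 100)
        {
            totinputs++;
            if (value[i] >= min && value[i] <= max)
            {
                totvalid++;
                sum = sum + value[i];
            }
            i++;
        }

        // Expected results for this path
        assert(totinputs == 1);
        assert(totvalid == 1);
        assert(sum == 5.0);
        return 0;
    }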

25. Non-Planar Flow Graphs
• if statements nested in a switch statement
• What is V(G)?
[Figure: flow graph for a switch over cases c1, c2, c3 leading to modules m1, m2, m3]

26. Other White Box Methods
• Condition Testing: exercises the logical (boolean) conditions in a program (see the sketch below)
• Data Flow Testing: selects test paths according to the location of the definition and use of variables in a program
• Loop Testing: focuses on the validity of loop constructs
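
As a sketch of condition testing (the predicate is hypothetical), each simple condition in a compound decision is driven to true and to false, not just the decision as a whole:

    // Hypothetical predicate with two simple conditions.
    bool canWithdraw(double balance, double amount)
    {
        return amount > 0.0 && amount <= balance;
    }

    // Condition testing exercises each simple condition both ways:
    //   canWithdraw(100.0, 50.0)  -> true && true  (true)
    //   canWithdraw(100.0, -5.0)  -> first condition false,
    //                                second short-circuited (false)
    //   canWithdraw(100.0, 500.0) -> true && false (false)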

27. Loop Testing
[Figure: four loop structures: simple loops, nested loops, concatenated loops and unstructured loops]

28. Simple Loops
• Test cases for simple loops, where n is the maximum number of allowable passes (sketched below):
  • Skip the loop entirely
  • Only one pass through the loop
  • Two passes through the loop
  • m passes through the loop (m < n)
  • (n-1), n and (n+1) passes through the loop
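
A sketch of this schedule for a hypothetical loop with a maximum of n = 100 allowable passes (sumReadings and its sentinel convention are invented for illustration):

    // Sums at most n = 100 readings, stopping early at a negative value.
    double sumReadings(const double r[], int len)
    {
        const int n = 100;   // maximum number of allowable passes
        double total = 0.0;
        for (int i = 0; i < len && i < n && r[i] >= 0.0; i++)
            total += r[i];
        return total;
    }

    // Simple-loop test schedule:
    //   0 passes:            len = 0, or r[0] negative
    //   1 pass:              len = 1
    //   2 passes:            len = 2
    //   m passes (m < n):    e.g. len = 50
    //   n-1, n, n+1 passes:  len = 99, 100, 101
    //   (the len = 101 case checks that the loop cannot exceed n)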

29. Nested Loops
• Test cases for nested loops:
  • Start at the innermost loop. Set all the outer loops to their minimum iteration parameter values
  • Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values
  • Move out one loop and set it up as in the previous step, holding all other loops at typical values. Continue until the outermost loop has been tested
• Test cases for concatenated loops:
  • If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops

30. [2] Black-Box Testing
• Complementary to white box testing
• Derive external conditions that fully exercise all functional requirements
[Figure: inputs and events drive the system; outputs are checked against requirements]

31. Black Box Strengths
• Attempts to find errors in the following categories:
  • Incorrect or missing functions
  • Interface errors
  • Errors in data structures or external database access
  • Behaviour or performance errors
  • Initialization or termination errors
• Black box testing is performed during the later stages of testing
• There are a variety of black box techniques:
  • Comparison testing (develop independent versions of the system)
  • Orthogonal array testing (sampling of an input domain which has several variables)

32. Black Box Methods
• Equivalence Partitioning:
  • Divide the input domain into classes of data; each test case then uncovers whole classes of errors
  • Examples: valid data (user supplied commands, file names, graphical data such as mouse picks), invalid data (data outside the bounds of the program, physically impossible data, proper value supplied in the wrong place)
• Boundary Value Analysis:
  • More errors tend to occur at the boundaries of the input domain
  • Select test cases that exercise bounding values
  • Example: an input condition specifies a range bounded by values a and b. Test cases should be designed with values a and b, and just above and below a and b
• Both methods are sketched below
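
A sketch of both methods against a hypothetical range check (inRange, a = 1 and b = 100 are invented for illustration): the first three cases each represent an equivalence class, and the last four probe the boundaries a and b.

    #include <cassert>

    // Hypothetical validator: is x within the range [a, b]?
    bool inRange(int x, int a, int b)
    {
        return x >= a && x <= b;
    }

    int main()
    {
        const int a = 1, b = 100;
        // Equivalence classes: below range, in range, above range
        assert(!inRange(-50, a, b));
        assert( inRange(50, a, b));
        assert(!inRange(500, a, b));
        // Boundary values: a, b, and just outside each
        assert(!inRange(a - 1, a, b));
        assert( inRange(a, a, b));
        assert( inRange(b, a, b));
        assert(!inRange(b + 1, a, b));
        return 0;
    }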

33. [3] Debugging
• Testing is a structured process that identifies an error’s “symptoms”
• Debugging is a diagnostic process that identifies an error’s “cause”
[Figure: debugging cycle: test cases are executed; results suggest suspected causes; debugging identifies causes; corrections are checked with regression tests and new test cases]

34. Debugging Effort
• Debugging effort splits into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests
• Definition (Regression Tests): re-execution of a subset of test cases to ensure that changes do not have unintended side effects

35. Symptoms and Causes
• Symptom and cause may be geographically separated
• Symptom may disappear when another problem is fixed
• Cause may be due to a combination of non-errors
• Cause may be due to a system or compiler error
• Cause may be due to assumptions that everyone believes
• Symptom may be intermittent

36. Not all bugs are equal
[Figure: bug damage scale, from mild through annoying, disturbing, serious, extreme and catastrophic to infectious]
• Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

37. Debugging Techniques
• Brute Force:
  • Use when all else fails
  • Memory dumps and run-time traces
  • Produces a mass of information amongst which the error may be found
• Backtracking:
  • Works in small programs where there are few backward paths
  • Trace the source code backwards from the point where the error is observed to its source
• Cause Elimination:
  • Create a set of “cause hypotheses”
  • Use error data (or further tests) to prove or disprove these hypotheses
• But debugging is an art: some people have innate prowess and others don’t

38. Debugging Tips
• Don’t immediately dive into the code; think about the symptom you are seeing
• Use tools (e.g. dynamic debuggers) to gain further insight
• If you are stuck, get help from someone else
• Ask these questions before “fixing” the bug:
  • Is the cause of the bug reproduced in another part of the program?
  • What bug might be introduced by the fix?
  • What could have been done to prevent the bug in the first place?
• Be absolutely sure to conduct regression tests when you do “fix” the bug
