Course Notes Set 10: Testing Strategies
Computer Science and Software Engineering
Auburn University
Strategic Approach to Testing • Testing begins at the unit level and works toward integrating the entire system • Various techniques for testing are appropriate at different times • Conducted by the developer and by independent test groups
Testing Strategies. Each development phase pairs with a testing level: Requirements Specification with System Testing; Preliminary Design with Integration Testing; Detailed Design with Unit Testing; Coding at the base. [Adapted from Software Testing: A Craftsman's Approach, by Jorgensen, CRC Press, 1995]
A Testing Strategy. Development proceeds from system engineering through requirements, design, and code; testing then proceeds from unit test through integration test, validation test, and system test. [Adapted from Software Engineering, 4th Ed., by Pressman, McGraw-Hill, 1997]
Integration Testing • After individual components have passed unit testing, they are merged together to form subsystems and ultimately one complete system. • Integration testing is the process of exercising this “hierarchically accumulating” system.
Integration Testing • We will (normally) view the system as a hierarchy of components. • Call graph • Structure chart • Design tree • Integration testing can begin at the top of this hierarchy and work downward, or it can begin at the bottom of the hierarchy and work upwards. • It can also employ a combination of these two approaches.
Example Component Hierarchy. A is the top-level component and calls B, C, and D; B calls E and F; D calls G. [Figure and associated examples adapted from Pfleeger 2001]
Integration Testing Strategies • Big-bang Integration • Bottom-up Integration • Top-down Integration • Sandwich Integration
Big-bang Integration • All components are tested in isolation. • Then, the entire system is integrated in one step and testing occurs at the top level. • Often used (perhaps wrongly), particularly for small systems. • Does not scale. • Difficult or impossible to isolate faults.
Big-bang Integration. Each component is tested in isolation (Test A, Test B, Test C, Test D, Test E, Test F, Test G); then all seven are combined at once for Test A,B,C,D,E,F,G.
Bottom-up Integration • Test each unit at the bottom of the hierarchy first. • Then, test the components that call the previously tested ones (one layer up in the hierarchy). • Repeat until all components have been tested. • Component drivers are used to do the testing.
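As a minimal sketch (not from the original notes), a component driver is a throwaway main program that exercises the unit under test. Assuming the ModuleC package from the Ada example near the end of this set, a bottom-up driver for ProcC might look like this:

   with ModuleC; use ModuleC;
   with Ada.Text_IO; use Ada.Text_IO;

   --  Hypothetical driver for bottom-up testing of ProcC.
   procedure DriveProcC is
   begin
      --  A real driver would set up inputs and check results for each
      --  test case; here it simply invokes the unit under test.
      Put_Line ("Driver: calling ProcC");
      ProcC;
      Put_Line ("Driver: ProcC returned");
   end DriveProcC;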
Bottom-up Integration. Test E and Test F, then Test B,E,F; Test G, then Test D,G; Test C; finally Test A,B,C,D,E,F,G.
Bottom-up Integration • The manner in which the software was designed will influence the appropriateness of bottom-up integration. • While it is normally appropriate for object-oriented systems, bottom-up integration has disadvantages for functionally-decomposed systems: • Top-level components are usually the most important, but the last to be tested. • The upper levels are more general while the lower levels are more specific; thus, testing from the bottom up can delay the discovery of major faults. • Top-level faults are more likely to reflect design errors, which should obviously be discovered as soon as possible and are likely to have wide-ranging consequences. • In timing-based systems, the timing control is usually in the top-level components.
Top-down Integration • The top-level component is tested in isolation. • Then, all the components called by the one just tested are combined and tested as a subsystem. • This is repeated until all components have been integrated and tested. • Stubs are used to fill in for components that are called but are not yet included in the testing.
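Since stubs come up again below, here is a minimal sketch (not from the original notes) of what one might look like, assuming the ModuleC interface from the Ada example near the end of this set:

   package ModuleC is
      procedure ProcC;
   end ModuleC;

   with Ada.Text_IO;
   package body ModuleC is
      --  Hypothetical stub standing in for the real ProcC while the
      --  layers above it are integrated top-down.
      procedure ProcC is
      begin
         --  Record the call and return; a more faithful stub would
         --  also produce canned outputs matching the real ProcC.
         Ada.Text_IO.Put_Line ("Stub: ProcC called");
      end ProcC;
   begin
      null;
   end ModuleC;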
Top-down Integration. Test A; then Test A,B,C,D; then Test A,B,C,D,E,F,G.
Top-down Integration • Again, the design of the system influences the appropriateness of the integration strategy. • Top-down integration is obviously well-suited to systems that have been created through top-down design. • When major system functions are localized to components, top-down integration allows the testing to isolate one function at a time and follow its control flow from the highest levels of abstraction to the lowest levels. • Also, design problems show up earlier rather than later.
Top-down Integration • A major disadvantage is the need for stubs. • Writing stubs can be complex since they must function under the same conditions as their real counterparts. • The correctness of the stubs will influence the validity of the test. • A large number of stubs could be required, particularly when there are a large number of general-purpose components in the lowest layer. • Another criticism is the lack of individual testing of interior components. • To address this concern, a modified top-down integration strategy can be used: instead of incorporating an entire layer at once, each component in a given layer is tested individually before the integration of that layer occurs. • This introduces another problem, however: now both stubs and component drivers are needed.
Modified Top-down Integration. Each component is also tested individually (Test A through Test G) before its layer is integrated; integration then proceeds as Test A,B,C,D followed by Test A,B,C,D,E,F,G.
Sandwich Integration • Top-down and bottom-up can be combined into what Myers calls “sandwich integration.” • The system is viewed as being composed of three major levels: the target layer in the middle, the layers above the target, and the layers below the target. • A top-down approach is used for the top level while a bottom-up approach is used for the bottom level. Testing converges on the target level.
Sandwich Integration. Bottom-up below the target layer (Test E and Test F feed Test B,E,F; Test G feeds Test D,G) and top-down from above (Test A), converging on Test A,B,C,D,E,F,G.
Measures for Integration Testing • Recall v(G) is an upper bound on the number of independent/basis paths in a source module • Similarly, we would like to limit the number of subtrees in a structure chart or call graph
Subtrees in Architecture vs. Paths in Units • A call graph (or equivalent) architectural representation corresponds to a design tree representation, just as the source code for a unit corresponds to a flowgraph. • Executing the design tree means it is entered at the root, modules in the subtrees are executed, and it eventually exits at the root. • Just as the program can have a finite (if it halts), but overwhelming, number of paths, a design tree can have an inordinately large number of subtrees as a result of selection and iteration. • We need a measure for design trees that is the analog of the basis set of independent paths for units.
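As an illustrative sketch (assumed code, not from the notes), even a single decision gives a module's design tree more than one subtree, because executing from the root can take either call:

   with Ada.Text_IO; use Ada.Text_IO;

   --  Hypothetical module M: one decision selects which subordinate is
   --  called, so the design tree rooted at M has two executable
   --  subtrees (M calling A, or M calling B).
   procedure M is
      Condition : constant Boolean := True;  --  stands in for real logic
      procedure A is begin Put_Line ("A"); end A;
      procedure B is begin Put_Line ("B"); end B;
   begin
      if Condition then
         A;
      else
         B;
      end if;
   end M;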
Design Tree: Complexity of 1. A call hierarchy of modules 1 through 14 in which every call is unconditional, so there is exactly one subtree. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Design Tree: Complexity > 1. The same fourteen-module hierarchy, but with conditional calls, so more than one subtree must be exercised. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Design Tree Notation. Three cases for a module M calling subordinates A and B: when each call is separately conditional, the possible paths are neither, A, B, or A and B; when both calls are unconditional, the only path is A and B; when one decision guards both calls, the possible paths are neither, or A and B.
Subtrees vs. Paths. M's flowgraph, containing calls to subordinates A and B, and the corresponding design tree rooted at M. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Flowgraph Information • Flowgraph symbols • A black dot is a call to a subordinate module • A white dot is a sequential statement (or a collection of sequential statements) • Rules for reduction • Sequential black dot : may not be reduced • Sequential white dot : a sequential node may be reduced to a single edge • Repetitive white dots : a logical repetition without a black dot can be reduced to a single node • Conditional white dots : a logical decision with two paths without a black dot may be reduced to one path
Reduction Rules. 1. Sequential Black Dot; 2. Sequential White Dot; 3. Repetitive White Dot; 4. Conditional or Looping White Dot. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
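To make the rules concrete, here is a hypothetical procedure (a sketch, not from the notes) annotated with how each construct fares under reduction; the rule numbers refer to the slide above:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Sample is
      X : Integer := 0;
      Y : Integer := 0;
      procedure Setup    is begin Put_Line ("setup"); end Setup;
      procedure FastPath is begin Put_Line ("fast");  end FastPath;
      procedure SlowPath is begin Put_Line ("slow");  end SlowPath;
   begin
      Setup;            --  rule 1: a call (black dot) is never reduced
      X := X + 1;       --  rule 2: sequential white dot, reduces away
      if X > 0 then     --  rule 4: a decision with no calls on either
         Y := 1;        --  branch reduces to a single path
      else
         Y := 2;
      end if;
      Put_Line (Integer'Image (Y));
      if X > 1 then     --  a decision that controls calls survives
         FastPath;      --  reduction and contributes to iv
      else
         SlowPath;
      end if;
   end Sample;
   --  v(G) of the full flowgraph is 3; after reduction only the second
   --  decision remains, so the module design complexity iv is 2.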
Example Reduction. The rules are applied step by step: sequential white dots collapse to edges, and decisions and repetitions that contain no black dots collapse away, until only the call-relevant structure of the flowgraph remains. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Architectural Design Measures • Number of subtrees • The set of all subtrees is not particularly useful, but a basis set would be. • Module Design Complexity : iv(G) • The cyclomatic complexity of the reduced flowgraph of the module • Design Complexity: S0 • S0 of a module M is S0(M) = Σ iv(Gj) for j ∈ D, where D is the set of descendants of M unioned with {M} • Note: If a module is called several times, it is added only once
Design Complexity Example. A seven-module hierarchy annotated with iv and S0 at each node; the root has iv = 3 and S0 = 11, the sum of the iv values of all seven modules (3 + 2 + 2 + 1 + 1 + 1 + 1). [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Design Complexity Example. M (iv = 2, S0 = 9) calls A (iv = 2, S0 = 6) and B (iv = 1, S0 = 1); A calls C (iv = 2, S0 = 4); C calls D and E (each iv = 1, S0 = 1). For instance, S0(A) = iv(A) + iv(C) + iv(D) + iv(E) = 2 + 2 + 1 + 1 = 6. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
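A minimal sketch of this computation for the hierarchy above (the structure and iv values come from the slide; the code itself is illustrative, not from the notes):

   with Ada.Text_IO; use Ada.Text_IO;

   --  Computes S0(M) for the hierarchy M -> {A, B}, A -> {C},
   --  C -> {D, E}, using the iv values from the preceding slide.
   --  Each module is counted once, however many times it is called.
   procedure ComputeS0 is
      type Module is (M, A, B, C, D, E);
      IV : constant array (Module) of Positive :=
        (M => 2, A => 2, B => 1, C => 2, D => 1, E => 1);
      S0 : Natural := 0;
   begin
      for J in Module loop   --  D = descendants of M plus M itself
         S0 := S0 + IV (J);
      end loop;
      Put_Line ("S0(M) =" & Natural'Image (S0));  --  prints S0(M) = 9
   end ComputeS0;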
Architectural Design Measures • Integration Complexity : S1 • Measure of the number of integration tests required • S1 = S0 - n + 1 • S0 is the design complexity • n is the number of modules
Integration Complexity. Two separate hierarchies, each with S1 = 5: M (iv = 3, S0 = 9) calls A (iv = 3, S0 = 5) and B (iv = 1); A calls C and D (iv = 1 each). Likewise, N (iv = 3, S0 = 9) calls S (iv = 3, S0 = 5) and T (iv = 1); S calls U and V (iv = 1 each). In each case S1 = 9 - 5 + 1 = 5. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Integrated Properties of M and N. N is integrated beneath B, the integration point: B's subtree now has S0 = iv(B) + S0(N) = 1 + 9 = 10, and the combined system rooted at M has S0 = iv(M) + S0(A) + S0(B) = 3 + 5 + 10 = 18.
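Worked step (the arithmetic is an added illustration using the formula S1 = S0 - n + 1, not on the original slide): the combined system has S0 = 18 and n = 10 modules, so S1 = 18 - 10 + 1 = 9. Equivalently, S1(M) + S1(N) - 1 = 5 + 5 - 1 = 9, since the two designs share a single baseline test once joined.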
Integration Testing • Module integration testing • Scope is a module and its immediate subordinates • Testing Steps • Apply reduction rules to the module • Cyclomatic complexity of the subalgorithm is the module design complexity of the original algorithm. This determines the number of required tests. • The baseline method applied to the subalgorithm yields the design subtrees and the module integration tests
Integration Testing • Design integration testing • Derived from integration complexity, which quantifies a basis set of integration tests • Testing steps • Calculate iv and S0 for each module • Calculate S1 for the top module (number of basis subtrees required) • Build a path matrix (S1 x n) to establish the basis set of subtrees • Identify and label each predicate in the design tree and place those labels above each column of the path matrix corresponding to the module it influences • Apply the baseline method to the design to complete the matrix (1 : the module is executed; 0 : the module is not executed) • Identify the subtrees in the matrix and the conditions which derive the subtrees • Build corresponding test cases for each subtree
Design Integration Example. M (iv = 2, S0 = 8) is the root and contains predicate P1; its subordinates are A (iv = 1, S0 = 3) and B (iv = 2, S0 = 4, containing predicate P2); C, D, and E are leaf modules (iv = 1, S0 = 1 each). S1 = S0 - n + 1 = 8 - 6 + 1 = 3. P1: condition W = X; P2: condition Y = Z. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Integration Path Test Matrix. (Matrix not reproduced.) Each of the S1 = 3 rows is one basis subtree: a 1 or 0 in each module's column records whether that module executes, and the predicate columns P1 and P2 record the condition values that drive the subtree. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]
Example

   with ModuleC; use ModuleC;
   package body ModuleA is
      procedure ProcA is
      begin
         S1;
         if CA then
            S1;
         else
            ProcC;
         end if;
      end ProcA;
   begin
      null;
   end ModuleA;

   with ModuleC; use ModuleC;
   package body ModuleB is
      procedure ProcB is
      begin
         S1;
         if CB then
            ProcC;
         else
            S2;
         end if;
         if CB2 then
            S3;
         end if;
      end ProcB;
   begin
      null;
   end ModuleB;

   with ModuleA, ModuleB; use ModuleA, ModuleB;
   procedure Main is
   begin
      S1;
      while CM loop
         ProcA;
         ProcB;
      end loop;
   end Main;

   package body ModuleC is
      procedure ProcC is
      begin
         S1;
         if CC then
            S2;
         else
            S3;
         end if;
      end ProcC;
   begin
      null;
   end ModuleC;

What is an appropriate number of integration test cases, and what are those cases?
Example. Call graph for the code above: Main calls ProcA and ProcB; ProcA and ProcB each call ProcC. [Adapted from Watson and McCabe, "Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric," NIST 500-235, 1996]
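A worked answer using the measures from these notes (the derivation below is an added illustration, not on the original slides): applying the reduction rules, the decision CA in ProcA controls a call to ProcC, so iv(ProcA) = 2; in ProcB, CB controls a call while CB2 guards only S3 and reduces away, so iv(ProcB) = 2; the decision CC in ProcC involves no calls, so iv(ProcC) = 1; and the loop CM in Main controls the calls to ProcA and ProcB, so iv(Main) = 2. Thus S0 = 2 + 2 + 1 + 2 = 7 and S1 = S0 - n + 1 = 7 - 4 + 1 = 4: four basis integration tests, for example one with CM false (no calls made) and three that exercise CA true/false and CB true/false across iterations of the loop.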
System Testing • The primary objective of unit and integration testing is to ensure that the design has been implemented properly; that is, that the programmers wrote code to do what the designers intended. (Verification) • The primary objective of system testing is very different: We want to ensure that the system does what the customer wants it to do. (Validation) [Some notes adapted from Pfleeger 2001]
System Testing • Steps in system testing: Function Testing, Performance Testing, Acceptance Testing, Installation Testing • The flow: Integrated Modules → Function Test → Functioning System → Performance Test → Verified, Validated Software → Acceptance Test → Accepted System → Installation Test → Delivered System
Function Testing • Checks that an integrated system performs its functions as specified in the requirements. • Common functional testing techniques (cause-effect graphs, boundary value analysis, etc.) are used here. • Views the entire system as a black box.
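As a small illustration of boundary value analysis (a sketch; the function under test and its valid range are assumptions, not from the notes), test values are chosen at and just beyond each boundary:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure BoundaryDemo is
      --  Hypothetical function under test: valid ages are 0 .. 120.
      function Valid_Age (N : Integer) return Boolean is
      begin
         return N >= 0 and then N <= 120;
      end Valid_Age;

      --  Boundary value analysis: values at and around each edge.
      Cases : constant array (1 .. 6) of Integer :=
        (-1, 0, 1, 119, 120, 121);
   begin
      for I in Cases'Range loop
         Put_Line (Integer'Image (Cases (I)) & " -> " &
                   Boolean'Image (Valid_Age (Cases (I))));
      end loop;
   end BoundaryDemo;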
Performance Testing • Compares the behavior of the functionally verified system to the nonfunctional requirements. • System performance is measured against the performance objectives set by the customer and expressed as nonfunctional requirements. • This may involve hardware engineers. • Since this stage and the previous one constitute a complete review of the requirements, the software is now considered validated.
Types of Performance Tests • Stress tests • Configuration tests • Regression tests • Security tests • Timing tests • Environmental tests • Quality tests • Recovery tests • Documentation tests • Usability tests
Acceptance Testing • Customer now leads testing and defines the cases to test. • The purpose of acceptance testing is to allow the customer and users to determine if the system that was built actually meets their needs and expectations. • Many times, the customer representative involved in requirements gathering will specify the acceptance tests.
Types of Acceptance Tests • Benchmark tests • Subset of users operate the system under a set of predefined test cases. • Pilot tests • Subset of users operate the system under normal or “everyday” situations. • Alpha testing if done at developer’s site • Beta testing if done at customer’s site • Parallel tests • New system operates in parallel with the previous version. Users gradually transition to the new system.