Chapter 11, Testing
Terminology • Failure: Any deviation of the observed behavior from the specified behavior • Erroneous state (error): The system is in a state such that further processing by the system can lead to a failure • Fault: The mechanical or algorithmic cause of an error (“bug”) • Validation: Activity of checking for deviations between the observed behavior of a system and its specification.
Examples of Faults and Errors • Faults in the interface specification: mismatch between what the client needs and what the server offers; mismatch between requirements and implementation • Algorithmic faults: missing initialization, incorrect branching condition, missing test for null • Mechanical faults (very hard to find): operating temperature outside of equipment specification • Errors: null reference errors, concurrency errors, exceptions
Another View on How to Deal with Faults • Fault avoidance • Use methodology to reduce complexity • Use configuration management to prevent inconsistency • Apply verification to prevent algorithmic faults • Use Reviews • Fault detection • Testing: Activity to provoke failures in a planned way • Debugging: Find and remove the cause (Faults) of an observed failure • Monitoring: Deliver information about state => Used during debugging • Fault tolerance • Exception handling • Modular redundancy.
Taxonomy for Fault Handling Techniques • Fault avoidance: methodology, configuration management, verification • Fault detection: testing (unit testing, integration testing, system testing) and debugging • Fault tolerance: atomic transactions, modular redundancy
Observations • It is impossible to completely test any nontrivial module or system • Practical limitations: complete testing is prohibitive in time and cost • Theoretical limitations: e.g. the halting problem • “Testing can only show the presence of bugs, not their absence” (Dijkstra) • Testing is not free => define your goals and priorities
Testing takes creativity • To develop an effective test, one must have: • Detailed understanding of the system • Application and solution domain knowledge • Knowledge of the testing techniques • Skill to apply these techniques • Testing is done best by independent testers • We often develop a certain mental attitude that the program should behave in a certain way when in fact it does not • Programmers often stick to the data set that makes the program work • A program often does not work when tried by somebody else.
Testing Activities • Unit testing: against the object design document • Integration testing: against the system design document • System testing: against the requirements analysis document • Acceptance testing: against the client expectation • Unit, integration, and system testing are performed by the developer; acceptance testing by the client
Types of Testing • Unit Testing • Individual component (class or subsystem) • Carried out by developers • Goal: Confirm that the component or subsystem is correctly coded and carries out the intended functionality • Integration Testing • Groups of subsystems (collections of subsystems) and eventually the entire system • Carried out by developers • Goal: Test the interfaces among the subsystems.
Types of Testing continued... • System Testing • The entire system • Carried out by developers • Goal: Determine if the system meets the requirements (functional and nonfunctional) • Acceptance Testing • Evaluates the system delivered by developers • Carried out by the client. May involve executing typical transactions on site on a trial basis • Goal: Demonstrate that the system meets the requirements and is ready to use.
When should you write a test? • Traditionally, after the source code is written • In XP, before the source code is written • Test-Driven Development Cycle • Add a test • Run the automated tests => see the new one fail • Write some code • Run the automated tests => see them succeed • Refactor code.
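One pass through this cycle can be sketched in plain Java. The `Stack` class and its single test are hypothetical, and plain boolean checks stand in for a real test framework:

```java
// Hypothetical illustration of one TDD cycle: testPushPop() was written first
// and failed; the minimal Stack below was then added to make it pass.
public class TddSketch {

    // Step 3: write some code -- the minimal implementation that passes the test.
    static class Stack {
        private final java.util.ArrayList<Integer> items = new java.util.ArrayList<>();
        void push(int value) { items.add(value); }
        int pop() { return items.remove(items.size() - 1); }
        boolean isEmpty() { return items.isEmpty(); }
    }

    // Step 1: add a test (it fails -- it does not even compile -- until Stack exists).
    static boolean testPushPop() {
        Stack s = new Stack();
        s.push(42);
        return s.pop() == 42 && s.isEmpty();
    }

    // Steps 2 and 4: run the automated tests, first failing, then succeeding.
    public static void main(String[] args) {
        if (!testPushPop()) throw new AssertionError("testPushPop failed");
        System.out.println("all tests pass -- safe to refactor");
    }
}
```

With the tests green, step 5 (refactoring) can change the internals of `Stack` while the test guards the behavior.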
Unit Testing • Static Testing (at compile time) • Review • Walk-through (informal) • Code inspection (formal) • Dynamic Testing (at run time) • Black-box testing • White-box testing.
Black-box testing • Focus: I/O behavior • If, for any given input, we can predict the output, then the component passes the test • Goal: Reduce the number of test cases by equivalence partitioning: • Divide input conditions into equivalence classes • Choose test cases for each equivalence class.
Black-box testing: Test case selection a) Input is valid across range of values • Developer selects test cases from 3 equivalence classes: • Below the range • Within the range • Above the range b) Input is only valid if it is a member of a discrete set • Developer selects test cases from 2 equivalence classes: • Valid discrete values • Invalid discrete values
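The selection rule for case a) can be made concrete with a hypothetical validator for inputs valid across the range 1..12; one representative value per equivalence class is enough:

```java
// Hypothetical unit under test: the input is valid across the range 1..12.
public class EquivalenceClassSketch {

    static boolean isValidMonth(int month) {
        return month >= 1 && month <= 12;
    }

    public static void main(String[] args) {
        // Three test cases, one per equivalence class:
        check(!isValidMonth(0));   // below the range
        check(isValidMonth(6));    // within the range
        check(!isValidMonth(13));  // above the range
        System.out.println("all three equivalence classes covered");
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("equivalence-class test failed");
    }
}
```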
Black box testing: An example public class MyCalendar { public int getNumDaysInMonth(int month, int year) throws InvalidMonthException { … } } • Representation for month: • 1: January, 2: February, …., 12: December • Representation for year: • 1904, … 1999, 2000,…, 2006, … • How many test cases do we need for the black box testing of getNumDaysInMonth()?
Black box testing: Gregorian Calendar • Month parameter equivalence classes • 30-day months • 31-day months • February • Invalid months (0, 13, -1) [throw exception] • Year parameter equivalence classes • Normal year • Leap year: divisible by 4 (/4), except century years (/100) unless divisible by 400 (/400)
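One test case per equivalence class is then enough. The body of getNumDaysInMonth() below is an assumption (the slide gives only the signature), and it returns -1 instead of throwing InvalidMonthException to keep the sketch self-contained:

```java
public class CalendarSketch {

    // Hypothetical implementation; returns -1 for invalid months instead of
    // throwing InvalidMonthException, to keep the sketch short.
    static int getNumDaysInMonth(int month, int year) {
        if (month < 1 || month > 12) return -1;            // invalid-month class
        if (month == 2) {                                  // February class
            boolean leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
            return leap ? 29 : 28;
        }
        if (month == 4 || month == 6 || month == 9 || month == 11) return 30; // 30-day class
        return 31;                                         // 31-day class
    }

    public static void main(String[] args) {
        // One test case per equivalence-class combination:
        check(getNumDaysInMonth(1, 1999) == 31);   // 31-day month, normal year
        check(getNumDaysInMonth(4, 1999) == 30);   // 30-day month, normal year
        check(getNumDaysInMonth(2, 1999) == 28);   // February, normal year
        check(getNumDaysInMonth(2, 2004) == 29);   // February, leap year (/4)
        check(getNumDaysInMonth(2, 1900) == 28);   // February, century year (/100), not a leap year
        check(getNumDaysInMonth(2, 2000) == 29);   // February, /400 leap year
        check(getNumDaysInMonth(0, 1999) == -1);   // invalid month
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("calendar test case failed");
    }
}
```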
White-box testing overview • Code coverage • Branch coverage • Condition coverage • Path coverage
White-box Testing (Continued) • Statement Testing (Algebraic Testing): Test single statements (choice of operators in polynomials, etc.) • Loop Testing: • Cause execution of the loop to be skipped completely (exception: repeat loops) • Loop to be executed exactly once • Loop to be executed more than once • Path testing: Make sure all paths in the program are executed • Branch Testing (Conditional Testing): Make sure that each possible outcome from a condition is tested at least once, e.g. if (i == TRUE) printf("YES\n"); else printf("NO\n"); needs test cases: 1) i = TRUE; 2) i = FALSE
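The three loop-testing cases (loop skipped, executed once, executed more than once) can be sketched against a hypothetical method with a single loop:

```java
public class LoopTestSketch {

    // Hypothetical unit under test: sums an array with a single loop.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        check(sum(new int[] {}) == 0);          // loop body skipped completely
        check(sum(new int[] {5}) == 5);         // loop executed exactly once
        check(sum(new int[] {1, 2, 3}) == 6);   // loop executed more than once
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("loop test failed");
    }
}
```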
White-box Testing Example

FindMean(float Mean, FILE ScoreFile)
{
    SumOfScores = 0.0;
    NumberOfScores = 0;
    Mean = 0;
    Read(ScoreFile, Score);  /* Read in and sum the scores */
    while (!EOF(ScoreFile)) {
        if (Score > 0.0) {
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;
        }
        Read(ScoreFile, Score);
    }
    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else
        printf("No scores found in file\n");
}
White-box Testing Example: Determining the Paths (basic blocks 1-9 marked for the flow graph)

/* 1 */ FindMean(FILE ScoreFile)
        { float SumOfScores = 0.0;
          int NumberOfScores = 0;
          float Mean = 0.0; float Score;
          Read(ScoreFile, Score);
/* 2 */   while (!EOF(ScoreFile)) {
/* 3 */     if (Score > 0.0) {
/* 4 */       SumOfScores = SumOfScores + Score;
              NumberOfScores++;
            }
/* 5 */     Read(ScoreFile, Score);
          }
          /* Compute the mean and print the result */
/* 6 */   if (NumberOfScores > 0) {
/* 7 */     Mean = SumOfScores / NumberOfScores;
            printf("The mean score is %f\n", Mean);
          } else
/* 8 */     printf("No scores found in file\n");
/* 9 */ }
Finding the Test Cases (edge conditions in the flow graph of FindMean, from Start through nodes 1-9 to Exit) • a: covered by any data • b: data set must contain at least one value • c: data set must be empty • d: positive score • e: negative score • h: reached if either f or g is reached • i: total score > 0.0 • j: total score < 0.0 • k, l: edges to Exit
Test Cases • Test case 1 : ? (To execute loop exactly once) • Test case 2 : ? (To skip loop body) • Test case 3: ?,? (to execute loop more than once) • These 3 test cases cover all control flow paths
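One possible set of answers can be sketched against a hypothetical Java port of FindMean, with arrays standing in for the score file; the data sets below are illustrative, not the only valid choice:

```java
public class MeanSketch {

    // Hypothetical Java port of FindMean: sums the positive scores and returns
    // their mean, or 0.0 when no positive score was found.
    static double findMean(double[] scores) {
        double sumOfScores = 0.0;
        int numberOfScores = 0;
        for (double score : scores) {
            if (score > 0.0) {
                sumOfScores += score;
                numberOfScores++;
            }
        }
        return numberOfScores > 0 ? sumOfScores / numberOfScores : 0.0;
    }

    public static void main(String[] args) {
        check(findMean(new double[] {4.0}) == 4.0);      // loop executed exactly once
        check(findMean(new double[] {}) == 0.0);         // loop body skipped (empty data set)
        check(findMean(new double[] {2.0, -1.0, 4.0}) == 3.0); // loop executed more than once,
                                                               // covering both branches of the if
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("path test failed");
    }
}
```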
Unit Testing Heuristics • 1. Create unit tests when the object design is completed: black-box tests test the functional model, white-box tests test the dynamic model • 2. Develop the test cases: goal is to find an effective number of test cases • 3. Cross-check the test cases to eliminate duplicates: don't waste your time! • 4. Desk-check your source code: sometimes reduces testing time • 5. Create a test harness: test drivers and test stubs are needed for integration testing • 6. Describe the test oracle: often the result of the first successfully executed test • 7. Execute the test cases: re-execute tests whenever a change is made (“regression testing”) • 8. Compare the results of the test with the test oracle: automate this if possible.
JUnit: Overview • A Java framework for writing and running unit tests • Test cases and fixtures • Test suites • Test runner • Written with “test first” and pattern-based development in mind • Tests written before code • Allows for regression testing • Facilitates refactoring • JUnit is Open Source • www.junit.org • JUnit Version 4, released March 2006
JUnit Classes (UML) • Test: interface with run(TestResult) • TestCase: implements Test; attribute testName: String; operations run(TestResult), setUp(), tearDown(), runTest() • TestSuite: implements Test; operations run(TestResult), addTest(); aggregates many (*) Tests • ConcreteTestCase: extends TestCase; overrides setUp(), tearDown(), runTest(); exercises the UnitToBeTested (the methods under test)
An example: Testing MyList • Unit to be tested • MyList • Methods under test • add() • remove() • contains() • size() • Concrete Test case • MyListTestCase
Class diagram for the MyList example • MyList (unit to be tested): add(), remove(), contains(), size() • MyListTestCase (extends TestCase): setUp(), tearDown(), runTest(), testAdd(), testRemove() • TestCase and TestSuite implement the Test interface (run(TestResult)); a TestSuite aggregates many (*) Tests
Writing TestCases in JUnit

public class MyListTestCase extends TestCase {

  public MyListTestCase(String name) {
    super(name);
  }

  public void testAdd() {
    // Set up the test
    List aList = new MyList();
    String anElement = "a string";
    // Perform the test
    aList.add(anElement);
    // Check if test succeeded
    assertTrue(aList.size() == 1);
    assertTrue(aList.contains(anElement));
  }

  protected void runTest() {
    testAdd();
  }
}
Writing Fixtures and Test Cases

public class MyListTestCase extends TestCase {
  // …

  // Test fixture, shared by the test cases below
  private MyList aList;
  private String anElement;

  public void setUp() {
    aList = new MyList();
    anElement = "a string";
  }

  public void testAdd() {
    aList.add(anElement);
    assertTrue(aList.size() == 1);
    assertTrue(aList.contains(anElement));
  }

  public void testRemove() {
    aList.add(anElement);
    aList.remove(anElement);
    assertTrue(aList.size() == 0);
    assertFalse(aList.contains(anElement));
  }
}
Collecting TestCases into TestSuites (the Composite pattern!)

public static Test suite() {
  TestSuite suite = new TestSuite();
  suite.addTest(new MyListTestCase("testAdd"));
  suite.addTest(new MyListTestCase("testRemove"));
  return suite;
}
Design patterns in JUnit • Command pattern: Test encapsulates run(TestResult) as a request • Composite pattern: a TestSuite contains many (*) Tests, each of which is a TestCase or another TestSuite • Adapter pattern: a ConcreteTestCase adapts the TestedUnit to the Test interface through setUp(), tearDown(), and runTest()
Overview • Integration testing • Big bang • Bottom up • Top down • Sandwich • Continuous • System testing • Functional • Performance • Acceptance testing • Summary
Integration Testing • The entire system is viewed as a collection of subsystems (sets of classes) determined during the system and object design • Goal: Test all interfaces between subsystems and the interaction of subsystems • The Integration testing strategy determines the order in which the subsystems are selected for testing and integration.
Why do we do integration testing? • Unit tests only test the unit in isolation • Many failures result from faults in the interaction of subsystems • Often many off-the-shelf components are used that cannot be unit tested • Without integration testing, the system test will be very time consuming • Failures that are not discovered in integration testing will be discovered after the system is deployed and can be very expensive.
Stubs and drivers • Driver: • A component that calls the TestedUnit • Controls the test cases • Stub: • A component the TestedUnit depends on • Partial implementation • Returns fake values. [Driver → TestedUnit → Stub]
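Both roles can be sketched in Java, assuming a hypothetical TestedUnit that depends on a Storage interface:

```java
public class DriverStubSketch {

    // Dependency of the tested unit.
    interface Storage {
        int load();
    }

    // Unit under test (hypothetical): doubles whatever the storage delivers.
    static class TestedUnit {
        private final Storage storage;
        TestedUnit(Storage storage) { this.storage = storage; }
        int doubledValue() { return 2 * storage.load(); }
    }

    // Stub: partial implementation of the dependency that returns a fake value.
    static class StorageStub implements Storage {
        public int load() { return 21; }
    }

    // Driver: calls the tested unit and controls the test case.
    public static void main(String[] args) {
        TestedUnit unit = new TestedUnit(new StorageStub());
        if (unit.doubledValue() != 42) throw new AssertionError("test failed");
        System.out.println("driver: test passed");
    }
}
```

The same stub idea is what top-down integration relies on, and the same driver idea is what bottom-up integration relies on.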
Example: A 3-Layer Design (Spreadsheet) • Layer I: SpreadSheetView (A) • Layer II: DataModel (B), Calculator (C), CurrencyConverter (D) • Layer III: BinaryFileStorage (E), XMLFileStorage (F), CurrencyDataBase (G)
Big-Bang Approach • Unit test each of the subsystems in isolation (Test A, Test B, …, Test G), then do one gigantic integration test in which all the subsystems are immediately tested together (Test A, B, C, D, E, F, G) • Not so good. Why?
Big Bang Approach • Pro • Easy • No additional test drivers and stubs are needed • Con • Impossible to distinguish failures in the interface from the failures within a component • Difficult to pinpoint the specific component responsible for the failure
Bottom-up Testing Strategy • The subsystems in the lowest layer of the call hierarchy are tested individually • Then the next subsystems are tested that call the previously tested subsystems • This is repeated until all subsystems are included • Drivers are needed.
Bottom-up Integration • Test E • Test F • Test G • Test B, E, F • Test D, G • Test C • Test A, B, C, D, E, F, G
Bottom-up • Pro: • Simple • Con: • Subsystem interfaces are not tested • Impossible to distinguish failures in interfaces from failures within a component • Difficult to pinpoint the specific component responsible for the failure
Pros and Cons of Bottom-Up Integration Testing • Con: • Tests the most important subsystem (user interface) last • Drivers needed • Pro • No stubs needed • Useful for integration testing of the following systems • Object-oriented systems • Real-time systems • Systems with strict performance requirements.
Top-down Testing Strategy • Test the top layer or the controlling subsystem first • Then combine all the subsystems that are called by the tested subsystems and test the resulting collection of subsystems • Do this until all subsystems are incorporated into the test • Stubs are needed to do the testing.
Top-down Integration • Test A (Layer I) • Test A, B, C, D (Layer I + II) • Test A, B, C, D, E, F, G (all layers)
Pros and Cons of Top-down Integration Testing Pro • Test cases can be defined in terms of the functionality of the system (functional requirements) • No drivers needed Cons • Writing stubs is difficult: Stubs must allow all possible conditions to be tested. • Large number of stubs may be required, especially if the lowest level of the system contains many methods. • Some interfaces are not tested separately.
Sandwich Testing Strategy • Combines top-down strategy with bottom-up strategy • The system is viewed as having three layers • A target layer in the middle • A layer above the target • A layer below the target • Testing converges at the target layer.
Sandwich Testing Strategy • Top down: Test A; then Test A, B, C, D • Bottom up: Test E, Test F, Test G; then Test B, E, F and Test D, G • Converging test: Test A, B, C, D, E, F, G