
Software Testing



  1. Software Testing Chuck Cusack Based on material from • Dick Hamlet and Joe Maybee, The Engineering of Software, Addison Wesley Longman, Inc., 2001 • Ian Sommerville, Software Engineering, 6th Edition, Pearson Education Limited, 2001 • Sebastian Elbaum, software engineering lecture notes

  2. The Software Life Cycle • Before we can discuss testing, we need to present the context of testing—the software life cycle. • There are various forms of the software life cycle.

  3. The Life Cycle and Testing • Although testing is not explicitly mentioned at each stage of the life cycle, every stage has some relationship to testing.

  4. Software That Works • When developing software, it is important that • it work according to the specification, and • not contain any errors. • Even perfect design and implementation can rarely guarantee the first, and can never guarantee the second. • The solution is verification and validation. • Validation: Are we building the right product? • Verification: Are we building the product right? • There are two techniques: • Software inspection (static) • Software testing (dynamic)

  5. Software Inspection • Software inspection involves analyzing • Requirements • Design diagrams • Source code • As the name indicates, this is performed by looking at the various components (called a walkthrough), not by running any of the code. • Because final code is not required, inspection can be performed at all stages of the life cycle. • There is much more to inspection than this, but that is not the subject of these notes.

  6. Software Testing • Software testing involves running the software with test data to determine whether or not the software performs as required. • There are several important questions that need to be answered. • When do we develop test data? • What are good test data? • When should testing be done? • How should testing be done? • When should you stop? • These notes attempt to answer these questions.

  7. Why Test: The Optimist • Optimist: The purpose of software testing is to verify that software meets the requirements. • The optimistic view has several problems: • It is virtually impossible to prove that software performs correctly for all possible inputs/situations. • It is human nature to find what one is looking for. Thus, we can too easily be convinced that the software is correct, when in fact it contains (sometimes very subtle) errors.

  8. Why Test: The Pessimist • Pessimist: The purpose of software testing is to find errors. • Although bleak, this is a much better definition. • There is still a slight problem: what exactly is an error? Two related terms: • Failure: the software does something wrong (it does not behave as specified). • Fault: the defect (bug) in the code that causes a failure.

  9. Why Test: The Realist • Realist: The purpose of software testing is to find failures, so the faults causing the failures can be found and fixed. • This view agrees with the pessimist's, but is more precise. • Put another way: “Try to break the system.” • This is a better view than the optimist's, since: • If we look for errors, we will probably find some. • When errors are found and fixed, we know that the software is one step closer to being correct.

  10. The “Not”s of Testing • Debugging is not the same as testing. • Debug: fix faults / remove failures. • Test: find faults / discover failures. • Testing cannot show the absence of errors, only their presence. • A system cannot (usually) be completely tested. • It cannot be assumed that users will not make errors. In other words, users will make errors.

  11. Test Cases and Test Suite • A test case is a set of input data and the expected output. • A test suite is a set of test cases. • The important questions are: • What makes a good test case? • How do you determine the expected output? • When does a test suite contain enough test cases? • We will attempt to answer these questions with a few suggestions, and several examples.
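
As a purely illustrative way to picture these definitions, a test case can be represented as a record pairing input data with the expected output, and a test suite as a collection of such records. The class and field names below are hypothetical, not from the slides.

// Hypothetical representation of a test case and a test suite.
import java.util.List;

class TestCase<I, O> {
    final I input;       // the input data fed to the unit under test
    final O expected;    // the output the specification says we should get
    TestCase(I input, O expected) { this.input = input; this.expected = expected; }
}

class TestSuite<I, O> {
    final List<TestCase<I, O>> cases;   // a test suite is simply a set of test cases
    TestSuite(List<TestCase<I, O>> cases) { this.cases = cases; }
}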

  12. Choosing Test Cases • When choosing test cases, it is important to follow these principles: • Enough test cases should be chosen so that it is fairly certain that the software will work in all situations. • Both valid and invalid inputs should be tested. • Test cases on and near “boundaries” should be chosen. • The possible inputs should be divided into equivalence partitions, with “just enough” test cases from each equivalence partition. • If the system has “states,” sequences of test cases may be required.

  13. Important Test Cases • Strings/arrays of various sizes, including 0 (empty), 1, 2, and larger. • Strings/arrays that are too long. • Strings containing spaces. • Empty input files. • The first, last, and middle element of an array/string. • Index into array/string that is too small/large. • Loops that execute 0 times. • For ADTs (linked-lists, sets, stacks, and queues), empty, partially full, and completely full (if applicable) data structures.
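
A few of these cases made concrete for a hypothetical sum-of-array routine (the routine and the values are illustrative only); the empty array also exercises the "loop that executes 0 times" case.

// Hypothetical unit under test: sums the elements of an array.
public class SumTests {
    static int sum(int[] a) {
        int total = 0;
        for (int x : a) total += x;   // executes 0 times for an empty array
        return total;
    }

    public static void main(String[] args) {
        assert sum(new int[] {}) == 0;            // size 0: loop runs zero times
        assert sum(new int[] {5}) == 5;           // size 1
        assert sum(new int[] {5, -3}) == 2;       // size 2
        assert sum(new int[] {1, 2, 3, 4}) == 10; // larger array
        System.out.println("Boundary-size cases passed (run with java -ea).");
    }
}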

  14. Equivalence Partitions • If a program breaks an array into parts, each part of the array should be treated as an equivalence partition, and the test cases for each partition should include boundaries. • If an input has restrictions, the partitions should correspond to the restrictions. • For instance, if the input should be between 0 and 10, the partitions include numbers less than 0, between 0 and 10, and greater than 10. • Test cases should include –1, 0, 1, 5, 9, 10, and 11, at a minimum.
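
A sketch of those minimum test cases, assuming a hypothetical validator that accepts values between 0 and 10 inclusive (the function name is an assumption; the seven values are the ones listed above).

// Hypothetical unit: accepts inputs in the range 0..10 (inclusive).
public class RangeTests {
    static boolean inRange(int x) {
        return x >= 0 && x <= 10;
    }

    public static void main(String[] args) {
        // Partition: below the range (invalid)
        assert !inRange(-1);
        // Partition: inside the range, including both boundaries
        assert inRange(0) && inRange(1) && inRange(5) && inRange(9) && inRange(10);
        // Partition: above the range (invalid)
        assert !inRange(11);
        System.out.println("Boundary tests passed (run with java -ea).");
    }
}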

  15. Example: Binary Search • Recall the binary search algorithm:

int BinarySearch(int[] A, int val, int left, int right) {
    if (left <= right) {
        int middle = (left + right) / 2;
        if (A[middle] == val)
            return middle;
        else if (A[middle] > val)
            return BinarySearch(A, val, left, middle - 1);
        else
            return BinarySearch(A, val, middle + 1, right);
    } else {
        return -1;
    }
}

  16. Partitions for Binary Search • The algorithm partitions the array into 3 parts: the elements to the left of the middle, the middle element itself, and the elements to the right of the middle. • This suggests we break the test cases into 4 parts: one for each of the above parts, and one for a failed search. • For each partition, include the boundaries and some value in the interior of the partition. • For a failed search, include values smaller and larger than all values, and one value missing in the middle. • We should also include an array of size 1.

  17. A Test Suite for Binary Search • One test suite for binary search, covering these partitions and boundaries, might look like the sketch below.
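
Since the original table is not reproduced here, the following is a sketch of what such a suite might contain: boundaries and interior values of each partition, three failed searches, and an array of size 1. The concrete array and values are illustrative; the search routine is a compact adaptation of the BinarySearch from slide 15 so the sketch is self-contained.

public class BinarySearchTests {
    // Adapted from the BinarySearch on slide 15, copied here so the tests can run.
    static int binarySearch(int[] a, int val, int left, int right) {
        if (left > right) return -1;
        int middle = (left + right) / 2;
        if (a[middle] == val) return middle;
        if (a[middle] > val) return binarySearch(a, val, left, middle - 1);
        return binarySearch(a, val, middle + 1, right);
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 6, 8, 10, 12, 14};        // sorted, size 7, middle index 3

        assert binarySearch(a, 2, 0, 6) == 0;      // left part: lower boundary (first element)
        assert binarySearch(a, 4, 0, 6) == 1;      // left part: interior
        assert binarySearch(a, 6, 0, 6) == 2;      // left part: upper boundary
        assert binarySearch(a, 8, 0, 6) == 3;      // the middle element
        assert binarySearch(a, 10, 0, 6) == 4;     // right part: lower boundary
        assert binarySearch(a, 12, 0, 6) == 5;     // right part: interior
        assert binarySearch(a, 14, 0, 6) == 6;     // right part: upper boundary (last element)

        // Failed searches: smaller than all, missing in the middle, larger than all
        assert binarySearch(a, 1, 0, 6) == -1;
        assert binarySearch(a, 7, 0, 6) == -1;
        assert binarySearch(a, 15, 0, 6) == -1;

        // Array of size 1: present and absent
        int[] one = {5};
        assert binarySearch(one, 5, 0, 0) == 0;
        assert binarySearch(one, 3, 0, 0) == -1;

        System.out.println("Binary search test suite passed (run with java -ea).");
    }
}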

  18. Example: Loan Calculator • Specification: The system should allow the user to specify a principal amount, interest rate, and term, and compute the total amount of the loan. • The first questions that should come to mind are: • Can the interest rate, principal amount, and/or term be zero? Negative? • Should the interest rate be given as a percentage (5 for 5%) or as a decimal (.05 for 5%)? • We will assume no, no, and decimal. • Given this, what test cases are appropriate?

  19. Loan Calculator Test Cases • Test cases should include negative, zero, and positive values for all of the inputs. • Does this require 27 test cases? • If we do not know how the system is to be implemented, this may be the case, since it is possible that it handles one bad input, but not two. • We will assume it is clear from the design and/or code that if any one or more of the inputs is not positive, the same code is executed.

  20. Loan Calculator Test Suite • A reasonable set of test cases is sketched below. • The expected amount (the output) is not filled in here, but it should certainly be included in a real test suite. • Also, where the expected result is “Invalid”, the actual behaviour would depend on what the specification states.
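
Since the original table is not reproduced here, the sketch below shows one way such a suite might look in code. The computeTotal function, the annual-compounding formula, the rejection of non-positive inputs via an exception, and all numeric values are assumptions for illustration only; the slides do not fix any of these.

// Hypothetical loan calculator: total = principal * (1 + rate)^term.
public class LoanTests {
    static double computeTotal(double principal, double rate, int term) {
        if (principal <= 0 || rate <= 0 || term <= 0)
            throw new IllegalArgumentException("all inputs must be positive");
        return principal * Math.pow(1 + rate, term);
    }

    public static void main(String[] args) {
        // Valid case: all inputs positive (decimal rate, e.g. .05 for 5%)
        assert Math.abs(computeTotal(1000, 0.05, 2) - 1102.50) < 1e-9;

        // Invalid cases: zero or negative values for each input in turn.
        // (Per slide 19 we assume the same validation code runs whichever input is bad,
        //  so one bad input at a time is enough.)
        double[][] bad = {{0, 0.05, 2}, {-1000, 0.05, 2}, {1000, 0, 2},
                          {1000, -0.05, 2}, {1000, 0.05, 0}, {1000, 0.05, -2}};
        int rejected = 0;
        for (double[] b : bad) {
            try { computeTotal(b[0], b[1], (int) b[2]); }
            catch (IllegalArgumentException e) { rejected++; }
        }
        assert rejected == bad.length;   // every invalid input was rejected
        System.out.println("Loan calculator tests passed (run with java -ea).");
    }
}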

  21. Example: Stack ADT • The stack ADT (LIFO) is a data type that supports the following operations: • bool Push(item X): place X on the top of the stack. • item Pop(): remove and return the top element. • item Peek(): return the top element. • When thinking of test cases, we should consider • an empty stack, • a full stack, and • a partially full stack.
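
A minimal Java sketch of this ADT; the interface name and the generic signature are illustrative (the slides use Push/Pop/Peek on an unspecified "item" type).

// Hypothetical bounded stack ADT matching the operations on this slide.
public interface BoundedStack<T> {
    boolean push(T x);  // place x on the top; returns false if the stack is full
    T pop();            // remove and return the top element
    T peek();           // return the top element without removing it
}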

  22. Testing A Stack • Since we cannot look inside the ADT, how do we test the operations? • To test Push, we need to use either Pop or Peek. • If there is a problem, what caused it? • To further complicate things, is a newly created empty stack the same as a stack that has had 3 elements pushed, and then popped?

  23. A Stack Test Plan • We present a test suite below, assuming a stack that holds a maximum of 3 elements. • The test cases in the test plan should be run in order on a single instance of a stack object.
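
Since the original table is not reproduced here, the following self-contained sketch (a throwaway capacity-3 stack plus an ordered sequence of checks on a single instance) shows the kind of plan the slide describes. The implementation and the "return null when empty" behaviour are assumptions, not part of the slides.

public class StackTestPlan {
    // Tiny array-based stack with capacity 3, just so the plan below can run.
    static class Stack3 {
        private final Integer[] data = new Integer[3];
        private int top = 0;                           // number of stored elements
        boolean push(Integer x) {
            if (top == 3) return false;                // stack is full
            data[top++] = x;
            return true;
        }
        Integer pop()  { return top == 0 ? null : data[--top]; }
        Integer peek() { return top == 0 ? null : data[top - 1]; }
    }

    public static void main(String[] args) {
        Stack3 s = new Stack3();                        // single instance, tested in order
        assert s.pop() == null;                         // pop on a newly created empty stack
        assert s.peek() == null;                        // peek on an empty stack
        assert s.push(1) && s.push(2) && s.push(3);     // fill the stack to capacity
        assert !s.push(4);                              // push on a full stack is rejected
        assert s.peek() == 3;                           // peek returns the top (LIFO order)
        assert s.pop() == 3 && s.pop() == 2;            // pop back down to one element
        assert s.push(5);                               // push on a partially full stack
        assert s.pop() == 5 && s.pop() == 1;            // empty the stack again
        assert s.pop() == null;                         // emptied stack behaves like a new one
        System.out.println("Stack test plan passed (run with java -ea).");
    }
}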

  24. Test Case Timeline • As the specification, design, and coding phases are in progress, test cases may come to mind. • Things are easily forgotten, so it is important to write these cases down, even though (in fact, especially since) testing may not occur for some time. • When the time for testing arrives, insights have been gathered from a variety of people during the entire life cycle, making it likely that many of the problems have been anticipated.

  25. Testing: The Boxes • Black-box testing (functional, specification-based) • Based only on the specification and design process. • White-box testing (structural, program-based, code-based, systematic, clear-box, glass-box, broken-box) • Based on the actual code.

  26. Black-Box Testing • As noted before, black-box testing is based on the specification, not the code. • If the specification is written properly, some test cases should be fairly easy to generate. • It can be difficult to determine test cases that are likely to fail. • It can also be difficult to determine equivalence partitions.

  27. White-Box Testing • When implementing “tricky” code, you can develop test cases based on what might go wrong. • You can create test cases to be sure that every line of code is tested. • Finding “boundaries” can be much easier when you can look at the code. • Determining equivalence partitions can be easier. • If there are restrictions on the inputs, the code might give some indication of which invalid inputs might be most problematic.
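
As an illustration of choosing cases so that every line is executed, consider a hypothetical absolute-value routine: one test exercises the negative branch and one the non-negative branch, so together they cover every statement. The function and values are illustrative only.

// Hypothetical unit with two branches; two tests give full statement coverage.
public class CoverageDemo {
    static int abs(int x) {
        if (x < 0)
            return -x;   // covered by the negative-input test
        return x;        // covered by the non-negative-input test
    }

    public static void main(String[] args) {
        assert abs(-7) == 7;   // exercises the x < 0 branch
        assert abs(3) == 3;    // exercises the fall-through branch
        assert abs(0) == 0;    // boundary between the two partitions
        System.out.println("Both branches covered (run with java -ea).");
    }
}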

  28. Which Box? • There is no clear answer to this question. • White-box testing is not always possible, since the source code may not be available. • Code coverage (white-box) can be misleading, since it does not guarantee that separate program units operate correctly together. • White-box and black-box testing discover different types of faults. That is, they are complementary. • Thus, picking test cases using a combination of white- and black-box testing will often result in the best test suite.

  29. Testing: Stages • Testing can be separated into 5 stages: • Unit (function, component) • Module (ADT, class, group of functions) • Sub-system • System (the final product) • Acceptance (alpha) • In addition, beta testing may be performed.

  30. Unit Testing • Modules (functions, components, etc.) are tested in isolation from any other code. • Unit testing can be helpful, since it gives some confidence in the correctness of the “building blocks.” • It is important to note that even if all units are correct, this does not guarantee that the entire system will be correct, since the interactions between units might be incorrect.
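
A minimal sketch of a unit test: a single function exercised on its own, with no other parts of the system involved. The isLeapYear function is a hypothetical example, not from the slides.

// Hypothetical unit under test, exercised in isolation from the rest of the system.
public class LeapYearUnitTest {
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        assert isLeapYear(2004);     // divisible by 4
        assert !isLeapYear(1900);    // century years are not leap years...
        assert isLeapYear(2000);     // ...unless divisible by 400
        assert !isLeapYear(2001);    // ordinary non-leap year
        System.out.println("Unit tests passed (run with java -ea).");
    }
}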

  31. Module Testing • As noted earlier, a module can be an abstract data type (ADT), an object class, or a collection of related functions. • As with unit testing, module testing can be performed independently of the other parts of the system. • For object classes and ADTs, testing involves ensuring that the object/ADT performs all operations correctly, in all possible states of the object/ADT. • Information hiding can make testing ADTs and object classes difficult. • For instance, to test push, one must use peek or pop. If there is an error, which function caused it?

  32. Sub-system Testing • Sub-system testing tests collections of modules that have been integrated into sub-systems. • The most common problems arise from interface mismatches, including: • Interface misuse (syntactic): e.g., a wrong parameter list (wrong types, wrong order, wrong number). • Interface misunderstanding (semantic): e.g., passing an unsorted array when a sorted array is expected.
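
A small illustration of the semantic kind of mismatch, using the standard library's Arrays.binarySearch rather than anything from the slides: the call compiles and the parameter list is correct, but the routine's implicit "array must be sorted" precondition is violated. (Syntactic misuse, such as swapped parameters, is usually caught by the compiler when the types differ, but not when they happen to match.)

import java.util.Arrays;

public class InterfaceMismatchDemo {
    public static void main(String[] args) {
        int[] data = {7, 2, 9, 4};   // NOT sorted

        // Interface misunderstanding (semantic): Arrays.binarySearch requires a sorted
        // array; with unsorted input it compiles and runs, but the result is undefined.
        System.out.println("unsorted: " + Arrays.binarySearch(data, 9));

        // Meeting the routine's precondition gives a reliable answer.
        Arrays.sort(data);
        System.out.println("sorted:   " + Arrays.binarySearch(data, 9)); // prints index 3
    }
}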

  33. System Testing • Similar to sub-system testing. • System (and sub-system) integration can be performed using • Big-bang integration: put it all together, and test it (waiting for the “big bang”). • Incremental integration: put it together in steps, testing as each new sub-system or module is integrated. • These have obvious implications for system testing. • For incremental integration, there are two choices: bottom-up and top-down.

  34. Top-down Integration • With top-down integration, the higher-level components are integrated and tested before the low-level components are implemented. • This requires stub code to be written that either does nothing, or simulates the low-level modules. • Example:

int square(int x) {
    // Just return a default value
    return 1;
}

// Use a pre-defined sorting algorithm for now
void quickSort(int[] array, int size) {
    SomeLibrary.Sort(array, size);
}

  35. Bottom-up Integration • With bottom-up integration, low-level components are tested before the high-level components are implemented. • This requires a test driver (or test harness) to be written that runs the low-level components. • Example:

int main() {
    for (int i = 0; i < 8; i++) {
        println("The square of " + i + " is " + square(i));
    }
    return 0;
}

  36. Acceptance Testing • When the system has been integrated, it should be tested with real data supplied by the actual user(s) of the system. • This will increase the chance that the system will be error-free, since the actual data may include cases not considered by the software team. • It will also test whether or not the requirements were correctly understood, and if the system will meet the needs it was designed to meet.

  37. Testing Summary • Testing good. • Not testing bad. • Planning for testing good. • Haphazard testing bad.
