
Software Testing Overview: Execution-Based vs. Nonexecution-Based Testing

Software testing is an essential activity throughout the software life cycle. This presentation covers the two broad approaches, execution-based and nonexecution-based testing, and explains verification and validation in software quality assurance, emphasizing the importance of correct functionality. It describes the nonexecution-based techniques of walkthroughs and inspections and their impact on detecting faults during development, presents metrics and statistics on inspections, and discusses how execution-based testing identifies faults and helps ensure product quality.


Presentation Transcript


  1. CHAPTER 6 TESTING

  2. Overview • Quality issues • Nonexecution-based testing • Execution-based testing • What should be tested? • Testing versus correctness proofs • Who should perform execution-based testing? • When can testing stop?

  3. Testing • Testing is an integral component of the software process and an activity that must be carried out throughout the life cycle • Two types of testing • Execution-based testing • Nonexecution-based testing

  4. Testing (contd) • “V & V” • Verification • Determine if the phase was completed correctly • Performed at the end of each phase • Validation • Determine if the product as a whole satisfies its requirements • Performed just before the product is delivered to the client • In this book, testing simply denotes V & V

  5. Software Quality • Quality implies “excellence” of some sort • In software, we do not strive for “excellence”; getting the software to function correctly is enough • That is, does the software satisfy its specifications? • Read Just In Case You Wanted to Know on page 138 • Software Quality Assurance (SQA) • Ensures that the product is correct • Performs testing at the end of each phase as well as at the end of product development • Managerial independence between the development group and the SQA group is important

  6. Nonexecution-Based Testing • Underlying principles • One should not review his or her own work • Why not? • Nonexecution-based testing is usually referred to as review • There are two types of reviews • Walkthroughs • Inspections

  7. Walkthroughs • Should consist of 4-6 members • For example, a specification walkthrough should include • The specification team manager and a specification team member • A client representative • A member of the team that will perform the next phase (a design team member) • An SQA team member (who should chair the walkthrough) • The material to be reviewed should be distributed to the participants in advance • Each reviewer should study the material and prepare two lists: • Items that the reviewer does not understand • Items that the reviewer believes are incorrect

  8. Walkthroughs (contd) • Walkthrough process • Usually no more than 2 hours • Detect and record faults – DO NOT correct • Two ways of conducting walkthroughs • Participant driven • Participants present their lists of unclear and incorrect items • The author must respond to each query • Document driven • The author walks the participants through the document • The reviewers interrupt whenever unclear or incorrect items are presented • The document-driven approach is usually more thorough (i.e., finds more faults)

  9. Inspections • More detailed than a walkthrough; has five formal steps: overview, preparation, inspection, rework, and follow-up • The inspection team should consist of 4 members • For example, a design inspection team includes • The moderator (inspection team leader), the designer, an implementer, and a tester

  10. Inspection Steps • Overview • An overview of the document to be reviewed is given by one of the authors • At the end of the overview session, the document is distributed to the participants • Preparation • Participants try to understand the document in detail, aided by statistics of fault types • Participants prepare the lists of unclear/incorrect items as well • Inspection • An author walks through the document with the inspection team • Faults are detected and recorded – DO NOT correct here • Within one day, the moderator generates a written report containing faults detected during inspection

  11. Inspection Steps (contd) • Rework • The author resolves all faults and problems noted in the written report • Follow-up • The rework is thoroughly checked • All fixes must be checked to ensure that no new faults have been introduced

  12. Statistics on Inspections • 82% of all detected faults (IBM, 1976) • 70% of all detected faults (IBM, 1978) • 93% of all detected faults (IBM, 1986) • 90% decrease in cost of detecting fault (Switching system, 1986) • 4 major faults, 14 minor faults per 2 hours (JPL, 1990). Savings of $25,000 per inspection • Number of faults decreased exponentially by phase (JPL, 1992)

  13. Metrics for Inspections • Fault density (e.g., faults per KLOC) • Fault detection rate (e.g., faults detected per hour) • Fault detection efficiency (e.g., the number of faults detected per person-hour)
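To make these three metrics concrete, here is a minimal sketch in Python with purely hypothetical numbers (the fault count, code size, and hours below are illustrative, not taken from the slides):

```python
# Hypothetical inspection figures used to illustrate the three metrics.
faults_found = 12          # faults recorded during the inspection
kloc_inspected = 2.5       # thousand lines of code covered
inspection_hours = 2.0     # wall-clock length of the inspection meeting
person_hours = 8.0         # inspection_hours * 4 participants

fault_density = faults_found / kloc_inspected               # faults per KLOC
fault_detection_rate = faults_found / inspection_hours      # faults per hour
fault_detection_efficiency = faults_found / person_hours    # faults per person-hour

print(f"Fault density:              {fault_density:.1f} faults/KLOC")
print(f"Fault detection rate:       {fault_detection_rate:.1f} faults/hour")
print(f"Fault detection efficiency: {fault_detection_efficiency:.2f} faults/person-hour")
```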

  14. Execution-Based Testing • Definitions • Fault is the IEEE Standard terminology for “bug” • Failure is the observed incorrect behavior of the product as a consequence of a fault • Error is the mistake made by the programmer • “Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence” [Dijkstra, 1972]
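The following sketch (illustrative only, not from the slides) shows how the three terms relate: the programmer's mistake (error) leaves a defect in the code (fault), which produces observably wrong output when the code runs (failure):

```python
def total(xs):
    """Intended to return the sum of xs."""
    s = 0
    for i in range(len(xs) - 1):   # fault: off-by-one, should be range(len(xs))
        s += xs[i]                 # the error was the programmer's mistaken loop bound
    return s

print(total([1, 2, 3]))   # prints 3 instead of 6: an observable failure caused by the fault
```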

  15. What is execution-based testing? • “Execution-based testing is a process of inferring certain behavioral properties of a product based, in part, on the results of executing the product in a known environment with selected inputs.” [Goodenough, 1979] • Inference (i.e., deriving a logical conclusion) • Known environment • Selected inputs • What must be tested? => The behavioral properties of the product must be tested • Utility, reliability, robustness, performance, and correctness
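A minimal sketch of “selected inputs in a known environment,” using Python's unittest framework; the function under test and the chosen inputs are hypothetical:

```python
import unittest

def absolute_value(x):
    # Trivial function under test (hypothetical example)
    return x if x >= 0 else -x

class AbsoluteValueTest(unittest.TestCase):
    def test_selected_inputs(self):
        # Selected inputs paired with expected outputs; the observed behavior
        # is compared against them and a conclusion is inferred from the results.
        for given, expected in [(0, 0), (5, 5), (-5, 5)]:
            self.assertEqual(absolute_value(given), expected)

if __name__ == "__main__":
    unittest.main()
```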

  16. Utility • Utility is the extent to which a user’s needs are met when a correct product is used under conditions permitted by its specifications • That is, does it meet the user’s needs? • Ease of use • Useful functions • Cost-effectiveness

  17. Reliability • Reliability is a measure of the frequency and criticality of product failure • How often does the product fail? (mean time between failures, MTBF) • How long does it take to restore service? (mean time to restore, MTTR) • Often even more important is how long it takes to repair the results of a failure
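As a rough illustration of the two measures (the uptime and repair figures below are hypothetical, and availability is a commonly derived figure rather than one named on the slide):

```python
# Hypothetical operating and repair times, in hours.
uptimes_hours = [120.0, 95.0, 140.0]   # operating time between successive failures
repair_hours = [2.0, 0.5, 1.5]         # time to restore service after each failure

mtbf = sum(uptimes_hours) / len(uptimes_hours)   # mean time between failures
mttr = sum(repair_hours) / len(repair_hours)     # mean time to restore

availability = mtbf / (mtbf + mttr)              # derived figure, not from the slide
print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, availability = {availability:.3f}")
```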

  18. Robustness • Robustness is a function of a number of factors such as • Range of operating conditions • Possibility of unacceptable results with valid input • Effect of invalid input
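A small illustrative check of the “effect of invalid input” point: a robust routine should reject invalid input with a clear error rather than silently producing an unacceptable result. The function and its validity checks are hypothetical:

```python
def fahrenheit_to_celsius(f):
    # Reject invalid input explicitly instead of returning a meaningless value.
    if not isinstance(f, (int, float)):
        raise TypeError("temperature must be a number")
    if f < -459.67:                      # below absolute zero: physically invalid
        raise ValueError("temperature below absolute zero")
    return (f - 32) * 5.0 / 9.0

try:
    fahrenheit_to_celsius(-1000)
except ValueError as exc:
    print(f"Rejected invalid input: {exc}")
```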

  19. Performance • Extent to which space and time constraints are met • Real-time systems have hard time constraints
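A minimal sketch of checking a time constraint; the 50 ms budget and the workload are hypothetical, and a real-time system would enforce its deadlines far more rigorously than this simple wall-clock measurement:

```python
import time

def workload():
    # Stand-in for the operation whose time constraint is being checked.
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
workload()
elapsed_ms = (time.perf_counter() - start) * 1000.0

TIME_BUDGET_MS = 50.0   # hypothetical performance requirement
print(f"elapsed = {elapsed_ms:.1f} ms; within budget: {elapsed_ms <= TIME_BUDGET_MS}")
```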

  20. Correctness • A product is correct if it satisfies its specifications • But what if the specifications themselves are incorrect?

  21. Correctness of specifications • Incorrect specification for a sort • Function trickSort which satisfies this specification:

  22. Correctness of specifications (contd) • Incorrect specification for a sort: • Corrected specification for the sort:
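The specifications referred to on these two slides are not reproduced in this transcript. Assuming the classic under-specification (the output must be in nondecreasing order, with no requirement that it be a permutation of the input), the sketch below shows how a trickSort-style function can satisfy that flawed specification while being useless as a sort, and how the corrected specification rules it out:

```python
def trick_sort(xs):
    # Satisfies "output is sorted" but ignores the input entirely.
    return [0] * len(xs)

def is_sorted(xs):
    # The incorrect specification only requires nondecreasing order.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def satisfies_corrected_spec(inp, out):
    # Corrected specification: sorted AND a permutation of the input.
    return is_sorted(out) and sorted(inp) == sorted(out)

data = [3, 1, 2]
print(is_sorted(trick_sort(data)))                       # True:  meets the flawed spec
print(satisfies_corrected_spec(data, trick_sort(data)))  # False: fails the corrected spec
```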

  23. Correctness Proofs • Alternative to execution-based testing • Read Section 6.5 on your own

  24. Who Performs Execution-Based Testing? • Testing is destructive • A successful test finds a fault • Solution • 1. The programmer does informal testing • 2. SQA does systematic, thorough testing • 3. The programmer debugs the module • All test cases must be • Planned beforehand, including expected output • Retained afterwards
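A minimal sketch of “planned beforehand, including expected output, and retained afterwards”: test cases are recorded as data before execution, and the results are kept in a log file once they have been run. The record fields and file name are illustrative:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TestCase:
    case_id: str
    inputs: list
    expected_output: object   # planned before execution
    actual_output: object = None
    passed: bool = None

def run_case(case, function_under_test):
    case.actual_output = function_under_test(*case.inputs)
    case.passed = case.actual_output == case.expected_output
    return case

planned = [TestCase("TC-01", [[3, 1, 2]], [1, 2, 3])]     # planned test case
results = [run_case(c, sorted) for c in planned]

with open("test_log.json", "w") as log:                   # retain the results
    json.dump([asdict(c) for c in results], log, indent=2)
```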

  25. When Can Testing Stop? • Only when the product has been irrevocably retired • Read Chapter 6
