
Fault Identification and Testing Strategies

This chapter explores the different types of faults in software and their classification. It also discusses testing issues, including unit testing, integration testing, and test planning. Learn when to stop testing and how to apply these concepts to real-world examples.

Presentation Transcript


  1. Chapter 8: Testing the Programs. Shari L. Pfleeger and Joann M. Atlee, 4th Edition

  2. Contents
  • 8.1 Software Faults and Failures
  • 8.2 Testing Issues
  • 8.3 Unit Testing
  • 8.4 Integration Testing
  • 8.5 Testing Object-Oriented Systems
  • 8.6 Test Planning
  • 8.7 Automated Testing Tools
  • 8.8 When to Stop Testing
  • 8.9 Information System Example
  • 8.10 Real-Time Example
  • 8.11 What this Chapter Means for You

  3. Chapter 8 Objectives
  • Types of faults and how to classify them
  • The purpose of testing
  • Unit testing
  • Integration testing strategies
  • Test planning
  • When to stop testing

  4. 8.1 Software Faults and Failures: Why Does Software Fail?
  • A wrong or missing requirement: not what the customer wants or needs
  • A requirement that is impossible to implement with the prescribed hardware and software
  • A faulty system design, such as an overly restrictive database design
  • A faulty program design
  • Wrong program code

  5. 8.1 Software Faults and Failures: Objective of Testing
  • The objective of testing is to discover faults
  • A test is successful only when a fault is discovered
  • Fault identification is the process of determining what fault(s) caused the failure
  • Fault correction is the process of making changes to the system so that the faults are removed
  A minimal test sketch follows.
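To make the "a successful test finds a fault" idea concrete, here is a minimal sketch that is not from the chapter; the function, its fault, and the test values are hypothetical.

```python
# Hypothetical component under test: compute the mean of a list of numbers.
def mean(values):
    return sum(values) / len(values)   # fault: divides by zero for an empty list

# A test case chosen to probe a boundary condition (the empty list).
try:
    assert mean([]) == 0
    print("test passed: no fault revealed")
except ZeroDivisionError:
    # In the chapter's sense this is a *successful* test: it discovered a fault.
    # Fault identification traces the failure to the division by len(values);
    # fault correction would change the code to handle the empty-list case.
    print("test revealed a fault: division by zero on empty input")
```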

  6. 8.1 Software Faults and Failures: Types of Faults
  • Algorithmic or syntax faults: a component's logic doesn't produce the proper output
  • Computation and precision faults: a formula's implementation is wrong
  • Documentation faults: the documentation doesn't match what the program does
  • Stress or overload faults: data structures filled past their specified capacity
  • Capacity or boundary faults: the system's performance becomes unacceptable as activity reaches its specified limit

  7. 8.1 Software Faults and Failures: Types of Faults (continued)
  • Timing or coordination faults: code executes with improper timing
  • Performance faults: the system does not perform at the prescribed speed
  • Recovery faults: the system does not behave as required when a failure occurs
  • Hardware and system software faults: supplied hardware and system software do not work according to the documented conditions and procedures
  • Standards and procedure faults: code doesn't follow organizational or procedural standards

  8. 8.1 Software Faults and Failures: Typical Algorithmic Faults
  An algorithmic fault occurs when a component's algorithm or logic does not produce the proper output. Typical examples (two are illustrated in the sketch below):
  • Branching too soon
  • Branching too late
  • Testing for the wrong condition
  • Forgetting to initialize variables or set loop invariants
  • Forgetting to test for a particular condition
  • Comparing variables of inappropriate data types
  • Syntax faults
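The sketch below uses hypothetical code, not examples from the text, to show two of these fault types: testing for the wrong condition and forgetting to initialize a variable.

```python
def count_passing(scores, threshold):
    # Algorithmic fault: testing for the wrong condition.
    # The requirement says "passing" means score >= threshold,
    # but the comparison below uses > instead of >=.
    passing = 0
    for s in scores:
        if s > threshold:      # should be: s >= threshold
            passing += 1
    return passing

def running_total(values):
    # Algorithmic fault: forgetting to initialize a variable.
    for v in values:
        total += v             # fails: 'total' was never initialized to 0
    return total
```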

  9. 8.1 Software Faults and Failures: Orthogonal Defect Classification
  • Historical fault information reveals trends
  • Trends can lead to changes in designs or requirements, reducing the number of faults injected
  • How: categorize faults using IBM's orthogonal defect classification, tracking faults of omission and commission as well
  • The classification scheme must be product- and organization-independent and applicable to all development stages

  10. 8.1 Software Faults and Failures: Orthogonal Defect Classification (table)

  11. 8.1 Software Faults and Failures: Sidebar 8.1 Hewlett-Packard's Fault Classification

  12. 8.1 Software Faults and Failures: Sidebar 8.1 Faults for One Hewlett-Packard Division

  13. 8.2 Testing Issues: Test Organization
  • Module testing, component testing, or unit testing
  • Integration testing
  • Function testing
  • Performance testing
  • Acceptance testing
  • Installation testing
  • System testing

  14. 8.2 Testing Issues: Testing Organization Illustrated

  15. 8.2 Testing Issues: Attitude Toward Testing
  • The problem: in academia, students are graded on the correctness and operability of their programs, test cases are generated to show correctness, and critiques of a program are taken as critiques of the programmer's ability
  • The solution: egoless programming; programs are viewed as components of a larger system, not as the property of those who wrote them, and the development team focuses on correcting faults, not placing blame

  16. 8.2 Testing Issues: Who Performs the Test?
  Using an independent test team:
  • Avoids conflicts over personal responsibility for faults
  • Improves objectivity, since testers are not tied to the design and implementation
  • Allows testing and coding to proceed concurrently

  17. 8.2 Testing Issues: Views of the Test Objects
  • Closed box or black box: tests the functionality of the test objects with no view of code or data structures; input and output only
  • Clear box or white box: tests the structure of the test objects with an internal view of code and data structures
  A sketch contrasting the two views follows.
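As a hedged illustration (the function and test values are assumed, not taken from the chapter), the same component can be exercised from both views: black-box cases come from the specification alone, while clear-box cases are chosen to drive particular internal branches.

```python
def classify_triangle(a, b, c):
    # Component under test: classify a triangle by its side lengths.
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box (closed-box) cases: derived only from the stated specification.
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(2, 2, 2) == "equilateral"

# Clear-box (white-box) cases: derived by inspecting the code's structure,
# e.g. to force the degenerate branch a + b <= c and the isosceles branch.
assert classify_triangle(1, 2, 3) == "not a triangle"
assert classify_triangle(5, 5, 8) == "isosceles"
```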

  18. 8.2 Testing Issues: Clear Box (example of a logic structure)

  19. 8.2 Testing Issues: Factors Affecting the Choice of Test Philosophy
  Choosing between white-box and black-box testing depends on:
  • The number of possible logical paths
  • The nature of the input data
  • The amount of computation involved
  • The complexity of algorithms
  You don't have to choose: a combination of the two can be the right approach

  20. 8.3 Unit Testing: Code Review
  • Code walkthrough: present code and documentation to a review team; the team comments on correctness; the focus is on the code, not the coder, and the results have no influence on the developer's performance evaluation
  • Code inspection: check code and documentation against a list of concerns; review the correctness and efficiency of algorithms; check comments for completeness; a formalized process

  21. 8.3 Unit Testing: Typical Inspection Preparation and Meeting Times

  22. 8.3 Unit Testing: Fault Discovery Rate

  23. 8.3 Unit Testing: Sidebar 8.3 The Best Team Size for Inspections
  • The preparation rate, not the team size, determines inspection effectiveness
  • The team's effectiveness and efficiency depend on their familiarity with the product

  24. 8.3 Unit Testing: Proving Code Correct
  Formal proof techniques:
  • Write assertions to describe input/output conditions
  • Draw a flow diagram depicting the logical flow
  • Generate theorems to be proven: locate loops and define if-then assertions for each, identify all paths from A1 to An, and show for each path that the input assertion implies the output assertion
  • Prove that the program terminates
  Limitations: this only proves the design is correct, not the implementation, and it is expensive.
  An assertion-style sketch follows.
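A minimal sketch of the first step only, writing assertions for input and output conditions, expressed here as executable pre- and postconditions; the function and its bounds are assumptions for illustration, and the chapter's full formal-proof machinery (flow diagrams, theorems, termination proof) is not reproduced.

```python
def integer_sqrt(n):
    # Input assertion (precondition): n is a non-negative integer.
    assert isinstance(n, int) and n >= 0

    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1

    # Output assertion (postcondition): r is the largest integer
    # whose square does not exceed n.
    assert r * r <= n < (r + 1) * (r + 1)
    return r
```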

  25. 8.3 Unit Testing: Proving Code Correct (continued)
  • Symbolic execution: the program is executed with symbols rather than data values; each line is executed and the resulting state is checked, so the program is viewed as a series of state changes
  • Automated theorem proving: tools attempt to prove the software correct, given the input data and conditions, the output data and conditions, and the lines of code for the component to be tested

  26. 8.3 Unit Testing: Proving Code Correct, An Illustration

  27. 8.3 Unit Testing: Testing versus Proving
  • Proving: a hypothetical environment; code is viewed in terms of classes of data and conditions; the proof may not involve execution
  • Testing: the actual operating environment; demonstrates actual use of the program; a series of experiments

  28. 8.3 Unit Testing: Steps in Choosing Test Cases
  • Determine test objectives: the coverage criteria
  • Select test cases: inputs that demonstrate the behavior of the code
  • Define a test: detailed execution instructions

  29. 8.3 Unit Testing: Test Thoroughness
  • Statement testing
  • Branch testing
  • Path testing
  • Definition-use testing
  • All-uses testing
  • All-predicate-uses/some-computational-uses testing
  • All-computational-uses/some-predicate-uses testing
  A small coverage sketch follows this list.
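A small hypothetical example of the two weakest criteria: the first test alone executes every statement of the function (statement coverage), but it never exercises the false outcome of the condition, so branch coverage requires a second case.

```python
def absolute(x):
    if x < 0:
        x = -x
    return x

# Statement testing: this one case executes every statement ...
assert absolute(-5) == 5

# ... but branch testing also requires the case where the condition
# is false, i.e. the branch that skips the negation.
assert absolute(3) == 3
```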

  30. 8.3 Unit Testing: Relative Strengths of Test Strategies

  31. 8.3 Unit Testing: Comparing Techniques. Fault discovery percentages by fault origin

  32. 8.3 Unit Testing: Comparing Techniques (continued). Effectiveness of fault-discovery techniques

  33. 8.3 Unit Testing: Sidebar 8.4 Fault Discovery Efficiency at Contel IPC
  • 17.3% during inspections of the system design
  • 19.1% during component design inspection
  • 15.1% during code inspection
  • 29.4% during integration testing
  • 16.6% during system and regression testing
  • 0.1% after the system was placed in the field

  34. 8.4 Integration Testing
  Integration strategies:
  • Bottom-up
  • Top-down
  • Big-bang
  • Sandwich testing
  • Modified top-down
  • Modified sandwich

  35. 8.4 Integration Testing: Terminology
  • Component driver: a routine that calls a particular component and passes a test case to it
  • Stub: a special-purpose program that simulates the activity of a missing component
  A sketch of a driver and a stub follows.
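A hedged sketch of both terms with hypothetical component names: the stub stands in for a lower-level component that has not been integrated yet, and the driver calls the component under test with a test case.

```python
# Component under test: formats a report line using a price-lookup
# component that has not been integrated yet.
def format_report_line(item_id, price_lookup):
    price = price_lookup(item_id)
    return f"{item_id}: ${price:.2f}"

# Stub: a special-purpose stand-in simulating the missing lookup component.
def price_lookup_stub(item_id):
    return 9.99          # canned answer, enough to exercise the caller

# Component driver: calls the component under test and passes a test case to it.
def driver():
    result = format_report_line("A-100", price_lookup_stub)
    assert result == "A-100: $9.99"
    print("format_report_line passed with the stubbed lookup")

driver()
```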

  36. 8.4 Integration Testing: View of a System. The system is viewed as a hierarchy of components

  37. 8.4 Integration Testing: Bottom-Up Integration Example. The sequence of tests and their dependencies

  38. 8.4 Integration Testing: Top-Down Integration Example. Only the top component, A, is tested by itself

  39. 8.4 Integration Testing: Big-Bang Integration Example. Requires both stubs and drivers to test the independent components

  40. 8.4 Integration Testing: Sandwich Integration Example. The system is viewed as three layers

  41. 8.5 Testing Object-Oriented Systems: Testing the Code
  Questions at the beginning of testing OO systems:
  • Is there a path that generates a unique result?
  • Is there a way to select a unique result?
  • Are there useful cases that are not handled?
  Check objects for excesses and deficiencies:
  • Missing objects
  • Unnecessary classes
  • Missing or unnecessary associations
  • Incorrect placement of associations or attributes

  42. 8.5 Testing Object-Oriented Systems: Testing the Code (continued)
  • Objects might be missing if: you find asymmetric associations or generalizations; you find disparate attributes and operations on a class; one class is playing two or more roles; an operation has no good target class; or you find two associations with the same name and purpose
  • A class might be unnecessary if it has no attributes, operations, or associations
  • An association might be unnecessary if it carries redundant information or no operation uses the path

  43. 8.5 Testing Object-Oriented Systems: Easier and Harder Parts of Testing OO Systems. OO unit testing is less difficult, but integration testing is more extensive

  44. 8.6 Test Planning
  • Establish test objectives
  • Design test cases
  • Write test cases
  • Test the test cases
  • Execute the tests
  • Evaluate the test results

  45. 8.6 Test Planning: Purpose of the Plan
  The test plan explains:
  • who does the testing
  • why the tests are performed
  • how the tests are conducted
  • when the tests are scheduled
  The test plan describes how the tests will show that the software works correctly, is free of faults, and performs the functions as specified.
  The test plan is a guide to the entire testing activity.

  46. 8.6 Test Planning: Contents of the Plan
  • What the test objectives are
  • How the tests will be run
  • The criteria used to determine that testing is complete
  • A detailed list of test cases
  • How test data will be generated
  • How output data will be captured
  Together, these give a complete picture of how and why testing will be performed.

  47. 8.7 Automated Testing Tools
  • Code analysis: static analysis tools include a code analyzer, a structure checker, a data analyzer, and a sequence checker
  • Output from static analysis

  48. 8.8 When to Stop Testing: More Faults?
  • The probability of finding faults changes during development
  • This makes it hard to tell when we are done testing

  49. 8.8 When to Stop Testing: Stopping Approaches
  • Fault seeding: intentionally insert a known number of faults; the proportion of seeded faults still undiscovered leads to an estimate of the total number of faults:

    detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults

  • Improved by basing the number and kind of seeded faults on historical data
  • Improved by using two independent test groups and comparing their results
  • Coverage criteria: determine how many statement, path, or branch tests are required, and use that total to track the completeness of testing
  A worked example of the fault-seeding estimate follows.
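A worked sketch of the fault-seeding estimate with made-up numbers (they are not from the chapter): if testing has found 16 of 20 seeded faults and 24 nonseeded faults, the proportion suggests roughly 30 nonseeded faults in total, so about 6 remain.

```python
# Fault-seeding estimate (illustrative numbers only).
total_seeded       = 20
detected_seeded    = 16
detected_nonseeded = 24

# detected_seeded / total_seeded = detected_nonseeded / total_nonseeded
estimated_total_nonseeded = detected_nonseeded * total_seeded / detected_seeded
estimated_remaining = estimated_total_nonseeded - detected_nonseeded

print(estimated_total_nonseeded)  # 30.0 estimated nonseeded faults in total
print(estimated_remaining)        # 6.0 estimated still undiscovered
```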

  50. 8.8 When to Stop Testing: Stopping Approaches (continued)
  • Confidence in the software: use fault estimates to state a confidence level, e.g. "we are 95% confident the software is fault free"
  • Where S = number of seeded faults, N = claimed number of actual (nonseeded) faults, and n = number of actual faults found once all seeded faults have been detected:

    C = 1,                if n > N
    C = S / (S - N + 1),  if n ≤ N

  • Assumes all faults have an equal probability of being detected
  A short worked example follows.
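A short worked sketch of the confidence formula with assumed numbers: claim the software has N = 0 actual faults, seed S = 10 faults, and suppose testing finds all 10 seeded faults and n = 0 nonseeded ones; since n ≤ N, confidence is S/(S - N + 1).

```python
def confidence(S, N, n):
    # Confidence that the software has no more than N actual faults,
    # given S seeded faults, when all S seeded faults have been found
    # along with n nonseeded faults (assumes every fault is equally
    # likely to be detected).
    if n > N:
        return 1.0
    return S / (S - N + 1)

# Claim N = 0 actual faults; seed S = 10; testing finds n = 0 nonseeded faults.
print(confidence(10, 0, 0))   # 0.909... -> about 91% confident
# Seeding 19 faults (all found, none nonseeded) would give 95% confidence.
print(confidence(19, 0, 0))   # 0.95
```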
