

  1. Chapter 8 Testing the Programs

  2. Integration Testing • Combines individual components into a working system. • The test strategy explains why and how components are combined to test the working system. • The strategy affects not only the integration timing and coding order, but also the cost and thoroughness of the testing.

  3. The system is viewed as a hierarchy of components. • Approaches: • Bottom-up • Top-down • Big-bang • Sandwich testing • Modified top-down • Modified sandwich

  4. Bottom-up Integration • Merges components upward: the lowest level of the system hierarchy is tested individually first; the next components tested are those that call the previously tested ones. • Component driver: a routine that calls a particular component and passes a test case to it (see the sketch below). • Drawback for a functionally decomposed system: the top-level components are the most important, yet they are the last to be tested. • Top-level components are more general, whereas bottom-level components are more specific. • Bottom-up testing is often the most sensible choice for object-oriented programs.
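As a minimal illustration of a component driver (my sketch, not from the slides; the component `compute_discount` and its test values are hypothetical), the driver below calls one low-level component in isolation and feeds it test cases, standing in for callers that have not yet been integrated:

```python
# Hypothetical low-level component under test.
def compute_discount(order_total):
    """Return the discount rate for a given order total."""
    return 0.10 if order_total >= 100 else 0.0

# Component driver: calls the component directly with each test case
# and reports pass/fail, in place of the not-yet-integrated callers.
def driver():
    test_cases = [          # (input, expected output)
        (50, 0.0),
        (100, 0.10),
        (250, 0.10),
    ]
    for total, expected in test_cases:
        actual = compute_discount(total)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: compute_discount({total}) = {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```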

  5. The system viewed as a hierarchy of components • The figure shows the sequence of tests and their dependencies. • We need component drivers for E, F, and G.

  6. Top-Down Integration • The reverse of bottom-up. • The top level, usually one controlling component, is tested by itself. • Then all components called by the tested component are combined and tested as a larger unit. • Stub: a special-purpose program that simulates the activity of a missing component. • The stub answers the calling sequence and passes back output data that lets the testing process continue (see the sketch below).
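As an illustration (mine, not from the slides), the sketch below shows a hand-written stub: the top-level component `generate_report` is exercised while the missing lower-level data-access component is simulated by `fetch_sales_stub`, which returns canned data. All names here are hypothetical:

```python
# Stub simulating a missing lower-level component: it accepts the
# call and returns canned output so testing of the caller can continue.
def fetch_sales_stub(region):
    return [("widget", 3), ("gadget", 5)]   # fixed, known test data

# Top-level component under test; normally it would call the real
# fetch_sales component, which has not been integrated yet.
def generate_report(region, fetch_sales=fetch_sales_stub):
    lines = [f"Sales report for {region}:"]
    for item, qty in fetch_sales(region):
        lines.append(f"  {item}: {qty}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Exercise the top-level component against the stub's canned data.
    report = generate_report("north")
    assert "widget: 3" in report
    print(report)
```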

  7. Only A is tested by itself. • Stubs are needed for B, C, and D. • Top-down integration allows the test team to exercise one function at a time. • Test cases are defined in terms of the functions being examined. • Design faults related to functionality are addressed at the beginning of testing. • Driver programs are not needed. • Writing stubs can be difficult, and their correctness may affect the validity of the test.

  8. A drawback of top-down testing is the possibility that a very large number of stubs may be required. • One way to avoid this is to alter the strategy slightly. • Modified Top-Down Integration • Each level's components are individually tested before the merger takes place.

  9. Big-Bang Integration • All components are merged and tested at once. Usable for small systems, but not practical for large ones. • First, it requires both stubs and drivers to test the independent components. • Second, because all components are merged at once, it is difficult to find the cause of any failure. • Finally, interface faults cannot be distinguished easily from other faults.

  10. Sandwich Integration • Combines top-down with bottom-up. • Views the system as three layers: a target layer in the middle, the levels above the target, and the levels below the target. • Top-down testing is used in the top layer and bottom-up testing in the lower layer. • Testing converges toward the target layer, which is chosen on the basis of system characteristics and the structure of the component hierarchy. • Example component hierarchy: the target layer is the middle level, components B, C, and D. • Drawback: it does not test the individual components thoroughly before integration.

  11. Modified Sandwich Integration • Allows upper-level components to be tested individually before merging them with others.

  12. Comparison of Integration Strategies

  13. Testing Object-Oriented Systems • Rumbaugh et al. ask several questions: • Is there a path that generates a unique result? • Is there a way to select a unique result? • Are there useful cases that are not handled? • Next, check the objects and classes themselves for excesses and deficiencies: • Missing objects, useless classes, and missing or useless associations or attributes.

  14. Differences Between OO and Traditional Testing • OO components are reused, which helps minimize testing. • First, test the base classes (those having no parents): test each function individually, then the interactions among them. • Next, provide an algorithm to update the test history for the parent class incrementally. • OO unit testing is less difficult, but integration testing is more extensive.

  15. (Figure: where OO and traditional testing differ.) The farther the gray line extends outward, the greater the difference; those areas require special treatment.

  16. Test Planning • Test planning helps us design and organize the tests. • Test steps: • Establish test objectives • Design test cases • Write test cases • Test the test cases • Execute tests • Evaluate test results • The test objectives tell us what kinds of test cases to generate.

  17. Purpose of the Plan • The test plan explains: • who does the testing • why the tests are performed • how the tests are conducted • when the tests are scheduled • Contents of the plan: • what the test objectives are • how the tests will be run • what criteria will be used to determine when the testing is complete • Statement, branch, and path coverage at the component level • Top-down or bottom-up strategies at the integration level • The resulting test plan for merging the components into a whole is sometimes called the system integration plan.

  18. Automated Testing Tools • Code analysis tools: • Static analysis (examines the source program before it is run): code analyzer, structure checker, data analyzer, sequence checker • Dynamic analysis (examines the program while it is running): program monitors, which watch and report the program's behavior • Test execution tools, which automate the planning and running of the tests themselves: capture and replay, stubs and drivers, automated testing environments • Test case generators, which automate test case generation (a stub-and-driver sketch follows this slide).
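To make the "stubs and drivers" entry concrete, here is a small sketch (my illustration, not from the slides) using Python's standard unittest and unittest.mock libraries, which automate what the hand-written stub and driver did earlier; the `total_with_tax` function and tax-service dependency are hypothetical:

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: it depends on a lower-level
# tax service that has not been integrated yet.
def total_with_tax(amount, tax_service):
    return amount + tax_service.tax_for(amount)

class TotalWithTaxTest(unittest.TestCase):
    def test_total_uses_tax_service(self):
        # The Mock object acts as an automated stub: it answers the
        # call and returns canned data, like a hand-written stub would.
        tax_service = Mock()
        tax_service.tax_for.return_value = 8.0

        self.assertEqual(total_with_tax(100.0, tax_service), 108.0)
        tax_service.tax_for.assert_called_once_with(100.0)

if __name__ == "__main__":
    unittest.main()   # the test runner plays the role of the driver
```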

  19. When to Stop Testing • Is the software likely to contain more faults? • (Figure: the probability of finding faults at each stage of development.)

  20. Stopping Approaches • Fault seeding (error seeding): used to estimate the number of faults in a program. • One team seeds (inserts) a known number of faults. • Another team locates as many faults as possible. • The number of undiscovered seeded faults acts as an indicator of the faults remaining. • The underlying assumption is that the detection rates match:

      detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults

  • Expressing the ratio formally: N = (S × n) / s, where • N = estimated number of nonseeded faults in the program • S = number of seeded faults placed in the program • n = actual number of nonseeded faults detected during testing • s = number of seeded faults detected during testing. (A worked example follows this slide.)
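As a worked example (my numbers, purely hypothetical): suppose we seed S = 20 faults, and the testing team detects s = 15 of them along with n = 30 nonseeded faults. Then N = (S × n) / s = (20 × 30) / 15 = 40 estimated nonseeded faults, of which 40 − 30 = 10 presumably remain undetected. A small sketch:

```python
def estimate_nonseeded_faults(seeded, seeded_detected, nonseeded_detected):
    """Fault-seeding estimate: N = (S * n) / s."""
    return seeded * nonseeded_detected / seeded_detected

# Hypothetical figures: 20 seeded faults, 15 of them found,
# plus 30 nonseeded faults found during the same testing.
N = estimate_nonseeded_faults(seeded=20, seeded_detected=15, nonseeded_detected=30)
print(f"Estimated nonseeded faults: {N:.0f}")        # 40
print(f"Estimated still undetected: {N - 30:.0f}")   # 10
```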

  21. Confidence in the Software • Use the fault estimate to tell how much confidence we can place in the software we are testing. • Confidence, expressed as a percentage, is the likelihood that the software contains no more than N nonseeded faults, given S seeded faults (all detected) and n nonseeded faults detected:

      C = 1,                 if n > N
      C = S / (S − N + 1),   if n ≤ N

  • Coverage criteria. (A small example of the confidence computation follows this slide.)
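A minimal sketch of the computation, using the formula exactly as the slide gives it (the numbers are mine): in the classic special case where we claim the program has no nonseeded faults (N = 0), seeding S = 9 faults and finding all of them, with no nonseeded faults surfacing (n = 0), yields C = 9 / (9 − 0 + 1) = 90% confidence.

```python
def confidence(S, N, n):
    """Confidence, per the slide's formula, in the claim that the
    program holds at most N nonseeded faults, after seeding S faults
    (all detected) and detecting n nonseeded faults."""
    if n > N:
        return 1.0
    return S / (S - N + 1)

# Claim N = 0 nonseeded faults; seed 9, find all 9, find 0 nonseeded.
print(confidence(S=9, N=0, n=0))   # 0.9, i.e., 90% confidence
```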

  22. Identifying Fault-Prone Code • Track the number of faults found in each component during development. • Collect measurements (e.g., size, number of decisions) about each component. • Classification trees: a statistical technique that sorts through large arrays of measurement data and creates a decision tree showing the best predictors of fault-proneness (a sketch follows this slide). • The tree helps in deciding which components are likely to have a large number of faults.
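As an illustration (mine, not from the slides), the sketch below fits a decision tree over hypothetical per-component measurements using scikit-learn; the metrics, labels, and data are all made up, and a real study would need far more components and measurements:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-component measurements: [size in LOC, number of decisions].
X = [
    [120,  5], [340, 22], [ 90,  3], [500, 41],
    [210, 14], [ 60,  2], [430, 30], [150,  8],
]
# Label: 1 if the component proved fault-prone during development, else 0.
y = [0, 1, 0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed tree shows which measurements best predict fault-proneness.
print(export_text(tree, feature_names=["size_loc", "num_decisions"]))
print(tree.predict([[300, 25]]))   # classify a new component
```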

  23. An Example of a Classification Tree
