
Object-Oriented Software Engineering: Practical Software Development using UML and Java


Presentation Transcript


  1. Object-Oriented Software Engineering: Practical Software Development using UML and Java. Chapter 10: Testing and Inspecting to Ensure High Quality. Part 3: Documentation Defects; Writing Formal Test Cases and Developing Formal Test Plans

  2. 10.7 Documentation Defects • We’ve talked about program and design defects. Now, documentation defects: • The software has a defect if 1) the user manual, 2) the reference manual, or 3) the on-line help: • gives incorrect information, or • fails to give information relevant to a problem. • Testing strategy: • Examine all the end-user documentation, making sure it is correct. • Work through the use cases, making sure each potential problem is adequately explained to the user. • Story: reading levels; SCCQ.

  3. 10.8 Writing Formal Test Cases and Test Plans • A test case is an explicit set of instructions designed to detect a particular class of defects in a software system. • Note: the test cases are DESIGNED to detect errors! • There may be a number of individual tests in a test case, such as invalid inputs, invalid formats, missing attributes / fields, and more; a single test case may test for several of these (see the sketch below). • Lots of errors arise from combinations, i.e. when more than one field is in error. • Each test is a particular running of the test case on a particular version of the system. • All are included in a suite of tests.
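A minimal sketch of what one such test case could look like in JUnit 5, with several individual tests (empty field, bad format, two fields in error at once). RegistrationForm and its validate() method are hypothetical stand-ins for whatever class the requirement under test describes.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Test case TC-12: reject malformed registration input.
// RegistrationForm is hypothetical, used only for illustration.
class RegistrationFormInvalidInputTest {

    @Test
    void rejectsEmptyUserName() {
        RegistrationForm form = new RegistrationForm("", "ada@example.org");
        assertFalse(form.validate(), "empty user name must be rejected");
    }

    @Test
    void rejectsMalformedEmail() {
        RegistrationForm form = new RegistrationForm("ada", "not-an-email");
        assertFalse(form.validate(), "malformed email must be rejected");
    }

    @Test
    void rejectsTwoFieldsInErrorAtOnce() {
        // combinations of bad fields often expose defects single-field tests miss
        RegistrationForm form = new RegistrationForm("", "not-an-email");
        assertFalse(form.validate());
    }
}
```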

  4. Test Plans – A Critical Document • The document contains a complete set of test cases for a system (Test Plan → Test Cases → Tests; I call the test cases a ‘suite of tests’) • Along with other information about the testing process: • Who does what; when; how; anticipated results; … • The Test Plan is one of the standard forms of documentation. • Test Plan - written long before the testing starts. • Test Plan development: • Once you have developed the requirements, planning for testing should start at that time. • Many types of tests can be developed directly from use cases. • Why not all from the use cases? Discuss. • If a project does not have a test plan: • Testing will inevitably be done in an ad-hoc manner. • This will lead to poor quality software.

  5. Information to Include in a Formal Test Case (see the sketch below) A. Identification and classification: • Each test case should have a number and (optionally) a title. • Indicate the system, subsystem or module being tested. • Indicate the test case importance. B. Instructions: • Tell the tester exactly what to do. • The tester should not normally have to refer to any other documentation in order to execute the instructions. C. Expected result: • Cite the expected results and provide them to the tester. • Report a test failure if the expected results are not produced. • All organizations will have some formal procedure for documenting the results of independent testing: failures; successes. D. Cleanup (when needed): • Inform the tester how to restart / recover in the event of failure. • Further, this too should be a specific test.
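As an illustration only, the sections A-D above could be captured in a small record like the following; the field names and the example values are assumptions, not part of any standard.

```java
// A minimal sketch of a formal test case record; the fields simply
// mirror sections A-D above.
public record FormalTestCase(
        String id,                // A: unique number, e.g. "TC-031"
        String title,             // A: optional title
        String systemUnderTest,   // A: system, subsystem or module
        int importanceLevel,      // A: 1 = critical, 2 = general, 3 = detailed
        String instructions,      // B: exactly what the tester must do
        String expectedResult,    // C: what counts as a pass
        String cleanup) {         // D: how to restart/recover after a failure

    public static FormalTestCase example() {
        return new FormalTestCase(
                "TC-031", "Reject duplicate account",
                "Registration subsystem", 1,
                "Submit the registration form twice with the same email address.",
                "Second submission is rejected with an explanatory message.",
                "Delete the test account created by the first submission.");
    }
}
```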

  6. All Test Cases are Not Created Equal… • Level 1: • Critical test cases. • Designed to verify the system runs and is safe. • Level 2: • General test cases. • Verify that day-to-day functions work correctly. • Still permit testing of other aspects of the system. • Level 3: • Detailed test cases. • Test requirements that are of lesser importance. • Somewhere in these levels may very well be the testing of non-functional requirements such as testing under peak loading (numbers of simultaneous users, files being accessed, etc.), reliability, performance, recovery, simultaneous / concurrent database access, remote object interfacing, … and more. (A tagging sketch follows.)
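One possible way to make the levels executable, assuming JUnit 5: tag each test with its level so the build can run the critical set on every build and the detailed set less often. SmokeChecks is a hypothetical helper standing in for real checks.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LevelledTests {

    @Test @Tag("level1")          // critical: system runs and is safe
    void systemStartsAndShutsDownCleanly() {
        assertTrue(SmokeChecks.startAndStop());
    }

    @Test @Tag("level2")          // general: day-to-day function
    void customerCanPlaceAnOrder() {
        assertTrue(SmokeChecks.placeSampleOrder());
    }

    @Test @Tag("level3")          // detailed: lesser-importance requirement
    void reportFooterShowsPageNumbers() {
        assertTrue(SmokeChecks.reportHasPageNumbers());
    }
}
```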

  7. Determining Test Cases by Enumerating Attributes • Important test cases test every aspect of the requirements. • Each detail in the requirements is called an attribute. • An attribute can be thought of as something that is testable. • A good first step when creating a set of test cases is to enumerate the attributes: what is it you are trying to demonstrate / validate? (A small enumeration sketch follows.) • Implicit attributes: there are often many attributes that are implicit. • Here, consider both explicit attributes (from use cases and non-functional requirements) and implied attributes. • Let’s now turn our attention to testing strategies.
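A small illustrative sketch of enumerating the attributes of an invented requirement and pairing each attribute (including one implicit attribute) with a test case id; the requirement text and the ids are made up for the example.

```java
import java.util.Map;

public class AttributeEnumeration {
    public static void main(String[] args) {
        // Requirement (invented): "The system shall let a logged-in user
        // change their password, which must be 8-64 characters long."
        Map<String, String> attributes = Map.of(
                "user must be logged in",            "TC-101",
                "password can be changed",           "TC-102",
                "minimum length 8 is enforced",      "TC-103",
                "maximum length 64 is enforced",     "TC-104",
                "implicit: old password re-checked", "TC-105"); // implied attribute

        // each attribute maps to at least one test case, so nothing is missed
        attributes.forEach((attr, tc) ->
                System.out.println(tc + " covers: " + attr));
    }
}
```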

  8. 10.9 Strategies for Testing Large Systems • Big Bang Testing versus Integration Testing • In Big Bang Testing, you take the entire system and test it as a unit. • Incremental Testing: a better strategy in most cases. • Test each individual subsystem in isolation. • Continue testing as you add more and more subsystems to the final product. • Incremental testing can be performed horizontally or vertically, depending on the architecture. • Horizontal testing can be used when the system is divided into separate sub-applications.

  9. Top-Down Testing - Stubs • User interface: start here! • Underlying functionality may be simulated by stubs. • Stubs do not perform any real computations or manipulate any real data. • They are placeholders to be completed ‘later’ (see the sketch below). • Work downwards, integrating lower and lower levels. • Big drawback: the cost of writing the stubs. • You may need to simulate ‘returns’ or ‘responses’. • This can be problematic. • But it is a good technique too.
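A minimal stub sketch, assuming a TaxService interface that the real system will eventually implement: the stub returns a canned value so the layers above it can be exercised before the real computation exists.

```java
// The interface the upper layers call; the real implementation is not
// written yet.
public interface TaxService {
    double taxFor(double orderTotal);
}

// Stub: no real computation, just a simulated response so callers can run.
class TaxServiceStub implements TaxService {
    @Override
    public double taxFor(double orderTotal) {
        return 0.0;   // canned 'return'; replaced by real logic later
    }
}
```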

  10. Bottom-Up Testing - Drivers • Start at the very lowest levels of the software. • The expense here: you need drivers to test the lower layers. • Drivers are simple programs designed specifically for testing that make calls to the lower layers and/or provide parameters (see the sketch below). • Drivers in bottom-up testing have a similar role to stubs in top-down testing, and are time-consuming to write. • Drivers may need to provide various parameters and temporary files, and may have to be coded to recognize the proper value / type of the data returned!
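A minimal driver sketch: a small throwaway program that calls a low-level layer directly, supplies parameters, and checks that the returned value is plausible. TaxTables and its rateFor() method are hypothetical.

```java
public class TaxTablesDriver {
    public static void main(String[] args) {
        // call the lower layer directly, providing the parameters it needs
        double rate = TaxTables.rateFor("CA", 2024);

        // the driver itself must know what a correct answer looks like
        if (rate < 0.0 || rate > 0.25) {
            System.err.println("FAIL: implausible rate " + rate);
            System.exit(1);
        }
        System.out.println("PASS: rateFor(\"CA\", 2024) = " + rate);
    }
}
```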

  11. Sandwich Testing • A hybrid between bottom-up and top-down testing. • Top down: test the user interface in isolation, using stubs. • Bottom up: test the very lowest level functions, using drivers. These may be utility functions or pre-existing functions that are being reused. • When the complete system is integrated, only the middle layer remains on which to perform the final set of tests. • Sandwich testing works especially well when reusing existing logic.

  12. Vertical Strategies for Incremental Integration Testing • What does all this mean to you? Explain.

  13. The “Test-Fix-Test” Cycle • When a failure occurs during testing: • Each failure report should be entered into a failure tracking system. These are likely formalized. • Reports are screened and assigned a priority. • Known Bugs List: low-priority failures might be put here. • They may be included with the software’s release notes. • Synonyms: some failure reports might be merged if they appear to result from the same defect. • Somebody / some office is assigned to investigate a failure. • That person / office tracks down the defect and fixes it. • Finally, a new version of the system is created, ready to be tested again. • We will discuss this in detail, as the above is quite simplistic and a number of important facts are not noted.

  14. The Ripple Effect – Regression Errors • There is a high probability that efforts to remove defects add new defects. • How so? • The maintainer tries to fix problems without fully understanding the ramifications of the changes. • The maintainer makes ordinary human errors. • The system regresses into a more and more failure-prone state. • The maintenance programmer is given insufficient time to properly analyze and adequately repair the defect due to schedule pressures. • All programmers are not created equal. • Regression Testing • Based on storing / documenting previous tests. • Heuristic: put your sharpest new hire into Maintenance!

  15. Regression Testing • It is far too expensive to re-run every single test case every time a change is made to the software. • Only a subset of the previously-successful test cases is re-run. • This process is called regression testing. • The tests that are re-run are called regression tests. • Regression test cases are carefully selected to cover as much as possible of the system that could in any way have been affected by a maintenance change, or any kind of change (see the sketch below). • The “Law of Conservation of Bugs”: • The number of bugs remaining in a large system is proportional to the number of bugs already fixed.
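One possible mechanics for running only the selected subset, assuming JUnit 5 with the junit-platform-suite artifact on the classpath: tag the chosen test cases "regression" and re-run just those through a suite class. The package name is a placeholder.

```java
import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

// Re-runs only the carefully selected regression subset, not every test.
@Suite
@SelectPackages("com.example.app")   // hypothetical package under test
@IncludeTags("regression")
public class RegressionSuite { }
```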

  16. Deciding When to Stop Testing • All of the level 1 test cases must have been successfully executed. • Pre-defined percentages of the level 2 and level 3 test cases must have been executed successfully. • The targets must have been achieved and maintained for at least two cycles of ‘builds’. • A build involves compiling / integrating all components. • Failure rates can fluctuate from build to build because: • Different sets of regression tests are run. • New defects are introduced. • But the notion of ‘stopping’ is quite real (a small sketch of the rule follows). • Discuss: show stoppers!
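A small sketch of the stopping rule described above, expressed as code; the level 2 and level 3 target percentages are placeholders, not values from the book.

```java
public class StopTestingCheck {
    static boolean readyToStop(int l1Passed, int l1Total,
                               double l2PassRate, double l3PassRate,
                               int consecutiveBuildsMeetingTargets) {
        boolean allCriticalPass = l1Passed == l1Total;          // every level 1 case passes
        boolean level2Ok = l2PassRate >= 0.95;                  // assumed target
        boolean level3Ok = l3PassRate >= 0.80;                  // assumed target
        boolean stable = consecutiveBuildsMeetingTargets >= 2;  // held for two builds
        return allCriticalPass && level2Ok && level3Ok && stable;
    }

    public static void main(String[] args) {
        System.out.println(readyToStop(120, 120, 0.97, 0.85, 2)); // true
        System.out.println(readyToStop(119, 120, 0.99, 0.90, 3)); // false: a level 1 case fails
    }
}
```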

  17. The Roles of People Involved in Testing • The first pass of unit and integration testing is called developer testing. • Preliminary testing performed by the software developers who do the design and programming. • Really, there are many more ‘levels’ than this… it depends on the organization and how ‘they’ undertake testing strategies. • Depending on the architecture, there may well be subsystem testing, system testing, various kinds of integration testing, etc. (Different shops may call these by different names.) • Independent testing is (often) performed by a separate group. • They do not have a vested interest in seeing as many test cases pass as possible. • They develop specific expertise in how to do good testing, and how to use testing tools. • NOT all organizations have independent test groups! (Discuss) • Done after ‘unit’ testing, perhaps for system testing prior to deployment. • You can bet there is an independent test group in medium to large organizations. • Discuss: Quality Control Branch; editors; independent test groups: to whom do they report? etc.

  18. Testing Performed by Users and Clients • Alpha testing • Performed by the user or client, but under the supervision of the software development team. • Generally the software is clearly not ready for release; it may be partial. • Maybe testing a particular major feature… • Beta testing • Performed by the user or client in a normal work environment. • May be run at the client's shop… • Beta testers are recruited from the potential user population. • An open beta release is the release of low-quality software to the general population (not too low now…). • Acceptance testing • Performed by users and customers. • However, the customers exercise the application on their own initiative, often in real, live situations, under real loading, etc. • AGAIN, these definitions may vary from organization to organization. (My experience: Environmental Systems Test 1 and 2 and Operational Field Test (OFT))

  19. Beta Testing • Discuss: selection of beta-test sites • Sizes / dependencies of shops • Small, medium, large • Local; distributed; platforms… • Motivations for them / us • Early look at the software • See about interfaces • Get a leg up on ‘market share’ • Ensure ‘their’ requirements are met • Costs! • Taken out of their hide! • Run applications in parallel? • More (maybe cut a deal to serve as a beta-test site)

  20. Acceptance Testing • End-user • Test the ‘toot’ out of the application. • Remember: to the end-user, the interface IS the application. • Testing is usually black box: • Functional; can users use the application to perform their jobs: utility, usability, functionality, timing, response, … • More black box: • May include non-functional testing such as loading situations, performance, response times, transaction volumes, etc. • Despite all levels of testing, there will be bugs! • This is a fact of life. • Then, maintenance begins!
