
Testing




  1. Testing Paul Sorenson Department of Computing Science University of Alberta CMPUT 401 Software Engineering Introduction to Testing

  2. What is software testing? Glenford Myers [The Art of Software Testing, John Wiley & Sons, 1979] says it is the: Process of executing a program with the intent of finding errors. This makes it a challenging task. Why?
• it's not easy to find errors in [large] software programs
• it's a destructive activity - your purpose is to find faults
• this can be demoralizing and unrewarding if not treated positively

  3. [Flow graph: four branch paths plus one loop-back (#paths = 4 + 1 = 5 per iteration), inside a loop that executes up to 20 times] Can we test until all the errors are gone? For some types of small programs - YES. For large programs or programs with extensive loops - NO. There are 5^20 ≈ 10^14 possible paths. It would take over three thousand years to test this program if each test took as little as 1 millisecond to perform.
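The slide's arithmetic can be checked in a few lines. This is a sketch: the figures of 5 paths per iteration, 20 iterations, and 1 millisecond per test come from the slide; the conversion to calendar years is my own back-of-the-envelope step.

```python
# Check the path-counting argument: 5 paths per loop iteration,
# up to 20 iterations, one test per millisecond.
paths = 5 ** 20                          # distinct execution paths
seconds = paths / 1000                   # at 1 test per millisecond
years = seconds / (365 * 24 * 60 * 60)   # rough calendar years

print(paths)         # 95367431640625, i.e. roughly 10**14
print(round(years))  # about 3024 -- over three thousand years
```

Even at a wildly optimistic millisecond per test, exhaustive path testing of this tiny program is hopeless, which is the slide's point.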

  4. Characteristics of Good Testing
• Good testing finds bugs.
• Good testing should be based on requirements.
• Testing can only show the presence of bugs, it can never show their absence.
• Testing is best conducted by independent testers and not the people who wrote the code. This has been verified in research studies. Why is this the case?

  5. Testers Developers are too close to the code and “understand” it too well. They are also often driven by delivery schedules and are busy fixing found bugs. Independent testers will try to break the code and should be driven by quality concerns. A disadvantage is that it will take them some additional time to learn the system.

  6. Types of Testing
Unit testing - testing at the basic unit level (e.g., module or method). Two types: functional (black box) and structural (white box).
Integration testing - testing combinations of units up to a subsystem level, using scaffolding techniques and/or procedure or method stubs as appropriate.
Acceptance testing - testing driven by user requirements, which can usually be linked to use cases developed as part of the requirements definition process. When the entire system is tested inside the development organization's environment with the user present, it is also called alpha testing.
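A minimal sketch of a black-box unit test using Python's `unittest` module. The `is_leap` function and its cases are hypothetical, chosen only to show tests derived from a specification rather than from reading the code:

```python
import unittest

# Hypothetical unit under test: a leap-year predicate.
def is_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Black-box cases: chosen from the calendar specification,
    # not from inspecting the implementation.
    def test_typical_leap_year(self):
        self.assertTrue(is_leap(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(is_leap(2000))

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A structural (white-box) tester would instead choose inputs to exercise each branch of the `and`/`or` expression in the implementation.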

  7. Types of Testing (cont.)
Beta testing - a product or system is tested independently of the development organization, usually at selected customer sites.
Stress testing - involves testing for certain “non-functional” characteristics such as performance, load, security, multi-platform support, etc.
Regression testing - testing after a system has undergone change due to bug fixes or the introduction of new functionality. It is not sufficient to test only those parts of the system that have changed: all components interacting with changed components must be re-tested in regression testing.

  8. Acceptance Testing
• A process that a client uses to (weakly) verify that the delivered and installed product is ready to be put into production use.
• From the user's point of view, this means that every user-oriented function (i.e., all important use cases) should be thoroughly exercised.
• From a user-interface perspective, include boundary cases where appropriate, such as typing an entry in a data field that is longer than the space allocated for it in the user interface.
• Define and then follow an acceptance testing process.
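The boundary case mentioned above (an entry longer than its field) can be sketched as a pair of checks at and just past the limit. The 32-character field width and the `validate_name` helper are assumptions for illustration:

```python
MAX_NAME_LEN = 32  # assumed width of the data field

def validate_name(entry: str) -> bool:
    # Reject empty input and input longer than the field allows.
    return 0 < len(entry) <= MAX_NAME_LEN

# Boundary cases: exactly at the limit, and one character past it.
assert validate_name("x" * MAX_NAME_LEN)
assert not validate_name("x" * (MAX_NAME_LEN + 1))
assert not validate_name("")
```

Testing exactly at the boundary and one step beyond it is where off-by-one errors (`<` versus `<=`) are caught.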

  9. Acceptance Testing Process
• Begin by developing a plan -- this plan will later be realized as a script for testers to follow.
• Identify the test cases and classify them according to levels of significance. Three such levels are:
• Major - the system will not function properly if this test fails.
• Minor - the system will be usable, but certain features may be impaired or require work-arounds.
• Diagnostic - the test is designed to give information about a problem to the developers.
• Develop a script based on the plan in which all major tests are conducted before minor and diagnostic tests.

  10. Test Case A test case should contain the following information:
Subsystem Name: if the system is large and has several subsystems, it is often convenient to group tests by subsystem.
Test Name: provide a meaningful name.
Test Level: one of {major, minor, diagnostic}.
Test Details:
Pre-conditions: conditions that must hold before the test is run.
Procedure: actual steps to be followed (some steps can be executed automatically using scripting languages). Include any input parameters.
Expected Behaviour: expected outcomes of the test.
Minor deviations: state any known deviations that might occur for the test case (for example, because of the version of the operating system being used).
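The fields above map naturally onto a record type. A sketch in Python, where the field names follow the slide but the level validation is my own addition:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    subsystem: str           # Subsystem Name
    name: str                # Test Name
    level: str               # Test Level: one of {"major", "minor", "diagnostic"}
    preconditions: str
    procedure: str
    expected_behaviour: str
    minor_deviations: str = ""   # optional, often empty

    def __post_init__(self):
        # Reject levels outside the three defined on the slide.
        if self.level not in {"major", "minor", "diagnostic"}:
            raise ValueError(f"unknown test level: {self.level}")

# Hypothetical instance loosely modelled on the NIDS example that follows.
tc = TestCase("NIDS", "NIDS startup", "major",
              preconditions="no other NIDS running on the default port",
              procedure="start the NIDS as documented",
              expected_behaviour="nids reports that it is listening")
```

Keeping test cases as structured records rather than free text makes it easy to group them by subsystem and to sequence major tests before minor and diagnostic ones.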

  11. Example Test Case
Subsystem Name: NIDS
Test Name: NIDS startup
Test Level: Major
Test Details:
Pre-conditions: no other NIDS must be up and running on the default port
Procedure: start the NIDS as below
Expected Behaviour:
% nids
nids started at Thu Mar 2 11:15:11 MST 1998
LOG nids started at Thu Mar 2 11:15:11 MST 1998
LOG nids is listening to port 5000
LOG Listening for connection 0
Minor deviations: the port number will be the NIDS default (this should be recorded for other tests); the date and time should be current.

  12. Acceptance Testing Scripts
• Typically acceptance testing scripts are hierarchically organized by subsystem and function.
• The top level gives the overall plan for sequencing the tests. It should indicate which tests can be done in parallel, and what results must be achieved in order to proceed to the next tests. A test can have three results:
• passed (P) - the test is passed
• reservations (R) - the test is passed except for minor deviations that do not seriously affect functionality
• failed (F) - the test failed
• The minimum acceptable result for each test is specified, and the step in the plan proceeds if all tests achieve the minimum standard.
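The minimum-standard rule above reduces to a small comparison. The ordering F < R < P is implied by the slide; the function name is my own:

```python
# Results ordered worst to best: failed < reservations < passed.
RANK = {"F": 0, "R": 1, "P": 2}

def step_may_proceed(results):
    """results: (minimum_required, got) pairs for one step of the plan."""
    return all(RANK[got] >= RANK[required] for required, got in results)

assert step_may_proceed([("P", "P"), ("R", "R")])
assert step_may_proceed([("R", "P")])      # exceeding the minimum is fine
assert not step_may_proceed([("P", "R")])  # got reservations where a pass was required
```

This matches the example scripts that follow, where each test records its result as Min Required/Got.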

  13. Example Test Script
Step 1 - Status: incomplete Date: 1998 Mar 01 Duration: .5 hr
Subsystem: NIDS
Objective: Verify major use cases
Tests:
Test Name       Result (Min Required/Got)
NIDS startup    P/P (note 1)
NIDS shutdown   P/P
NIDS add        P/R (note 2)
NIDS remove     P/P
NIDS update     P/P
Notes:
1. NIDS add did not permit passwords with leading number.
2. Stopped testing; sent in bug report re add.

  14. Example Test Script (cont.)
Step 2 - Status: Date: Duration:
Subsystem: GT
Objective: Establish minimal functionality prior to detailed tests.
Tests:
Test Name              Result (Min Required/Got)
GT two party connect   P
GT two party text      R
Notes:

  15. Example Test Script (cont.)
Schedule Note: the next two steps can be done in parallel.
Step 3.1 - Status: Date: Duration:
Subsystem: GT
Objective: Verify major graphics functionality.
Tests:
Test Name              Result (Min Required/Got)
Sub-plan GT-graphics   P
Notes:
Step 3.2 - Status: Date: Duration:
Subsystem: NIDS
Objective: Verify NIDS password processing.
Tests:
Test Name        Result (Min Required/Got)
NIDS passwords   P
Notes:

  16. Term Project Acceptance Testing The procedure for acceptance testing is as follows:
1. Each team will take delivery of another team's product. They will unpack the product, install it, and perform the supplied acceptance tests. NOTE: you had better create installation documentation!
2. If problems are encountered during this process, the client team will follow the problem-reporting procedures given in the deliverable. If the supplier is unresponsive, the acceptance tester can reject the product.
3. The team can then subject the product to their own acceptance tests, which should focus on the key claimed features of the product.

  17. Term Project Acceptance Testing
4. You must maintain a complete log of your acceptance testing activities, including any communication you had with the supplier team. This log is submitted as part of the acceptance testing report.
5. The results of the testing should be recorded in a brief report, with a clearly indicated decision of:
accept - the product passes all acceptance tests and has high quality according to the tester's standards,
accept with reservations - the product fails a few minor tests or has quality problems (e.g., inaccurate or incomplete documentation), or
reject - the product fails one or more major tests.
6. You will be required to demonstrate your knowledge of the other team's project and justify your acceptance report during the acceptance-test demonstrations to be held in the last week of classes (Dec. 8-9). At this time you must also provide the instructor with a copy of your acceptance report.
