Why Testing a Software?
Error, Defect and Failure … What’s the difference?
DKT311 Software Engineering Dr. Rozmie Razif bin Othman
Objectives of Testing
• Finding defects
• Gaining confidence about the level of quality
• Providing information for decision-making
• Preventing defects
7 Testing Principles
• Testing shows presence of defects. Testing can show that defects are present, but cannot prove that there are none. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness.
• Exhaustive testing is impossible. Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
• Early testing. To find defects early, testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
• Defect clustering. Testing effort should be focused proportionally to the expected, and later observed, defect density of modules. A small number of modules usually contains most of the defects discovered during release testing, or is responsible for most of the operational failures.
7 Testing Principles
• Pesticide paradox. If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system and find potentially more defects.
• Testing is context dependent. Testing is done differently in different contexts; for example, safety-critical software is tested differently from an e-commerce site.
• Absence-of-errors fallacy. Finding and fixing defects does not help if the system built is unusable and does not fulfil the users’ needs and expectations.
V-Model (Sequential Development Model)
Each development phase on the left of the “V” is verified by a corresponding test level on the right:
• Business Requirement ↔ User Acceptance Testing
• System Requirement ↔ System Testing
• High Level Definition ↔ Integration Testing
• Low Level Definition ↔ Unit Testing
• Code (at the base of the V)
Component Testing (Unit Testing)
• Searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc.
• Uses stubs or drivers if necessary.
• May include testing of functional or non-functional requirements.
• Testers have access to the code (or the tests are carried out by the developer/programmer), with the support of the development environment.
• One approach in component testing is to prepare and automate test cases before coding (an example tool is JUnit).
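As a minimal sketch of a component test that uses a stub: the `order_total` function and the tax service are invented for illustration, and Python’s `unittest` stands in for JUnit. The stub replaces a collaborator that may not exist yet, so the component can be tested in isolation.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: computes an order total using a
# tax-rate service that may not be implemented yet.
def order_total(amount, tax_service):
    return round(amount * (1 + tax_service.rate_for("MY")), 2)

class OrderTotalTest(unittest.TestCase):
    def test_total_includes_tax(self):
        stub = Mock()                      # stub stands in for the real service
        stub.rate_for.return_value = 0.06  # fixed, predictable test data
        self.assertEqual(order_total(100.00, stub), 106.00)
        stub.rate_for.assert_called_once_with("MY")
```

Run with `python -m unittest` from the development environment; writing this test before `order_total` exists is the test-first approach the slide mentions.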
Integration Testing
• Tests the interfaces between components, and interactions with different parts of a system, such as the operating system, the file system and hardware, and interfaces between systems.
• There may be more than one level of integration testing:
  • Component integration testing – tests the interactions between components; done after component testing.
  • System integration testing – tests the interactions between different systems, or between hardware and software; may be done after system testing. Cross-platform issues may be significant.
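A component integration test can be sketched as follows. Both components are hypothetical; the point is that the test drives them through the interface they share (here, the dictionary the parser produces), rather than testing either unit alone.

```python
# Hypothetical components integrated through a shared record format.

def parse_order(line):
    """Parse 'sku,qty' into the record the calculator expects."""
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def order_price(record, price_list):
    """Consumes the parser's record; breaks if the field names drift."""
    return price_list[record["sku"]] * record["qty"]

# Component integration test: output of one component feeds the other,
# so a mismatch in the interface (field names, types) is caught here.
record = parse_order("A42, 3")
assert order_price(record, {"A42": 2.50}) == 7.50
```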
System Testing
• Concerned with the behaviour of the whole system/product.
• The test environment should correspond to the final target or production environment as closely as possible, in order to minimize the risk of environment-specific failures not being found in testing.
• System testing includes tests based on risks and/or on requirement specifications:
  • Business processes
  • Use cases
  • High-level text descriptions
  • Models of system behaviour
Specification-based or Black-box Technique
• Equivalence Partitioning
  • Inputs to the software or system are divided into groups (partitions) that are expected to exhibit similar behaviour.
  • Both valid and invalid input data need to be considered.
  • Tests can be designed to cover all partitions.
  • Applicable at all levels of testing.
• Boundary Value Analysis
  • Behaviour at the edge of a partition is more likely to be incorrect than behaviour within the partition.
  • When designing test cases, a test for each boundary value is chosen.
Example
You are testing a credit-card-only, unattended gasoline pump. Once the credit card is validated, the customer has selected the desired grade, and the pump is ready to pump, the customer may cancel the transaction and owe nothing; however, once pumping starts, gasoline is sold in hundredths (0.01) of a gallon. The pump continues to pump gasoline until the user stops it or a maximum of 50.00 gallons has been dispensed.
• Derive the test inputs using the Equivalence Partitioning technique.
• Derive the test inputs using Boundary Value Analysis.
Solution (Equivalence Partitioning)
The input range splits into three partitions:
• Partition “cannot be sold 1”: up to 0.00 gallons
• Partition “can be sold”: 0.01 to 50.00 gallons
• Partition “cannot be sold 2”: 50.01 gallons and above
Select one value from each partition, e.g. −2.50 (cannot be sold 1), 10.00 (can be sold) and 75.00 (cannot be sold 2).
Test inputs using the Equivalence Partitioning technique: −2.50, 10.00 and 75.00 gallons.
Solution (Boundary Value Analysis)
The boundaries lie at the edges of the partitions: the 1st boundary between 0.00 and 0.01, the 2nd between 50.00 and 50.01. Choose the two values immediately on either side of each boundary:
• 1st boundary: 0.00 and 0.01
• 2nd boundary: 50.00 and 50.01
Test inputs using Boundary Value Analysis: 0.00, 0.01, 50.00 and 50.01 gallons.
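The derived test inputs can be captured as automated checks. The `can_be_sold` function below is an assumed model of the pump’s rule (amounts from 0.01 to 50.00 gallons can be sold), written only to make the partitions and boundaries executable.

```python
# Assumed model of the pump rule from the example.
def can_be_sold(gallons):
    return 0.01 <= gallons <= 50.00

# Equivalence Partitioning: one representative value per partition.
assert can_be_sold(-2.50) is False   # partition "cannot be sold 1"
assert can_be_sold(10.00) is True    # partition "can be sold"
assert can_be_sold(75.00) is False   # partition "cannot be sold 2"

# Boundary Value Analysis: the two values either side of each boundary.
assert can_be_sold(0.00) is False    # just below the 1st boundary
assert can_be_sold(0.01) is True     # just above the 1st boundary
assert can_be_sold(50.00) is True    # just below the 2nd boundary
assert can_be_sold(50.01) is False   # just above the 2nd boundary
```

Note how the seven test inputs come straight from the two solutions above; the technique chooses the values, the code merely records them.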
Specification-based or Black-box Technique
• Decision Table Testing
  • Used when the requirements contain logical conditions.
  • Decision tables are a precise yet compact way to model complicated logic. Like if-then-else and switch-case statements, they associate conditions with actions to perform.
  • But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way.
• State Transition Testing
  • Self-study
• Use Case Testing
  • Self-study
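A minimal sketch of decision table testing, using an invented card-approval table: conditions form the columns of each rule, and decision table testing derives one test case per rule.

```python
# Hypothetical decision table for approving a card transaction.
# Conditions: card_valid, within_limit.  Action: approve or decline.
#   Rule 1: T T -> approve
#   Rule 2: T F -> decline
#   Rule 3: F T -> decline   (limit is irrelevant when the card is invalid)
#   Rule 4: F F -> decline
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "decline",
    (False, True):  "decline",
    (False, False): "decline",
}

def decide(card_valid, within_limit):
    return DECISION_TABLE[(card_valid, within_limit)]

# Decision table testing: one test case per rule (column) of the table.
assert decide(True, True) == "approve"
assert decide(True, False) == "decline"
assert decide(False, True) == "decline"
assert decide(False, False) == "decline"
```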
Structure-based or White-box Testing
• Statement Testing and Coverage
  • The assessment of the percentage of executable statements that have been exercised by a test suite.
  • Test cases are designed to increase statement coverage (towards 100% coverage).
• Decision Testing and Coverage
  • Related to branch testing.
  • Stronger than statement coverage (i.e. 100% decision coverage guarantees 100% statement coverage, BUT NOT vice versa).
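The gap between the two coverage levels can be shown with a tiny invented function: a single test executes every statement, yet decision coverage additionally requires the branch where the condition is false.

```python
# Illustrates why decision coverage is stronger than statement coverage.
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9   # the only statement inside the decision
    return price

# This one test executes every statement: 100% statement coverage ...
assert apply_discount(100, True) == 90.0
# ... but only the True outcome of the decision. Decision coverage also
# requires a test for the False outcome, where the if-body is skipped.
assert apply_discount(100, False) == 100
```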
Experience-based Testing
• Tests are derived from the tester’s skill and intuition, and their experience with similar applications or technologies.
• Useful in identifying special tests that are not easily captured by formal techniques.
• The most common approach is called error guessing: testers anticipate defects based on experience.
• A more systematic approach is to list all possible defects and to design tests to attack them. This technique is called fault attack.
• Exploratory testing:
  • “A style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.” (Cem Kaner, 1983)
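An error-guessing sketch (the `average` function and the guessed inputs are purely illustrative): the tester attacks inputs that experience says commonly expose defects, such as empty collections and extreme values, alongside one normal case.

```python
# Function under test, guarded against the classic empty-input defect.
def average(values):
    if not values:
        raise ValueError("empty input")
    return sum(values) / len(values)

# Normal case, then "guessed" attack inputs drawn from experience.
assert average([2, 4, 6]) == 4.0
try:
    average([])               # unguarded code would raise ZeroDivisionError
    raise AssertionError("expected ValueError")
except ValueError:
    pass
assert average([10**9, 10**9]) == 10**9   # very large values
```

Listing such suspect inputs systematically for a whole component, rather than guessing ad hoc, is the fault-attack approach described above.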