Part III: Execution-Based Verification and Validation
Katerina Goseva-Popstojanova
Lane Department of Computer Science and Electrical Engineering
West Virginia University, Morgantown, WV
katerina@csee.wvu.edu
www.csee.wvu.edu/~katerina
Outline
• Introduction
  • Definitions, objectives and limitations
• Testing principles
• Testing criteria
• Testing techniques
  • Black box testing
  • White box testing
  • Fault based testing
    • Mutation testing
    • Fault injection
Outline
• Testing levels
  • Unit testing
  • Integration testing
    • Top-down
    • Bottom-up
  • Regression testing
  • Validation testing
Outline
• Non-functional testing
  • Configuration testing
  • Recovery testing
  • Safety testing
  • Security testing
  • Stress testing
  • Performance testing
Software Quality Assurance
• Quality of software is the extent to which the software satisfies its specifications – it is not “excellence”
• Independence between the development team and the SQA group
  • Neither manager should be able to overrule the other
• Does an independent SQA group add considerably to the cost of software development?
Software Verification & Validation
• Testing is an integral part of the software process and must be carried out throughout the life cycle
• Verification
  • Determine if the phase was completed correctly
  • “Are we building the product right?”
• Validation
  • Determine if the product as a whole satisfies its requirements
  • “Are we building the right product?”
Cost of software life cycle phases
• Requirements analysis 3%
• Specification 3%
• Design 5%
• Coding 7%
• Testing 15%
• Maintenance 67%
Cost of finding and fixing faults
[Figure: the cost of finding and fixing a fault rises steeply from the requirements phase through implementation to deployment]
Cost of finding and fixing faults
• Changing a requirements document during its first review is inexpensive. It costs more when requirements change after code has been written: the code must be rewritten.
• Fixing faults is much cheaper when programmers find their own errors. There is no communication cost: they don’t have to explain the error to anyone else, enter the fault into a fault tracking database, or wait while testers and managers review the fault status, and the fault does not block or corrupt anyone else’s work.
• Fixing a fault before releasing a program is much cheaper than fixing it during maintenance or bearing the consequences of failure.
Fault & Failure
• Definitions
  • Failure – departure from the specified behavior
  • Fault – defect in the software that, when executed under particular conditions, causes a failure
• What we observe during testing are failures
• A failure may be caused by more than one fault
• A fault may cause different failures
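The fault/failure distinction can be sketched with a small example. The function below is hypothetical (not from the slides); its fault stays dormant for many inputs and causes a failure only under a particular condition:

```python
# Hypothetical unit: intended to return the maximum of a non-empty list.
def buggy_max(values):
    largest = 0          # fault: should start from values[0]
    for v in values:
        if v > largest:
            largest = v
    return largest

# For many inputs the fault stays dormant -- no failure is observed:
print(buggy_max([3, 1, 7]))    # 7, correct by coincidence

# Under the particular condition "all values negative", the same fault
# produces a failure (a departure from the specified behavior):
print(buggy_max([-5, -2]))     # 0, but the specified result is -2
```

This also shows why testing observes failures rather than faults: the defect is in the code all along, but only certain inputs make it visible.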
Execution-based testing
• Two types of testing
  • Non-execution based (walkthroughs and inspections)
  • Execution based
• Execution-based testing is the process of inferring certain behavioral properties of a product based, in part, on the results of executing the product in a known environment with selected inputs
Test data and test cases
• Test data – inputs which have been devised to test the system
• Test cases – inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
• All test cases must be
  • Planned beforehand, including the expected output
  • Retained afterwards
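A minimal sketch of test cases planned beforehand: each pairs devised test data with the output predicted from the specification. The unit under test, `absolute_value`, is a hypothetical example, not from the slides:

```python
# Hypothetical unit under test.
def absolute_value(x):
    return x if x >= 0 else -x

# Test cases planned beforehand: (test data, expected output predicted
# from the specification). Retained so they can be rerun later.
test_cases = [
    (5, 5),
    (-5, 5),
    (0, 0),
]

for data, expected in test_cases:
    actual = absolute_value(data)
    assert actual == expected, f"input {data}: got {actual}, expected {expected}"
print("all test cases passed")
```

Recording the expected output with the input is what turns raw test data into a test case: without the prediction there is no way to decide whether a run passed.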
Structure of the software test plan
• Testing process – description of the major phases of the testing process
• Requirements traceability – testing should be planned so that all requirements are individually tested
• Tested items – the products which are to be tested should be specified
• Testing schedule – the overall testing schedule and the resource allocation for this schedule
Structure of the software test plan
• Test recording procedures – the results of the tests must be systematically recorded
• Hardware and software requirements – the software tools required and the estimated hardware utilization
• Constraints – constraints affecting the testing process, such as staff shortages
Testing workbenches
• Testing is expensive
• Testing workbenches provide a range of tools to reduce the time required and the total testing costs
Debugging
• When software testing detects a failure, debugging is the process of detecting and fixing the fault
[Figure: debugging cycle – execution of tests produces results; results suggest suspected causes; debugging identifies the causes; corrections are made; regression tests and additional tests are then run]
Simple non-software example
• A lamp in my house does not work
• If nothing in the house works, the cause must be in the main circuit breaker or outside the house
• I look around to see whether the neighborhood is blacked out
• I plug the suspect lamp into a different socket and a working appliance into the suspect circuit
Debugging approaches
• Brute force – most common, least efficient
• Backtracking – effective for small programs; beginning from the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found
• Cause elimination – a list of all possible causes is developed and tests are conducted to eliminate each; if initial tests indicate that a particular cause shows promise, the data are refined in an attempt to isolate the bug
What should be tested
• Utility
• Correctness
• Robustness
• Non-functional testing
  • Configuration testing
  • Recovery testing
  • Safety testing
  • Security testing
  • Stress testing
  • Performance testing
Utility
• Utility – the extent to which a user’s needs are met when a correct product is used under the conditions permitted by its specifications
• Does the product meet the user’s needs?
  • Functionality
  • Ease of use
  • Cost-effectiveness
Correctness
• A software product is correct if it satisfies its specification when operated under permitted conditions
• What if the specifications themselves are incorrect?
Correctness of specifications
• Specification for a sort
• A function trickSort which satisfies this specification:
Correctness of specifications
• Incorrect specification for the sort
• Corrected specification for the sort
Correctness
• The correctness of a product is meaningless if its specification is incorrect
• Correctness is NOT sufficient
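The trickSort slides above lost their figures, so here is one possible reconstruction of the idea (an assumption, not the slides' exact code): suppose the specification only demands "the output array is sorted in nondecreasing order" and forgets to require that the output is a permutation of the input. Then a useless function satisfies it:

```python
# Hypothetical reconstruction of the trickSort idea.
# Flawed spec: "the returned list is sorted in nondecreasing order."
def trick_sort(a):
    # The empty list is trivially sorted, so this satisfies the
    # flawed specification for every input -- without sorting anything.
    return []

print(trick_sort([3, 1, 2]))   # [] -- "correct" w.r.t. the flawed spec
```

The corrected specification must also require that the output contains exactly the elements of the input; `trick_sort` then no longer qualifies. This is why correctness with respect to an incorrect specification is meaningless.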
Robustness
• A software product is robust if it behaves satisfactorily on invalid inputs
• Deliberately test the product on invalid inputs (error based testing)
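A minimal sketch of error based testing, with `parse_age` as a hypothetical unit under test: invalid inputs are fed deliberately, and the check is that the product rejects them gracefully rather than crashing or returning garbage:

```python
# Hypothetical unit: parse a user-supplied age string.
def parse_age(text):
    try:
        age = int(text)
    except ValueError:
        return None          # robust: reject non-numeric input
    if not 0 <= age <= 150:
        return None          # robust: reject out-of-range values
    return age

# Valid input behaves as specified:
assert parse_age("42") == 42

# Error based testing: deliberately invalid inputs must be handled,
# not propagate as crashes:
for bad in ["", "abc", "-3", "1000"]:
    assert parse_age(bad) is None
print("robustness checks passed")
```

Note the contrast with correctness testing: these inputs fall outside the conditions permitted by the specification, so a merely correct product could do anything with them; a robust one still behaves sensibly.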
Testing principles
• All tests should be traceable to the requirements definition and specification
• Tests should be planned long before testing begins
• The Pareto principle applies to software testing – 80% of all faults detected during testing will likely be traceable to 20% of the program modules
• Exhaustive testing is not possible
  • The number of possible inputs is extremely large
  • The number of path permutations for even a moderately sized program is extremely large
Example
• A simple program like
    for i from 1 to 100 do
      print(if a[i] = true then 1 else 0 endif);
  has 2^100 different outcomes; exhaustively testing this program would take about 3 × 10^14 years
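A back-of-the-envelope check of the slide's numbers (the execution rate is my assumption, not stated on the slide): the loop prints 100 independent bits, so there are 2^100 distinct outcomes.

```python
# The program prints 100 independent boolean values -> 2**100 outcomes.
outcomes = 2 ** 100

# Assumed rate: 100 million test executions per second (an assumption
# chosen to reproduce the slide's order of magnitude).
rate = 1e8
seconds_per_year = 3.15e7

years = outcomes / rate / seconds_per_year
print(f"{outcomes:.2e} outcomes, roughly {years:.0e} years")
```

This lands around 10^14 years, consistent with the slide's estimate and far beyond any feasible testing budget.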
Limitations of testing
• Dijkstra, 1972 – “Testing can be a very effective way to show the presence of faults, but it is hopelessly inadequate for showing their absence”
• Faults are not detected during testing
  • Good news?
  • Bad news?
Test objective
• Provoking failures and detecting faults
• Increasing the confidence in failure-free behavior
Test adequacy criteria
• A test adequacy criterion can be used as a
  • Stopping rule – when has sufficient testing been done?
    • Statement coverage criterion – stop testing when all statements have been executed
  • Measurement – a mapping from the test set to the interval [0,1]
    • What percentage of statements has been executed?
  • Test case generator
    • If 100% statement coverage has not yet been achieved, select an additional test case that covers one or more statements not yet tested
Test adequacy criteria
• Test adequacy criteria are closely linked to test techniques
• A coverage based adequacy criterion (e.g., statement coverage) does not help us assess whether all error prone points have been tested
Strategic issues
• Specify product requirements in a quantifiable manner long before testing commences
• State testing objectives explicitly (e.g., coverage, mean time to failure, the cost to find and fix faults)
• Understand the users of the software and develop a profile for each user category
• Build robust software that is designed to test itself
Strategic issues
• Use reviews (walkthroughs and inspections) as a filter prior to execution based testing
• Develop a continuous improvement approach for the testing process
Testing techniques
• Black box testing, also called functional or specification based testing – test cases are derived from the specification; the internal software structure is not considered
• White box testing, also called structural or glass box testing – the internal software structure is considered in the derivation of test cases; test adequacy criteria are specified in terms of coverage (statements, branches, paths, etc.)
Testing techniques
• Fault based techniques
  • Fault injection – artificially seed a number of faults in the software
  • Mutation testing – a (large) number of variants (mutants) of the software is generated; each variant differs slightly from the original version
• Experiments indicate that there is no “best” testing technique
  • Different techniques tend to reveal different types of faults
  • The use of multiple testing techniques results in the discovery of more faults
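The mutation testing idea above can be sketched by hand (real mutation tools generate the variants automatically; the functions and test data here are illustrative assumptions): a mutant differs from the original in one small way, and a test set is judged by whether it can "kill" the mutant, i.e. produce a different result on at least one test case.

```python
# Original unit and one hand-made mutant.
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b           # single mutation: '+' replaced by '-'

# Test cases: (arguments, expected output per the specification).
tests = [((2, 3), 5), ((0, 0), 0)]

def killed(candidate):
    # A mutant is killed if at least one test case exposes the change.
    return any(candidate(*args) != expected for args, expected in tests)

print("original killed:", killed(original))   # False: passes all tests
print("mutant killed:", killed(mutant))       # True: (2, 3) yields -1, not 5
```

A mutant that survives points at a weakness in the test set: note that the `(0, 0)` case alone would not kill this mutant, since `0 - 0 == 0 + 0`. The fraction of mutants killed serves as an adequacy measure for the test set.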