
Part III: Execution-Based Verification and Validation



  1. Part III: Execution-Based Verification and Validation Katerina Goseva-Popstojanova Lane Department of Computer Science and Electrical Engineering West Virginia University, Morgantown, WV katerina@csee.wvu.edu www.csee.wvu.edu/~katerina

  2. Outline Introduction (definitions, objectives, and limitations); testing principles; testing criteria; testing techniques: black-box testing, white-box testing, fault-based testing (fault injection, mutation testing)

  3. Outline Testing levels: unit testing; integration testing (top-down, bottom-up, sandwich); regression testing; validation testing (acceptance testing, alpha and beta testing). Non-functional testing: configuration, recovery, safety, security, stress, and performance testing

  4. Unit testing • Usually the responsibility of the developer (except sometimes for critical systems) • Testing of individual program units (modules) • Interface • Ensure that information properly flows into and out of the unit under test • Local data structures • Exercise local data structures • Ascertain local impact on global data

  5. Unit testing • Boundary testing • Ensure that the module operates properly at boundaries • White-box testing • Control and data flow coverage • Error handling paths
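
As an illustration of boundary testing at the unit level, here is a minimal JUnit-style sketch in Java; the Grade class and its grade boundaries are hypothetical, introduced only for this example. The test cases sit exactly on and immediately next to the decision points.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical unit under test: maps a 0-100 score to a letter grade.
    class Grade {
        static String letterFor(int score) {
            if (score < 60) return "F";
            if (score < 70) return "D";
            if (score < 80) return "C";
            if (score < 90) return "B";
            return "A";
        }
    }

    // Boundary tests exercise values on and around the decision points (59/60, 89/90, 100).
    public class GradeBoundaryTest {
        @Test public void justBelowPassBoundary() { assertEquals("F", Grade.letterFor(59)); }
        @Test public void exactlyOnPassBoundary() { assertEquals("D", Grade.letterFor(60)); }
        @Test public void justBelowTopBoundary()  { assertEquals("B", Grade.letterFor(89)); }
        @Test public void atUpperLimit()          { assertEquals("A", Grade.letterFor(100)); }
    }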

  6. Unit testing • Because a module is not a standalone program, driver and/or stub software must be developed for each unit test • Driver – “main program” that accepts test case data, passes such data to the module to be tested, and prints relevant results • Stubs – “dummy modules” that replace modules that are called by the module to be tested • Creation of the drivers and stubs is overhead in the testing process
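
To make the driver/stub terminology concrete, the plain-Java sketch below is hypothetical: OrderProcessor plays the module to be tested, TaxServiceStub is a dummy replacement for a module it normally calls, and OrderProcessorDriver is the throwaway "main program" that accepts test case data and prints the results.

    // Hypothetical interface normally implemented by a lower-level module.
    interface TaxService {
        double taxFor(double amount);
    }

    // Stub: a "dummy module" standing in for the real tax module; returns a canned value.
    class TaxServiceStub implements TaxService {
        public double taxFor(double amount) { return amount * 0.10; }  // fixed 10% rate
    }

    // Module to be tested: depends on TaxService, which the stub replaces during unit testing.
    class OrderProcessor {
        private final TaxService tax;
        OrderProcessor(TaxService tax) { this.tax = tax; }
        double totalFor(double amount) { return amount + tax.taxFor(amount); }
    }

    // Driver: accepts test case data, passes it to the module under test, prints results.
    public class OrderProcessorDriver {
        public static void main(String[] args) {
            OrderProcessor unit = new OrderProcessor(new TaxServiceStub());
            double[] testCases = {0.0, 10.0, 99.99};
            for (double amount : testCases) {
                System.out.printf("total(%.2f) = %.2f%n", amount, unit.totalFor(amount));
            }
        }
    }

Both the driver and the stub exist only to exercise OrderProcessor in isolation; they are the overhead referred to above and are discarded or replaced by real modules later.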

  7. Unit testing environment (figure): a driver supplies test cases to the module to be tested, which calls stubs in place of the modules it depends on; results are reported back through the driver.

  8. Integration testing • Testing a complete system or subsystems composed of integrated modules • Integration testing is the responsibility of an independent SQA team • Integration testing is mainly black-box testing with tests derived from the specification

  9. Integration testing • Big bang approach – combine the entire program and test it as a whole • Usually results in chaos • Incremental approach – the program is constructed and tested in small segments • Faults are easier to isolate and correct

  10. Incremental integration testing • Top-down testing • Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate • Bottom-up testing • Integrate individual components in levels until the complete system is created • In practice, most integration involves a combination of these strategies, so-called sandwich testing

  11. Top-down testing • The main control module is used as a test driver • Stubs are substituted for all modules directly subordinate to the main control module • Subordinate stubs are replaced one at a time with actual modules

  12. Top-down testing - Myths • Most faults are related to control problems • NOT TRUE; at best 15% of the faults are related to control problems • Many of these will be corrected at the lower level • Control problems eliminated by pure top-down integration testing are only a few percent • Complexity decreases uniformly from top to bottom • NOT TRUE; complexity typically rises to a peak at about three or four levels down the calling tree and then decreases from there on

  13. Top-down testing - Myths • The system as a whole is tree-like • NOT TRUE; real systems may have multiple executive structures and may use elements in a complex way • There may be no single top-down path for testing • Testing with stubs is easier than with real routines • Only if the stubs are much simpler than the real modules • The system is the best test driver • Partially true; why not use good test driver tools if the real system is not available?

  14. Top-down testing • Advantage – architecture and major control flow are verified early in the test process • Problems occur when processing at low levels is required to adequately test upper levels • Stubs replace lower-level modules at the beginning of top-down integration, so no significant data can flow upward in the program structure • Choices • Delay many tests until stubs are replaced with actual modules • Develop stubs that perform limited functions that simulate the actual module (see the sketch below) • Integrate the software bottom-up
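
One way to let meaningful data flow upward during top-down integration (the second choice above) is a stub with limited simulated behavior: rather than a single fixed value, it serves a small table of canned responses. The sketch below is hypothetical; InventoryService stands for a lower-level module that has not yet been integrated.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical lower-level module that is not yet integrated.
    interface InventoryService {
        int stockLevel(String sku);
    }

    // Stub with limited functions that simulate the actual module: a few canned
    // responses let the upper-level modules be exercised with realistic data.
    class InventoryServiceStub implements InventoryService {
        private final Map<String, Integer> canned = new HashMap<>();

        InventoryServiceStub() {
            canned.put("SKU-1", 0);    // out of stock
            canned.put("SKU-2", 5);    // low stock
            canned.put("SKU-3", 500);  // plentiful
        }

        @Override
        public int stockLevel(String sku) {
            return canned.getOrDefault(sku, 0);  // unknown items treated as out of stock
        }

        public static void main(String[] args) {
            InventoryService inventory = new InventoryServiceStub();
            System.out.println(inventory.stockLevel("SKU-2"));   // 5
            System.out.println(inventory.stockLevel("UNKNOWN")); // 0
        }
    }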

  15. Bottom-up testing • Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction • A driver (a control program for testing) is written to coordinate test case input and output • The cluster is tested • Drivers are removed and clusters are combined moving upward in the program structure

  16. Bottom-up testing - Myths • If the units are thoroughly tested and carefully integrated, then so must be the whole • NOT TRUE; bridges rarely fall down because of bad steel, but because of bad architecture • Complexity increases from bottom to top • NOT TRUE • Once a fault is corrected at the lower level, it remains corrected • NOT TRUE in general; the correction may later prove to be wrong • Test drivers are easily built • Not really easier than stubs

  17. Bottom-up testing • Advantages • allows the maximum amount of flexibility in scheduling and parallel testing • makes incorporating preexisting or modified preexisting code relatively easy • Disadvantages • Fundamental architectural faults and control faults are unlikely to be discovered until much of the system has been tested • Correction of these faults might involve rewriting and re-testing of lower-level modules in the system

  18. Sandwich testing • Combination of top-down and bottom-up • The top two levels of the program structure are integrated top-down and tested with stubs • The rest is integrated bottom-up • The number of drivers is reduced substantially and the integration of clusters is greatly simplified

  19. Regression testing • Each time a new module is added as part of integration testing, the software changes • New data flow paths are established • New I/O may occur • New control flow is invoked • Corrective modifications – each time a fault is corrected, the software changes (integration testing and maintenance)

  20. Regression testing • Adaptive modifications – performed to ensure compatibility with the environment (maintenance) • Perfective modifications – performed to add new features to the program or to improve functionality or performance (maintenance)

  21. Regression testing • These changes may cause problems with functions that previously worked flawlessly • Regression testing is the re-execution of a subset of test cases that have already been conducted to ensure that changes do not introduce unintended behavior or additional faults

  22. Regression testing outline • Change request • the software needs to be changed to correct errors or to meet new requirements (adaptive or perfective modifications) • Software modification • fix the detected faults or make the changes needed to meet the new requirements • Test case selection • select test cases that test the new changes; ensure that these test cases will exercise the modified portions of the program • select a subset of test cases that have already been conducted to re-execute; ensure that the changes do not introduce unintended behavior or additional faults

  23. Regression testing outline • Test execution • Execute selected test cases; this step is usually automated since the number of test cases may be large • Failure identification, debugging, and correction • If the modified program does not behave as expected, identify and correct faults

  24. Regression testing - Test case selection • Retest all • Time-consuming and expensive • Modified programs may require new test cases • Some of the existing test cases may become obsolete for the modified program • Selective retest • Identify the need for new test cases • Select those test cases that are relevant to the modified program (see the sketch below)
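
A minimal sketch of selective retest, assuming (hypothetically) that coverage data records which modules each existing test case exercises; a test case is selected for re-execution when it touches at least one modified module. New test cases for the changed behavior itself still have to be written separately.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical selective-retest helper: given coverage data and the set of
    // modified modules, pick the existing test cases relevant to the change.
    public class RegressionSelector {

        static List<String> selectTests(Map<String, Set<String>> coverage, Set<String> modified) {
            List<String> selected = new ArrayList<>();
            for (Map.Entry<String, Set<String>> entry : coverage.entrySet()) {
                for (String module : entry.getValue()) {
                    if (modified.contains(module)) {   // test exercises a modified module
                        selected.add(entry.getKey());
                        break;
                    }
                }
            }
            return selected;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> coverage = new HashMap<>();
            coverage.put("testLogin",    new HashSet<>(Arrays.asList("auth", "session")));
            coverage.put("testCheckout", new HashSet<>(Arrays.asList("cart", "payment")));
            coverage.put("testReport",   new HashSet<>(Arrays.asList("reporting")));

            Set<String> modified = new HashSet<>(Arrays.asList("payment"));
            System.out.println(selectTests(coverage, modified));   // [testCheckout]
        }
    }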

  25. Critical modules • A critical module has one or more of the following characteristics • addresses several software requirements • has a high level of control (resides relatively high in the program structure) • is complex or error prone (cyclomatic complexity may be used as an indicator) • has definite performance requirements • Critical modules should be tested as early as possible; in addition, regression tests should focus on critical modules
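
Since cyclomatic complexity is cited above as an indicator of error-prone modules, a small worked count may help: V(G) = E - N + 2 for the control-flow graph, or equivalently the number of binary decisions plus one. The method below is hypothetical.

    // The method contains 3 decisions (the loop, the if, and the short-circuit &&),
    // so V(G) = 3 + 1 = 4: at least 4 test cases are needed to cover its
    // linearly independent paths.
    class ComplexityExample {
        static int clampedSum(int[] values, int max) {
            int sum = 0;
            for (int v : values) {          // decision 1: continue or exit the loop
                if (v > 0 && v <= max) {    // decisions 2 and 3: if and &&
                    sum += v;
                }
            }
            return sum;
        }
    }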

  26. Validation testing • Validation testing is achieved through a series of black-box test cases that demonstrate conformity with the requirements (functional and non-functional) • After each validation test case has been conducted, one of two possible conditions exists • The functional or non-functional characteristic (e.g., performance) conforms to the specification and is accepted • A deviation from the specification is uncovered and a deficiency list is created; at this stage deficiencies (i.e., faults) can rarely be corrected prior to scheduled delivery

  27. Validation testing • Acceptance testing – validation of the requirements • When the software is built for one customer • Conducted by the end user • Alpha and beta testing • When the software is used by many customers • Alpha testing is conducted at the developer’s site, in a controlled environment, by a customer • Beta testing is conducted at one or more customers’ sites; the developer is generally not present

  28. Fault avoidance (figure): the cost per fault detected plotted against the number of residual faults (many, few, very few); the cost of detecting each remaining fault rises as fewer faults remain.

  29. Fault avoidance • There must be a precise (preferably formal) system specification that defines the system to be implemented • The organization developing software must have an organizational quality culture where quality is the driver of the software process • A development process should be defined and developers trained in the application of this process

  30. Fault avoidance • An approach to software design and implementation based on information hiding and encapsulation should be used • Development of programs that are designed for readability and understandability should be encouraged • Strongly typed programming languages (such as Java and Ada) should be used • In a strongly typed language, many faults can be detected by the compiler
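
A small Java illustration of the last point: the commented-out assignment is rejected at compile time, so the fault never reaches testing, and the conversion has to be made explicit.

    public class StrongTypingExample {
        public static void main(String[] args) {
            // int count = "42";   // does not compile: incompatible types (String vs int),
            //                     // so the fault is caught before the program ever runs
            int count = Integer.parseInt("42");   // the conversion must be made explicit
            System.out.println(count + 1);        // prints 43
        }
    }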

  31. Fault avoidance • The use of programming constructs that are potentially error-prone should be avoided wherever possible • Dijkstra (1968) recognized that the goto statement makes localization of state changes difficult – the adoption of structured programming was an important milestone • Floating-point numbers are inherently imprecise; representation imprecision may lead to invalid comparisons
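
A short example of the floating-point pitfall: the exact-equality comparison fails even though the arithmetic looks trivially correct, so comparisons should be made within a tolerance.

    public class FloatingPointExample {
        public static void main(String[] args) {
            double sum = 0.1 + 0.2;
            System.out.println(sum);                         // 0.30000000000000004
            System.out.println(sum == 0.3);                  // false: invalid exact comparison
            System.out.println(Math.abs(sum - 0.3) < 1e-9);  // true: compare within a tolerance
        }
    }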

  32. Fault avoidance • Pointers are low-level constructs that refer directly to areas of machine memory; they make bounds checking harder to implement • Dynamic memory allocation – memory may be allocated at run time rather than at compile time; if that memory is never de-allocated, the system eventually runs out of available memory (memory leak) • Parallelism makes it difficult to predict and test the subtle effects of timing interactions between parallel processes; it should be carefully controlled to minimize inter-process dependencies
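
To illustrate why timing interactions between parallel activities are hard to predict and test, the hypothetical sketch below lets two threads increment a shared counter without synchronization; the final value is usually less than the expected 200000 and differs from run to run.

    public class RaceConditionExample {
        static int counter = 0;   // shared, unsynchronized state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;    // read-modify-write is not atomic, so updates can be lost
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("counter = " + counter);   // expected 200000, usually less
        }
    }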

  33. Fault avoidance • Recursion – following the logic of recursive programs may be difficult; recursion may result in the allocation of all the system’s memory as temporary stack variables are created • Interrupts – force control to transfer irrespective of the code currently executing; a critical operation could be terminated • Inheritance – supports reuse and problem decomposition; however, it makes it more difficult to understand the behavior of an object
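
A minimal sketch of the recursion risk mentioned above: each recursive call consumes stack space, so a deep recursion exhausts it with a StackOverflowError, while the equivalent iteration runs in constant stack space.

    public class RecursionDepthExample {
        // Each call adds a stack frame; for large n the stack is exhausted
        // long before any arithmetic problem occurs.
        static long sumRecursive(long n) {
            return (n == 0) ? 0 : n + sumRecursive(n - 1);
        }

        // The equivalent iteration uses constant stack space.
        static long sumIterative(long n) {
            long total = 0;
            for (long i = 1; i <= n; i++) total += i;
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sumIterative(1_000_000));      // 500000500000
            try {
                System.out.println(sumRecursive(1_000_000));
            } catch (StackOverflowError e) {
                System.out.println("recursion exhausted the stack");
            }
        }
    }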
