Introduction to V&V Techniques and Principles

Presentation Transcript


  1. Introduction to V&V Techniques and Principles. Software Testing and Verification, Lecture 2. Prepared by Stephen M. Thebaut, Ph.D., University of Florida

  2. Verification and Validation… • …at the unit/component-level of system development, is what this course is mostly about. • Adds value... by improving product quality and reducing risks. • Three approaches: • Human-based testing • Formal correctness proofs • Machine-based testing

  5. Human-Based Testing • Desk Checking, Walkthroughs, Reviews / Inspections • Applicable to requirements / specifications, design, code, proof of correctness arguments, test plans / designs / cases, maintenance plans, etc. • Can be extremely effective...

  8. Formal Correctness Proofs
     “pre-condition”:   { X ≥ 2 }
     SP2 := 4
     while SP2 <= X
         SP2 := SP2 * 2
     end_while
     loop “invariant”:  { (∃i | SP2 = 2^i ∧ 2^(i-1) ≤ X) ∧ (X = X’) }
     “post-condition”:  { (∃i | SP2 = 2^i > X ∧ 2^(i-1) ≤ X) ∧ (X = X’) }
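
Example sketch (not part of the original slides): the slide’s program rendered in Python, with runtime assert statements for the stated pre-condition, loop invariant, and post-condition. Executing the assertions only checks individual runs; it is not a correctness proof. The function name and the helper variable x_in (standing in for X’) are illustrative assumptions.

import math

def smallest_power_of_2_exceeding(x):
    assert x >= 2                      # pre-condition: { X >= 2 }
    x_in = x                           # X' : the initial value of X
    sp2 = 4
    while sp2 <= x:
        sp2 = sp2 * 2
        i = int(math.log2(sp2))        # a witness for the existential quantifier
        # loop invariant: SP2 = 2^i and 2^(i-1) <= X, with X unchanged
        assert sp2 == 2 ** i and 2 ** (i - 1) <= x and x == x_in
    i = int(math.log2(sp2))
    # post-condition: SP2 = 2^i > X and 2^(i-1) <= X, with X unchanged
    assert sp2 == 2 ** i and sp2 > x and 2 ** (i - 1) <= x and x == x_in
    return sp2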

  9. Formal Correctness Proofs (cont’d) • Require formal specifications and appropriate proof methods. • Do not eliminate the need for testing.

  11. Machine-Based Testing • Execution of (“crafted”) test cases • Actual and expected results (i.e., program behaviors) are compared.
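
Example sketch (not from the original slides): a single “crafted” test case executed by machine, with the actual result compared to the expected result. The function under test, absolute_value, is a made-up illustration.

import unittest

def absolute_value(x):
    # Deliberately trivial unit under test.
    return x if x >= 0 else -x

class AbsoluteValueTest(unittest.TestCase):
    def test_negative_input(self):
        expected = 5                    # expected result, derived from the specification
        actual = absolute_value(-5)     # actual result, produced by executing the code
        self.assertEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()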

  13. Definitions of “TESTING” • Hetzel: Any activity aimed at evaluating an attribute or capability of a program or system. It is the measurement of software quality. • Beizer: The act of executing tests. Tests are designed and then executed to demonstrate the correspondence between an element and its specification.

  15. Definitions of “TESTING” (cont’d) • Myers: The process of executing a program with the intent of finding errors. • IEEE: The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

  17. Definitions of “TESTING” (cont’d) • Testing undertaken to demonstrate that a system performs correctly is sometimes referred to as validation testing. • Testing undertaken to expose defects is sometimes referred to as defect testing.

  19. Evolving Attitudes About Testing • 1950s • Machine languages used • Testing is debugging • 1960s • Compilers developed • Testing is separate from debugging

  20. Evolving Attitudes About Testing (cont’d) • 1970s • Software engineering concepts introduced • Testing begins to evolve as a technical discipline • 1980s • CASE tools developed • Testing expands to Verification and Validation (V&V)

  21. Evolving Attitudes About Testing (cont’d) • 1990s • Increased focus on shorter development cycles • Quality focus increases • Testing skills and knowledge in greater demand • Increased acceptance of testing as a discipline

  22. Evolving Attitudes About Testing (cont’d) • 2000s • More focus on shorter development cycles • Increased use of Agile methods: Test-First / Test-Driven Development, customer involvement in testing, and the use of automated testing frameworks (e.g., JUnit) • Better integration of testing / verification / reliability ideas • Growing interest in software safety, protection, and security

  23. Let’s Pause for a Moment… Imagine that it’s summertime and that a 3-day weekend is just starting… Wouldn’t it be great to just grab a fishin’ pole and head on out to the lake!

  24. Fisherman’s Dilemma • You have 3 days for fishing and 2 lakes from which to choose. Day 1 at lake X nets 8 fish. Day 2 at lake Y nets 32 fish. Which lake do you return to for day 3? • Does your answer depend on any assumptions?

  26. Di Lemma • In general, the probability of the existence of more errors in a section of a program is directly related to the number of errors already found in that section.

  27. Invalid and Unexpected Inputs • Test cases must be written for INVALID and UNEXPECTED, as well as valid and expected, input conditions. • In many systems, MOST of the code is concerned with input error checking and handling.
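
Example sketch (hypothetical; the name parse_age and its exact error behaviour are assumptions, not from the slides): test cases cover invalid and unexpected inputs alongside a valid one, and most of the unit’s code is input checking and handling.

import unittest

def parse_age(text):
    # Illustrative unit under test: most of the code checks and handles bad input.
    if not isinstance(text, str):
        raise TypeError("age must be given as a string")
    if not text.strip().isdigit():
        raise ValueError("age must be a non-negative integer")
    age = int(text)
    if age > 150:
        raise ValueError("age is out of range")
    return age

class ParseAgeTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_invalid_input(self):           # invalid: not a number at all
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_unexpected_input(self):        # unexpected: wrong type entirely
        with self.assertRaises(TypeError):
            parse_age(None)

if __name__ == "__main__":
    unittest.main()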

  29. Anatomy of a Test Case • What are the parts of a test case? • a description of input condition(s) • a description of expected results • Where do “expected results” come from?
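
Example sketch (illustrative names and a made-up leap-year unit under test, not from the slides): each test case pairs a description of its input condition with an expected result, and the expected results are taken from the specification (the calendar rule), not from running the code.

def is_leap_year(year):
    # Illustrative unit under test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each test case = (input condition, input, expected result).
test_cases = [
    ("year divisible by 4",          2024, True),
    ("century not divisible by 400", 1900, False),
    ("century divisible by 400",     2000, True),
    ("year not divisible by 4",      2023, False),
]

for description, year, expected in test_cases:
    actual = is_leap_year(year)
    assert actual == expected, f"{description}: got {actual}, expected {expected}"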

  33. Who Should Test Your Program? • Most people are inclined to defend what they produce – not find fault with it. • Thus, programmers should in principle avoid testing their own programs. • But what if this is not possible or appropriate? Become Mr. Hyde... that is, adopt a “tester’s mindset” that mitigates your ego-attachment to the program.

  34. Testing Techniques • Black-Box: Testing based solely on analysis of requirements (unit/component specification, user documentation, etc.). Also known as functional testing. • White-Box: Testing based on analysis of internal logic (design, code, etc.). (But expected results still come from requirements.) Also known as structural testing.
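
Example sketch (the shipping_cost function and its requirement are invented for illustration): the first test is black-box, derived from the stated requirement alone; the second is white-box, chosen after reading the code so that the flat-rate branch is exercised, although its expected value still comes from the requirement.

import unittest

def shipping_cost(weight_kg):
    # Illustrative requirement: orders over 10 kg ship for a flat 20.00;
    # lighter orders cost 2.00 per kg.
    if weight_kg > 10:
        return 20.00
    return 2.00 * weight_kg

class ShippingCostTest(unittest.TestCase):
    def test_light_parcel(self):
        # Black-box (functional): derived from the requirement only.
        self.assertEqual(shipping_cost(4), 8.00)

    def test_flat_rate_branch(self):
        # White-box (structural): targets the `weight_kg > 10` branch seen in the code.
        self.assertEqual(shipping_cost(12), 20.00)

if __name__ == "__main__":
    unittest.main()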

  36. Levels or Phases of Testing • Unit: testing of the smallest programmer work assignments that can reasonably be planned and tracked (e.g., function, procedure, module, object class, etc.) • Component: testing a collection of units that make up a component (e.g., program, package, task, interacting object classes, etc.)

  38. Levels or Phases of Testing (cont’d) • Product: testing a collection of components that make up a product (e.g., subsystem, application, etc.) • System: testing a collection of products that make up a deliverable system

  40. Levels or Phases of Testing (cont’d) • Testing usually: • begins with functional (black-box) tests, • is supplemented by structural (white-box) tests, and • progresses from the unit level toward the system level with one or more integration steps.

  41. Other Types of Testing • Integration: testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements • Regression: re-testing to detect problems caused by the adverse effects of program change • Acceptance: formal testing conducted to enable the customer to determine whether or not to accept the system (acceptance criteria may be defined in a contract)

  44. Other Types of Testing (cont’d) • Alpha: actual end-user testing performed within the development environment • Beta: end-user testing performed within the user environment prior to general release • System Test Acceptance: testing conducted to ensure that a system is “ready” for the system-level test phase

  47. Other Types of Testing (cont’d) • Soak: testing a system version over a significant period of time to discover latent errors or performance problems (due to memory leaks, buffer/file overflow, etc.) • Smoke (build verification): the first test after a software build to detect catastrophic failure (Term comes from hardware testing…) • Lights out: testing conducted without human intervention – e.g., after normal working hours
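
Example sketch of a smoke (build-verification) test, under stated assumptions: it presumes the freshly built program is available as an executable named myapp that supports a --version flag (both names are hypothetical). The only question it answers is whether the build starts and exits cleanly at all.

import subprocess

def smoke_test():
    # Run the newly built executable once and check only for catastrophic failure.
    result = subprocess.run(
        ["myapp", "--version"],   # assumed executable name and flag
        capture_output=True,
        text=True,
        timeout=30,
    )
    assert result.returncode == 0, "build did not even start and exit cleanly"
    assert result.stdout.strip(), "build produced no version output"

if __name__ == "__main__":
    smoke_test()
    print("Smoke test passed: the build is ready for deeper testing.")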

  50. Testing in Plan-Driven Software Development
