Verification and Validation



  1. Verification and Validation Reference: Software Engineering, Ian Sommerville, 6th edition, Chapter 19

  2. Objectives • To introduce software verification and validation and to discuss the distinction between them • To describe the difference between static and dynamic V & V • To describe the program inspection process and its role in V & V • To explain static analysis as a verification technique

  3. Verification vs. Validation • Verification: "Are we building the product right?" • The software should conform to its specification. • Validation: "Are we building the right product?" • The software should do what the user really requires.

  4. The V & V Process • Is a whole life-cycle process - V & V must be applied at each stage in the software process. • Example: Peer document reviews • Has two principal objectives • The discovery of defects in a system • The assessment of whether or not the system is usable in an operational situation

  5. V & V Goals • Verification and validation should establish confidence that the software is fit for its purpose. • This does NOT mean completely free of defects. • Rather, it must be good enough for its intended use. The type of use will determine the degree of confidence that is needed.

  6. V & V Confidence • Depends on the system’s purpose, user expectations, and marketing environment • System purpose • The level of confidence depends on how critical the software is to an organization (e.g., safety critical). • User expectations • Users may have low expectations of certain kinds of software • Marketing environment • Getting a product to market early may be more important than finding defects in the program

  7. Static vs. Dynamic V & V • Code and document inspections - Concerned with the analysis of the static system representation to discover problems (static V & V) • May be supplemented by tool-based document and code analysis • Software testing - Concerned with exercising and observing product behavior (dynamic V & V) • The system is executed with test data and its operational behavior is observed

  8. Code Inspections (Static V & V) • Involves examining the source code with the aim of discovering anomalies and defects • Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g., an uninitialized variable), or non-compliance with coding standards. • Intended for defect detection, not correction • A very effective technique for discovering errors • Saves time and money – the earlier in the development process an error is found, the better
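  As an illustration (a hypothetical fragment, not from the slides), the following C function contains exactly the kinds of anomalies an inspector should flag:

  /* Hypothetical inspection example: two defects for a reviewer to catch. */
  int sum_positive(const int *a, int n)
  {
      int total;                       /* defect: total is never initialized */
      for (int i = 0; i <= n; i++) {   /* defect: <= reads past the array    */
          if (a[i] > 0)
              total += a[i];
      }
      return total;
  }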

  9. Inspection Success • Many different defects may be discovered in a single inspection, whereas with testing one defect may mask another so that several executions/tests are required. • Inspections reuse domain and programming knowledge so reviewers are likely to have seen the types of errors that commonly arise.

  10. Inspections and Testing • Inspections and testing are complementary and not opposing verification techniques. • Inspections can check conformance with a specification, but not conformance with the customer’s real requirements. • Inspections cannot check non-functional characteristics such as performance, usability, etc.

  11. Inspection Preparation • A precise specification must be available. • Team members must be familiar with the organization’s standards. • Syntactically correct code must be available. • An error checklist should be prepared. • Management must accept that inspection will increase costs early in the software process.

  12. Ethics Question • A manager decides to use the reports of program inspections as an input to the staff appraisal process. These reports show who made and who discovered program errors. • Is this ethical management behavior? • Would it be ethical if the staff were informed in advance that this would happen? • What difference might it make in the inspection process?

  13. Inspection Procedure • The inspection procedure is planned. • A system overview is presented to the inspection team. • Code and associated documents are distributed to the inspection team in advance. • Inspection takes place and discovered errors are noted. • Modifications are made to repair discovered errors. • Re-inspection may or may not be required.

  14. Inspection Teams • Made up of: • Author of the code being inspected • Inspector who finds errors, omissions, and inconsistencies • Reader who reads the code to the team • Moderator who chairs the meeting • Scribe who makes detailed notes regarding errors • Roles may vary from these (e.g., Reader). • Multiple roles may be taken on by the same member.

  15. Inspection Checklist • Checklist of common errors should be used to drive the inspection • Error checklist is programming language dependent • The “weaker” the language type checking, the larger the checklist (e.g., C vs. Java) • Examples: variable initializations, constant naming, loop termination, array bounds, etc.
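  To make the checklist idea concrete, here is a hypothetical C fragment in which weak type checking lets several classic checklist items slip past the compiler (Java's stricter checking would catch most of these):

  #include <string.h>

  /* Hypothetical fragment: each line compiles, yet each violates a
     common inspection checklist item. */
  void checklist_demo(void)
  {
      char buf[8];
      int flag = 0;

      strcpy(buf, "longer than eight bytes");  /* array bounds: silent overflow */
      if (flag = 1)                            /* assignment where == was meant */
          buf[0] = 'x';
      for (unsigned u = 5; u >= 0; u--)        /* loop termination: unsigned u  */
          ;                                    /* makes u >= 0 always true      */
  }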

  16. Inspection Checks

  17. Inspection Rate • Measurements at IBM by M. E. Fagan: • 500 statements/hour during overview • 125 source statements/hour during individual preparation • 90-125 statements/hour during the inspection meeting itself • Inspection is, therefore, an expensive process • Inspecting 500 lines costs about 40 person-hours of effort
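  As a rough sanity check on that figure (assuming a four-person team and 500 statements): the overview at 500 statements/hour takes 1 hour (4 person-hours), individual preparation at 125 statements/hour takes 4 hours (16 person-hours), and the inspection meeting at roughly 100 statements/hour takes 5 hours (20 person-hours), for a total of 4 + 16 + 20 = 40 person-hours.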

  18. Automated Static Analysis • Static analyzers are software tools for source text processing. • They parse the program text and try to discover potentially erroneous conditions. • Very effective as an aid to inspections. A supplement to, but not a replacement for, inspections.

  19. Static Analysis Checks

  20. Stages of Static Analysis • Control flow analysis. Checks for loops with multiple exit or entry points, finds unreachable code, etc. • Data use analysis. Detects uninitialized variables, variables assigned twice without an intervening use, variables that are declared but never used, etc. • Interface analysis. Checks the consistency of procedure declarations and their use
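  A hypothetical C fragment showing the kinds of anomalies the first two of these stages report (interface analysis is illustrated by the lint example on slide 22):

  /* Hypothetical fragment for control-flow and data-use analysis. */
  int stages_demo(int x)
  {
      int a, b;
      a = 1;
      a = 2;        /* data use: a assigned twice with no intervening use */
      if (x > 0)
          return a;
      return 0;
      b = x;        /* control flow: unreachable statement                */
  }                 /* data use: b is never read                          */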

  21. Stages of Static Analysis (cont'd) • Information flow analysis. Identifies the dependencies of output variables. Does not detect anomalies itself but highlights information for code inspection or review • Path analysis. Identifies paths through the program and sets out the statements executed in each path. Potentially useful in the inspection and testing processes. • Both of these stages generate vast amounts of information, so they must be used with care.
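  Continuing the same hypothetical style, a fragment these last two stages would describe rather than flag: information flow analysis reports that the output grade depends on the inputs score and pass_mark, and path analysis enumerates the two paths through the if-statement:

  /* Hypothetical fragment for information-flow and path analysis. */
  char classify(int score, int pass_mark)
  {
      char grade;
      if (score >= pass_mark)   /* path 1: score >= pass_mark -> grade = 'P' */
          grade = 'P';
      else                      /* path 2: score <  pass_mark -> grade = 'F' */
          grade = 'F';
      return grade;             /* grade depends on both score and pass_mark */
  }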

  22. LINT Static Analysis

  138% more lint_ex.c
  #include <stdio.h>
  printarray (Anarray)
  int Anarray;
  {
    printf("%d", Anarray);
  }
  main ()
  {
    int Anarray[5];
    int i;
    char c;
    printarray (Anarray, i, c);
    printarray (Anarray);
  }
  139% cc lint_ex.c
  140% lint lint_ex.c
  lint_ex.c(10): warning: c may be used before set
  lint_ex.c(10): warning: i may be used before set
  printarray: variable # of args. lint_ex.c(4) :: lint_ex.c(10)
  printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(10)
  printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(11)
  printf returns value which is always ignored
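  For contrast, a hypothetical ANSI C rewrite of lint_ex.c with the reported anomalies removed (assuming the original intent was simply to print one integer); this version should silence the warnings above:

  #include <stdio.h>

  /* Hypothetical cleaned-up rewrite of lint_ex.c. */
  static void printarray(int value)
  {
      (void) printf("%d\n", value);  /* explicitly discard printf's result */
  }

  int main(void)
  {
      int anarray[5] = {0};          /* initialized before use */
      printarray(anarray[0]);        /* call now matches the declaration */
      return 0;
  }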
