ICS 52: Introduction to Software Engineering Lecture Notes for Summer Quarter, 2003 Michele Rousseau Topic 11 Partially based on lecture notes written by Sommerville, Frost, Van Der Hoek, Taylor & Tonne. Duplication of course material for any commercial purpose without the written permission of the lecturers is prohibited
Today’s Lecture • Quality assurance • An introduction to testing
ICS 52 Life Cycle [Diagram: Requirements phase → Design phase → Implementation phase → Testing phase, with a Verify step after each phase and a Test step linking implementation to testing]
Implementation/Testing Interaction Implementation (previous lecture) Testing (this lecture)
The Seriousness of the problem… • Mars Climate Orbiter – metric vs. English units • Audi 5000 – sudden acceleration: feature or fault? • Mariner 1 – launch veered off course • AT&T telephone network – down for 9 hours • Ariane 5 – maiden flight exploded • Pentium – FPU error • Therac-25 X-ray machine – over-radiation • LAS – London Ambulance Service dispatch failures
Impact of Failures • Not just “out there” • Mars Pathfinder • Mariner 1 • Ariane 5 • But also “at home” • Your car • Your call to your mom • Your homework • Your hospital visit Peter Neumann’s Risks Forum: http://catless.ncl.ac.uk/Risks
Quality Assurance • What qualities do we want to assure? • Correctness (most important?) • How to assure correctness? • By running tests • How else? • Can qualities other than correctness be “assured” ? • How is testing done? • When is testing done? • Who tests? • What are the problems?
Software Qualities Correctness Reliability Robustness Performance Usability Verifiability Maintainability Repairability Safety Evolvability Reusability Portability Survivability Understandability We want to show relevant qualities exist
Quality Assurance • Assure that each of the software qualities is met • Goals set in requirements specification • Goals realized in implementation • Sometimes easy, sometimes difficult • Portability versus safety • Sometimes immediate, sometimes delayed • Understandability versus evolvability • Sometimes provable, sometimes doubtful • Size versus correctness
Verification and Validation • Verification “Are we building the product right?” (Boehm) • The Software should conform to its specification • testing, reviews, walk-throughs, inspections • internal consistency; consistency with previous step • Validation “Are we building the right product?” • The software should do what the user really requires • ascertaining software meets customer’s intent • Correctness has no meaning independent of specifications
Problem #1: Eliciting the Customer’s Intent [Diagram: the customer’s real needs, the “correct” specs that would capture them, and the actual specs all differ] No matter how sophisticated the QA process is, there is still the problem of creating the initial specification
Problem #2: QA is tough • Complex data communications • Electronic fund transfer • Distributed processing • Web search engine • Stringent performance objectives • Air traffic control system • Complex processing • Medical diagnosis system Sometimes, the software system is extremelycomplicated making it tremendously difficult to perform QA
Problem #3: Management Aspects of QA • Who does what part of the testing? • QA (Quality Assurance) team? • Are developers involved? • How independent is the independent testing group? • What happens when bugs are found? • What is the reward structure? [Diagram: Project Management overseeing the QA Group and the Development Group, with question marks over the reporting lines]
Problem #4: QA vs Developers • Quality assurance lays out the rules • You will check in your code every day • You will comment your code • You will… • Quality assurance also uncovers the faults • Raps developers on the knuckles • Creates an image of “competition” • Quality assurance is viewed as cumbersome • “Just let me code” • What about rewards? Quality assurance has a negative connotation
Problem #5: Can’t test exhaustively [Flowchart: a small program whose loop executes up to 20 times] There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!! Out of the question
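The slide's figures can be checked with a quick back-of-the-envelope calculation (a sketch of the arithmetic, not part of the original lecture):

```python
# 10^14 paths, one test per millisecond: how long would exhaustive testing take?
paths = 10**14
seconds = paths / 1000                 # 1000 tests per second
years = seconds / (365 * 24 * 3600)    # seconds in a (non-leap) year
print(round(years))                    # ≈ 3171, matching the slide's ~3,170 years
```

Even for this toy program the numbers are astronomical, which is the point of the slide: exhaustive path testing is out of the question.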
Simple Example: A 32-Bit Multiplier • Input: two 32-bit integers • Output: the 64-bit product of the inputs • Testing hardware: checks one billion products per second (roughly one check per 2^-30 seconds) • How long to check all possible products? 2^64 · 2^-30 = 2^34 seconds ≈ 512 years • What if the implementation is based on table lookups? • How would you know that the spec is correct?
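The multiplier arithmetic works out as follows (a sketch of the calculation; the slide's 512-year figure uses the convenient approximation that a year is about 2^25 seconds):

```python
# 2^64 input pairs checked at 2^30 (~one billion) products per second.
pairs = 2**64
rate = 2**30
seconds = pairs // rate        # 2^64 / 2^30 = 2^34 seconds
years = seconds / (2**25)      # approximate a year as 2^25 ≈ 33.5M seconds
print(seconds == 2**34)        # True
print(years)                   # 512.0 -- the slide's "512 years"
```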
An Idealized View of QA Complete formal specsof problem to be solved Correctness-preserving transformation Design, in formal notation Correctness-preserving transformation Code, in verifiable language Correctness-preserving transformation executable machine code Correctness-preserving transformation Execution on verified hardware
A Realistic View of QA Mixture of formal and informal specifications Manual transformation Design, in mixed notation Manual transformation Code, in C++, Ada, Java, … Compilation by commercial compiler Pentium machine code Commercial firmware Execution on commercial hardware
The V & V process • Is a whole life-cycle process - V & V must be applied at each stage in the software process. • Has two principal objectives • The discovery of defects in a system • The assessment of whether or not the system is usable in an operational situation.
Static and dynamic verification • Software inspections Concerned with the analysis of the static system representation to discover problems (static verification) • May be supplemented by tool-based document and code analysis • Software testing Concerned with exercising and observing product behaviour (dynamic verification) • The system is executed with test data and its operational behaviour is observed
V & V confidence • Depends on system’s purpose, user expectations and marketing environment • Software function • The level of confidence depends on how critical the software is to an organisation • User expectations • Users may have low expectations of certain kinds of software • Marketing environment • Getting a product to market early may be more important than finding defects in the program
V & V planning • Careful Planning is essential • Start Early – remember the V model • Perpetual Testing • Balance static verification and testing • Define standards for the testing process rather than describing product tests
Static Analysis • Software Inspection • Examine the source representation with the aim of discovering anomalies and defects • May be used before implementation • May be applied to any representation of the system (requirements, design, test data, etc.) • Very effective technique for discovering errors
Inspection success • Many different defects may be discovered in a single inspection. • In testing, one defect may mask another, so several executions are required • Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise
Inspections and testing • Inspections and testing are complementary and not opposing verification techniques • Both should be used during the V & V process • Inspections can check conformance with a specification • Can’t check conformance with the customer’s real requirements • Cannot validate dynamic behaviour • Inspections cannot check non-functional characteristics such as performance, usability, etc.
Testing • The only validation technique for non-functional requirements • Should be used in conjunction with static verification to provide full V&V coverage “Program testing can be used to show the presence of bugs, but never to show their absence.” — E. W. Dijkstra
What is Testing • Exercising a module, collection of modules, or system • Use predetermined inputs (“test cases”) • Capture actual outputs • Compare actual outputs to expected outputs • Actual outputs equal to expected outputs → test case succeeds • Actual outputs unequal to expected outputs → test case fails
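The steps above can be sketched in a few lines (a minimal illustration with a hypothetical module under test, not from the lecture):

```python
# Module under test (hypothetical).
def absolute(x):
    return x if x >= 0 else -x

# Predetermined test cases: (input, expected output) pairs.
test_cases = [(3, 3), (-3, 3), (0, 0)]

for inp, expected in test_cases:
    actual = absolute(inp)                    # capture the actual output
    verdict = "succeeds" if actual == expected else "fails"
    print(f"absolute({inp}) = {actual}: test case {verdict}")
```

Real test frameworks automate exactly this compare-actual-to-expected loop across many cases.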
Limits of software testing • “Good” testing will find bugs • “Good” testing is based on requirements, i.e. testing tries to find differences between the expected and the observed behavior of systems or their components • V&V should establish confidence that the software is fit for purpose • BUT remember: testing can only prove the presence of bugs, never their absence – it can’t prove the software is defect free • Rather, the software must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed
Testing Terminology • Failure: Incorrect or unexpected output, based on specifications • Symptom of a fault • Fault: Invalid execution state • Symptom of an error • May or may not produce a failure • Error: Defect or anomaly or “bug” in source code • May or may not produce a fault
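The error → fault → failure chain can be made concrete with a hypothetical buggy function (a sketch; the names and the bug are invented for illustration):

```python
# ERROR: a defect in the source code -- the divisor is hard-coded to 2
# instead of len(values).
def average(values):
    return sum(values) / 2

# With exactly two values, the defective code happens to compute the right
# answer: no invalid state reaches the output, so no failure is observed.
print(average([4, 6]))       # 5.0 -- correct by coincidence

# With three values, the invalid execution state (FAULT) propagates to the
# output and produces a FAILURE: the result disagrees with the specification.
print(average([3, 3, 3]))    # 4.5, but the specified average is 3.0
```

This is why a fault "may or may not produce a failure": it depends on whether a test input drives the invalid state to a visible output.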
Testing and debugging • Defect testing and debugging are different processes • V&V establishes the existence of defects in a program • Debugging locates and repairs them
Testing Goals • Reveal failures/faults/errors • Locate failures/faults/errors • Show system correctness • Improve confidence that the system performs as specified (verification) • Improve confidence that the system performs as desired (validation) • Desired Qualities: • Accurate • Complete / thorough • Repeatable • Systematic
Test Tasks • Devise test cases • Target specific areas of the system • Create specific inputs • Create expected outputs • Choose test cases • Not all need to be run all the time • Regression testing • Run test cases • Can be labor intensive All in a systematic, repeatable, and accurate manner
Levels of Testing • Unit/component testing: testing of code unit (subprogram, class, method/function, small subsystem) • Often requires use of test drivers • Integration testing: testing of interfaces between units • Incremental or “big bang” approach? • Often requires drivers and stubs • System or acceptance testing: testing complete system for satisfaction of requirements • often performed by user / customer
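A test driver and a stub, as mentioned for unit and integration testing, can be sketched as follows (all names are hypothetical; the stub replaces a dependency that is not yet integrated):

```python
# Stub: stands in for a pricing service that hasn't been built/integrated,
# returning a canned answer so the unit can be tested in isolation.
def lookup_base_price_stub(item_id):
    return 100.0

# The unit under test, parameterized on its dependency.
def compute_discount(item_id, rate, lookup=lookup_base_price_stub):
    return lookup(item_id) * rate

# Driver: feeds the unit predetermined inputs and checks the outputs.
def test_driver():
    assert compute_discount("A1", 0.5) == 50.0
    assert compute_discount("A1", 0.0) == 0.0
    print("unit tests passed")

test_driver()
```

At integration time the stub is replaced by the real service and the same driver can be rerun against the combined units.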
What is the problem we need to address? • We want to verify software → • so we need to test → • so we need to decide on test cases → • but no set of test cases guarantees the absence of bugs • So: what is a systematic approach to the selection of test cases that will lead to the accurate, acceptably thorough, and repeatable identification of errors, faults, and failures?
Two Approaches • White box (or glass box) testing • Structural testing • Test cases designed, selected, and run based on the structure of the code • Scale: tests the nitty-gritty • Drawback: needs access to the source code • Black box testing • Specification-based testing • Test cases designed, selected, and run based on the specifications • Scale: tests the overall system behavior • Drawback: less systematic
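The contrast can be illustrated on a small hypothetical function (a sketch; the function and the chosen cases are invented for illustration):

```python
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# White box: cases derived from the code's structure -- one per branch.
white_box_cases = [-5, 0, 5]

# Black box: cases derived from the specification alone -- typical values
# and boundaries, chosen without looking at the branches.
black_box_cases = [-1000000, -1, 0, 1, 1000000]

for n in white_box_cases + black_box_cases:
    print(n, classify(n))
```

Note how the white-box set is minimal but tied to this implementation, while the black-box set would stay valid if the function were rewritten.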
Test Oracles • Provide a mechanism for deciding whether a test case execution succeeds or fails • Critical to testing • Used in white box testing • Used in black box testing • Difficult to automate • Typically relies on humans • Typically relies on human intuition • Formal specifications may help
Example • Your test shows cos(0.5) = 0.8775825619 • You have to decide whether this answer is correct? • You need an oracle • Draw a triangle and measure the sides • Look up cosine of 0.5 in a book • Compute the value using Taylor series expansion • Check the answer with your desk calculator
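The Taylor-series oracle from the slide can be sketched directly (the observed value is the one from the slide; the oracle is an independent computation used to judge it):

```python
import math

observed = 0.8775825619    # the test output under judgment (from the slide)

# Oracle: compute cos(x) independently via its Taylor series
# cos(x) = sum_{k>=0} (-1)^k x^(2k) / (2k)!
def cos_taylor(x, terms=10):
    return sum((-1)**k * x**(2 * k) / math.factorial(2 * k)
               for k in range(terms))

oracle = cos_taylor(0.5)
print(abs(observed - oracle) < 1e-9)   # True -- the oracle accepts the output
```

The key point is that the oracle must be computed independently of the implementation under test; otherwise it would simply reproduce the implementation's bugs.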
Use the Principles • Rigor and formality • Separation of concerns • Modularity • Abstraction • Anticipation of change • Generality • Incrementality