Rob Oshana Southern Methodist University Software Testing
Why do we Test ? • Assess Reliability • Detect Code Faults
Industry facts • 30-40% of errors detected after deployment are run-time errors [U.C. Berkeley, IBM’s TJ Watson Lab] • The amount of software in a typical device doubles every 18 months [Reme Bourguignon, VP of Philips Holland] • Defect densities have remained stable over the last 20 years: 0.5 - 2.0 software failures per 1000 lines [Cigital Corporation] • Software testing accounts for 50% of pre-release costs, and 70% of post-release costs [Cigital Corporation]
Critical SW Applications Critical software applications which have failed:
• Mariner 1, NASA, 1962: Missing ‘-’ in Fortran code; rocket bound for Venus destroyed
• Therac-25, Atomic Energy of Canada Ltd, 1985-87: Data conversion error; radiation therapy machine for cancer
• Long Distance Service, AT&T, 1990: A single line of bad code; service outages up to nine hours long
• Patriot Missiles, U.S. military, 1991: Endurance errors in tracking system; 28 US soldiers killed in barracks
• Tax Calculation Program, Intuit, 1995: Incorrect results; SW vendor paid tax penalties for users
Good and successful testing • What is a good test case? • A good test case has a high probability of finding an as-yet undiscovered error • What is a successful test case? • A successful test is one that uncovers an as-yet undiscovered error
Who tests the software better? • Developer: understands the system, but will test “gently” and is driven by “delivery” • Independent tester: must learn about the system, but will attempt to break it and is driven by quality
Testability – can you develop a program for testability? • Operability - “The better it works, the more efficiently it can be tested” • Observability - the results are easy to see, distinct output is generated for each input, incorrect output is easily identified • Controllability - processing can be controlled, tests can be automated & reproduced • Decomposability - software modules can be tested independently • Simplicity - no complex architecture and logic • Stability - few changes are requested during testing • Understandability - program is easy to understand
Did You Know... • Testing/Debugging can worsen reliability? • We often chase the wrong bugs? • Testing cannot show the absence of faults, only their presence? • The cost to develop software is directly proportional to the cost of testing? • Y2K testing cost $600 billion
Did you also know... • The most commonly applied software testing techniques (black box and white box) were developed back in the 1960s • Most oracles are human (and error prone)! • 70% of safety-critical code can be exception-handling code • and it is often the last code written!
Testing Problems • Time • Faults hide from tests • Test management costs • Training personnel • What techniques to use • Books and education
“Errors are more common, more pervasive, and more troublesome in software than with other technologies” David Parnas
What is testing? • How does testing software compare with testing students?
What is testing? • “Software testing is the process of comparing the invisible to the ambiguous so as to avoid the unthinkable.” James Bach, Borland Corp.
What is testing? • “Software testing is the process of predicting the behavior of a product and comparing that prediction to the actual results.” R. Vanderwall
Purpose of testing • Build confidence in the product • Judge the quality of the product • Find bugs
Finding bugs can be difficult [Figure: a mine field dotted with mines; each use case is a single path through the mine field, and a path can pass right by every mine]
Why is testing important? • Therac-25: Cost 6 lives • Ariane 5 rocket: Cost $500M • Denver Airport: Cost $360M • Mars missions (Climate Orbiter & Polar Lander): Cost $300M
Reasons for customer reported bugs • User executed untested code • Order in which statements were executed in actual use different from that during testing • User applied a combination of untested input values • User’s operating environment was never tested
Interfaces to your software • Human interfaces • Software interfaces (APIs) • File system interfaces • Communication interfaces • Physical devices (device drivers) • controllers
Selecting test scenarios • Execution path criteria (control) • Statement coverage • Branching coverage • Data flow • Initialize each data structure • Use each data structure • Operational profile • Statistical sampling….
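To make statement coverage vs. branch coverage concrete, here is a small Python sketch (my own example, not from the slides; the function and values are hypothetical):

    # Illustrative sketch: statement coverage vs. branch coverage.
    def clamp_speed(s, limit):
        if s > limit:        # branch: taken / not taken
            s = limit
        return s

    # A single test, clamp_speed(120, 100), executes every statement
    # (statement coverage) but exercises only the "taken" branch.
    # Branch coverage also requires a case where the condition is false:
    assert clamp_speed(120, 100) == 100   # branch taken
    assert clamp_speed(80, 100) == 80     # branch not taken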
What is a bug? • Error: mistake made in translation or interpretation ( many taxonomies exist to describe errors) • Fault: manifestation of the error in implementation (very nebulous) • Failure: observable deviation in behavior of the system
Example • Requirement: “print the speed, defined as distance divided by time” • Code: s = d/t; print s
Example • Error: I forgot to account for t = 0 • Fault: omission of code to catch t = 0 • Failure: exception is thrown
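To make the error/fault/failure distinction concrete, here is a minimal Python sketch of the slide's fragment, with a hypothetical guard added (my assumption about the intended fix, not code from the slides):

    # The faulty version: the error (forgetting t = 0) becomes a fault
    # (missing guard), which surfaces as a failure (ZeroDivisionError)
    # only when an input with t = 0 is actually executed.
    def speed_faulty(d, t):
        return d / t                     # fault: no guard for t == 0

    def speed_fixed(d, t):
        if t == 0:
            raise ValueError("time must be non-zero")   # handle t = 0 explicitly
        return d / t

    print(speed_fixed(100, 10))          # 10.0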
Severity taxonomy • Mild - trivial • Annoying - minor • Serious - major • Catastrophic - Critical • Infectious - run for the hills What is your taxonomy ? IEEE 1044-1993
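As an illustration only (my own sketch; the names mirror the slide above, not the IEEE 1044-1993 wording), a project might encode its severity taxonomy directly in its tooling:

    # Illustrative severity taxonomy, mirroring the slide above.
    from enum import Enum

    class Severity(Enum):
        MILD = 1          # trivial
        ANNOYING = 2      # minor
        SERIOUS = 3       # major
        CATASTROPHIC = 4  # critical
        INFECTIOUS = 5    # "run for the hills"

    print(Severity.SERIOUS.name, Severity.SERIOUS.value)   # SERIOUS 3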
Life cycle [Figure: development stages Requirements → Design → Code → Testing, each introducing errors, paired with a repair flow Classify error → Isolate error → Resolve error] The testing and repair process can be just as error prone as the development process (more so??). Errors can be introduced at each of these stages.
OK, so let’s just design our systems with “testability” in mind…..
Testability • How easily a computer program can be tested (Bach) • We can relate this to “design for testability” techniques applied in hardware systems
JTAG [Figure: a standard integrated circuit with boundary scan cells and a boundary scan path around the core IC logic and I/O pads; data in/out pass through the cells, and a test access port controller drives Test data in (TDI), Test data out (TDO), Test mode select (TMS), and Test clock (TCK)]
Operability • “The better it works, the more efficiently it can be tested” • System has few bugs (bugs add analysis and reporting overhead) • No bugs block execution of tests • Product evolves in functional stages (simultaneous development and testing)
Observability • “What you see is what you get” • Distinct output is generated for each input • System states and variables are visible and queryable during execution • Past system states and variables are retained and retrievable (transaction logs) • All factors affecting output are visible
Observability • Incorrect output is easily identified • Internal errors are automatically detected through self-testing mechanisms • Internal errors are automatically reported • Source code is accessible
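As a hedged sketch of these ideas (my own example, not from the slides), observability improves when each input produces distinct, logged output, past states are retained, and internal errors are detected and reported:

    # Illustrative sketch: observability via logging, retained history,
    # and explicit internal error reporting. Names are hypothetical.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("speed")
    history = []                        # past states retained ("transaction log")

    def compute_speed(d, t):
        if t == 0:
            log.error("internal error: t=0 (d=%s)", d)   # error detected and reported
            raise ValueError("time must be non-zero")
        result = d / t
        history.append((d, t, result))                   # past states queryable
        log.info("input d=%s t=%s -> output %s", d, t, result)  # distinct output per input
        return result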
Visibility Spectrum [Figure: a spectrum of visibility spanning end customer visibility, factory visibility, GPP visibility, and DSP visibility]
Controllability • “The better we can control the software, the more the testing can be automated and optimized” • All possible outputs can be generated through some combination of input • All code is executable through some combination of input
Controllability • SW and HW states and variables can be controlled directly by the test engineer • Input and output formats are consistent and structured
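As a small sketch (my own assumptions, not from the slides), controllability often comes from making every factor that affects output an explicit, injectable input, so tests can set it directly and be automated:

    # Illustrative sketch: controllability via injectable randomness.
    # The random source is a parameter, so a test controls it directly
    # instead of depending on hidden global state.
    import random

    def pick_retry_delay(attempt, rng=None):
        rng = rng or random.Random()            # production: unseeded
        return min(2 ** attempt, 60) + rng.uniform(0, 1)

    # In a test, every factor affecting the output is controlled:
    assert 4 <= pick_retry_delay(2, rng=random.Random(42)) <= 5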
Decomposability • “By controlling the scope of testing, we can more quickly isolate problems and perform smarter testing” • The software system is built from independent modules • Software modules can be tested independently
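For example (a sketch under my own assumptions), a decomposable module depends only on an abstract input, so a stub can replace the real data source and the module can be tested in isolation:

    # Illustrative sketch: decomposability. average_speed depends only on an
    # iterable of readings, not on the real sensor/file module that produces them.
    def average_speed(readings):
        pairs = [(d, t) for d, t in readings if t != 0]
        return sum(d / t for d, t in pairs) / len(pairs) if pairs else 0.0

    # A stub replaces the real sensor module during testing:
    stub_readings = [(100, 10), (200, 20)]
    assert average_speed(stub_readings) == 10.0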
Simplicity • “The less there is to test, the more quickly we can test it” • Functional simplicity (feature set is minimum necessary to meet requirements) • Structural simplicity (architecture is modularized) • Code simplicity (coding standards)
Stability • “The fewer the changes, the fewer the disruptions to testing” • Changes to the software are infrequent, controlled, and do not invalidate existing tests • Software recovers well from failures
Understandability • “The more information we have, the smarter we will test” • Design is well understood • Dependencies between external, internal, and shared components are well understood • Technical documentation is accessible, well organized, specific and detailed, and accurate
“Bugs lurk in corners and congregate at boundaries” Boris Beizer
Types of errors • What is a Testing error? • Claiming behavior is erroneous when it is in fact correct • ‘fixing’ this type of error actually breaks the product
Errors in classification • What is a Classification error ? • Classifying the error into the wrong category • Why is this bad ? • This puts you on the wrong path for a solution
Example Bug Report • “Screen locks up for 10 seconds after ‘submit’ button is pressed” • Classification 1: Usability error • Solution may be to catch user events and present an hour-glass icon • Classification 2: Performance error • Solution may be a modification to a sort algorithm (or vice versa)
Isolation error • Incorrectly isolating the erroneous module • Example: consider a client-server architecture. An improperly formed client request results in an improperly formed server response • Isolation incorrectly determined that the server was at fault, so the server was changed • This resulted in regression failures for other clients
Resolve errors • Modifications to remediate the failure are themselves erroneous • Example: Fixing one fault may introduce another
What is the ideal test case? • Run one test whose output is "Modify line n of module i." • Run one test whose output is "Input Vector v produces the wrong output" • Run one test whose output is "The program has a bug" (Useless, we know this)
More realistic test case • One input vector and expected output vector • A collection of these makes up a Test Suite • Typical (naïve) Test Case • Type or select a few inputs and observe output • Inputs not selected systematically • Outputs not predicted in advance
Test case definition • A test case consists of: • an input vector • a set of environmental conditions • an expected output • A test suite is a set of test cases chosen to meet some criteria (e.g. regression) • A test set is any set of test cases
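As a closing sketch (my own illustration of the definition above; the names are hypothetical), a test case can be represented literally as an input vector, environmental conditions, and an expected output, and a suite is a set of such cases chosen to meet a criterion:

    # Illustrative sketch of the test case definition above.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        inputs: tuple                               # input vector
        env: dict = field(default_factory=dict)     # environmental conditions
        expected: object = None                     # expected output

    # A suite chosen to meet a criterion: exercise the t == 0 boundary
    # of the earlier speed example.
    suite = [
        TestCase(inputs=(100, 10), expected=10.0),
        TestCase(inputs=(100, 0),  expected=ValueError),
    ]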