Chapter 2 Introduction and Examples
Universe of Discourse—Program Behaviors
[Figure: the universe of program behaviors, with a "correct" region inside it]
About Correctness...
• Impossible to demonstrate: a term from "classical" computer science.
• "Proofs" derived from the code are not derived from the specification, so they can only prove that the code does what it does!
• Better viewpoint: a relative term—program P is correct with respect to specification S.
• Bottom line: do the specification and the program meet the customer/user's expectations?
Testing and Program Behaviors
• Spec-based testing
• Code-based testing
Synonyms—Metaphors Gone Wild
• Specification-based testing
• Functional testing
• Black box testing
Content of a Test Case
• Test case identifier (usually a short name for test management purposes)
• Name
• Purpose (e.g., a business rule)
• Pre-conditions (if any)
• Inputs
• Expected outputs
• Observed outputs
• Pass/fail?
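As a minimal sketch (the field names simply mirror the list above; this is not a standard format), a test case could be recorded in Python like this:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str                  # short name for test management
        name: str
        purpose: str                     # e.g. the business rule being checked
        preconditions: list = field(default_factory=list)
        inputs: tuple = ()
        expected_output: object = None
        observed_output: object = None   # filled in when the test is run

        def passed(self) -> bool:
            # Pass/fail is decided by comparing expected and observed outputs.
            return self.observed_output == self.expected_output

    tc = TestCase("TC-1", "add small ints", "addition", inputs=(3, 2), expected_output=5)
    tc.observed_output = 5
    print(tc.passed())                   # True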
Testing as an Experiment
With respect to the Error, Fault, Failure, Incident framework, a test case is an experiment that is designed to anticipate an error, that is realized as a fault, that causes a failure, and that is recognized by comparing expected and observed outputs.
About Pseudo-code
• Language neutral
• Supports both procedural and object-oriented code
• Easy to "translate" into your favorite programming language
Testing Goal
The goal of testing is to build knowledge about a product by uncovering errors made when producing it, and to build confidence in the software by testing, fixing faults, and eventually finding fewer of them. If we could test exhaustively (on every input in the input domain) without finding any faults, we would say the product is correct. This is not practically possible for anything more than trivial programs.
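To get a feel for why, here is a minimal Python sketch (my_max and its tiny domain are assumptions for illustration): exhaustive testing works only because the domain is restricted to a handful of values.

    def my_max(x, y):                    # program under test (hypothetical)
        return x if x >= y else y

    # Exhaustive testing over a deliberately tiny input domain:
    domain = range(-3, 4)
    failures = [(x, y) for x in domain for y in domain
                if my_max(x, y) != max(x, y)]   # built-in max acts as the oracle
    print("faults found:", failures)            # [] => correct on this domain

For two 32-bit integer inputs the real domain already has 2**64 elements, so this strategy collapses immediately for realistic programs.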
Testing versus Verification
Goal of verification: "show there is no error for any input"; "prove correctness."
Goal of testing: "uncover errors"; "build confidence in the software."
https://www.youtube.com/watch?v=gp_D8r-2hwk (Age of Conan © 2011)
Errors and Faults
An error (mistake) in thought or action may lead to a fault (defect, bug), which may lead to an observed behaviour that is not correct according to some specification. -Mathur
Aim of Writing Tests
We want to write tests that execute the program in ways that reveal bugs: "execute code with a fault in such a way that the fault infects the state and eventually propagates to something we can observe."
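As a concrete (hypothetical) illustration of the whole chain: the programmer's error below produced a fault in the code, but only some inputs infect the state and propagate to an observable failure.

    def add(x, y):
        return x * y   # fault: '*' typed instead of '+' (the error was a coding slip)

    print(add(2, 2))   # 4: the fault executes, but 2*2 == 2+2, so nothing observable
    print(add(3, 2))   # 6: the state is infected; expected 5, so the failure is observed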
Test Case
• System under test: f(x, y) = add(x, y)
• Input: x = 3, y = 2
• Expected output: f(3, 2) = 5
• Observed (actual) output: f(3, 2) = 3.0
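Run as an executable check, the slide's example looks like this minimal sketch (the seeded fault in add is an assumption, chosen to reproduce the observed 3.0):

    def add(x, y):                 # system under test, seeded with a fault
        return float(x)            # ignores y entirely

    def run_test(sut, inputs, expected):
        observed = sut(*inputs)
        verdict = "pass" if observed == expected else "fail"
        print(f"add{inputs} = {observed}, expected {expected}: {verdict}")

    run_test(add, (3, 2), 5)       # add(3, 2) = 3.0, expected 5: fail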
Definitions
Input domain: "The set of all possible input to a program P is known as the input domain, or input space, of P." -Mathur, p 12
Reliability (note that this is just one definition of many): "The reliability of a program P is the probability of its successful execution on a randomly selected element from its input domain." -Mathur, p 16
Correctness: "A program is considered correct if it behaves as expected on each element of its input domain." -Mathur, p 13
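Mathur's reliability definition suggests a direct, if naive, estimator: sample elements of the input domain at random and measure the fraction of successful executions. A sketch, assuming we have a trusted oracle to decide success:

    import random

    def estimate_reliability(program, oracle, domain, trials=10_000):
        successes = sum(program(x) == oracle(x)
                        for x in (random.choice(domain) for _ in range(trials)))
        return successes / trials

    program = lambda x: x * x if x >= 0 else x * x + 1   # hypothetical fault for x < 0
    oracle  = lambda x: x * x                            # the specification
    print(estimate_reliability(program, oracle, range(-100, 101)))   # roughly 0.5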
Testing Types
• Black box testing
• White box testing
• Model-based testing
• User interface testing
• Smoke testing
• Random testing
Black Box Testing
Examine functionality from the outside, without knowing anything about the internals. Tests are generated from the specification.
White (or Glass) Box Testing
Testing with knowledge of the code's internal workings and behaviour, e.g. coverage. Tests are generated from the source code.
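The difference is where the tests come from. A sketch for one and the same function (the function and its "specification" are assumptions for illustration):

    def classify(age):                  # system under test (hypothetical)
        if age < 13:
            return "child"
        elif age < 20:
            return "teen"
        return "adult"

    # Black box: derived from the specification ("ages 13-19 are teens");
    # the internals are never consulted.
    assert classify(13) == "teen" and classify(19) == "teen"

    # White box: derived from the source code, aiming to cover every branch.
    assert classify(5) == "child"       # first branch
    assert classify(16) == "teen"       # second branch
    assert classify(42) == "adult"      # fall-through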
Model-Based Testing
[Figure: "Mbt-overview". Licensed under Public Domain via Wikipedia - https://en.wikipedia.org/wiki/File:Mbt-overview.png#/media/File:Mbt-overview.png]
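The idea in miniature: tests are generated from a model of the required behaviour and run against the implementation. A sketch, assuming a two-state turnstile as the model:

    # Model: expected transitions of a turnstile, (state, event) -> next state.
    MODEL = {("locked", "coin"): "unlocked", ("locked", "push"): "locked",
             ("unlocked", "push"): "locked", ("unlocked", "coin"): "unlocked"}

    class Turnstile:                     # implementation under test (hypothetical)
        def __init__(self):
            self.state = "locked"
        def event(self, e):
            self.state = "unlocked" if e == "coin" else "locked"

    # Walk the model with an event sequence; compare states after each step.
    sut, state = Turnstile(), "locked"
    for event in ["coin", "push", "push", "coin", "coin"]:
        state = MODEL[(state, event)]
        sut.event(event)
        assert sut.state == state, f"implementation diverged after '{event}'"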
Smoke and Random Testing
Built on the observation that a system may crash when in a faulty state, and may even be designed to crash when in a faulty state, e.g. by adding in-code asserts.
Smoke Testing
Run as large a part of the system as possible and look for "smoke":
• Exceptions
• Error messages
• Log messages
• Asserts
• Output: -1 ...
• No output
• 100% CPU, no stop
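A sketch of the combination (the function and its invariant are assumptions): feed random inputs and let an in-code assert produce the "smoke" when the state is faulty.

    import random

    def normalize(values):                      # hypothetical code under test
        total = sum(values)
        result = [v / total for v in values]    # crashes when total == 0
        assert abs(sum(result) - 1.0) < 1e-9    # in-code assert on an invariant
        return result

    for _ in range(1000):                       # random testing: look for smoke
        data = [random.randint(-5, 5) for _ in range(4)]
        try:
            normalize(data)
        except (AssertionError, ZeroDivisionError) as smoke:
            print("smoke on input", data, ":", repr(smoke))
            break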
Static and Dynamic Qualities (Mathur, p 9)
Static (offline):
• Structured, maintainable code
• Testable code
• Compiles
• Correct and complete documentation
• Code review
Dynamic (when the program is running):
• Reliability
• Completeness (availability of features)
• Consistency (e.g. same design)
• Usability
• Performance
Levels of Testing Activities (Mathur)
• Unit testing, when coding
• Integration testing, during integration
• System testing, during system integration
• Regression testing, after change
• Beta testing, pre-release delivery
• Acceptance testing, during delivery
Tasks
• Open Lecture by James Bach (Exploratory): https://www.youtube.com/watch?v=ILkT_HV9DVU#t=14
• Agile Testing, Google Tech Talks (2007): https://www.youtube.com/watch?v=bqrOnIECCSg