Advanced Software Engineering: Software Testing (COMP 3702)
Instructor: Anneliese Andrews
News & Project
• News
  • Updated course program
  • Reading instructions
  • The book, deadline 23/3
• Project
  • IMPORTANT to read the project description thoroughly
  • Schedule, deadlines, activities
  • Requirements (7-10 papers), project areas
  • Report, template, presentation
Lecture
• Chapter 4 (Lab 1)
  • Black-box testing techniques
• Chapter 12 (Lab 2)
  • Statistical testing
  • Usage modelling
  • Reliability
Why test techniques?
• Exhaustive testing (use of all possible inputs and conditions) is impractical
  • we must use a subset of all possible test cases
  • the subset must have a high probability of detecting faults
• We need processes that help us select test cases
  • so that different people have an equal probability of detecting faults
• Effective testing: detect more faults
  • focus attention on specific types of fault
  • know you are testing the right thing
• Efficient testing: detect faults with less effort
  • avoid duplication
  • systematic techniques are measurable
Dimensions of testing
• Testing combines techniques that focus on
  • Testers: who does the testing
  • Coverage: what gets tested
  • Potential problems: why you're testing (risks / quality)
  • Activities: how you test
  • Evaluation: how to tell whether a test passed or failed
• All testing should involve all five dimensions
• Testing standards (e.g. IEEE)
Black-box testing
Equivalence partitioning
[Figure: partitioning is based on input conditions such as mouse picks on a menu, output format requests, user queries, responses to prompts, numerical data, and command key input]
Equivalence partitioning
Input condition:
• is a range: one valid and two invalid classes are defined
• requires a specific value: one valid and two invalid classes are defined
• is a boolean: one valid and one invalid class are defined
Test Cases
• Which test cases have the best chance of uncovering faults?
  • values as near to the mid-point of the partition as possible, and
  • values at the boundaries of the partition
• The mid-point of a partition typically represents the "typical" values
• Boundary values represent the atypical or unusual values
• Equivalence partitions are usually identified from the specification and from experience
Equivalence Partitioning Example
• Consider a system specification which states that a program will accept between 4 and 10 input values (inclusive), where each input value must be a 5-digit integer greater than or equal to 10000
• What are the equivalence partitions?
Example Equivalence Partitions
[Figure: partitions on the number of input values: fewer than 4 (invalid), 4 to 10 (valid), more than 10 (invalid); and on each individual value: less than 10000 (invalid), 10000 to 99999 (valid), 100000 or more (invalid)]
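A minimal sketch of how these partitions translate into concrete test cases; the `accept` validator below is hypothetical (the slides only state the specification, not an implementation):

```python
# Hypothetical validator for the example specification: the program
# accepts 4 to 10 input values, each a 5-digit integer >= 10000.
def accept(values):
    if not 4 <= len(values) <= 10:        # partition on the number of inputs
        return False
    return all(10000 <= v <= 99999 for v in values)  # 5-digit integer range

# One representative test case per partition.
valid_mid = [50000] * 7    # valid: 7 values, typical mid-range value
too_few   = [50000] * 3    # invalid: fewer than 4 inputs
too_many  = [50000] * 11   # invalid: more than 10 inputs
too_small = [9999] * 5     # invalid: value below 10000 (only 4 digits)
too_large = [100000] * 5   # invalid: value above 99999 (6 digits)

assert accept(valid_mid)
assert not accept(too_few)
assert not accept(too_many)
assert not accept(too_small)
assert not accept(too_large)
```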
Boundary value analysis
[Figure: the same input conditions as above (mouse picks on a menu, output format requests, user queries, responses to prompts, numerical data, command key input), with boundary values drawn from both the input domain and the output domain]
Boundary value analysis
• Range a..b: test a, b, just above a, just below b
• Number of values: test max, min, just below min, just above max
• Output bounds should also be checked
• Boundaries of externally visible data structures (e.g. arrays) should be checked
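A small sketch of boundary-value generation following these rules (the helper names are illustrative, not from the slides):

```python
# Boundary test values for a range a..b, per the rule above:
# a, b, just above a, just below b.
def range_boundaries(a, b, step=1):
    return [a, a + step, b - step, b]

# For a count constraint (min..max number of values), also probe just
# outside the valid region: just below min and just above max.
def count_boundaries(min_n, max_n):
    return [min_n - 1, min_n, max_n, max_n + 1]

print(range_boundaries(10000, 99999))   # [10000, 10001, 99998, 99999]
print(count_boundaries(4, 10))          # [3, 4, 10, 11]
```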
Some other black-box techniques
• Risk-based testing, random testing
• Stress testing, performance testing
• Cause-and-effect graphing
• State-transition testing
Error guessing
• Also called exploratory testing, happy testing, ...
• Always worth including
• Can detect some failures that systematic techniques miss
• Consider
  • Past failures (fault models)
  • Intuition
  • Experience
  • Brainstorming ("What is the craziest thing we can do?")
  • Lists in the literature
Usability testing
• Characteristics: accessibility, responsiveness, efficiency, comprehensibility
• Environments: free-form tasks, procedure scripts, paper screens, mock-ups, field trial
Specification-based testing
• A formal method: test cases are derived from a (formal) specification (requirements or design)
• [Figure: specification → model (state chart) → test case generation → test execution]
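A small sketch of the pipeline's test case generation step for a state-chart model; the model and its events are invented for illustration, and each derived case covers one transition of the model:

```python
# Hypothetical state-chart model: state -> {event: next state}.
MODEL = {
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}

def transition_tests(model):
    """Derive one abstract test case per transition (transition coverage)."""
    return [{"from": s, "event": e, "expect": t}
            for s, arcs in model.items() for e, t in arcs.items()]

for tc in transition_tests(MODEL):
    print(tc)   # each case: put system in `from`, fire `event`, check `expect`
```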
Model-based Testing
[Figure: V-model with requirements, specification, top-level design, detailed design, and coding on the left; unit test, integration, and validation test phases on the right; a usage model derived from the requirements drives the validation phase]
Statistical testing / Usage-based testing
Usage specification models
Operational profiles
Statistical testing / Usage-based testing
[Figure: a random sample of test cases is drawn from the usage model and executed against the code]
Usage Modelling
• Each transition corresponds to an external event
• Probabilities are set according to the future use of the system
• Enables reliability prediction
[Figure: usage model of a clock application; from the Main Window: Invoke, Right-click (opens a Dialog Box), Move, Resize, Close Window / Terminate; the Dialog Box is left via CANCEL or OK with a valid hour, with a separate transition for clicking OK with a non-valid hour]
Markov model
• System states, seen as nodes
• Probabilities of transitions
Conditions for a Markov model:
• Probabilities are constants
• No memory of past states
[Figure: four nodes N1..N4 with transition probabilities Pij: P12, P21, P13, P31, P14, P41, P24, P34]
Model of a program
• The program is seen as a graph
• One entry node (invoke) and one exit node (terminate)
• Every transition from node Ni to node Nj has a probability Pij
• If there is no connection between Ni and Nj, then Pij = 0
[Figure: graph with input at N1, output at N4, and transition probabilities P12, P21, P13, P31, P14, P24, P34]
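A minimal sketch of this graph model in code; the transition probabilities are made-up values, not from the slides. The probability of a particular path through the program is the product of its transition probabilities:

```python
# Program graph as a map: node -> {successor: transition probability}.
# Probabilities are illustrative; absent entries mean Pij = 0.
P = {
    "N1": {"N2": 0.6, "N3": 0.3, "N4": 0.1},
    "N2": {"N1": 0.5, "N4": 0.5},
    "N3": {"N1": 0.4, "N4": 0.6},
    "N4": {},  # exit node (terminate)
}

def path_probability(path):
    """Probability of taking a specific path: the product of its Pij."""
    prob = 1.0
    for src, dst in zip(path, path[1:]):
        prob *= P[src].get(dst, 0.0)  # missing edge => Pij = 0
    return prob

print(path_probability(["N1", "N2", "N4"]))  # 0.6 * 0.5 = 0.3
```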
Clock Software
Input Domain – Subpopulations
• Human users: keystrokes, mouse clicks
• System clock: time/date input
• Combination usage: time/date changes from the OS while the clock is executing
• Create one Markov chain to model the input from the user
Operation modes of the clock
• Window = {main window, change window, info window}
• Setting = {analog, digital}
• Display = {all, clock only}
• Cursor = {time, date, none}
State of the system
• A state of the system under test is an element of the set S, where S is the cross product of the operational modes
• States of the clock:
  {main window, analog, all, none}
  {main window, analog, clock-only, none}
  {main window, digital, all, none}
  {main window, digital, clock-only, none}
  {change window, analog, all, time}
  {change window, analog, all, date}
  {change window, digital, all, time}
  {change window, digital, all, date}
  {info window, analog, all, none}
  {info window, digital, all, none}
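A small sketch of enumerating this state set as a constrained cross product of the operation modes; the validity predicate is inferred from the ten states listed above, not stated on the slides:

```python
from itertools import product

WINDOW  = ["main window", "change window", "info window"]
SETTING = ["analog", "digital"]
DISPLAY = ["all", "clock only"]
CURSOR  = ["time", "date", "none"]

def is_valid(window, setting, display, cursor):
    # Constraints inferred from the slide's state list: only the change
    # window has a cursor (on time or date); only the main window may
    # show "clock only"; the info window always shows everything.
    if window == "change window":
        return display == "all" and cursor in ("time", "date")
    if window == "info window":
        return display == "all" and cursor == "none"
    return cursor == "none"  # main window: any display, no cursor

states = [s for s in product(WINDOW, SETTING, DISPLAY, CURSOR) if is_valid(*s)]
print(len(states))  # 10, matching the slide
```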
Top Level Markov Chain
• The Window operational mode is chosen as the primary modeling mode
• Rules for Markov chains:
  • Each arc is assigned a probability between 0 and 1 inclusive
  • The sum of the exit arc probabilities from each state is exactly 1
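A minimal sketch of these rules and of drawing a random usage sequence from the chain; the states follow the Window mode above, but the probabilities are invented for illustration:

```python
import random

# Top-level chain over the Window mode; probabilities are illustrative.
CHAIN = {
    "invoke":        {"main window": 1.0},
    "main window":   {"change window": 0.3, "info window": 0.1,
                      "main window": 0.4, "terminate": 0.2},
    "change window": {"main window": 1.0},
    "info window":   {"main window": 1.0},
    "terminate":     {},
}

def check_rules(chain):
    """Each arc in [0, 1]; exit probabilities of each non-final state sum to 1."""
    for state, arcs in chain.items():
        assert all(0.0 <= p <= 1.0 for p in arcs.values()), state
        assert not arcs or abs(sum(arcs.values()) - 1.0) < 1e-9, state

def random_walk(chain, start="invoke", end="terminate"):
    """Draw one usage sequence from the chain: one statistical test case."""
    state, path = start, [start]
    while state != end:
        succs, probs = zip(*chain[state].items())
        state = random.choices(succs, weights=probs)[0]
        path.append(state)
    return path

check_rules(CHAIN)
print(random_walk(CHAIN))
```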
Top Level Model – Data Dictionary
Level 2 Markov Chain Submodel for the Main Window
Data Dictionary – Level 2
Level 2 Markov Chain Submodel for the Change Window
Data Dictionary
Software Reliability
• Techniques
  • Markov models
  • Reliability growth models
Dimensions of dependability
Costs of increasing dependability
[Figure: cost grows steeply as dependability increases from low through medium, high, and very high to ultra-high]
Availability and reliability
• Reliability: the probability of failure-free system operation over a specified time, in a given environment, for a given purpose
• Availability: the probability that a system, at a point in time, will be operational and able to deliver the requested services
• Both of these attributes can be expressed quantitatively
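As an illustration of "expressed quantitatively" (these standard formulations are not on the slide): with a constant failure rate λ, reliability over an interval of length t, and availability in terms of mean time to failure (MTTF) and mean time to repair (MTTR), are commonly written as:

```latex
R(t) = e^{-\lambda t}, \qquad
A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}
```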
Reliability terminology
Usage profiles / Reliability
• Removing X% of the faults in a system will not necessarily improve the reliability by X%!
• The reason: faults in rarely executed code contribute little to the failure rate that users actually observe, so the reliability gain depends on where the removed faults sit in the usage profile
Reliability achievement
• Fault avoidance
  • Minimise the possibility of mistakes
  • Trap mistakes
• Fault detection and removal
  • Increase the probability of detecting and correcting faults
• Fault tolerance
  • Run-time techniques
Reliability quantities
• Execution time: the CPU time actually spent by the computer in executing the software
• Calendar time: the time people normally experience, in terms of years, months, weeks, etc.
• Clock time: the elapsed time from start to end of computer execution in running the software
Reliability metrics
Nonhomogeneous Poisson Process (NHPP) Models
• N(t) follows a Poisson distribution. The probability that N(t) is a given integer n is:
  $P\{N(t) = n\} = \frac{[m(t)]^n}{n!}\, e^{-m(t)}, \quad n = 0, 1, 2, \ldots$
• $m(t)$ is called the mean value function; it describes the expected cumulative number of failures in $[0, t)$
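A tiny numeric sketch of this distribution (the mean value 10.0 is an arbitrary illustration, not from the slides):

```python
from math import exp, factorial

def nhpp_pmf(n, m_t):
    """P{N(t) = n} for an NHPP whose mean value function equals m_t at time t."""
    return (m_t ** n) / factorial(n) * exp(-m_t)

# With m(t) = 10 expected cumulative failures by time t:
print(nhpp_pmf(10, 10.0))  # probability of observing exactly 10 failures
```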
The Goel-Okumoto (GO) model
Assumptions:
• The cumulative number of failures detected at time t follows a Poisson distribution
• All failures are independent and have the same chance of being detected
• All detected faults are removed immediately and no new faults are introduced
• The failure process is modelled by an NHPP with mean value function $m(t)$ given by:
  $m(t) = a\,(1 - e^{-bt}), \quad a > 0,\; b > 0$
Goel-Okumoto
[Figure: the mean value function $m(t) = a(1 - e^{-bt})$ rises toward the asymptote a, while the intensity function $\lambda(t) = ab\,e^{-bt}$ decays exponentially in t]
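A short sketch of these two curves in code; the parameter values (a = 100 expected total failures, b = 0.05) are invented for illustration:

```python
from math import exp

a, b = 100.0, 0.05   # illustrative GO parameters; a = expected total failures

def m(t):
    """GO mean value function: expected cumulative failures by time t."""
    return a * (1.0 - exp(-b * t))

def intensity(t):
    """GO failure intensity: lambda(t) = m'(t) = a*b*exp(-b*t)."""
    return a * b * exp(-b * t)

for t in (0, 10, 50, 100):
    print(f"t={t:3}: m(t)={m(t):6.1f}, lambda(t)={intensity(t):5.2f}")
```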