Qualitätssicherung von Software (SWQS) Prof. Dr. Holger Schlingloff Humboldt-Universität zu Berlin and Fraunhofer FOKUS 14.5.2013: Model-Based Testing (double lecture)
Review Questions • Approaches to integration testing? • What is a test oracle? • Goals of system testing? • Relevance of acceptance testing?
Where do we stand? Chapter 1: Introduction, terminology, software quality criteria Chapter 2: Software Testing 2.1 Testing in the software life cycle 2.2 Functional tests, module tests, test case selection 2.3 Structural tests, integration tests 2.4 Model-based tests
Model-Based Testing • Slides from various international summer schools • Book chapter in preparation
Specification-based Testing • Constructing the test suite from the specification • as opposed to constructing it from the implementation (= code-based testing) • code-based testing cannot detect missing requirements • specification-based testing cannot detect additional (unspecified) features (Figure: Venn diagram relating specified, implemented, and tested behaviour)
The Validation Triangle (Figure: the specification is refined into the system by system development and into the test suite by test development/test generation; test execution relates system and test suite)
Model-based Design and Model-based Testing • Often: same syntax, different pragmatics • e.g. test cases can be formulated in Java • e.g. system spec can be formulated with LTL (Figure: requirements lead to a system spec and a test spec, which are refined into a system model and a test model and finally into the system implementation and the test cases)
An embedded-systems example A video camcorder • the owner's manual is almost incomprehensible • can be found on the internet • typical for such devices • Multifunctional on-off switch: • up: off • down: cycles through "tape", "memory" and "play/edit" mode • Intuitively, tape mode is for video, memory mode for pictures and play mode for viewing recorded material
Transition system (Figure: transition system with states off, tape, memory, play and transitions labelled up and dn) • How can we formally describe the behaviour of this switch? (The natural-language description is ambiguous: What does "cycle" mean? What if I push dn-dn-up-dn?) • Modelling by finite transition system: • States: {off, tape, memory, play} • Transition labels (events): {up, dn}
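To make the ambiguity tangible, the switch can be written down as a plain transition table and the questionable event sequence simulated. The following Java sketch assumes one possible reading of the manual (dn cycles tape → memory → play → tape, up always returns to off); the class name and the exact transition relation are illustrative choices, not part of the lecture material.

```java
import java.util.*;

/** Minimal sketch of the camcorder switch as a labelled transition system.
    The transition relation is one possible reading of the ambiguous manual. */
public class SwitchLts {
    // transitions.get(state).get(event) = successor state
    static final Map<String, Map<String, String>> transitions = Map.of(
        "off",    Map.of("dn", "tape"),
        "tape",   Map.of("dn", "memory", "up", "off"),
        "memory", Map.of("dn", "play",   "up", "off"),
        "play",   Map.of("dn", "tape",   "up", "off"));

    /** Runs an event sequence from the initial state and prints each step. */
    public static void main(String[] args) {
        String state = "off";
        for (String event : List.of("dn", "dn", "up", "dn")) {
            // undefined events are ignored (self-loop) in this reading
            String next = transitions.get(state).getOrDefault(event, state);
            System.out.println(state + " --" + event + "--> " + next);
            state = next;
        }
    }
}
```

Running the sketch on dn-dn-up-dn makes explicit which interpretation of "cycle" the model commits to.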
(Figure: hierarchical statechart of the camera: a switch region with an off state and an on state containing the substates tape, memory and play; events include up, dn, but_hi, but_lo, pwr_fail and pwr_res) • Every model abstracts from details (e.g., there is a little green button within the switch which arrests it in the "off" position) • A model is a means of communication between humans (designers, users, ...) • Structuring the model as a parallel hierarchical transition system gives a statechart / state machine diagram
UML State Machines • UML state machines are basically (extended) LTSs with parallelism and hierarchy • Each state machine consists of at least one region that contains states and transitions
UML State Machines • UML state machine: set of regions • Region: contains vertices and transitions • Vertex: state, pseudostate, connection point • State: simple or composite (containing regions) • Pseudostate: initial state, fork state • Transition: connects source and target vertex • Trigger: event (message reception, operation execution) • Guard: OCL condition • Effect: assignment, triggering an event, condition
UML State Machines – States • A state models a situation during which (usually implicit) invariant conditions hold • e.g. waiting for an event to occur • e.g. performing some behavior • Associated with each state may be • entry, do and exit actions • constraints (=state invariants) • Pseudostates • initial, history, fork, join, junction, choice, entry, exit, terminate, (final)
UML State Machines – Transitions • A transition is a directed relationship between a source vertex and a target vertex • Labels consist of Trigger [Guard] / Action where • trigger is a transition label (i/o event) • guard is a logical formula over internal variables • action is an update of the variables • "Run-to-completion" semantics • if an action raises an event, that event is processed before the next external trigger is taken into consideration • an "event pool" is maintained during execution
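The run-to-completion rule is easiest to see in code. The following Java sketch is a minimal, hand-rolled interpreter, not the UML metamodel; all state, trigger and variable names are invented for illustration. An event raised by an effect is drained from the pool before the dispatcher accepts the next external trigger.

```java
import java.util.*;
import java.util.function.*;

/** Sketch of run-to-completion processing with an event pool. */
public class RunToCompletion {
    record Transition(String source, String trigger, Predicate<Map<String, Integer>> guard,
                      BiConsumer<Map<String, Integer>, Deque<String>> effect, String target) {}

    static String state = "Idle";
    static final Map<String, Integer> vars = new HashMap<>(Map.of("count", 0));
    static final Deque<String> pool = new ArrayDeque<>();      // internal event pool

    static final List<Transition> transitions = List.of(
        new Transition("Idle", "start", v -> true,
                       (v, p) -> p.add("init"), "Starting"),    // effect raises an internal event
        new Transition("Starting", "init", v -> v.get("count") == 0,
                       (v, p) -> v.put("count", 1), "Running"));

    static void dispatch(String event) {
        for (Transition t : transitions) {
            if (t.source().equals(state) && t.trigger().equals(event) && t.guard().test(vars)) {
                t.effect().accept(vars, pool);
                state = t.target();
                break;
            }
        }
        // run to completion: drain internally raised events before returning
        while (!pool.isEmpty()) dispatch(pool.poll());
    }

    public static void main(String[] args) {
        dispatch("start");                       // one external trigger...
        System.out.println(state + " " + vars);  // ...ends in Running {count=1}
    }
}
```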
System models vs. Test models • Such models can help in the development of complex systems • The more concrete the formalism, the closer it is to an implementation • executable code may be generated from state diagrams • We might add further information such as timing, communication, or variables. • A test model, as opposed to an implementation model, describes properties of the targeted system • not aiming at a complete description of the system • not aiming at the generation of executable code
The State of an Object • Technical systems convert or relocate physical objects (matter and/or energy) • Physical objects are characterized by their state • State = observable appearance of an object in space and time • "a complete description of a system in terms of parameters such as positions and momentums at a particular moment in time" (wiki) • shape, size, position, movement, temperature, pressure, voltage, … • Observation of the physical state by sensors • camera, folding rule, light sensor, tachometer, thermometer, … • Modification of the physical state by actuators • motor, valve, relay, transducer, heater, …
Technical Systems and Processes • Technical system: performs a technical process • Technical process: reshaping or transporting physical objects • Description of states by state variables • formally, a state is a mapping of variables to values • Description of processes by state changes • discrete state changes are called events • continuously changing state constituents are sometimes called signals
Example: A Kitchen Toaster • A toaster • what is the technical process? • what are the states, events and signals of the (technical) process? • what are the boundaries of the system? • which information processing is to be done? • what are the interfaces between technical system and information processing component?
Modeling the Toaster • User Interfaces: turning knob, side lever, stop button • Internals: heating element, retainer latch • Extra: defrost button • First approach: timing is neglected (timer event) • Advanced approach: timing depends on various parameters
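As a first, timing-free approximation, the toaster can be sketched as a small state machine. The states and events below are assumptions made purely for illustration; they are not the model developed in the lecture.

```java
import java.util.*;

/** Hypothetical first toaster model with timing abstracted into a timer event. */
public class ToasterModel {
    enum State { IDLE, TOASTING, DEFROSTING }

    // transition table: (state, event) -> next state (illustrative only)
    static final Map<State, Map<String, State>> t = Map.of(
        State.IDLE,       Map.of("lever_down",  State.TOASTING,
                                 "defrost_lever_down", State.DEFROSTING),
        State.TOASTING,   Map.of("timer_expired", State.IDLE,
                                 "stop_pressed",  State.IDLE),
        State.DEFROSTING, Map.of("defrost_done",  State.TOASTING,
                                 "stop_pressed",  State.IDLE));

    public static void main(String[] args) {
        State s = State.IDLE;
        for (String e : List.of("lever_down", "stop_pressed")) {
            s = t.get(s).getOrDefault(e, s);   // ignore events without a transition
            System.out.println(e + " -> " + s);
        }
    }
}
```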
Test Generation • Graph traversal yields abstract test cases • Dijkstra shortest path algorithm • Depth-first and breadth-first search • Static analysis for input parameters • Category partition table • Classification tree method • Boundary value analysis • Abstract interpretation
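A graph traversal over such a model directly yields abstract test cases. The sketch below runs a breadth-first search over the illustrative switch model from earlier and returns the shortest trigger sequence reaching a given target state; class and method names are invented for this example.

```java
import java.util.*;

/** Sketch: BFS over the switch model to generate an abstract test case
    (shortest event sequence) that covers a given target state. */
public class AbstractTestGeneration {
    static final Map<String, Map<String, String>> lts = Map.of(
        "off",    Map.of("dn", "tape"),
        "tape",   Map.of("dn", "memory", "up", "off"),
        "memory", Map.of("dn", "play",   "up", "off"),
        "play",   Map.of("dn", "tape",   "up", "off"));

    /** Returns a shortest trigger sequence from 'init' that reaches 'goal'. */
    static List<String> pathTo(String init, String goal) {
        Map<String, List<String>> paths = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>(List.of(init));
        paths.put(init, List.of());
        while (!queue.isEmpty()) {
            String s = queue.poll();
            if (s.equals(goal)) return paths.get(s);
            for (var e : lts.getOrDefault(s, Map.of()).entrySet()) {
                if (!paths.containsKey(e.getValue())) {
                    List<String> p = new ArrayList<>(paths.get(s));
                    p.add(e.getKey());
                    paths.put(e.getValue(), p);
                    queue.add(e.getValue());
                }
            }
        }
        return null;   // goal unreachable
    }

    public static void main(String[] args) {
        System.out.println(pathTo("off", "play"));   // [dn, dn, dn]
    }
}
```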
Exercise • Construct some test cases for the toaster with variables!
Automated Test Generation Tools • More than a dozen commercial and experimental research tools available • Usually quite costly (>10K€ per license) • Mostly from UML State Machines • Comparison e.g. by mutation analysis
ParTeG – Partition Test Generator • Since we didn’t like the pricing, and wanted to experiment with different technologies, a Ph.D. student built his own… • http://parteg.sourceforge.net/ • UML class & state transition diagrams, connected by OCL • Plugin for Eclipse, supports XMI import / export
Test Generation from State Machines • Define a test case to be any execution path • How to generate such paths? How many paths to generate? When to stop testing? Coverage criteria give answers, e.g. • all-states • all-transitions • all-transition-pairs • all-n-paths • Test goal: one particular item to be covered
Goal-oriented search (Figure: example state machine with numbered states) • Forward search • depth-first • random • strategy? • breadth-first • Backward search • depth/breadth first • model checking
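Backward search starts at the test goal and repeatedly collects predecessor states until the initial state is reached. The Java sketch below applies this idea to the illustrative switch model; a real generator would of course work on guarded transitions rather than a plain transition table.

```java
import java.util.*;

/** Sketch of backward search: from the test goal, collect predecessor layers
    until the initial state appears (or no predecessors remain). */
public class BackwardSearch {
    static final Map<String, Map<String, String>> lts = Map.of(
        "off",    Map.of("dn", "tape"),
        "tape",   Map.of("dn", "memory", "up", "off"),
        "memory", Map.of("dn", "play",   "up", "off"),
        "play",   Map.of("dn", "tape",   "up", "off"));

    public static void main(String[] args) {
        String init = "off", goal = "play";
        Set<String> layer = new HashSet<>(Set.of(goal));
        int steps = 0;
        while (!layer.contains(init)) {
            Set<String> prev = new HashSet<>();
            for (var s : lts.entrySet())
                for (String succ : s.getValue().values())
                    if (layer.contains(succ)) prev.add(s.getKey());  // s has a successor in the layer
            if (prev.isEmpty()) { System.out.println("goal unreachable"); return; }
            layer = prev;
            steps++;
        }
        System.out.println("goal reachable from init in " + steps + " steps");
    }
}
```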
Tests for State Machines • For a state machine, a test case is just a finite sequence of external triggers and actions • A test goal is a particular entity of the state machine (region, pseudostate, transition, n-path, …); for each test and goal it is defined whether the test reaches this goal • The coverage of a test suite is the percentage of reached test goals • the coverage can either be measured during the execution of a test suite, or statically before execution
Coverage for State Machines • common coverage criteria for UML state machines • all-states • all-transitions • subsumes statement coverage • all-configurations: all combinations of parallel substates • n-transition-coverage means all reachable transition sequences of length n are covered (esp.: pairs) • all-loop-free-paths • all-n-loop-paths • decision coverage, condition coverage, MC/DC, ... • all-requirements
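Measuring coverage is straightforward once tests are replayed on the model. The sketch below computes all-states and all-transitions coverage of a small test suite against the illustrative switch model; the test suite itself is made up for this example.

```java
import java.util.*;

/** Sketch: measuring all-states and all-transitions coverage of a test suite
    (a list of event sequences) by replaying it on the model. */
public class CoverageMeasurement {
    static final Map<String, Map<String, String>> lts = Map.of(
        "off",    Map.of("dn", "tape"),
        "tape",   Map.of("dn", "memory", "up", "off"),
        "memory", Map.of("dn", "play",   "up", "off"),
        "play",   Map.of("dn", "tape",   "up", "off"));

    public static void main(String[] args) {
        List<List<String>> suite = List.of(List.of("dn", "up"), List.of("dn", "dn", "dn"));
        Set<String> visitedStates = new HashSet<>();
        Set<String> visitedTransitions = new HashSet<>();
        int allTransitions = lts.values().stream().mapToInt(Map::size).sum();

        for (List<String> test : suite) {
            String s = "off";
            visitedStates.add(s);
            for (String e : test) {
                String next = lts.get(s).get(e);   // assumes every test is executable on the model
                visitedTransitions.add(s + " --" + e + "--> " + next);
                visitedStates.add(next);
                s = next;
            }
        }
        System.out.printf("state coverage: %d/%d, transition coverage: %d/%d%n",
                visitedStates.size(), lts.size(), visitedTransitions.size(), allTransitions);
    }
}
```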
Transition Tour • Naito 81: For a given LTS S, a transition tour is a sequence which takes S from the initial state s0, traverses every transition at least once, and returns to s0. • (Figure: example transition tour) A transition tour detects output faults in the SUT http://www.protocol-engineering.tu-cottbus.de/vorlesung/ppts/Kapitel8_2.ppt
How to Construct a Transition Tour • Chinese postman algorithm (named after Mei Ko Kwan [1962]) • a postman must deliver letters along the roads of a district (polynomially solvable for directed or undirected graphs; NP-hard only for mixed graphs) • Euler's "Königsberger Brückenproblem" • an Eulerian circuit exists iff the graph is strongly connected and, for all states, in-degree = out-degree • In order to make the LTS Eulerian, we may have to traverse certain transitions several times • Hierholzer's algorithm • follow unused transitions (depth-first) until you close a cycle • splice in further cycles at the first state that still has unused outgoing transitions
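For an already Eulerian model, Hierholzer's algorithm fits in a few lines. The graph below is a made-up Eulerian example (the switch model would first have to be made Eulerian by duplicating transitions); the code sketches the stack-based formulation of the algorithm.

```java
import java.util.*;

/** Sketch of Hierholzer's algorithm computing a transition tour (Euler circuit)
    for a small, invented Eulerian LTS. */
public class TransitionTour {
    record Edge(String label, String target) {}

    public static void main(String[] args) {
        // adjacency lists; every state has in-degree == out-degree
        Map<String, Deque<Edge>> g = new HashMap<>();
        g.put("a", new ArrayDeque<>(List.of(new Edge("x", "b"), new Edge("w", "c"))));
        g.put("b", new ArrayDeque<>(List.of(new Edge("y", "c"))));
        g.put("c", new ArrayDeque<>(List.of(new Edge("z", "a"), new Edge("v", "a"))));

        Deque<String> stack = new ArrayDeque<>(List.of("a"));
        List<String> tour = new ArrayList<>();                  // visited states, built in reverse
        while (!stack.isEmpty()) {
            String s = stack.peek();
            if (!g.get(s).isEmpty()) stack.push(g.get(s).poll().target());  // follow an unused transition
            else tour.add(stack.pop());                         // no unused transitions: state is finished
        }
        Collections.reverse(tour);
        System.out.println(tour);   // e.g. [a, b, c, a, c, a] -- every transition exactly once
    }
}
```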
ParTeG's Algorithm • Ideas: • include the boundary values of linearly ordered variables as defined by the transition guard expressions • deduce the values of system attributes from input parameters according to preconditions • Coverage-criteria-oriented approach: • define test goals • search backwards • create and adapt abstract domains for input parameters
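The boundary-value idea can be illustrated on a single guard: for each comparison bound, the values just inside and just outside the guarded partition are chosen as test inputs. The guard in the following sketch is invented for illustration and is not taken from the lecture or from ParTeG itself.

```java
import java.util.*;
import java.util.function.IntPredicate;

/** Sketch of boundary-value selection for a guard over a linearly ordered input parameter. */
public class BoundaryValues {
    public static void main(String[] args) {
        IntPredicate guard = x -> x >= 5 && x < 20;   // hypothetical transition guard

        // candidate boundaries: for each comparison bound b, test b-1, b and b+1
        int[] bounds = {5, 20};
        List<Integer> testValues = new ArrayList<>();
        for (int b : bounds)
            for (int v : new int[]{b - 1, b, b + 1})
                testValues.add(v);

        for (int v : testValues)
            System.out.println("input " + v + " -> guard " + (guard.test(v) ? "satisfied" : "violated"));
    }
}
```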
ParTeG – Start from Test Goals • Test goal: • Coverage criterion applied to a concrete model • Example: one state for All-States • Generate abstract test case • Find a path • Generate concrete test case • Find concrete input values
ParTeG – Find Paths • Generation of test cases: • a path from the initial node to the test goal contains conditions (e.g. OCL) • due to these conditions, not every path found is feasible • consequence: include the conditions in the search algorithm • deal with the relations between the OCL conditions along the path
ParTeG – Interpret Logical Expressions • Generation of test cases: • classify all variables used in the expressions • which variables can change? • Algorithm, for each guard: • try to find postconditions that influence the result of the guard • combine guard and postcondition into a new condition • if there are changeable variables in the condition: continue the search • Basic idea: • transform conditions on system attributes into conditions on input parameters • use them as input partitions
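In the simplest case this transformation is just a composition: a guard over a system attribute is pulled back over the effect that computes the attribute from an input parameter, yielding a condition (and boundaries) on the input. The guard and effect in the sketch below are invented for illustration; ParTeG works on OCL expressions, not Java lambdas.

```java
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;

/** Sketch: turning a condition on a system attribute into a condition on an input parameter. */
public class GuardPullback {
    public static void main(String[] args) {
        IntPredicate guardOnAttribute = attr -> attr > 10;   // later guard: attr > 10
        IntUnaryOperator effect = param -> 2 * param;        // earlier effect: attr = 2 * param

        // condition on the input parameter = guard composed with the effect
        IntPredicate guardOnInput = param -> guardOnAttribute.test(effect.applyAsInt(param));

        for (int param : new int[]{4, 5, 6})                 // boundary region of 2*param > 10
            System.out.println("param " + param + " -> " + guardOnInput.test(param));
        // prints false, false, true: the input boundary lies between 5 and 6
    }
}
```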
Testing and Fault Models • Problem: What is a “good” test suite? • Much work has been done in trying to give a formal answer • Compare two LTSs A and B which are only “slightly” different: Test suite T is “good” if it can discover (the existence of) a difference • B is called a mutant of A, the difference is called a fault • Several methods for generation of test suites which are good in comparing LTSs exist • transition tour algorithm • W method • unique I/O method
Mutation Analysis • Testing to prove equivalence (or, in fact, to prove any preorder relation between models) has not been very successful (personal opinion). • Even a “complete” test suite may miss errors in the SUT • However, we can use these ideas to assess the effectiveness of test generation algorithms • How to convince your boss that you are using a “good” test case generator TeG? • Assume we are given a model M • Apply TeG to M and obtain test suite T(M) • Inject a fault into M to get a slightly different model M’ • if T(M) fails on M’ this is an indication that TeG is effective
In order to make this argument sound, we need to • repeat this process with several models M1,…,Mn • select mutation operators which model “real” faults, i.e. Mi’ could be a mistake made by an implementer • make sure that the effect of a mutation is not masked, i.e. it must be visible to the outside • Resulting statements are of the type: “For the models M1,…,Mn and mutation operators op1,…,opk, G1 can detect an average of 90% of all mutants, whereas G2 can detect 93%.” • Such statements have proven to be useful!
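The resulting procedure can be sketched directly on the switch model: each mutation operator yields a mutant, and a test suite kills the mutant if some test observes a different end state on mutant and original; the mutation score is the fraction of killed mutants. The mutation operators and the test suite below are invented for illustration.

```java
import java.util.*;
import java.util.function.UnaryOperator;

/** Sketch of mutation analysis: apply mutation operators to the model and
    count how many mutants the test suite distinguishes from the original. */
public class MutationAnalysis {
    static Map<String, Map<String, String>> original() {
        Map<String, Map<String, String>> m = new HashMap<>();
        m.put("off",    new HashMap<>(Map.of("dn", "tape")));
        m.put("tape",   new HashMap<>(Map.of("dn", "memory", "up", "off")));
        m.put("memory", new HashMap<>(Map.of("dn", "play",   "up", "off")));
        m.put("play",   new HashMap<>(Map.of("dn", "tape",   "up", "off")));
        return m;
    }

    static String run(Map<String, Map<String, String>> lts, List<String> test) {
        String s = "off";
        for (String e : test) s = lts.get(s).getOrDefault(e, s);
        return s;                                   // observable end state
    }

    public static void main(String[] args) {
        List<List<String>> suite = List.of(List.of("dn", "dn", "dn"), List.of("dn", "up"));
        List<UnaryOperator<Map<String, Map<String, String>>>> mutations = List.of(
            m -> { m.get("memory").put("dn", "tape"); return m; },   // wrong target state
            m -> { m.get("tape").remove("up");        return m; });  // missing transition

        long killed = mutations.stream().filter(op -> {
            Map<String, Map<String, String>> mutant = op.apply(original());
            return suite.stream().anyMatch(t -> !run(mutant, t).equals(run(original(), t)));
        }).count();

        System.out.println("mutation score: " + killed + "/" + mutations.size());
    }
}
```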