Formal Methods of Systems Specification: Logical Specification of Hard- and Software
Prof. Dr. Holger Schlingloff
Institut für Informatik der Humboldt Universität and Fraunhofer Institut für Rechnerarchitektur und Softwaretechnik
Specification Based Testing
• Last week: assertion languages
  • Anna for Ada
  • OCL for UML
  • Java Modeling Language (JML) for Java
  • Spec# for C#
  • PSL for VHDL
  • ACSL for C (see Frama-C)
• No huge success (yet)
  • verification burden on the programmer
  • increases time & cost of programming
• Use of specifications for testing?
  • specifications are written by testers, for testing
  • the implementation (IUT) is a black box, only executable
Systems Development Process
• Test spec: "Spec# model program"
• Test model: "FSM model exploration"
• Test case: "scenario"
[Diagram of the development process: Requirements, then in parallel the development artefacts System Spec, System Model, System Impl. and the testing artefacts Test Spec, Test Model, Test Cases]
Test Specification
• A Spec# model program serves as
  • executable specification
    • for simulation/animation of the intended behaviour
  • test generator
    • test models are obtained from the model program by abstraction
  • test oracle
    • assertion of safety properties, pre-/postconditions
• Extra effort to derive, but it pays off!
Example: Calculator
[FSM diagram: states labelled with ¬R/R and ¬S/S, transitions SR(T), SR(), SS(T), SS()]
• [Action] marks an interface to the user or the SUT (system under test)
  • a "unit of behaviour" that may change state (the information of a system, i.e. the values of state variables)
• requires gives a declarative contract
  • invariants, quantifiers, …
  • the full Spec# language is available!
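To make these notions concrete, here is a minimal Spec# sketch of two calculator actions. [Action] and requires are used as described above; the state variables and method names are purely illustrative assumptions, not taken from the original example.

    class CalculatorModel
    {
        // model state variables (illustrative)
        static bool on = false;
        static int display = 0;

        [Action]
        static void SwitchOn()
            requires !on;      // enabled only while the calculator is off
        {
            on = true;
            display = 0;       // switching on clears the display
        }

        [Action]
        static void Add(int x)
            requires on;       // enabled only while the calculator is on
        {
            display += x;      // one atomic unit of behaviour
        }
    }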
Spec Explorer Tool
http://staff.washington.edu/jon/icfem/specs-icfem.html
Specification based testing:
• Development of Spec# model programs
  • "literate programming" editor, debugger
• Validation of Spec# model programs
  • simulation (Main execution)
  • exploration (FSM generation)
  • visualization
  • safety (reachability) and liveness analysis
  • static analyses (BoogiePL)
• Test case generation
  • scenarios from explored FSMs
  • complete coverage of the spec or stochastic testing
• Test execution
  • offline test case generation for conformance testing
  • on-the-fly testing with the spec as test oracle
Spec Explorer Artefacts
[Diagram, reconstructed:]
1) Model program (behavioural specification): explored by Spec Explorer; provides the expected results for 7)
2) Explored scenarios (possible runs as finite state machine): visualized by 3); used to generate 4)
3) Graph views
4) Test suites: run by the tool
5) User-written wrapper (API driver): invokes 6)
6) Implementation under test: provides the actual results for 7)
7) Log of conformance-checking test run
…\Spec Explorer\doc\SpecExplorerReference.doc
Test Spec vs. System Spec
• System spec: used to derive the implementation
  • transformational development
  • correctness of derivation steps
  • assertion checks can be switched on or off
• Test spec: aimed at testing and validating the SUT
  • investigate properties of model programs w.r.t. the SUT
  • generate and execute test suites
  • assertions serve as test oracle
• Different intentions, different levels of abstraction!
Modeling: Abstraction
• Minimum code needed to generate the scenarios of interest; no need to be comprehensive
• Adequate level of abstraction (state variables, actions in the SUT to test):
  • global point of view: every agent can see every other agent's state
  • all state information (files, messages, …) is kept in model variables
  • each action (at the chosen level of abstraction) is coded as a method in the model (need not correspond 1:1 to methods in the IUT)
  • the model program has a single thread; interleaving actions can represent concurrency
  • actions are atomic, no interleaving within action bodies
  • multiple assignments within an action can represent parallelism (see the sketch below)
• The model introduces a new state space, which must be reconciled with the SUT
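For illustration, a small Spec# fragment (all names hypothetical) of a single atomic action whose body updates two state variables together, modelling a parallel update with no observable intermediate state:

    using System.Collections.Generic;

    class ChannelModel
    {
        static List<int> pending = new List<int>();   // sender's view
        static List<int> received = new List<int>();  // receiver's view

        [Action]
        static void Deliver(int msg)
            requires pending.Contains(msg);   // enabled only for pending messages
        {
            // the action is atomic, so both updates happen "in parallel":
            // no other action can observe the state in between
            pending.Remove(msg);
            received.Add(msg);
        }
    }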
Modeling: Coding
• To code each action, ask:
  • When is it enabled? (requires ...)
    • multiple actions enabled in the same state model nondeterminism
    • allows for interleaving concurrency
  • What (if anything) does it return? (return ...)
  • What is the next state? Is it different? (assignments, ... = ...)
• Distinguish top-level [Action] methods from helper methods
• Possibly write Main method(s) to simulate scenario(s)
Example(s)
• Stack (see the sketch below)
• Counting Problem (Prisoner's Dilemma)
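A minimal Spec# model program for the Stack example, answering the coding questions from the previous slide; only [Action] and requires come from the slides, the class and member names are assumptions:

    using System.Collections.Generic;

    class StackModel
    {
        static List<int> contents = new List<int>();   // model state variable

        [Action]
        static void Push(int x)     // no requires clause: always enabled
        {
            contents.Add(x);        // next state: one more element
        }

        [Action]
        static int Pop()
            requires contents.Count > 0;   // enabled only on a non-empty stack
        {
            int top = contents[contents.Count - 1];
            contents.RemoveAt(contents.Count - 1);
            return top;             // return value is checked by the test oracle
        }
    }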
Test design
• Generating test suites from the Spec# model
• Rationale: find equivalence classes of behaviour; no need for exhaustive testing
• Different fault types call for different kinds of tests:
  • wrong logic / wrong expression
    • complete but minimal coverage over small domains
  • problems when scaling up data structures (like hash table resize, editor buffer gap)
    • vary a few properties over large ranges
  • unreliable infrastructure, hidden state leaking out
    • long test cases that revisit the same (model) states
Exploration
• Exploration generates a finite state machine (FSM) from the model program, for
  • validation (visualization, checking safety and liveness), and
  • offline test case generation
• Exploration executes the model program in a special environment, building the FSM as it goes:
  • each invocation (method call including arguments) is a transition in the FSM
  • all enabled invocations are executed from each state (backtracking, in effect)
  • each method is executed with all combinations of arguments from a given finite domain (internal nondeterminism can be simulated with additional arguments)
• The generated FSM is an underapproximation of the model program
  • it can be nondeterministic
Exploration algorithm
• Exploration treats the model program state as a first-class value (see the sketch below):
  • the Spec# compiler generates code with storage management hooks
  • the explorer creates a set of hyperstates as an approximation of sets of states
  • it executes the actions of the given spec on concrete states and builds up the hyperstates
  • the end state of a new transition is added to the frontier if the transition is relevant (i.e., an improvement towards the goal)
• Abstraction vs. exploration:
  • abstraction (e.g., hiding of variables) yields an over-approximation (more transitions than "really" possible)
  • model exploration yields an under-approximation
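The following C# sketch shows the basic shape of such an exploration loop. It is an assumption for illustration, not Spec Explorer's actual implementation: hyperstates and relevance checks are omitted, states are represented as plain strings, and an invocation returns null when its requires clause does not hold.

    using System.Collections.Generic;

    class Explorer
    {
        // An invocation = an [Action] method with concrete arguments; it maps
        // a state to its successor, or to null when the action is not enabled.
        public delegate string Invocation(string state);

        public static Dictionary<(string, int), string> Explore(
            string initial, IList<Invocation> invocations, int bound)
        {
            var fsm = new Dictionary<(string, int), string>();  // (state, action index) -> state
            var seen = new HashSet<string> { initial };
            var frontier = new Queue<string>();
            frontier.Enqueue(initial);

            while (frontier.Count > 0 && seen.Count < bound)
            {
                string s = frontier.Dequeue();
                for (int i = 0; i < invocations.Count; i++)
                {
                    string t = invocations[i](s);   // execute the action on a concrete state
                    if (t == null) continue;        // requires clause failed: not enabled
                    fsm[(s, i)] = t;                // record the transition in the FSM
                    if (seen.Add(t))                // unseen end state: explore it later
                        frontier.Enqueue(t);
                }
            }
            return fsm;   // an underapproximation of the model program's behaviour
        }
    }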
Test case generation
• Offline test case generation: traverse the FSM generated by exploration
• Different traversal algorithms achieve different coverage
  • a postman tour gives minimal transition coverage (not path coverage)
• Identify "accepting states" where a test run may terminate
• Identify "cleanup actions" that make progress towards an accepting state
• The tool ensures that each test case reaches an accepting state (via cleanup actions); a naive variant is sketched below
• The tool can store the test suite internally for a subsequent conformance test, OR it can write out the test suite as a C# program
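For illustration only, a naive C# sketch of offline test-case generation that covers every transition at least once and ends each test case in an accepting state. A real tool would use a postman tour for minimal transition coverage; all names here are hypothetical, and reachability of an accepting state is assumed (as the tool guarantees via cleanup actions).

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class TestGen
    {
        // One test case = a sequence of action indices through the FSM.
        public static List<List<int>> GenerateTests(
            Dictionary<(string state, int action), string> fsm,
            string initial, ISet<string> accepting)
        {
            var tests = new List<List<int>>();
            var covered = new HashSet<(string, int)>();
            foreach (var tr in fsm.Keys.ToList())
            {
                if (covered.Contains(tr)) continue;
                var test = Path(fsm, initial, s => s == tr.state);  // reach the transition...
                test.Add(tr.action);                                // ...fire it...
                test.AddRange(Path(fsm, fsm[tr],                    // ...then clean up to
                    s => accepting.Contains(s)));                   //    an accepting state
                string cur = initial;                               // mark covered transitions
                foreach (int a in test) { covered.Add((cur, a)); cur = fsm[(cur, a)]; }
                tests.Add(test);
            }
            return tests;
        }

        // Shortest action sequence (BFS) from 'from' to any state satisfying 'goal'.
        static List<int> Path(Dictionary<(string state, int action), string> fsm,
                              string from, Func<string, bool> goal)
        {
            var pred = new Dictionary<string, (string, int)>();
            var seen = new HashSet<string> { from };
            var q = new Queue<string>(); q.Enqueue(from);
            while (q.Count > 0)
            {
                string s = q.Dequeue();
                if (goal(s))
                {
                    var path = new List<int>();   // reconstruct via predecessors
                    while (s != from) { var (p, a) = pred[s]; path.Insert(0, a); s = p; }
                    return path;
                }
                foreach (var kv in fsm)
                    if (kv.Key.state == s && seen.Add(kv.Value))
                    { pred[kv.Value] = (s, kv.Key.action); q.Enqueue(kv.Value); }
            }
            return new List<int>();   // goal unreachable: empty path (sketch only)
        }
    }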
Conformance testing
• The tool can act as a test harness for conformance testing
• The tool can reference and execute the IUT (binary, DLL)
• The model and the IUT can be at different levels of abstraction; the model state space must be reconciled with the IUT state space
  • write a wrapper or test driver around the IUT
  • the wrapper can translate IUT values to model values
  • [Probe] actions can return (translated) IUT state variables
• Action bindings and type bindings are defined in the configuration; object bindings are made dynamically
• Lockstep execution of the model with the IUT (see the sketch below), checking that
  • actions are enabled
  • return values are correct
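A hypothetical C# sketch of this lockstep loop, with assumed interfaces standing in for the model and the wrapped IUT; only the two checks named on the slide are implemented:

    using System.Collections.Generic;

    class Step { public string Action; public object[] Args; }

    interface IModel
    {
        bool IsEnabled(Step step);   // does the requires clause hold in the model?
        object Execute(Step step);   // expected result, per the specification
    }

    interface IWrappedIut
    {
        object Execute(Step step);   // actual result, translated to model values
    }

    static class Conformance
    {
        // Run one test case in lockstep; report failure on the first deviation.
        public static bool RunLockstep(IModel model, IWrappedIut iut,
                                       IEnumerable<Step> test)
        {
            foreach (Step step in test)
            {
                if (!model.IsEnabled(step))             // check 1: action enabled
                    return false;
                object expected = model.Execute(step);  // model acts as test oracle
                object actual = iut.Execute(step);      // wrapper invokes the IUT
                if (!Equals(expected, actual))          // check 2: correct return value
                    return false;
            }
            return true;
        }
    }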