Conformance Test Experiments for Distributed Real-Time Systems
Rachel Cardell-Oliver, Complex Systems Group, Department of Computer Science & Software Engineering, The University of Western Australia
July 2002
Talk Overview Research Goal: to build correct distributed real-time systems 1. Distributed Real-Time Systems 2. Correctness: Formal Methods & Testing 3. Experiments: A New Test Method
System Characteristics • React or interact with their environment • Must respond to events within fixed time • Distributed over two or more processors • Fixed network topology • Each processor runs a set of tasks • Processors embedded in other systems • Built with limited HW & SW resources
Testing Issues for these Systems • Many sources of non-determinism • 2 or more processors with independent clocks • Set of tasks scheduled on each processor • Independent but concurrent subsystems • Inputs from an uncontrolled environment, e.g. people • Limited resources affect test control, e.g. speed • Our goal: to develop robust test specification and execution methods
Goal: Building Correct Systems Does the Implementation behave like the Design (the intended behaviour)?
Software Tests • are experiments designed to answer the question “does this implementation behave as intended?” • Defect tests are tests which try to force the implementation NOT to behave as intended • Our focus is to specify and execute robust defect tests
Related Work on Test Case Generation • Chow TSE 1978 deterministic Mealy FSM • Clarke & Lee 1997 timed requirements graphs • Nielsen TACAS 2000 event recording automata • Cardell-Oliver FACJ 2000 Uppaal timed automata • Specific experiments are described by a test case: a timed sequence of inputs and outputs • Non-determinism is not handled well (if at all) • Not robust enough for our purposes
Our Method for Defect Testing • Identify types of behaviour which are likely to uncover implementation defects (e.g. extreme cases) • Describe these behaviours using a formal specification language • Translate the formal test specification into a test program to run on a test driver • Connect the test driver to the system under test and execute the test program • Analyse test results (on-the-fly or off-line)
Step 1 – Identify interesting behaviours Usually extreme behaviours such as • Inputs at the maximum allowable rate • Maximum response time to events • Timely scheduling of tasks
Example Property to Test Whenever the light level changes from low to high, then the valve starts to open within 60 cs (centiseconds), assuming the light level alternates between high and low every 100 cs
Step 2 – Choose a Formal Specification Language • one which is able to model • real-time clocks • persistent data • concurrency and communication • we use Uppaal Timed Automata (UTA)
Example UTA for timely response [figure: Uppaal timed automaton for the timely-response test; only the clock-reset labels m:=0 on its transitions are recoverable here]
Writing Robust Tests with UTA • Test cases specify all valid test inputs • no need to test outside these bounds • Test cases specify all expected test outputs • if an output doesn’t match then it’s wrong • No need to model the implementation explicitly • Test cases may be concurrent programs • Test cases are executed multiple times
Step 3 – Translate Spec to Exec • UTA specs are already program-like • Identify test inputs and how they will be controlled by the driver • Identify test outputs and how they will be observed by the driver • then straightforward translation into NQC (Not Quite C) programs
Example NQC for timely response

task dolightinput()
{
  while (i <= MAXRUNS)
  {
    Wait(100);                          // hold each light level for 100 cs
    setlighthigh(OUT_C);                // drive the piggybacked light inputs high
    setlighthigh(OUT_A);
    record(FastTimer(0), HIGH_LIGHT);   // log the stimulus, timed by the tester's clock
    i++;
    Wait(100);
    setlightlow(OUT_C);                 // drive the piggybacked light inputs low
    setlightlow(OUT_A);
    record(FastTimer(0), LOW_LIGHT);
    i++;
  } // end while
} // end task

task monitormessages()
{
  while (i <= MAXRUNS)
  {
    monitor (EVENT_MASK(1))             // watch for the SUT's broadcast IR message
    {
      Wait(LONGINTERVAL);
    }
    catch (EVENT_MASK(1))
    {
      record(FastTimer(0), Message());  // log the observed output and its arrival time
      i++;
      ClearMessage();
    }
  } // end while
} // end task
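The record() helper and the shared declarations are not shown on the slide. A minimal sketch, assuming the tester logs (time, value) pairs to the RCX datalog so results can be uploaded and analysed off-line after the run; MAXRUNS, LONGINTERVAL and the event codes are hypothetical values:

#define MAXRUNS 20            // hypothetical run bound
#define LONGINTERVAL 32767    // hypothetical "wait a long time" constant
#define HIGH_LIGHT 101        // hypothetical event codes, outside the 0..100 light range
#define LOW_LIGHT 102

int i;                        // run counter shared by both tasks

void record(int time, int value)
{
  AddToDatalog(time);         // timestamp from the tester's own clock
  AddToDatalog(value);        // stimulus or observed output code
}

task main()
{
  i = 1;
  CreateDatalog(4 * MAXRUNS); // room for all (time, value) pairs
  ClearTimer(0);              // FastTimer(0) is the common time base
  start dolightinput;
  start monitormessages;
}

Logging to the datalog keeps the per-event probe cost small (two datalog writes) and defers analysis until after the run.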
Results 1: Observation Issues • Things you can’t see • Probe effect • Clock skew • Tester speed
Things you can’t see • Motor outputs can’t be observed directly because of power drain • so we used IR messages to signal motor changes • But we can observe • touch & light sensors via piggybacked wires • broadcast IR messages
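A minimal sketch, not from the slides, of how the SUT side can signal a motor change over IR so the tester can observe it; the helper name and message code are assumptions:

#define MSG_VALVE_OPENING 1         // hypothetical message code agreed with the tester

void startOpeningValve()            // hypothetical SUT helper
{
  OnFwd(OUT_A);                     // start the valve motor
  SendMessage(MSG_VALVE_OPENING);   // broadcast the otherwise-invisible event over IR
}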
The probe effect • We can instrument program code to observe program variables • but the time taken to record results disturbs the timing of the system under test • Solutions • observe only externally visible outputs • design for testability: allow for probe effects
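One rough way to quantify the probe effect on the tester itself: the sketch below (an assumption, not from the slides) times a batch of datalog writes with FastTimer(0), whose resolution is 10 ms:

task main()
{
  int before;
  int after;
  CreateDatalog(102);
  ClearTimer(0);
  before = FastTimer(0);
  repeat (100) { AddToDatalog(0); }   // 100 dummy probe calls
  after = FastTimer(0);
  AddToDatalog(before);
  AddToDatalog(after);                // (after - before) cs over 100 calls gives the cost of one probe
}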
Clock Skew • Local results from two or more processors are timed by clocks that may differ • Solutions: • use observations timed only by the tester • including tester events gives a partial order
Tester speed • Tester must be sufficiently fast to observe and record all interesting events • Beware • scheduling and monitoring overheads • execution time variability • Solution: use NQC parallel tasks and off-line analysis for speed
Results 2: Input Control Issues • Input value control • Input timing control
Input Values can be Controlled • Touch sensor input (0..1) • good by piggybacked wire • Light sensor input (0..100) • OK by piggybacked wire • Broadcast IR Messages • good from tester • Also use inputs directly from the environment • natural light, or a button pushed by hand
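The setlighthigh() and setlightlow() helpers called in the NQC example are not shown on the slides. A plausible sketch, assuming the piggybacked wire is driven from an output port of the tester RCX (the port and power level are assumptions):

void setlighthigh(const int port)
{
  SetPower(port, OUT_FULL);   // full drive on the piggybacked wire: high reading at the SUT light sensor
  OnFwd(port);
}

void setlightlow(const int port)
{
  Off(port);                  // no drive: low reading at the SUT light sensor
}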
Input Timing is Hard to Control • Can’t control input timing precisely • e.g. offering an input just before the SUT task is called • Solution: run tests multiple times and analyse the average and spread of results • Can’t predict all system timings for a fully accurate model • cf. WCET research, but our problem is harder
Conclusions from Experiments • Defect testing requires active test drivers able to control extreme inputs and observe relevant outputs • Test generation methods must take into account the constraints of executing test cases • Robust to non-determinism in the SUT • Measure what can be measured • Engineers must design for testability