Specifying and executing behavioral requirements The play-in/play-out approach David Harel, Rami Marelly Weizmann Institute of Science, Israel An interpretation by Eric Bodden
Outline • Problem statement: How to specify requirements? • Easily and • Consistently • Background (LSCs) • Play-in • Play-out • Prototype tool, challenges • Conclusion
Requirements specification • Development usually starts from an informal use-case specification written by domain experts • This is then mapped manually to a formal spec • Error-prone • Tedious • No direct link between the original spec and the actual implementation! • Can we do any better?
Play-in / play-out • Has similarities to machine learning and… • actually also to the way one teaches people! Philosophy: “I’ll show you how it works, and then you try it yourself!”
Workflow • Prototype a GUI or object model of your app – dummy only! • Play-in behavior into this prototype. The “play engine” connects and automatically generates Live Sequence Charts • Replay (play-out) the behavior, modify and generalize it; verify • Hand the debugged spec to developers
Live Sequence Charts (LSCs) • Similar to Message Sequence Charts and Sequence Diagrams, but more powerful… • Universal charts: specify mandatory behavior that must follow whenever a given “prechart” (preamble) is observed. • Existential charts: specify test cases that must be satisfiable by runs conforming to the universal LSCs.
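The two chart kinds above can be modeled with a small data structure. This is a minimal sketch with hypothetical names (not the Play Engine’s actual API): a universal chart pairs a prechart with a mandatory body, while an existential chart is just a scenario to be tested.

```python
from dataclasses import dataclass

@dataclass
class Event:
    sender: str    # instance sending the message, e.g. "User"
    receiver: str  # instance receiving it, e.g. "Calculator"
    message: str   # message label, e.g. "press(+)"

@dataclass
class UniversalChart:
    # Whenever the prechart is observed in a run, the body MUST follow.
    prechart: list  # list[Event]
    body: list      # list[Event]

@dataclass
class ExistentialChart:
    # At least one run of the system must exhibit this scenario (a test case).
    scenario: list  # list[Event]
```

The split between prechart and body is what makes universal charts conditional: the body is only obligatory in runs where the prechart actually occurs.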
The play engine • Backend that communicates with the prototype via Microsoft’s Component Object Model (COM) • Generates LSCs in play-in mode • These can then be generalized / modified • and are verified in play-out mode. • Also: recording, modification and replay of traces.
Play-in – an example • [Slide figure: calculator scenario – the user enters n1 “+” n2 “=”, and the display shows (n1+n2); the resulting LSC is split into a pre-chart and a body]
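The calculator scenario from the slide can be written out as the LSC that play-in would generate. This is a hedged sketch with illustrative event names only: the prechart captures the user entering “n1 + n2 =”, and the body requires the display to show the sum.

```python
def addition_lsc(n1, n2):
    """Build the (prechart, body) pair for the calculator addition scenario."""
    prechart = [
        ("User", "Calculator", f"press({n1})"),
        ("User", "Calculator", "press(+)"),
        ("User", "Calculator", f"press({n2})"),
        ("User", "Calculator", "press(=)"),
    ]
    # Body: whenever the prechart above is observed, this must follow.
    body = [
        ("Calculator", "Display", f"show({n1 + n2})"),
    ]
    return prechart, body
```

Note how the user’s played-in actions land in the prechart while the system’s reaction lands in the body: the chart obliges the system to display the sum only after the full input sequence has occurred.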
Play-out “Now that I have shown you how it works, try on your own!” (and I will watch you meanwhile and record what goes wrong)
Play-out by example • [Slide figure: calculator GUI prototype with digit keys and “+”, used to replay the played-in behavior]
How does it work • “Execution manager” schedules “live copies” of universal LSCs -> behavior • At the same time it evaluates existential LSCs -> testing • “Run manager” records/replays traces
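The scheduling step above can be sketched in simplified form. This assumes a deliberately naive semantics (not the actual Play Engine algorithm): each universal chart tracks its position in its prechart; once the prechart has been traversed completely, a live copy is activated and the body events are executed in order.

```python
def play_out(charts, user_events, execute):
    """charts: list of (prechart, body) pairs; execute: callback for body events."""
    progress = [0] * len(charts)  # prechart position per universal chart
    for event in user_events:
        for i, (prechart, body) in enumerate(charts):
            # Advance this chart's prechart if the incoming event matches.
            if progress[i] < len(prechart) and prechart[progress[i]] == event:
                progress[i] += 1
                if progress[i] == len(prechart):
                    # Prechart complete: activate a live copy and run its body.
                    for body_event in body:
                        execute(body_event)
                    progress[i] = 0  # reset so the chart can fire again
```

The real engine is considerably richer (simultaneous live copies, symbolic instances, violation detection), but the prechart-then-body activation pattern is the core idea.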
Challenges and problems • Need for symbolic evaluation • Nondeterminism (?) • SELECT function • Scalability (?) • Performance (?) • Deadlocks/livelocks (?) • Support for multiple platforms (?)
Conclusion • Easy way to derive a “debugged” requirements specification. • Immediate link between use cases, specs, and implementation. • Performance and scalability issues not discussed.