System Test Planning and the usefulness of a “Safety Checklist”
ECEN5543
R. Dameron, University of Colorado
Plan for class period
• Additional notations to analyze requirements, prepare for design, and prepare for the system test plan:
• Event tables
• State transition tables
• Decision tables (plus an old but useful shorthand), aka condition tables
• Overview of system test categories
• What to use to determine tests in each category
• Includes analysis of the Safety Checklist with respect to stand-alone software (do the Safety Checklist on a use case as an experiment)
More Requirements Analysis Tools Useful for System Testing
Examples from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.
System Sequence Diagram -- SSD
• External actors’ interaction with the system
• Must define scope to know what is external
• Actors can be other programs, other products, people
• High-level view
• Sequence diagrams are part of UML
• The same rules are used to create SYSTEM sequence diagrams
State Transition Table for the “split” routine (table not reproduced)
Draw the state transition diagram for this state transition table
• Notation reminders:
• Circle = state
• Arc = transition
• x/y = input/output
Present State | Input or Event | Action/Output | Next State
ST1. Idle | card inserted | request for PIN | Waiting for PIN
ST2. Waiting for PIN | PIN entered | display asterisks | Validating PIN
ST3. Waiting for PIN | cancel | display msg | Ejecting
ST4. Validating PIN | indicates “valid” | display choices | Waiting for customer transaction choice
ST5. Validating PIN | indicates “stolen” | display “stolen” | Confiscating
ST6. Validating PIN | indicates “invalid” | display “invalid” | Waiting for PIN
ST7. Waiting for customer transaction choice | cancel | display “cancel” | Ejecting
ST8. Waiting for customer transaction choice | Balance Query selected | -- | Processing query
ST9. Waiting for customer transaction choice | Withdrawal selected | -- | Processing withdrawal
ST10. Confiscating | card confiscated | -- | Terminating
ST11. Processing query | rejected for this user | display “rejected” | Ejecting
ST12. Processing query | query OK | display printing | Printing
ST13. Processing withdrawal | OK amount | display OK msg | Dispensing
ST14. Processing withdrawal | not OK amount | display refusal | Ejecting
ST15. Printing | transaction complete | print receipt | Ejecting
ST16. Dispensing | sufficient cash in ATM | dispense cash | Printing
ST17. Dispensing | insufficient cash in ATM | display “insufficient cash” | Ejecting
ST18. Ejecting | card ejection started | display msg to take card | Terminating
ST19. Terminating | card ejection complete | display ending msg | Idle
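A state transition table like this maps almost mechanically onto both an implementation and a system-test script. Below is a minimal Python sketch (not from the original slides); only a few of the nineteen rows are transcribed, and the state and event strings are abbreviations of the table entries.

```python
# Sketch: a few rows of the ATM state transition table as a Python dict.
# Keys are (present_state, event); values are (action, next_state).
TRANSITIONS = {
    ("idle", "card inserted"): ("request PIN", "waiting for PIN"),                 # ST1
    ("waiting for PIN", "PIN entered"): ("display asterisks", "validating PIN"),   # ST2
    ("validating PIN", "valid"): ("display choices", "waiting for choice"),        # ST4
    ("validating PIN", "invalid"): ("display 'invalid'", "waiting for PIN"),       # ST6
    ("ejecting", "card ejection started"): ("display msg to take card", "terminating"),  # ST18
    ("terminating", "card ejection complete"): ("display ending msg", "idle"),     # ST19
}

def step(state, event):
    """Return (action, next_state); reject events the table leaves unspecified."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}") from None

# Replaying a path through the table doubles as a system-test script:
state = "idle"
for event in ("card inserted", "PIN entered", "invalid", "PIN entered", "valid"):
    action, state = step(state, event)
    print(f"{event!r} -> action {action!r}, now in state {state!r}")
```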
State Transition Diagram - incomplete (diagram not reproduced; it covers the states Idle, Waiting for PIN, Validating PIN, Waiting for customer transaction choice, Ejecting, Confiscating, and Terminating, with transitions such as card inserted, PIN entered, “valid”, “invalid”, “stolen”, “cancel”, card confiscated, and card ejection complete)
Data flow diagram for the “split” routine (diagram not reproduced)
Notation: circle = action; arc = data direction; arc label = data label; other labels mark the data source, data sink, and data store
Data Flow Diagram for ATM (diagram not reproduced; its elements include the customer, the “validate user” and “dispatch request” actions, and the data flows card, request for PIN, PIN, validated user, transaction request, and display transactions)
2-dimensional event table
• Action;action = sequential actions
• Action, action = concurrent actions
• X = impossible
• --- = no action required
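The event table itself is not reproduced above, so here is a small hypothetical fragment, in Python form, purely to illustrate the notation (all modes, events, and actions are invented):

```python
# Hypothetical 2-D event table fragment (invented for illustration).
# Rows are modes, columns are events; each cell holds the required action(s).
EVENT_TABLE = {
    ("idle", "card inserted"):       "request PIN",
    ("idle", "cancel"):              "X",                        # impossible
    ("waiting for PIN", "timeout"):  "display msg; eject card",  # ';' = sequential
    ("dispensing", "door opened"):   "sound alarm, log event",   # ',' = concurrent
    ("printing", "key pressed"):     "---",                      # no action required
}
```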
Decision table (table not reproduced; rows are Conditions and Actions, columns are rules)
Notation: X = do that action; Y = yes, N = no, -- = don’t care
Decision tables
• Ambiguous = two identical Rule columns with different actions
• Redundant = identical Rule columns and identical actions
• Incomplete = failure to specify an action for a Rule column
• A Karnaugh map is more succinct
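All three defects can be detected mechanically by expanding the don’t-cares. A Python sketch, using an invented three-condition table rather than one of Fairley’s examples:

```python
from itertools import product

# Invented three-condition decision table; each rule maps a Y/N/don't-care
# pattern over (C1, C2, C3) to an action.
RULES = [
    (("Y", "N", "-"), "A1"),
    (("Y", "N", "Y"), "A2"),   # overlaps rule 0 with a different action -> ambiguous
    (("N", "-", "-"), "A3"),
]

def expand(pattern):
    """All concrete Y/N columns a rule pattern covers (don't-cares expanded)."""
    choices = [("Y", "N") if v == "-" else (v,) for v in pattern]
    return set(product(*choices))

covered = {}
for pattern, action in RULES:
    for col in expand(pattern):
        covered.setdefault(col, set()).add(action)

for col in product("YN", repeat=3):
    actions = covered.get(col, set())
    if not actions:
        print(col, "-> incomplete: no action specified")
    elif len(actions) > 1:
        print(col, "-> ambiguous: conflicting actions", sorted(actions))

# Redundant: two rules covering identical columns with the same action.
for i, (p1, a1) in enumerate(RULES):
    for p2, a2 in RULES[i + 1:]:
        if expand(p1) == expand(p2) and a1 == a2:
            print("redundant rules:", p1, p2)
```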
Incomplete and multiply-specified decision table as a Karnaugh map (map over conditions C1, C2, C3 not reproduced)
Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.
Incomplete and multiply-specified decision table (Karnaugh map over conditions C1, C2, C3 not reproduced)
Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.
Equivalent to this decision table (incomplete and multiply-specified; table not reproduced)
Example from Software Engineering Concepts, Richard Fairley, McGraw-Hill, 1985. Out of print.
General Test Categories
• Functional
• Performance
• Stress
--------------------- not system test ---------------------
• Glass-box, sometimes called white-box
Functional
• Success Scenario paths of Use Cases
• All alternate paths of Use Cases – if intentionally not implemented in a particular release, how is their absence handled?
Performance
• How does the system perform under normal conditions?
• Is it adequate?
• “WHAT is performance” depends on the application
• Can be extended to include those quality ranges that can be tested
Stress!!
• How does the system behave under unreasonable conditions?
• Evaluates robustness
Performance vs. Stress!!
• Specified performance criteria are tested as performance tests (duh!)
• Unspecified performance criteria are tested as stress conditions
• Stress tests also include conditions outside the specified performance criteria
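As a hypothetical illustration of the split, the same operation can drive both kinds of test; the 200 ms budget and handle_request below are invented stand-ins, not from the slides.

```python
import time

def handle_request(n_items):
    """Placeholder for the system under test."""
    return sorted(range(n_items, 0, -1))

def test_performance_within_spec():
    # Performance test: the load and the response time ARE specified.
    start = time.perf_counter()
    handle_request(10_000)                      # specified load
    elapsed = time.perf_counter() - start
    assert elapsed < 0.200, f"specified response time exceeded: {elapsed:.3f}s"

def test_stress_beyond_spec():
    # Stress test: outside the specified criteria we require only graceful
    # behavior (no crash, a defined error), not the specified response time.
    try:
        handle_request(50_000_000)              # unreasonable load
    except MemoryError:
        pass                                    # a defined, recoverable failure

test_performance_within_spec()
test_stress_beyond_spec()
```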
How do we decide what to test for performance and stress conditions?
• Robyn R. Lutz (JPL, Caltech), “Targeting Safety-Related Errors During Software Requirements Analysis”, Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, 1993
• The requirements discussed in this paper provide excellent guidelines
Safety Checklist for safety-critical embedded systems
• The two most common causes of safety-related software errors:
• Inadequate interface requirements
• Robustness issues -- discrepancies between the documented requirements and the requirements actually needed for correct functioning
• Using the checklist reduces safety-related software errors
Earlier Study of Causes of Safety-Related Software Errors – ref. 11 in the paper
• Errors identified as potentially hazardous to a system tend to be produced by different error mechanisms than non-safety-related software errors
• Safety-related software errors found during integration & testing:
• Misunderstandings of the software’s interfaces to the rest of the system
• Discrepancies between the documented requirements and the requirements necessary for correct functioning, i.e., robustness
• (In other words, the documented requirements are inadequate and therefore … wrong.)
What’s an error? A discrepancy between:
• the computed, observed, or measured value or condition, and
• the true, specified, or theoretically correct value or condition
Lutz: an error is “safety-related” if the system’s safety analyst determines during the standard error-correction process that the error represents potentially significant or catastrophic failure effects.
Effort to target known causes during the requirements phase
• Interfaces challenge: specify correctly the software/system interfaces in complex embedded systems with software distributed among various hardware components – some of which are not yet determined
• Robustness challenge: many safety-related software errors involve inadequate software responses to extreme conditions or extreme values (stress conditions)
• For “extreme”, read “boundary and beyond”
Formal vs. Informal (informal practices → bridge → future formal analysis)
• Lutz’s Safety Checklist is OK for a development process without formal specification languages (whew!)
• And without (complete) finite-state-machine modeling
• DOES focus on software/system interfaces, failure modes, timing, and boundary conditions and values
2 key components of any software requirements methodology
• To aid in determining the requirements
• To represent the software requirements in specifications
• This Safety Checklist focuses on the first; the results must still be incorporated into the specification
• The determination technique does not preclude any specification technique
NOTE: Benefit of Formal Inspections
• The example given is a formal inspection of 2,000,000 lines of code – seismic processing
• The Safety Checklist can be added to any other checklists being used in requirements reviews, which are the first stage of system testing (prior to architecture design)
The Formal Approach
• Build a formal, finite-state model of the requirements specifications
• Analyze the model to ensure its properties match the desired behavior
• State criteria (as formal predicates – logical relationships) that must hold in the model
Lutz’s Less Formal Approach
• Translate the criteria into an informal, natural-language format
• Formatting the concerns as a checklist avoids the need to build the complete finite-state model
Less Is More – Informal process extended to embedded systems
• The Safety Checklist allows for:
• Multiple processors
• Concurrently executing processes
• Redundant resources to manage
• Externally commanded state changes
• State changes that are not visible
• Typical of many complex, embedded systems with timing constraints and safety-critical functions
• Even a stand-alone system can have an operating system environment, “apparent concurrency”, externally commanded state changes, and invisible state changes
Checklist regarding Interfaces
Interfaces – Is the software’s response specified:
1. For every input, what is the response to out-of-range values?
2. For not receiving an expected input?
 • If it is appropriate to time out: a. the length, b. when to start counting, c. latency
3. If input arrives when it shouldn’t?
4. On a given input, will the software always follow the same path (deterministic)?
 • If not, is that a problem?
Interfaces, continued
5. Is each input bounded in time?
 • Specify the earliest time to accept input and the latest time at which the data is still considered valid
6. Are minimum and maximum arrival rates specified for each input? For each communication path?
7. If interrupts are masked or disabled, can events be lost?
 • Include cooperating processes in “apparent concurrency”
Interfaces, continued 2
8. Can output be produced faster than it can be used (absorbed) by the interfacing module? Is overload behavior specified?
9. Is all data output [to the buses from the sensors] used by the software? If not, is some required function omitted from the spec?
10. Can input received before startup, while offline, or after shutdown influence the next software startup behavior? Are any values retained in hardware or software? The earliest? The most recent?
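Several interface items translate directly into executable system tests. A Python sketch for item 1 (out-of-range inputs, tested at the boundary and beyond); the Sensor class and its 0-100 range are invented stand-ins, not from Lutz’s paper.

```python
# Hypothetical device interface: accepts readings in a specified 0..100 range.
class Sensor:
    LOW, HIGH = 0, 100

    def accept(self, value):
        if not self.LOW <= value <= self.HIGH:
            raise ValueError(f"out of range: {value}")   # item 1: specified response
        return value

def test_out_of_range_rejected():        # checklist item 1
    sensor = Sensor()
    for bad in (-1, 101):                # "boundary and beyond" values
        try:
            sensor.accept(bad)
        except ValueError:
            continue
        raise AssertionError(f"{bad} was accepted but should be rejected")

def test_boundaries_accepted():
    sensor = Sensor()
    assert sensor.accept(Sensor.LOW) == 0 and sensor.accept(Sensor.HIGH) == 100

test_out_of_range_rejected()
test_boundaries_accepted()
```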
Checklist regarding Robustness
Robustness
11. If performance degradation is the chosen response, is the degradation predictable?
12. Are sufficient delays incorporated into the error-recovery responses, e.g., don’t return to the normal state before the error has been handled?
13. Are feedback loops specified where appropriate to compare the actual effects of outputs on the system with the predicted effects?
Robustness, continued
14. Are all modes and modules reachable (used in some path through the code)? If not, are they superfluous, or is there some other logic error? (A reachability sketch follows below.)
15. If a hazards analysis has been done, does every path from a hazardous state (a failure mode) lead to a low-risk state?
16. Are the inputs identified which, if not received, can lead to a hazardous state or can prevent recovery (single-point failures)?
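Item 14 can be checked as a graph search over a state transition table like the ATM one earlier. A Python sketch; the trimmed table below is a stand-in with one deliberately unreachable mode.

```python
from collections import deque

# Trimmed stand-in transition table: (state, event) -> next_state.
# "maintenance" has an outgoing transition but nothing leads INTO it.
TRANSITIONS = {
    ("idle", "card inserted"):          "waiting for PIN",
    ("waiting for PIN", "PIN entered"): "validating PIN",
    ("validating PIN", "invalid"):      "waiting for PIN",
    ("validating PIN", "valid"):        "waiting for choice",
    ("maintenance", "done"):            "idle",
}

states = {src for src, _ in TRANSITIONS} | set(TRANSITIONS.values())
adjacency = {}
for (src, _event), dst in TRANSITIONS.items():
    adjacency.setdefault(src, set()).add(dst)

# Breadth-first search from the initial mode.
seen, queue = {"idle"}, deque(["idle"])
while queue:
    for nxt in adjacency.get(queue.popleft(), ()):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print("unreachable modes:", states - seen or "none")   # -> {'maintenance'}
```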
Usefulness?
• The Safety Checklist has been demonstrated to “ask the right questions”
• It is not sufficient to preclude introducing errors
• Necessary although not sufficient
May I have the envelope, please …
#1 Not every hazardous state led to a low-risk state.
#2 Error-recovery responses incorporated insufficient delays.
#3 Input arrived when it shouldn’t and no response was specified; the response defaulted to unintended behavior.
#4 Response was not specified for out-of-range values that were possible for some inputs.
#5 Output was produced too fast for the interfacing module.
Allows tailoring
• Focuses on historically troublesome aspects of safety-critical, embedded software
• Avoids over-specification of well-understood or low-risk requirements
• Tailor to the level of technical or historical risk
First step toward safety constraints
• Many of the items it identifies are system hazards
• Can be used to identify safety constraints
• Not yet ready for formal prediction
• How to use it for informal prediction of error-prone factors
After Requirements Are Improved …
• How do we ensure that requirements are implemented and maintained?
• After code is written (new code or bug fixes); note: these issues are difficult to unit test
• After new requirements are added
• After old requirements are modified
• Role of reviews
• Code the invariants where appropriate (see the sketch after this list)
• System tests to test the use cases and the safety checklist
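One way to read “code the invariants”: assert the safety property at the point where it must hold, so a violated requirement fails loudly in test. A minimal, hypothetical ATM-flavored sketch (the rules themselves are invented):

```python
def dispense(requested, balance, cash_in_atm):
    """Dispense cash and return the new balance."""
    # Invariants (hypothetical safety rules): never dispense more than the
    # account balance, and never more than the cash physically in the machine.
    assert 0 < requested <= balance, f"balance invariant violated: {requested}"
    assert requested <= cash_in_atm, f"cash-on-hand invariant violated: {requested}"
    return balance - requested

print(dispense(40, balance=100, cash_in_atm=500))   # -> 60
```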
Create a system test plan – IEEE Std. 829
• Test the Success Scenario and the conditions that lead to alternative paths of use cases
• If possible, test to verify the relevant safety checklist items – “safety” may not be the main concern, but correct interfaces and robustness are
• If any resources are shared among processes, review and test for correctness of mutual exclusion (SW Eng of Multi-program Sys); a test sketch follows below
• If there are “cooperating processes”, verify that suspension happens correctly, that a suspended process is restored when appropriate, and that the restoration is correct
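For the mutual-exclusion bullet, one workable system test hammers a shared resource from many threads and checks the invariant afterward; without the lock, the test fails intermittently, which is exactly the defect class being targeted. The SharedCounter below is an invented stand-in.

```python
import threading

class SharedCounter:
    """Invented shared resource; the lock is the mutual exclusion under test."""
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            with self.lock:          # remove this lock to watch the test fail
                self.value += 1

def test_mutual_exclusion(threads=8, times=100_000):
    counter = SharedCounter()
    workers = [threading.Thread(target=counter.increment, args=(times,))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    assert counter.value == threads * times, counter.value

test_mutual_exclusion()
```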
IEEE 829 Standard Test Plan Outline - 1
1.0 Introduction
2.0 Test Items
3.0 Tested Features
4.0 Features Not Tested (per cycle)
5.0 Testing Strategy and Approach [repeat 5.0 for each system-level test]
 5.1 Syntax
 5.2 Description of functionality
 5.3 Arguments for Test
 5.4 Expected Output
 5.5 Specific Exclusions
 5.6 Dependencies
 5.7 Test Case Success/Failure Criteria
IEEE 829 Standard Test Plan Outline - 2
6.0 Pass/Fail Criteria for the Complete Test Cycle
7.0 Entrance Criteria/Exit Criteria
8.0 Test-Suspension Criteria and Resumption Requirements
9.0 Test Deliverables/Status Communications Vehicles
10.0 Testing Tasks
11.0 Hardware and Software Requirements (for testing)
12.0 Problem Determination and Correction Responsibilities
IEEE 829 Standard Test Plan Outline - 3
13.0 Staffing and Training Needs/Assignments
14.0 Test Schedules
15.0 Risks and Contingencies
16.0 Approvals