Building Reliable Software Requirements and Methods
Reliability • Reliability means that the level and frequency of failure is acceptable • We are not requiring no failures at all • Merely an acceptable level • Failure is measured pragmatically
Correctness • Software is correct if its behavior is correct • Correctness of behavior has to be defined • The definition is called a specification • So correctness is a matter of the implementation matching the specification • And is typically interpreted formally
Correctness and Reliability • Correct but unreliable • Can result from an incorrect specification • Reliable but incorrect • Can result from a program that does not exactly meet its specification, but which works well enough. • Reliability is really what we are after • Correctness is a means to this end
Requirements for Reliability • If a failure has high cost, then reliability becomes important. • How important depends on the cost • Most software is typically not very reliable
Safety Critical Software • Software is said to be safety critical if a failure can cause loss of life or severe injury • Nuclear power plant control • Braking systems in cars • Avionics (military and commercial) • Train signalling systems • Dam control systems • etc.
SC Software – An Example • The Boeing 777 • Fully computerized control • All avionics programmed in Ada • All safety critical • No flight problems ever encountered • Except in the entertainment system • Which is written in C++ and is not safety critical
How Do We Approach Reliability? • Use reliable tools • Program carefully • Test thoroughly • Is that good enough? • Would you fly on an airplane if that were the best we could say about our approach?
Safety Critical Programming • We can’t depend on people being careful • Control the tools that are used • Control the process that is used • Develop formal specifications • Use formal design techniques • Use formal techniques for proving correctness • Do systematic testing • Measure how we are doing quantitatively
Control Tools • We need simple programming languages • Avoid “dangerous” constructs • Dynamic storage allocation • Non-determinacy • Recursion • Typically very simple subsets are used • SPARK (www.praxis-cs.co.uk)
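As a minimal sketch (hypothetical function, not taken from the slides) of what programming in such a restricted subset looks like, here is a computation that might naturally be written recursively, expressed instead with a bounded loop and only statically allocated storage:

   --  Factorial with a bounded loop: no recursion, no dynamic allocation.
   --  (Overflow is ignored here for brevity.)
   function Factorial (N : Natural) return Natural is
      Result : Natural := 1;
   begin
      for I in 2 .. N loop   --  the loop bound is fixed by the input
         Result := Result * I;
      end loop;
      return Result;
   end Factorial;

The point of the restriction is analysability: the maximum stack depth and storage use of this version can be determined statically, which a recursive version would make harder.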
More on Tools • Languages with stringent compile-time checking are more desirable • Redundancy is desirable • Say what you want multiple times and check consistency of usage • Ease of writing is irrelevant compared to ease of reading and analysis
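A small sketch of what such redundancy can look like in Ada (the package and type names here are hypothetical): the physical meaning of each quantity is stated both in its type and at every point of use, so the compiler can check that the uses are consistent.

   package Units is
      --  Distinct types restate the intent carried by the names;
      --  mixing them up (e.g. passing a time where a distance is
      --  expected) is rejected at compile time.
      type Metres  is new Float;
      type Seconds is new Float;
      type Metres_Per_Second is new Float;

      function Speed (D : Metres; T : Seconds) return Metres_Per_Second;
   end Units;

   package body Units is
      function Speed (D : Metres; T : Seconds) return Metres_Per_Second is
      begin
         return Metres_Per_Second (Float (D) / Float (T));
      end Speed;
   end Units;

A call such as Speed with the two arguments swapped fails to compile, which is exactly the kind of consistency check the redundancy buys.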
More on SPARK • SPARK is a very simple subset of Ada • No allocators, tasks, generics, exceptions etc • But adds static assertions • What variables are accessed and modified • What routines call one another etc • All assertions are verified at compile time
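A minimal sketch of the flavour of these annotations (hypothetical package, written in the older "--#" annotation style used by the Praxis SPARK tools; details vary between SPARK versions):

   package Counter
   --# own Value;
   --# initializes Value;
   is
      --  The annotations below state which variables Increment reads and
      --  writes, and how its outputs are derived from its inputs; the
      --  SPARK Examiner checks them against the body statically.
      procedure Increment;
      --# global in out Value;
      --# derives Value from Value;
   end Counter;

   package body Counter is
      Value : Integer := 0;

      procedure Increment is
      begin
         Value := Value + 1;
      end Increment;
   end Counter;

If the body read or wrote a variable not named in the annotations, the mismatch would be reported before the program ever runs.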
More on Processes • Organized configuration control • Procedural standards (ISO 9000) • Use of metrics • Strict coding practices • Formal code reviews • Cleanroom techniques
More on Formal Methods • A program is a mathematical object • We can reason about programs using standard mathematical techniques • Proving properties of programs • Proof of correctness • More limited proofs • Careful examination always helps • Formalizing this examination helps more!
More on Specification • A formal specification is one about which we can reason mathematically • And for which correctness is a mathematical property • Specification languages • Very high level, not necessarily compilable • But how do we know the spec is correct?
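One lightweight way to make part of a specification formal is to attach a contract to a subprogram. A minimal sketch (hypothetical function, using Ada 2012 Pre/Post aspects rather than a separate specification language):

   --  The postcondition is a formal statement of what "correct" means for
   --  this function; an implementation is correct iff it satisfies it.
   function Max (A, B : Integer) return Integer
     with Post => Max'Result >= A and Max'Result >= B
                  and (Max'Result = A or Max'Result = B);

Note that the closing question on this slide still applies: the contract itself could be wrong, and nothing inside the program can tell us that.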
Formal Testing • Coverage testing • Every source line executed at least once • Decision testing • Every decision path tested • Flow testing • Every definition-use chain tested
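To make the first two criteria concrete, consider this small sketch (hypothetical function); the comments note what each criterion demands of the test set:

   function Clamp (X : Integer) return Integer is
      Result : Integer := X;
   begin
      if X > 100 then
         Result := 100;
      end if;
      --  Coverage (statement) testing: at least one test with X > 100,
      --  so the assignment inside the "if" is executed.
      --  Decision testing: additionally a test with X <= 100, so both
      --  outcomes of the decision are exercised.
      return Result;
   end Clamp;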
Def-Use Testing • Program fragment • A := 1; … A := 2; … B := A; … C := A; • Suppose both definitions reach both references • Then tests must cover all four def-use paths (see the expanded sketch below)
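An expanded sketch of that fragment (hypothetical procedure; the "…" in the slide is assumed to hide control flow that lets either definition reach the uses):

   procedure Fragment (P : in Boolean; B, C : out Integer) is
      A : Integer;
   begin
      A := 1;        --  definition 1 of A
      if P then
         A := 2;     --  definition 2 of A (executed only when P is True)
      end if;
      B := A;        --  use 1 of A
      C := A;        --  use 2 of A
   end Fragment;

Four def-use pairs exist: (def 1, use 1), (def 1, use 2), (def 2, use 1), (def 2, use 2). A flow-testing criterion requires the test set to exercise all four, e.g. one run with P = False and one with P = True.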
The Basic Idea Behind Testing • We trust software much more if it has executed once than if it has never executed • So let’s set up testing so that as much of the code as possible has been executed at least once • Testing does not guarantee reliability • But it helps
Source/Object Traceability • Reasoning at the source level is useful • But it is not good enough • Why? Because we can’t trust compilers • So we need to reason and test at the object (generated machine code) level • We need to trace source-to-object easily • Optimization is a problem!
Safety Critical Standards • DO-178B (also ED-12B) • A safety critical standard used for both military and commercial avionics in the US and Europe • Requires full documentation of the process • Forbids certain constructs • Requires certain aspects (e.g. traceability) • Requires formal testing • Does not rely on any one aspect
Other Guidelines • The HRG of ISO WG 9 (the Ada working group) • Prepares guidelines • The HRG report catalogs safety-critical techniques
Is This Relevant Only for SC? • Safety Critical programs MUST be reliable • But reliability is generally desirable • Reliability is expensive • We need a tradeoff • But we need to know the techniques to make the tradeoff accurately • So the answer is “not at all”