Faults - analysis • Z proof and System Validation Tests were the most cost-effective fault-finding activities. • Traditional “module testing” was arduous and found few faults, except in fixed-point numerical code.
Proof metrics • Probably the largest program proof effort ever attempted… • c. 9000 VCs: 3100 for functional and safety properties, 5900 from the run-time check (RTC) generator. • 6800 discharged by the simplifier (hint: buy a bigger workstation!) • 2200 discharged by the SPARK proof checker or by “rigorous argument.”
Proof metrics - comments • Simplification of VCs is computationally intensive, so buy the most powerful server available. • (1998 comment) A big computer is far cheaper than the time of the engineers using it! • (Feb. 2001 comment) Times have changed - significant proofs can now be attempted on a £1000 PC! • Proof of exception-freedom is extremely useful, and gives real confidence in the code. • Proof is still far less effort than module testing.
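For illustration, the sketch below shows the kind of code from which an RTC generator derives VCs for exception-freedom. It is a minimal, invented example in classic SPARK annotation style, not SHOLIS code: the Examiner generates a VC that the assignment to Y cannot raise Constraint_Error, which is provable from the surrounding guards.

   package Limiter is
      subtype Percent is Integer range 0 .. 100;
      procedure Clip (X : in Integer; Y : out Percent);
      --# derives Y from X;
   end Limiter;

   package body Limiter is
      procedure Clip (X : in Integer; Y : out Percent) is
      begin
         if X < 0 then
            Y := 0;
         elsif X > 100 then
            Y := 100;
         else
            Y := X;  -- RTC VC: show 0 <= X <= 100 here, provable from the guards
         end if;
      end Clip;
   end Limiter;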
Difficult bits... • User-interface. • Tool support. • Introduced state.
User Interface • Sequential code & serial interface to displays. • Driving an essentially parallel user-interface is difficult. • e.g. Updating background pages, run-indicator, button tell-backs etc. • Some of the non-SIL4 displays were complex, output-intensive and under-specified in SRS.
Tool support • SPARK tools are now much better than they were five years ago! Over 50 improvements were identified as a result of SHOLIS. • SPARK 95 would have helped. • The compiler has been reliable, and generates good code. • Weak support in the SPARK proof system for fixed and floating point. • Many in-house static analysis tools were developed: worst-case execution time (WCET) analysis, stack analysis and requirements traceability tools were all new, and all successful.
Introduced state • Some faults were due to state introduced during implementation: • Optimisation of graphics output. • Device driver complexity. • Co-routine mechanisms.
SHOLIS - Successes • One of the largest Z/SPARK developments ever. • Z proof work proved very effective. • One of the largest program proof efforts ever attempted. • Successful proof of exception-freedom on whole system. • Proof of system-level safety-properties at both Z and code level.
SHOLIS - Successes (2) • Strong static analysis removes many common faults before they even get a chance to arise. • Software integration was trivial. • Successful use of static analysis of WCET and stack use. • Successful mixing of SIL4 and non-SIL4 code in one program using SPARK static analysis. • The first large-scale project to meet 00-55 SIL4. SHOLIS influenced the revision of 00-55 between 1991 and 1997.
00-55/56 Resources • http://www.dstan.mod.uk/ • “Is Proof More Cost-Effective Than Testing?” King, Chapman, Hammond, and Pryor. IEEE Transactions on Software Engineering, Volume 26, Number 8. August 2000.
Programme • Introduction • What is High Integrity Software? • Reliable Programming in Standard Languages • Coffee • Standards Overview • DO178B and the Lockheed C130J • Lunch • Def Stan 00-55 and SHOLIS • ITSEC, Common Criteria and Mondex • Tea • Compiler and Run-time Issues • Conclusions
Outline • UK ITSEC and Common Criteria schemes • What are they? • Who’s using them? • Main Principles • Main Requirements • Practical Consequences • Example Project - the MULTOS CA
The UK ITSEC Scheme • The “I.T. Security Evaluation Criteria” • A set of guidelines for the development of secure IT systems. • Formed from an effort to merge the applicable standards of Germany, the UK, France and the US (whose standard was the “Orange Book”).
ITSEC - Basic Concepts • The “Target of Evaluation” (TOE) is an IT System (possibly many components). • The TOE provides security (e.g. confidentiality, integrity, availability)
ITSEC - Basic Concepts (2) • The TOE has: • Security Objectives (Why security is wanted.) • Security Enforcing Functions (SEFs) (What functionality is actually provided.) • Security Mechanisms (How that functionality is provided.) • The TOE has a Security Target • Specifies the SEFs against which the TOE will be evaluated. • Describes the TOE in relation to its environment.
ITSEC - Basic Concepts (3) • The Security Target contains: • Either a System Security Policy or a Product Rationale. • A specification of the required SEFs. • A definition of required security mechanisms. • A claimed rating of the minimum strength of the mechanisms. (“Basic, Medium, or High”, based on threat analysis) • The target evaluation level.
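As a purely illustrative sketch (product name and content invented, not a real Security Target), an ST following the structure above might look like this:

   Security Target: ExampleGuard Firewall v1.0
   1. Product Rationale (or System Security Policy)
   2. Security Enforcing Functions
      SEF1: Only authenticated administrators may change the rule base.
      SEF2: All blocked connection attempts shall be logged.
   3. Security Mechanisms
      M1: Password-based login (implements SEF1)
   4. Claimed minimum strength of mechanisms: Medium
   5. Target evaluation level: E3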
ITSEC Evaluation Levels • ITSEC defines 7 levels of evaluation criteria, called E0 through E6, with E6 being the most rigorous. • E0 is “inadequate assurance.” • E6 is the toughest! Largely comparable with the most stringent standards in the safety-critical industries.
ITSEC Evaluation Levels and Required Information for Vulnerability Analysis
Evaluation • To claim compliance with a particular ITSEC level, a system or product must be evaluated against that level by a Commercial Licensed Evaluation Facility (CLEF). • The evaluation report answers: “Does the TOE satisfy its security target at the level of confidence indicated by the stated evaluation level?” • A list of evaluated products and systems is maintained.
ITSEC Correctness Criteria for each Level • Requirements for each level are organized under the following headings: • Construction - The Development Process • Requirements, Architectural Design, Detailed Design, Implementation • Construction - Development Environment • Configuration Control, Programming Languages and Compilers, Developer Security • Operation - Documentation • User documentation, Administrative documentation • Operation - Environment • Delivery and Configuration, Start-up and Operation
ITSEC Correctness Criteria - Examples • Development Environment - Programming languages and compilers • E1 - No Requirement • E3 - Well defined language - e.g. ISO standard. Implementation dependent options shall be documented. The definition of the programming languages shall define unambiguously the meaning of all statements used in the source code. • E6 - As E3 + documentation of compiler options + source code of any runtime libraries.
The Common Criteria • The US “Orange Book” and ITSEC are now being replaced by the “Common Criteria for IT Security Evaluation.” • Aims to set a “level playing field” for developers in all participating states. • UK, USA, France, Spain, Netherlands, Germany, Korea, Japan, Australia, Canada, Israel... • Aims for international mutual recognition of evaluated products.
CC - Key Concepts • Defines two types of IT security requirement: • Functional Requirements • Define the behaviour of a system or product. • What a product or system does. • Assurance Requirements • For establishing confidence in the implemented security functions. • Is the product built well? Does it meet its requirements?
CC - Key Concepts (2) • A Protection Profile (PP) - A set of security objectives and requirements for a particular class of system or product. • e.g. Firewall PP, Electronic Cash PP etc. • A Security Target (ST) - The set of security requirements and specifications for a particular product (the TOE), against which its evaluation will be carried out. • e.g. The ST for the DodgyTech6000 Router
CC Requirements Hierarchy • Functional and assurance requirements are categorized into a hierarchy of: • Classes • e.g. FDP - User Data Protection • Families • e.g. FDP_ACC - Access Control Policy • Components • e.g. FDP_ACC.1 - Subset access control • These are named in PPs and STs.
Evaluation Assurance Levels (EALs) • The CC defines 7 EALs - EAL1 through EAL7. • An EAL defines a package of assurance components which must be met. • For example, EAL4 requires ALC_TAT.1, while EAL6 and EAL7 require ALC_TAT.3 • EAL7 “roughly” corresponds with ITSEC E6 and Orange Book A1.
The MULTOS CA • MULTOS is a multi-application operating system for smart cards. • Applications can be loaded and deleted dynamically once a card is “in the field.” • To prevent forgery, applications and card-enablement data are signed by the MULTOS Certification Authority (CA). • At the heart of the CA is a high-security computer system that issues these certificates.
The MULTOS CA (2) • The CA has some unusual requirements: • Availability - aimed for c. 6 months between reboots, and has warm-standby fault-tolerance. • Throughput - system is distributed and has custom cryptographic hardware. • Lifetime - of decades, and must be supported for that long. • Security - most of system is tamper-proof, and is subject to the most stringent physical and procedural security. • Was designed to meet the requirements of U.K. ITSEC E6. • All requirements, design, implementation, and (on-going) support by Praxis Critical Systems.
The MULTOS CA - Development Approach • Overall process conformed to E6 • Conformed in detail where retro-fitting was impossible: • development environment security • language and specification standards • CM and audit information • Reliance on COTS for E6 minimized or eliminated. • Assumed arbitrary but non-Byzantine behaviour
Development approach limitations • COTS not certified (Windows NT, Backup tool, SQL Server…) • We were not responsible for operational documentation and environment • No formal proof • No systematic effectiveness analysis
System Lifecycle • User requirements definition with REVEAL™ • User interface prototype • Formalisation of the security policy and top-level specification in Z (see the toy schema below). • System architecture definition • Detailed design including formal process structure • Implementation in SPARK, Ada95 and VC++ • Top-down testing with coverage measurement
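To give a flavour of the Z used at the top level, here is a toy schema, invented for illustration and far smaller than anything in the real CA specification: an operation that issues a certificate only for an approved application.

   ┌─ CAState ───────────────────────────────
   │ approved : ℙ APPLICATION
   │ issued   : APPLICATION ↔ CERTIFICATE
   └──────────────────────────────────────────

   ┌─ IssueCertificate ──────────────────────
   │ ΔCAState
   │ app?  : APPLICATION
   │ cert! : CERTIFICATE
   ├──────────────────────────────────────────
   │ app? ∈ approved
   │ issued′   = issued ∪ {app? ↦ cert!}
   │ approved′ = approved
   └──────────────────────────────────────────

The security property “no certificate is ever issued for an unapproved application” can then be proved at the Z level, before any code exists.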
Some difficulties... • Security Target - What exactly is an SEF? • No one seems to have a common understanding… • “Formal description of the architecture of the TOE…” • What does this mean? • Source code or hardware drawings for all security relevant components… • Not for COTS hardware or software.
Use of languages in the CA • Mixed language development - the right tools for the right job! • SPARK 30% “Security kernel” of tamper-proof software • Ada95 30% Infrastructure (concurrency, inter-task and inter-process communications, database interfaces etc.), bindings to ODBC and Win32 • C++ 30% GUI (Microsoft Foundation Classes) • C 5% Device drivers, cryptographic algorithms • SQL 5% Database stored procedures
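One way to combine the languages (sketched below with invented names, not actual CA code) is a SPARK “boundary” package: callers in the security kernel are analysed by the Examiner against a SPARK spec, while the hidden body is ordinary Ada95 that may freely use the infrastructure bindings.

   package Audit_Log
   --# own Log_State;
   is
      procedure Record_Event (Code : in Natural);
      --# global in out Log_State;
      --# derives Log_State from Log_State, Code;
   end Audit_Log;

   with Ada.Text_IO;
   package body Audit_Log is
      --# hide Audit_Log;
      -- Hidden from the Examiner: ordinary Ada95, free to call
      -- database bindings, tasking services, Win32, etc.
      procedure Record_Event (Code : in Natural) is
      begin
         Ada.Text_IO.Put_Line ("event:" & Natural'Image (Code));
      end Record_Event;
   end Audit_Log;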
Use of SPARK in the MULTOS CA • SPARK is almost certainly the only industrial-strength language that meets the requirements of ITSEC E6. • Complete implementation in SPARK was simply impractical. • Use of Ada95 is “Ravenscar-like” - simple, static allocation of memory and tasks (see the sketch below). • Dangerous or new language features, such as controlled types, requeue and user-defined storage pools, were avoided.
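“Ravenscar-like” tasking means, for example, a fixed set of library-level tasks created once at elaboration, periodic bodies using absolute delays, and no entries or dynamic task creation. A minimal invented sketch, not CA code:

   package Workers is
      task Monitor;  -- one static, library-level task; no entries
   end Workers;

   with Ada.Real_Time; use type Ada.Real_Time.Time;
   package body Workers is
      task body Monitor is
         Period : constant Ada.Real_Time.Time_Span :=
            Ada.Real_Time.Milliseconds (100);
         Next : Ada.Real_Time.Time := Ada.Real_Time.Clock;
      begin
         loop
            -- periodic work goes here
            Next := Next + Period;
            delay until Next;  -- absolute delay, in Ravenscar style
         end loop;
      end Monitor;
   end Workers;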
Conclusions - Process Successes • Use of Z for the formal security policy and system spec helped produce an unambiguous specification of functionality • Use of Z, CSP and SPARK “extended” formality into design and implementation • Top-down, incremental approach to integration and test was effective and economic
Conclusions - E6 Benefits and Issues • E6 support of formality is in tune with our “Correctness by Construction” approach • encourages sound requirements and specification • we are more rigorous in later phases • High security using COTS is both possible and necessary • cf. the safety world • E6 approach sound, but clarifications useful • and could gain even higher levels of assurance... • CAVEAT • We have not actually attempted evaluation • but we have benefited from developing to this standard
ITSEC and CC Resources • ITSEC • www.cesg.gov.uk • Training, ITSEC documents, UK Infosec policy, “KeyMat”, “Non Secret Encryption” • www.itsec.gov.uk • Documents, certified products list, background information. • Common Criteria • csrc.nist.gov/cc • www.commoncriteria.org • Mondex • Blake Ives and Michael Earl: “Mondex International: Reengineering Money.” London Business School Case Study 97/2. See http://isds.bus.lsu.edu/cases/mondex/mondex.html
Programme • Introduction • What is High Integrity Software? • Reliable Programming in Standard Languages • Coffee • Standards Overview • DO178B and the Lockheed C130J • Lunch • Def Stan 00-55 and SHOLIS • ITSEC, Common Criteria and Mondex • Tea • Compiler and Run-time Issues • Conclusions
Outline • Choosing a compiler • Desirable properties of High-Integrity Compilers • The “No Surprises” Rule
Choosing a compiler • In a high-integrity system, the choice of compiler should be documented and justified. • In a perfect world, we would have time and money to: • Search for all candidate compilers, • Conduct an extensive practical evaluation of each, • Choose one, based on fitness for purpose, technical features and so on...
Choosing a compiler (2) • But in the real-world… • Candidate set of compilers may only have 1 member! • Your client’s favourite compiler is already bought and paid for… • Bias and/or familiarity with a particular product may override technical issues.
Desirable Properties of an HI compiler • Much more than just “Validation” • Annex H support • Qualification • Optimization and other “switches” • Competence and availability of support • Runtime support for HI systems • Support for Object-Code Verification
What does the HRG Report have to say? • Recommends validation of appropriate annexes - almost certainly A, B, C, D, and H. Annex G (Numerics) may also be applicable for some systems. • Does not recommend use of a subset compiler, although recognizes that a compiler may have a mode in which a particular subset is enforced. • Main compiler algorithms should be unchanged in such a mode.
HRG Report (2) • Evidence required from compiler vendor: • Quality Management System (e.g. ISO 9001) • Fault tracking and reporting system • History of faults reported, found, fixed etc. • Availability of test evidence • Access to known faults database • A full audit of a compiler vendor may be called for.
Annex H Support • Pragma Normalize_Scalars • Useful! Compilers should support this, but remember that many scalar types do not have an invalid representation. • Documentation of Implementation Decisions • Yes. Demand this from compiler vendor. If they can’t or won’t supply such information, then find out why not!
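A sketch of the point about invalid representations (names invented; GNAT-style placement of the configuration pragma assumed): Normalize_Scalars can only force a detectably invalid initial value where the type has spare bit patterns.

   pragma Normalize_Scalars;  -- configuration pragma (e.g. in gnat.adc for GNAT)

   procedure Demo is
      type Mode is (Off, Standby, Active);
      M : Mode;     -- if Mode occupies 8 bits, patterns 3..255 are invalid,
                    -- so Normalize_Scalars can initialise M to one of them
      X : Integer;  -- every bit pattern is a valid Integer: no invalid
                    -- representation exists, so misuse may go undetected
   begin
      if not M'Valid then
         null;  -- the uninitialised Mode can be caught here
      end if;
      if not X'Valid then
         null;  -- never reached: X'Valid is always True for Integer
      end if;
   end Demo;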
Annex H Support (2) • Pragma Reviewable. • Useful in theory. Does anyone implement this other than to “turn on debugging”? • Pragma Inspection_Point • Yes please. Is particularly useful in combination with hardware-level debugging tools such as in-circuit emulation, processor probes, and logic analysis.
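For example (invented names), an inspection point pins down where an object's source-level value must remain observable despite optimisation:

   procedure Control_Step (Input : in Integer; Output : out Integer) is
      Scaled : Integer;
   begin
      Scaled := Input * 2;
      pragma Inspection_Point (Scaled);
      -- The compiler must make Scaled's value available here, so an
      -- in-circuit emulator or logic analyser can check it against
      -- the specification even in optimised code.
      Output := Scaled + 1;
   end Control_Step;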
Annex H Support (3) • Pragma Restrictions • Useful. • Some runtime options (e.g. Ravenscar) imply a predefined set of Restrictions defined by the compiler vendor. • Better to use a coherent predefined set than to “roll your own” (see the sketch below). • Understand the effect of each restriction on code-generation and runtime strategies. • Even in SPARK, some restrictions are still useful - e.g. No_Implicit_Heap_Allocations, No_Floating_Point, No_Fixed_Point etc.
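A sketch of one coherent partition-wide set (placement in a GNAT-style configuration file assumed; a real set would come from the vendor's predefined profile):

   pragma Restrictions (No_Implicit_Heap_Allocations);
   pragma Restrictions (No_Floating_Point);
   pragma Restrictions (No_Fixed_Point);
   pragma Restrictions (No_Exceptions);          -- defensible once exception-freedom is proved
   pragma Restrictions (Max_Task_Entries => 0);  -- typical of a Ravenscar-style profile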
Compiler Qualification • “Qualification” (whatever that means) of a full compiler is beyond reach. • Pragmatic approaches: • Avoidance of “difficult to compile” language features in the HI subset. • In-service history • Choice of “most commonly used” options • Access to faults history and database • Verification and Validation • Object code verification (last resort!)