An Integrated Approach to Security Management M. Shereshevsky, R. Ben Ayed, A. Mili Monday, March 14, 2005
Target: A Prototype for Managing Security Goals and Claims Composing Claims: Given a set of security measures, each resulting in a claim, • Can we add up the claims (resulting from different security measures) in a comprehensive manner? • Can we infer specific security properties (probability/ cost of specific attacks)? • Can we expose redundancies between claims? • Can we expose loopholes/ vulnerabilities?
Target: A Prototype for Managing Security Goals and Claims Decomposing Security Goals: Given a security goal we want to achieve, • How can we decompose it into achievable sub-goals? • Dispatch sub-goals to layers/ sites to maximize impact. • Dispatch sub-goals to methods so as to control (minimize?) verification effort. • Dispatch verification effort to sub-goals so as to control (minimize?) failure cost. Issues: representing and reasoning about claims and goals.
Outline • Dependability: A Multi-dimensional attribute • An Integrated Approach to Reliability • A Uniform Representation for Dependability Measures • Inference Rules • General Applications • Specialization: A Prototype for Managing Security
Dependability: A Multi-Dimensional Attribute Four Dimensions to Dependability: • Availability: Probability of providing services when needed. • Reliability: Probability of Failure Free Operation. • Safety: Probability of Disaster Free Operation. • Security: Probability of Interference Free Operation (exposure, intrusion, damage). Conceptually orthogonal, actually interdependent.
Availability Depends on: • Reliability. • Repair Time. Related to Security: denial-of-service (DoS) attacks affect availability.
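The dependence of availability on reliability and repair time is commonly summarized by the steady-state formula A = MTTF / (MTTF + MTTR); a minimal sketch (the numbers are illustrative, not from the talk):

```python
def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is operational in steady state."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that fails rarely but repairs slowly can be less available
# than one that fails often but recovers quickly.
slow_repair = steady_state_availability(1000.0, 24.0)  # ~0.977
fast_repair = steady_state_availability(200.0, 0.5)    # ~0.998
```

This is why availability depends on repair time as well as reliability: the second system is far less reliable yet more available.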
Reliability Basic Concepts: • Fault. System feature that may cause the system to misbehave. • Error. Impact of fault on system state. Early warning of misbehavior. • Failure. Impact of fault on system (mis)behavior. Observable misbehavior. System feature; state feature; output feature.
Reliability, II Basic Means: • Fault Avoidance. Fault Free Design. • Fault Removal. Debugging/ Testing. • Fault Tolerance. Error detection and recovery. Three successive, increasingly desperate, lines of defense.
Safety Key Concepts: • Hazard. System feature that makes accidents possible. • Mishap. Operational conditions that makes accident imminent. • Accident. Unplanned, undesirable event. • Damage. Loss that results from an accident.
Safety, II Key Measures: • Hazard Avoidance. Hazard Free design. • Hazard Removal. Intervene before hazards cause accidents. • Damage Limitation. Limit the impact of an accident. Three successive lines of defense.
Security Key Concepts: • Vulnerability. System feature that makes an attack possible. • Threat. Situational condition that makes an attack possible. • Exposure/ Attack. Deliberate or accidental loss of data and/or resources.
Security, II Key Measures: • Vulnerability Avoidance. Vulnerability Free design. • Attack Detection and Neutralization. Intervene before Attacks cause loss/ damage. • Exposure Limitation. Limit the impact of attacks/ exposure. Intrusion-tolerance. Three successive lines of defense.
Special Role of Security • Without security, there can be no reliability or safety. All claims about reliability and safety become void if the system’s data and programs can be altered. • Without reliability, there can be no security. Security measures can be viewed as part of the functional requirements of the system.
Integrated Approach to Reliability • Three Broad families of methods: Fault avoidance, fault removal, fault tolerance. • Which works best? Spirited debate, dogmatic positions, jokes, etc. Pragmatic position: use all three in concert, whenever possible/ warranted.
Rationale for Eclectic Approach • The Law of Diminishing Returns. • Method effectiveness varies greatly according to specification. • Refinement calculus allows us to compose verification efforts/ claims. • Refinement calculus allows us to decompose verification goals.
Mapping Methods to Specifications • Proving: Reflexive, transitive relations. • Testing: Relations that are good candidates for oracles (reliably implemented). • Fault Tolerance: Unary relations that are reliably and efficiently implemented.
Composing Verification Effort • All methods must be cast in a common logical framework. • Refinement calculus (based on relations) offers such a common framework. • Specifications and programs are relations; refinement ordering between relations; lattice properties.
Modeling Verification Methods • Proving: Proving that P is correct with respect to specification V: P ⊒ V. • Testing: Certification testing, oracle Ω, test data D, successful test on D: P ⊒ T, where T = D\Ω.
Modeling Verification Methods, II • Fault Tolerance: Upon each recovery block, check condition C, invoke recovery routine R. Because we do not know which outcome we have each time, all we can claim is: P ⊒ F, where F = C ⊓ R.
Cumulating Results • Proving: P ⊒ V. • Testing: P ⊒ T. • Fault Tolerance: P ⊒ F. • Lattice Identity: P ⊒ (V ⊔ T ⊔ F). Cumulating verification results into a comprehensive correctness claim.
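A toy model can make the lattice identity concrete; the sketch below uses an illustrative encoding, not the authors' formalism: a specification maps each input to its set of acceptable outputs, a program maps each input to one output, and P ⊒ S means P answers every input S constrains with an output S accepts.

```python
def refines(program: dict, spec: dict) -> bool:
    """P ⊒ S: P handles every input S constrains, with an acceptable output."""
    return all(x in program and program[x] in ok for x, ok in spec.items())

def join(s1: dict, s2: dict) -> dict:
    """Least upper bound of two specifications: enforce both where they overlap."""
    out = {}
    for x in s1.keys() | s2.keys():
        parts = [s[x] for s in (s1, s2) if x in s]
        out[x] = set.intersection(*parts)
    return out

P = {0: "a", 1: "b", 2: "c"}      # program under verification
V = {0: {"a"}, 1: {"a", "b"}}     # claim established by proving
T = {1: {"b"}, 2: {"b", "c"}}     # claim established by testing
F = {2: {"c"}}                    # claim established by fault tolerance

# Refining each claim separately entails refining their join.
assert all(refines(P, s) for s in (V, T, F))
assert refines(P, join(join(V, T), F))
```

The point of the identity is exactly this: three partial claims, obtained by three different methods, add up to one stronger composite claim.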
Decomposing Verification Goals Premises: • A Complex Specification can be decomposed into simpler sub-specifications in a refinement-compatible manner: S = S1 ⊔ S2 ⊔ … ⊔ SN. • We can consider each Si in turn, mapping it to the method that is most efficient for it.
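Dispatching each sub-specification to the method that is most efficient for it can be sketched as a greedy assignment over an estimated cost table (all names and figures here are hypothetical):

```python
# Estimated verification cost of each sub-goal under each method.
COST = {
    "S1": {"proving": 9, "testing": 3, "fault_tolerance": 5},
    "S2": {"proving": 2, "testing": 6, "fault_tolerance": 4},
    "S3": {"proving": 8, "testing": 7, "fault_tolerance": 1},
}

def dispatch(cost_table: dict) -> dict:
    """Map each sub-goal to its cheapest verification method."""
    return {goal: min(methods, key=methods.get)
            for goal, methods in cost_table.items()}

plan = dispatch(COST)                                  # S1->testing, etc.
total_cost = sum(COST[g][m] for g, m in plan.items())  # 3 + 2 + 1 = 6
```

A greedy per-goal choice suffices here because sub-goals are dispatched independently; coupled budgets would call for a global optimization instead.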
A Uniform Representation for Dependability Measures A purely logical representation of verification results is unrealistic: • Most verification results are best viewed as probabilistic, not logical, statements. • Most verification results are conditional, contingent upon different conditions. • Many verification results can be interpreted in more than one way.
Probabilistic (vs Logical) Claims • No absolute certainty. • Even highly dependable, totally formal, verification systems may fail. • We want to quantify level of confidence.
Verification Results are Conditional • Proving: Conditional on verification rules being consistent with actual compiler. • Testing: Conditional on testing environment being identical to operating environment. • Fault Tolerance: Conditional on system preserving recoverability.
Multiple Interpretations • Testing, first interpretation: P ⊒ D\Ω, with probability 1.0. • Testing, second interpretation: P ⊒ Ω, with probability p < 1.0, conditional on D being representative. Which interpretation do we choose? We do not have to choose, in fact. We can keep both, and learn to add/ cumulate them.
Characterizing Verification Claims • Goal. Correctness preservation, recoverability preservation, operational attribute. • Reference. Specification, Safety requirement, security requirement, etc. • Assumption. Implicit conditions in each method. • Certainty. Quantified by probability. • Stake. Cost of failure to meet the goal; applies to safety and security. • Method. Static vs dynamic, others. May affect the cost of performing the verification task.
Representing Verification Claims • List: (G,R,A,P,S,M) G: Goal; R: Reference; A: Assumption; P: Probability; S: Stake; M: Method.
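The six-tuple can be transcribed directly into a record type; a minimal sketch (field names follow the slide, the stored values are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    goal: str           # e.g. correctness or recoverability preservation
    reference: str      # specification, safety or security requirement
    assumption: str     # implicit conditions of the verification method
    probability: float  # degree of certainty in the claim
    stake: str          # cost of failing to meet the goal
    method: str         # e.g. "Proving", "Testing", "FaultTolerance"

# Both interpretations of a certification test can coexist in the base.
kb = [
    Claim("correctness", "D\\Omega", "true", 1.0, "-", "Testing"),
    Claim("correctness", "Omega", "D representative", 0.7, "-", "Testing"),
]
```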
Sample Representations Certification Testing: • First: Y(⊒, D\Ω, true, 1.0, −, Testing), where D is successful test data. • Second: Y(⊒, Ω, A, 0.7, −, Testing), where A is the representativity of D.
Sample Representations Formal Verification: • First: Y(⊒, R, A, 0.99, −, Proving), where R is the system specification, A is the condition that the verification method is consistent with the operational environment. • Second: Y(⊒, R1, A, 0.995, HighC, Proving) and Y(⊒, R2, A, 0.97, LowC, Proving).
Sample Representations Verifying an initialization sequence N with respect to R: Y(⊒, R, A ∧ B, P, −, Proving), where • R is the system specification, • A is the condition that the verification method is consistent with the operational environment, • B is the condition that the body of the system refines the right residual of R by N (solution to N∘X ⊒ R). The approach is also applicable to regression testing: updating claims after a maintenance operation. Open issue: negating, overriding, replacing previous claims?
Sample Representations Safety Measure: Y(⊒, S, A, 0.999, HiC, Proving), where S is the safety requirement, A is the consistency of the verification method with the operational environment, HiC is the cost of failing to meet the condition P ⊒ S. How does security differ from safety/ reliability: a different goal or a different reference?
Inference Rules • Collecting claims is insufficient. • Cumulating/Synthesizing claims (as we did with logical results) is impractical. • Build inference mechanisms that can infer conclusions from a set of claims. We will explore applications of this capability subsequently.
Inference Rules Orthogonal representation, for the purpose of enabling inferences: (S ⊒ R | A) = P, the probability that system S meets reference R under assumption A. Two additional cost functions: • Verification cost: VC: Goal × Ref × Meth × Assum → Cost. • Failure cost: FC: Goal × Ref → Cost.
Inference Rules Derived from refinement calculus: • (S ⊒ (R1 ⊔ R2) | A) ≤ min((S ⊒ R1 | A), (S ⊒ R2 | A)). • (S ⊒ (R1 ⊔ R2) | A) ≥ (S ⊒ R1 | A) + (S ⊒ R2 | A) − 1. • (S ⊒ (R1 ⊓ R2) | A) ≥ max((S ⊒ R1 | A), (S ⊒ R2 | A)).
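Bounds of this kind are straightforward to mechanize; a sketch assuming the rules are the standard Fréchet-style bounds on the conjunction and disjunction of claims (function names are my own):

```python
def join_upper_bound(p1: float, p2: float) -> float:
    """(S ⊒ R1⊔R2 | A): no larger than the weaker of the two claims."""
    return min(p1, p2)

def join_lower_bound(p1: float, p2: float) -> float:
    """(S ⊒ R1⊔R2 | A): Fréchet lower bound for meeting both parts."""
    return max(0.0, p1 + p2 - 1.0)

def meet_lower_bound(p1: float, p2: float) -> float:
    """(S ⊒ R1⊓R2 | A): at least as likely as the stronger single claim."""
    return max(p1, p2)

# Claims of 0.9 and 0.95 confine the composite claim to [0.85, 0.9].
lo = join_lower_bound(0.9, 0.95)
hi = join_upper_bound(0.9, 0.95)
```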
Inference Rules Derived from probability calculus: • R1 ⊒ R2 ⟹ (S ⊒ R1 | A) ≤ (S ⊒ R2 | A). • (A1 ⟹ A2) ⟹ (S ⊒ R | A1) ≥ (S ⊒ R | A2). • (S ⊒ R | A ∧ B) = (S ⊒ R | A) × (S ⊒ R | B) / (S ⊒ R), assuming A and B are conditionally independent. • Bayes’ Theorem.
Inference Rules Derived from cost functions: • R1 ⊒ R2 ⟹ VC(G, R1, M, A) ≥ VC(G, R2, M, A). • (A1 ⟹ A2) ⟹ VC(G, R, M, A2) ≥ VC(G, R, M, A1). • R1 ⊒ R2 ⟹ FC(G, R1) ≥ FC(G, R2).
General Applications Providing dependability: • Deploy eclectic approaches. • Dispatch goals to methods to control verification costs. • Dispatch verification effort to verification goals to control failure costs. • Budget verification cost. • Minimize / assess failure risk (probability, severity of failure).
General Applications Assessing dependability: • Deploy eclectic/ diverse approaches. • Record all measures in a knowledge base. • Update the knowledge base as the system evolves. • Maximize coverage. • Minimize overlap. • Budget verification cost. • Minimize / assess failure risk (probability, severity of failure).
General Applications Certifying dependability: • Deploy eclectic/ diverse approaches. • Budget certification cost. • Target certification effort (certification goal inferred from claims established by certification activity).
Security Specific Considerations • Unbounded Cost. • Distributed Measures. • Distributed Control. • Focus on Components. • Refinement by Parts: requirements RA, RB; components CA, CB.
A Prototype for Managing Security Theoretical/ Modeling steps. • Adapting/ specializing dependability model to security properties. • Exploring Security in the broader context of dependability (clarifying dependencies). • Modeling security measures (vulnerability avoidance, attack detection and neutralization, intrusion tolerance, exposure limitation, etc). • Representing such measures in a uniform knowledge base.
A Prototype for Managing Security Experimental/ Empirical steps. • Designing, coding inference rules. • Modeling / representing claims, rules. • Specifying query capabilities. • Selecting a sample system (modeling security aspects, relations to dependability). • Experimenting with query capabilities. • Building a demo.
Sample Queries • Can we infer (S ⊒ R | A) ≥ P, for given R, A, P? • Provide a lower bound for (S ⊒ R | A). • Provide an upper bound for the weighted failure cost, for given G, R, A.
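The first query can be answered from stored claims alone; a minimal sketch, with claims reduced to (reference, assumption, probability) triples and all values hypothetical:

```python
claims = [
    ("R", "A", 0.95),
    ("R", "A", 0.90),    # independent evidence for the same reference
    ("R2", "A", 0.99),
]

def lower_bound(reference: str, assumption: str) -> float:
    """Best single-claim lower bound for (S ⊒ reference | assumption)."""
    matching = [p for r, a, p in claims if r == reference and a == assumption]
    return max(matching, default=0.0)

def can_infer(reference: str, assumption: str, threshold: float) -> bool:
    return lower_bound(reference, assumption) >= threshold

assert can_infer("R", "A", 0.9)        # best stored claim is 0.95
assert not can_infer("R", "A", 0.99)   # 0.95 falls short of 0.99
```

A fuller prototype would also apply the composition rules of the previous slides before taking the maximum, rather than ranking single claims.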
Conclusion • Premise: Consider security in the broader context of dependability, on which it depends. • Premise: Exploit analogies between aspects of dependability to migrate methods. • Premise: Capture all measures taken during design and maintenance in a uniform representation that lends itself to inferences.
Prospects Eclectic, yet Integrated, Approach. • Allows us to model diverse approaches, and combine their results. • Allows us to measure claims. • Allows us to budget costs and risks. • Support Tool.
Relevant Wisdom Une science a l’âge de ses instruments de mesure. Louis Pasteur. A science is as advanced as its measurement tools.