
Possibilistic Information Flow



Presentation Transcript


  1. Possibilistic Information Flow

  2. Possibilistic Information Flow Control
  • Information flow control requires knowledge of the system
  • Approaches differ in the granularity of system descriptions:
    • Traces as system specification → trace-based information flow control
    • Programs as system specification → language-based information flow control

  3. Confidentiality vs. Integrity
  • Confidentiality goals constrain the effects of a datum d
    • if causal paths emanating from d never reach o, then d's value is confidential from o
    • Noninterference-style formalization: variation in d causes no variation in o
  • Integrity goals constrain the causes of a datum d
    • if causal paths emanating from c never reach d, then d's integrity is not vulnerable to c
    • Noninterference-style formalization: variation in c causes no variation in d
  • Integrity is to the past as confidentiality is to the future
  • Controlling causal paths is a common generalization of confidentiality and integrity (Joshua Guttman)

  4. Language-Based Information Flow Control
  • System is "specified" in a programming language; typically: sequences, conditionals, loops, procedures
  • Approaches by Volpano & Smith, Sabelfeld, …
    • Expressions and statements are labeled by types
    • Calculus rules define how to propagate types along the program
    • Program is secure if the program can be typed
  • Approaches are incomplete
  • Special course on language-based information flow control by M. Backes

  5. Language-Based Information Flow Control: Direct Information Flow
  • Attach types to all constructs that store or provide information
    • controls direct information flow
    • the variable storing data has to have sufficient clearance (type)
  • xH := aL is allowed
  • xL := aH is not allowed!
  • Constraint: the clearance of the variable has to dominate the classification of the expression

  6. Language-Based Information Flow Control: Indirect Information Flow
  • Attach types to all program fragments
    • to control indirect information flow
    • who can observe the run of this fragment?
  • if xH = true then yH := true else yH := false; is allowed
  • if xH = true then yL := true else yL := false; is not allowed!
    • because the observer learns which branch has been executed
    • yL := false has type "L cmd" while yH := false has type "H cmd"
  • Constraint: the guard can only be as high as the minimum of the branches
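The direct- and indirect-flow constraints above can be sketched as a single check. This is a toy illustration, not Volpano & Smith's actual calculus; the names `LOW`, `HIGH`, and `assignment_ok` are invented for the example:

```python
# Toy sketch of the two typing constraints (hypothetical names, not the
# actual Volpano/Smith type system).  Security levels form the lattice LOW < HIGH.
LOW, HIGH = 0, 1

def assignment_ok(var_level, expr_level, guard_level):
    """Direct flow: the variable's clearance must dominate the expression's
    classification.  Indirect flow: it must also dominate the level of the
    guards under which the assignment executes (the "program counter")."""
    return var_level >= max(expr_level, guard_level)

# xH := aL under a low guard: allowed
print(assignment_ok(HIGH, LOW, LOW))    # True
# xL := aH: direct high-to-low flow, rejected
print(assignment_ok(LOW, HIGH, LOW))    # False
# yL := false inside "if xH = true ...": indirect flow, rejected
print(assignment_ok(LOW, LOW, HIGH))    # False
```

The `guard_level` argument plays the role of the "cmd" type of the enclosing branches: an assignment to a low variable is only typable when every guard above it is low as well.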

  7. Trace-Based Information Flow Control
  • Possibilistic information flow control:
    • System is specified as a set of traces
    • Traces consist of events: either visible or not visible to an observer
  • What can an observer deduce about non-visible events?
    • Observation has to be consistent with more than one trace!
  • Non-interference as closure properties on the set of traces
  • Proofs by induction (so-called "unwinding conditions")
  • Impractical for dynamic service compositions!

  8. Possibilistic Information Flow for Non-Deterministic Systems
  • Non-deterministic systems have no output function output: S × A → O
  • Thus: the definition of a secure system is no longer applicable
  • A system is secure iff ∀ a ∈ A and ∀ α ∈ A*:
    output(run(s0, α), a) = output(run(s0, purge(α, dom(a))), a)
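For the deterministic case, the purge-based definition above can be checked by brute force on a small machine. A sketch with one high domain and one low domain (all names are illustrative; the check is bounded to short action sequences):

```python
from itertools import product

def run(state, alpha, step):
    """Fold an action sequence alpha through the step function."""
    for a in alpha:
        state = step(state, a)
    return state

def purge(alpha, high):
    """purge(alpha, dom(a)) for the two-level case: drop all high actions."""
    return tuple(a for a in alpha if a not in high)

def secure(s0, step, output, actions, high, low, n=3):
    """Bounded check of the purge condition: for all alpha in A* (up to
    length n) and all low actions a, the output after alpha must equal
    the output after purge(alpha)."""
    return all(output(run(s0, alpha, step), a) ==
               output(run(s0, purge(alpha, high), step), a)
               for k in range(n + 1)
               for alpha in product(actions, repeat=k)
               for a in low)

# Two counters (low, high): 'l' increments the low one, 'h' the high one.
step     = lambda s, a: (s[0] + 1, s[1]) if a == 'l' else (s[0], s[1] + 1)
out_ok   = lambda s, a: s[0]            # low output sees only the low counter
out_leak = lambda s, a: s[0] + s[1]     # low output leaks the high counter

print(secure((0, 0), step, out_ok,   ['l', 'h'], {'h'}, ['l']))  # True
print(secure((0, 0), step, out_leak, ['l', 'h'], {'h'}, ['l']))  # False
```

The leaky variant fails already for the one-action sequence ⟨h⟩: purging it changes the low-visible output, so the low observer can detect high activity.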

  9. Event Systems
  • Possibilistic models for non-deterministic systems
  • System is defined as a set of (acceptable) traces, e.g. { ⟨e11, e12, e13, …⟩, ⟨e21, e22, …⟩, … }

  10. Event Systems
  • An event trace system S is a tuple ⟨E, I, O, Tr⟩ where:
    • E is a set of events
    • I ⊆ E is a set of input events
    • O ⊆ E is a set of output events with I ∩ O = ∅
    • Tr ⊆ E* is a set of (admissible) traces (also called traces(S))
  • A system is input total iff ∀ τ ∈ traces(S): ∀ e ∈ I: τ.e ∈ traces(S)
    (τ.e means "τ extended by e")
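These definitions translate almost directly to code when traces are represented as tuples of event names. A sketch with an illustrative `project` function for the projection operator τ|X (note that true input totality forces infinitely many traces, so on a finite set it only makes sense as a bounded check):

```python
def project(trace, events):
    """tau|X: the subsequence of tau consisting of the events in X."""
    return tuple(e for e in trace if e in events)

def input_total(traces, inputs):
    """Input totality on a finite trace set: every admissible trace can be
    extended by every input event.  (A genuinely input-total system has
    infinitely many traces; here this is only a bounded approximation.)"""
    return all(tau + (e,) in traces for tau in traces for e in inputs)

print(project(('i1', 'o1', 'i2'), {'i1', 'i2'}))   # ('i1', 'i2')
# Not input total: ('i1', 'i1') would also have to be admissible.
print(input_total({(), ('i1',)}, {'i1'}))          # False
```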

  11. Security in Possibilistic Systems
  • Knowing the system (i.e. the set of admissible traces)
  • Observing only public (green) events (arrows)
  • Can we deduce anything about possible occurrences of secret (red) events (arrows)?

  12. Security in Possibilistic Systems
  • Real trace (entering Pin 4711) and an alternative trace (entering Pin 4712) would have caused the same observed part ⟨v1, v2, v3⟩
  • If both traces are admissible, then the observer cannot distinguish the two situations!
  • Security is a closure property on sets of traces

  13. What Degree of Security?
  • Security for an event trace system demands: if there is a trace τ ∈ Tr, then there are other traces τ′ ∈ Tr that cause the same observation as τ
  • But: how many other traces, and what types of traces, are necessary to guarantee security?
  • Some attempts …

  14. Non-Inference (O'Halloran)
  • "Purging red events should return admissible traces"
  • ∀ τ ∈ Tr: ∃ τ′ ∈ Tr: τ′|H = ⟨⟩ and τ|L = τ′|L
  • For instance, if ⟨c1, v1, c2, v2⟩ and ⟨v1, c1⟩ are traces in Tr, then so are ⟨v1, v2⟩ and ⟨v1⟩
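On a finite trace set, this definition can be checked directly. A sketch, with traces as tuples and `H` and `L` the high- and low-level event sets:

```python
def proj(t, X):
    """t|X: the subsequence of t restricted to the events in X."""
    return tuple(e for e in t if e in X)

def non_inference(traces, H, L):
    """O'Halloran's Non-Inference: for every trace tau there is a trace
    tau' with no high events and the same low projection (equivalently:
    purging the high events of any trace yields an admissible trace)."""
    return all(any(proj(t2, H) == () and proj(t, L) == proj(t2, L)
                   for t2 in traces)
               for t in traces)

# Purging ('c1', 'v1') yields ('v1',), which is admissible here:
print(non_inference({('c1', 'v1'), ('v1',)}, {'c1'}, {'v1'}))  # True
# Without ('v1',) the purged trace is missing:
print(non_inference({('c1', 'v1')},          {'c1'}, {'v1'}))  # False
```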

  15. Non-Inference (O'Halloran)
  • Property is too strong: it prohibits copying low events to the high level (a trace like ⟨v1, c1⟩, where c1 echoes v1, is classified "insecure")
  • Property is also too weak: it can be deducible that secret events did not happen
    • e.g. a trace may be "secure" although the observer of v1 knows that there is no high event before v1

  16. Separability (McLean)
  • "An arbitrary interleaving of high- and low-level subtraces is a trace":
  • ∀ τ ∈ Tr: Interleave(τ|H, τ|L) ⊆ Tr
  • For instance, if ⟨c1, v1, c2, v2⟩ is a trace in Tr, then so are ⟨v1, v2, c1, c2⟩, ⟨c1, c2, v1, v2⟩, ⟨v1, c1, c2, v2⟩, ⟨v1, v2, c2, c1⟩, …
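Separability, too, is directly checkable on finite trace sets. A sketch that enumerates all interleavings of the high and low subtraces:

```python
def proj(t, X):
    """t|X: the subsequence of t restricted to the events in X."""
    return tuple(e for e in t if e in X)

def interleavings(xs, ys):
    """All interleavings of two tuples, preserving each tuple's order."""
    if not xs: return {ys}
    if not ys: return {xs}
    return {(xs[0],) + r for r in interleavings(xs[1:], ys)} | \
           {(ys[0],) + r for r in interleavings(xs, ys[1:])}

def separability(traces, H, L):
    """McLean's Separability: every interleaving of a trace's high and
    low subtraces must itself be an admissible trace."""
    return all(interleavings(proj(t, H), proj(t, L)) <= traces
               for t in traces)

# Both orderings of 'c1' and 'v1' are admissible: separable.
print(separability({('c1', 'v1'), ('v1', 'c1')}, {'c1'}, {'v1'}))  # True
# The interleaving ('v1', 'c1') is missing: not separable.
print(separability({('c1', 'v1')},               {'c1'}, {'v1'}))  # False
```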

  17. Separability (McLean)
  • Separability allows no (possibilistic) information flow
  • But: Separability is too strong:
    • All high-level traces have to be compatible with any low-level observation!
    • Low-level activities cannot influence high-level activities

  18. … even more:
  • general noninterference [McCullough]
  • non-deducibility [Sutherland86]
  • restrictiveness [McCullough87]
  • forward correctability [Johnson&Thayer88]
  • non-deducibility output security [Guttman&Nadel88]
  • non-inference [O'Halloran90]
  • nondeducibility on strategies [Wittbold&Johnson90]
  • separability [McLean94]
  • generalized non-interference [McLean94]
  • perfect security property [Zakinthinos&Lee97]
  • pretty good security property [Mantel00], …

  19. Framework MAKS (Mantel00)
  • Definition of various elementary closure properties on sets of traces as "basic security predicates (BSPs)"
  • General pattern of basic security predicates
  • Security predicate is a conjunction of basic security predicates: BSP1 ∧ BSP2 ∧ … ∧ BSPn

  20. View (of an Observer) in MAKS
  • A view V is a disjoint partition of E into three subsets V, N, C:
    • What is visible? set V ⊆ E of events
    • What is confidential? set C ⊆ E of events
    • The remaining events form the set N ⊆ E (neither visible nor confidential)
  • Obviously: C ∩ V = ∅; otherwise: security breach

  21. Basic Security Predicates
  • General pattern: for all traces τ ∈ Tr:
    • If τ has a specific form and we manipulate the C-events of τ in a given way (perturbation) …
    • … then we obtain a new trace τ′ ∈ Tr only by adjusting N-events (corrections)
  • Perturbation affects C-events, correction affects N-events
  • Schematically: τ ∈ Tr → (required perturbation) → t ∈ E* → (permitted correction) → τ′ ∈ Tr

  22. Formalizing Noninterference
  • An information flow property is a pair IFP = (VS, SP)
    • VS is a set of views
    • SP is a security predicate
  • Security predicate SP
    • Closure condition on the set of traces (parametric in the view)
    • Defined in a modular way: SPV(Tr) iff BSPV(Tr) ∧ BSP′V(Tr) ∧ …
  • Basic security predicate BSP
    • Closure condition on the set of traces (parametric in the view)
  • A system (E, I, O, Tr) satisfies IFP = (VS, SP) iff SPV(Tr) for all V ∈ VS

  23. Examples of Basic Security Predicates
  • Backward Strict Deletion (BSD): ∀ α, β ∈ E*, ∀ c ∈ C:
    • (β.⟨c⟩.α) ∈ Tr ∧ α|C = ⟨⟩
    • → ∃ α′ ∈ E*: (β.α′) ∈ Tr ∧ α′|C = ⟨⟩ ∧ α|V = α′|V
  • "… if we remove the last high event c of a trace (β.⟨c⟩.α), then we can find another possible continuation α′ for β that causes the same visible behaviour as α, i.e. α|V = α′|V …"
  • Original trace → perturbation → correction resulting in a trace
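On a finite trace set, BSD can be checked by deleting each trace's last confidential event and searching for a correction. A sketch, with `C` and `V` the confidential and visible event sets (everything else counts as N):

```python
def proj(t, X):
    """t|X: the subsequence of t restricted to the events in X."""
    return tuple(e for e in t if e in X)

def bsd(traces, C, V):
    """Backward Strict Deletion: for every trace beta.<c>.alpha with alpha
    free of C-events, there must be an admissible trace beta.alpha' with
    alpha' free of C-events and alpha'|V = alpha|V."""
    for t in traces:
        for i, e in enumerate(t):
            if e in C and proj(t[i + 1:], C) == ():   # last confidential event
                beta, alpha = t[:i], t[i + 1:]
                if not any(t2[:i] == beta and
                           proj(t2[i:], C) == () and
                           proj(t2[i:], V) == proj(alpha, V)
                           for t2 in traces):
                    return False
    return True

# Deleting 'c' from ('c', 'v') can be corrected by the trace ('v',):
print(bsd({(), ('v',), ('c', 'v'), ('c',)}, {'c'}, {'v'}))  # True
# Here no C-free trace reproduces the visible behaviour ('v',):
print(bsd({(), ('c', 'v')},                 {'c'}, {'v'}))  # False
```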

  24. Examples of Basic Security Predicates
  • Backward Strict Insertion (BSI): ∀ α, β ∈ E*, ∀ c ∈ C:
    • (β.α) ∈ Tr ∧ α|C = ⟨⟩
    • → ∃ α′ ∈ E*: (β.⟨c⟩.α′) ∈ Tr ∧ α′|C = ⟨⟩ ∧ α|V = α′|V
  • "if we insert a high event c into a trace after β (containing all high events), then we can find another possible continuation α′ for β that causes the same visible behaviour as α, i.e. α|V = α′|V"

  25. Examples of Basic Security Predicates
  • Backward Strict Insertion of Admissible Events (BSIAρ): let ρ ⊆ E: ∀ α, β ∈ E*, ∀ c ∈ C:
    • (β.α) ∈ Tr ∧ α|C = ⟨⟩
    • ∧ (∃ β′: β′.⟨c⟩ ∈ Tr ∧ β′|ρ = β|ρ)
    • → ∃ α′ ∈ E*: (β.⟨c⟩.α′) ∈ Tr ∧ α′|C = ⟨⟩ ∧ α|V = α′|V
  • "if we insert a high event c at a reasonable (admissible) position into a trace, then we can find another possible continuation α′ for β that causes the same visible behaviour as α, i.e. α|V = α′|V"

  26. Verification Technique: Unwinding
  • An unwinding theorem reduces the verification of information flow properties to more local conditions involving only single transitions
  • Unwinding technique
    • Idea: reformulate the requirement by local conditions
    • Unwinding conditions: requirements on transitions
    • Theorem: if the unwinding conditions hold, then the BSP holds

  27. Unwinding at a Glance
  • State-event systems: SES = (S, s0, E, I, O, T)
    • S set of states, initial state s0
    • E set of events
    • I, O ⊆ E input/output events
    • T: S × E ↪ S transition relation
  • Classify states by an unwinding relation ⋉
    • s ⋉ s′: observations possible in s are possible in s′
    • s ⋉ s′ and s′ ⋉ s: the states are indistinguishable
  • Example: BSD requires (s, c, s′) ∈ T → s′ ⋉ s
    • Observations after a confidential event has occurred must be possible without this occurrence
  • ⋉ does not need to be an equivalence relation

  28. Unwinding Conditions in MAKS
  • Specific unwinding conditions for different BSPs:
    • lrf: s1 →c s2 implies s2 ⋉ s1
    • lrb: ∃ s2: s1 →c s2 ∧ s1 ⋉ s2
    • osc: ∀ γ ∈ (E \ C): s1 →γ s2 ∧ s1 ⋉ s3 implies ∃ γ′ ∈ (E \ C)*: γ′|V = γ|V ∧ s3 →γ′ s4 ∧ s2 ⋉ s4
  • If a state-event system satisfies lrf and osc, then the event system is BSD
  • If a state-event system satisfies lrb and osc, then the event system is BSI
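The conditions lrf and osc can be checked mechanically on a small state-event system. A sketch with one simplification (corrections γ′ are restricted to length exactly one, so passing the check is sufficient for osc but failing it is not conclusive); `T` is a set of (state, event, state) triples and `unw` the candidate relation ⋉, both invented for the example:

```python
def check_lrf(T, C, unw):
    """lrf: every confidential transition s1 --c--> s2 requires s2 ⋉ s1."""
    return all((s2, s1) in unw for (s1, e, s2) in T if e in C)

def check_osc(T, C, V, unw):
    """osc, approximated with corrections of length exactly one: if
    s1 --e--> s2 (e not confidential) and s1 ⋉ s3, then s3 must offer a
    non-confidential step with the same V-projection to some s4 with
    s2 ⋉ s4.  (The full condition also allows longer or empty corrections,
    so this check is sufficient but not necessary.)"""
    for (s1, e, s2) in T:
        if e in C:
            continue
        for (p, s3) in unw:
            if p != s1:
                continue
            same_view = (lambda e2: e2 == e) if e in V else (lambda e2: e2 not in V)
            if not any(e2 not in C and same_view(e2) and (s2, s4) in unw
                       for (q, e2, s4) in T if q == s3):
                return False
    return True

# Confidential 'c' toggles an invisible flag; visible 'v' is always possible.
T   = {(0, 'c', 1), (0, 'v', 0), (1, 'v', 1)}
unw = {(0, 0), (1, 1), (1, 0)}           # 1 ⋉ 0: state 0 can mimic state 1
print(check_lrf(T, {'c'}, unw))          # True
print(check_osc(T, {'c'}, {'v'}, unw))   # True -> event system is BSD
T2 = {(0, 'c', 1), (1, 'v', 1)}          # state 0 can no longer mimic 'v'
print(check_osc(T2, {'c'}, {'v'}, unw))  # False
```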

  29. Unwinding Conditions (BSD)
  • BSD demands: observations possible after a confidential event must be possible without it
  • Proof idea: quantify over all states s, using s ⋉ s′ (observations possible in s are possible in s′) together with lrf and osc
  • If a state-event system satisfies lrf and osc, then the event system is BSD

  30. Composition
  • Possible traces of the composed system
    • Interleave possible traces of the components
    • Synchronization on occurrences of shared events
    • Shared events must occur in both components at the same time
  • Definition: ES1 and ES2 are composable iff E1 ∩ E2 ⊆ (I1 ∩ O2) ∪ (I2 ∩ O1)
  • Definition: (E, I, O, Tr) = ES1 || ES2
    • E = E1 ∪ E2
    • I = (I1 \ O2) ∪ (I2 \ O1)
    • O = (O1 \ I2) ∪ (O2 \ I1)
    • Tr = {τ ∈ E* | τ|E1 ∈ Tr1 ∧ τ|E2 ∈ Tr2}
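For finite examples the trace set of ES1 || ES2 can be computed directly from the definition of Tr, bounded to short traces. A sketch (function name and the events 'a', 's', 'b' are made up; 's' is the shared event the components synchronize on):

```python
from itertools import product

def proj(t, X):
    """t|X: the subsequence of t restricted to the events in X."""
    return tuple(e for e in t if e in X)

def compose_traces(Tr1, E1, Tr2, E2, maxlen=3):
    """Tr of ES1 || ES2 up to length maxlen: a trace over E1 ∪ E2 is
    admissible iff its projections onto E1 and E2 are admissible in the
    respective component (so shared events occur in both simultaneously)."""
    E = sorted(E1 | E2)
    return {t for n in range(maxlen + 1) for t in product(E, repeat=n)
            if proj(t, E1) in Tr1 and proj(t, E2) in Tr2}

# ES1 emits 'a' then the shared event 's'; ES2 accepts 's' then emits 'b'.
Tr = compose_traces({(), ('a',), ('a', 's')}, {'a', 's'},
                    {(), ('s',), ('s', 'b')}, {'s', 'b'})
print(('a', 's', 'b') in Tr)   # True: the components synchronized on 's'
print(('s',) in Tr)            # False: ES1 cannot do 's' before 'a'
```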

  31. Why Compositionality Results?
  • Verification of complex systems is difficult
    • Verification of individual components is much simpler
    • → compositionality results for reducing complexity
  • Re-using components
    • A component has been verified already
    • → compositionality results for re-using proofs
  • Integrating components off the shelf
    • Vendors do not want to reveal component internals to customers and competitors
    • → compositionality results for making verification possible
  • Parallelizing the development process
    • Prerequisite: multiple independent verification tasks
    • → compositionality results for faster development

  32. The Key Problem
  • Verifying a component locally
    • ensures that local perturbations can be locally corrected
  • What if corrections involve interface events?
    • Local corrections in one component might cause perturbations for other components
    • This needs to be solved by a compositionality result
  • Possible solutions (well-behaved composition):
    • Local corrections do not cause perturbations
    • Local corrections in one component might cause perturbations that can be locally corrected
    • Local corrections in both components might cause perturbations that can be locally corrected (termination must be ensured)

  33. BSPs are Partially Preserved by Composition
  • Compound system vs. individual components (rows: BSP of the composed system; X: BSPs required of all components):

            R    BSD  BSI  BSIA  FCD  FCI
      R     X
      BSD        X
      BSI        X    X
      BSIA       X         X
      FCD        X               X
      FCI        X               X    X

  • I.e.: a composed system is BSI if all its components are BSI and BSD;
    a composed system is BSD if all its components are BSD
    (under some technical preconditions)
