

  1. Information Security – Theory vs. Reality, 0368-4474-01, Winter 2011. Lecture 7: Information flow control. Eran Tromer. Slides credit: Max Krohn (MIT), Ian Goldberg and Urs Hengartner (University of Waterloo).

  2. Information Flow Control
  An information flow policy describes the authorized paths along which information can flow. For example, Bell-LaPadula defines a lattice-based information flow policy.

  3. Example Lattice
  (“Top Secret”, {“Soviet Union”, “East Germany”})
  (“Top Secret”, {“Soviet Union”})
  (“Secret”, {“Soviet Union”, “East Germany”})
  (“Secret”, {“Soviet Union”})
  (“Secret”, {“East Germany”})
  (“Unclassified”, ∅)

  4. Lattices
  The dominance relationship ≥ defined in the military security model is reflexive, transitive, and antisymmetric, i.e., a partial order.
  For two levels a and b, neither a ≥ b nor b ≥ a might hold.
  However, for every a and b there is a least upper bound u for which u ≥ a and u ≥ b, and a greatest lower bound l for which a ≥ l and b ≥ l; a partial order with these bounds is a lattice.
  There are also two elements U and L that dominate / are dominated by all levels.
  In the example, U = (“Top Secret”, {“Soviet Union”, “East Germany”}) and L = (“Unclassified”, ∅).
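For labels of the form (classification, category set), as in the example lattice above, the bounds have a standard closed form (this follows Denning's lattice model; it is not spelled out on the slide):

  (c1, S1) ≥ (c2, S2)  iff  c1 ≥ c2 and S1 ⊇ S2
  lub((c1, S1), (c2, S2)) = (max(c1, c2), S1 ∪ S2)
  glb((c1, S1), (c2, S2)) = (min(c1, c2), S1 ∩ S2)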

  5. Information Flow Control
  The input and output variables of a program each have a (lattice-based) security classification S() associated with them.
  For each program statement, the compiler verifies that the information flow is secure.
  For example, x = y + z is secure only if S(x) ≥ lub(S(y), S(z)), where lub() is the least upper bound.
  A program is secure if each of its statements is secure. (A sketch of this check follows.)
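A minimal sketch of the per-statement check, assuming labels are (level, category set) pairs with the lub/glb given above; the names Label, dominates, and assignment_secure are illustrative, not from the lecture:

  # Labels: (level, categories), with level an int rank and categories a frozenset.
  from typing import FrozenSet, Tuple

  Label = Tuple[int, FrozenSet[str]]

  def dominates(a: Label, b: Label) -> bool:
      # a >= b iff a's level is at least b's and a's categories contain b's
      return a[0] >= b[0] and a[1] >= b[1]

  def lub(a: Label, b: Label) -> Label:
      return (max(a[0], b[0]), a[1] | b[1])

  def assignment_secure(target: Label, *sources: Label) -> bool:
      # x = y + z is secure only if S(x) >= lub(S(y), S(z))
      bound: Label = (0, frozenset())
      for s in sources:
          bound = lub(bound, s)
      return dominates(target, bound)

  # Example: a (Secret, {SU}) value may not flow into an Unclassified variable.
  y = (1, frozenset({"SU"}))
  z = (0, frozenset())
  assert not assignment_secure((0, frozenset()), y, z)
  assert assignment_secure((1, frozenset({"SU"})), y, z)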

  6. Implicit information flow
  Conditional branches carry information about the condition:
    secret bool a;
    public bool b;
    ...
    b = a;
    if (a) { b = 0; } else { b = 1; }
  The direct assignment b = a is an explicit flow; the branch is an implicit one: after it, b = !a, so a's value reaches b with no direct assignment at all.
  Possible solution: assign a label to the program counter and include it in every operation. (Too conservative! See the sketch below.)
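A hedged sketch of the program-counter-label idea: while executing under a branch on secret data, every assignment picks up the branch's label, which rejects the implicit flow above. The monitor below is illustrative, not any particular system's mechanism:

  # Labels are sets of tags; set() = public, {"secret"} = secret.
  pc_stack = [set()]                # pc label, one entry per enclosing branch

  def pc():                         # current pc label = union of enclosing labels
      out = set()
      for l in pc_stack:
          out |= l
      return out

  def enter_branch(cond_label):
      pc_stack.append(set(cond_label))

  def leave_branch():
      pc_stack.pop()

  def assign(target_label, source_label):
      # The target must be able to hold both the source's label and the
      # pc label; the pc term is what catches the implicit flow.
      if not (set(source_label) | pc()) <= set(target_label):
          raise RuntimeError("insecure flow")

  a_label = {"secret"}
  b_label = set()                   # b is public
  enter_branch(a_label)             # if (a) ...
  try:
      assign(b_label, set())        # b = 0: a constant, but pc = {"secret"}
  except RuntimeError as e:
      print("rejected:", e)
  leave_branch()

This is also why the slide calls the approach too conservative: the same check rejects branches whose assignments never reach an observer.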

  7. Issues
  • Label creep
    • Example: false flows
    • Example: branches
  • Declassification
    • Example: encryption

  8. Implementation approaches
  • Language-based IFC (Jif & others)
    • Compile-time tracking for Java, Haskell
    • Static analysis
    • Provable security
    • Label creep (false positives)
  • Dynamic analysis
    • Language / bytecode / machine code
    • Tainting
    • False positives, false negatives (see the sketch below)
  • Hybrid solutions
  • OS-based DIFC (Asbestos, HiStar, Flume)
    • Run-time tracking enforced by a trusted kernel
    • Works with any language, compiled or scripting, closed or open
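To see where pure dynamic tainting gets false negatives, consider this illustrative sketch: taint propagates through data dependencies (copies, arithmetic) but not through control dependencies, so the implicit flow from slide 6 escapes untainted:

  class Tainted:
      def __init__(self, value, taint=frozenset()):
          self.value, self.taint = value, frozenset(taint)
      def __add__(self, other):
          o = other if isinstance(other, Tainted) else Tainted(other)
          return Tainted(self.value + o.value, self.taint | o.taint)

  secret = Tainted(1, {"secret"})

  # Explicit flow: caught, the result carries the taint.
  copy = secret + 0
  assert "secret" in copy.taint

  # Implicit flow: missed, b is rebuilt from untainted constants.
  b = Tainted(0) if secret.value else Tainted(1)
  assert b.taint == frozenset()     # false negative: b still reveals secret.value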

  9. Distributed Information Flow Control
  [diagram: Alice's and Bob's data flow through a process P, a DB, and a gateway; Alice's data is tracked end to end]
  1. Data tracking
  2. Isolated declassification

  10. Information Flow Control (IFC)
  • Goal: track which secrets a process has seen
  • Mechanism: each process gets a secrecy label
  • The label summarizes which categories ("tags") of data a process is assumed to have seen
  • Examples:
    • { “Financial Reports” }
    • { “HR Documents” }
    • { “Financial Reports” and “HR Documents” }

  11. Tags + Labels
  Universe of tags: Finance, HR, Legal, SecretProjects.
  Each process p has a secrecy label Sp and a set Dp of tags it may declassify.
  DIFC rules: any process can add any tag to its label; a process that creates a new tag gets the ability to declassify it.
    // initially Sp = {}, Dp = {}
    change_label({Finance});        // Sp = { Finance } — adding is always allowed
    tag_t HR = create_tag();        // Dp = { HR }
    change_label({Finance, HR});    // Sp = { Finance, HR }
    change_label({Finance});        // DIFC: declassification in action (HR ∈ Dp)
  (A sketch of these rules follows below.)
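A sketch of the simplified label-change rules on this slide: any process may add any tag to its secrecy label, but dropping a tag (declassification) requires holding it in Dp. The Process class and method names are illustrative:

  class Process:
      def __init__(self):
          self.S = frozenset()        # secrecy label Sp
          self.D = frozenset()        # tags this process may declassify (Dp)

      def create_tag(self, tag):
          self.D |= {tag}             # creator can declassify the new tag
          return tag

      def change_label(self, new_label):
          new_label = frozenset(new_label)
          dropped = self.S - new_label
          if not dropped <= self.D:   # adding tags is always allowed;
              raise PermissionError("cannot drop " + str(set(dropped - self.D)))
          self.S = new_label          # dropping needs declassification rights

  p = Process()
  p.change_label({"Finance"})         # ok: anyone can add tags
  HR = p.create_tag("HR")
  p.change_label({"Finance", "HR"})   # ok
  p.change_label({"Finance"})         # ok: p owns HR, may declassify it
  q = Process()
  q.change_label({"Finance"})
  try:
      q.change_label(set())           # rejected: q cannot declassify Finance
  except PermissionError:
      print("q may not drop Finance")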

  12. DIFC OS
  [diagram: process p (Sp = {a}) holds Alice's data; a declassifier d (Sd = {a}, Dd = {a}) may release it to process q, which raises its label from Sq = {} to Sq = {a}; all flows are mediated by the KERNEL]

  13. Communication Rule
  Process p: Sp = {HR}. Process q: Sq = {HR, Finance}.
  p can send to q iff Sp ⊆ Sq.
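The rule itself is one line; a sketch:

  def can_send(Sp: frozenset, Sq: frozenset) -> bool:
      # p can send to q iff Sp ⊆ Sq: the receiver must already be
      # at least as tainted as the sender.
      return Sp <= Sq

  assert can_send(frozenset({"HR"}), frozenset({"HR", "Finance"}))
  assert not can_send(frozenset({"HR", "Finance"}), frozenset({"HR"}))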

  14. Question:
  • What interface should the kernel expose to applications?
  • Easier question than “is my OS implementation secure?”
  • Easy to get it wrong!

  15. Approach 1: “Floating Labels” [IX, Asbestos]
  [diagram: p (Sp = {a}) sends to q; the kernel silently raises q's label from Sq = {} to Sq = {a}]

  16. Floaters Leak Data
  [diagram: an attacker holding Alice's data (S = {a}) spawns helper processes b0…b3, each with S = {}; for each secret bit that is 1, it messages the corresponding bi, silently floating bi's label to {a}. Each bi then tries to write to a public leak file (S = {}); only the still-clean processes succeed, so the pattern of writes spells out the secret bits 1001.]
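A toy simulation of the attack, assuming the floating-label rule (a receiver's label silently rises to cover the sender's) and a leak file whose label is fixed at {}:

  secret_bits = [1, 0, 0, 1]              # Alice's data, labeled {a}

  leaked = []
  for bit in secret_bits:
      S_bi = set()                        # helper process bi starts clean
      if bit:                             # tainted attacker pings bi iff bit is 1;
          S_bi |= {"a"}                   # floating rule: bi's label rises to {a}
      # bi now tries to write the fixed-label public leak file (S = {}):
      wrote = S_bi <= set()
      leaked.append(0 if wrote else 1)    # a missing write encodes a 1

  print(leaked)                           # [1, 0, 0, 1] -- the secret escaped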

  17. Approach 2: “Set Your Own” [HiStar/Flume]
  Rule: Sp ⊆ Sq is a necessary precondition for send; the kernel never changes a label implicitly.
  [diagram: p (Sp = {a}, Op = {a-, a+}) wants to send to q (Sq = {}, Oq = {a+}); q must first use its a+ capability to raise its own label to Sq = {a}]
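Under the set-your-own rule the same probe fails: labels never move implicitly, a receive by a clean process from a tainted one is simply refused, and the refusal no longer depends on the secret. An illustrative sketch:

  def send(Sp, Sq):
      # Set-your-own rule: Sp ⊆ Sq must already hold; the kernel
      # never raises Sq on q's behalf.
      if not Sp <= Sq:
          raise PermissionError("refused: receiver must raise its own label")

  # q must explicitly opt in before it can see {a}-labeled data ...
  Sq = set()
  Sq |= {"a"}               # q uses its a+ capability: change_label({a})
  send({"a"}, Sq)           # ok

  # ... and once labeled {a}, q can no longer write the public leak file:
  try:
      send(Sq, set())
  except PermissionError as e:
      print(e)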

  18. Case study: Flume
  • Goal: user-level implementation
    • apt-get install flume
  • Approach:
    • System call delegation [Ostia by Garfinkel et al., 2003]
    • Use Linux 2.6 (or OpenBSD 3.9)

  19. System Call Delegation
  [diagram: the Web App calls open("/hr/LayoffPlans", O_RDONLY); glibc traps into the Linux kernel, which opens the Layoff Plans file directly]

  20. System Call Delegation
  [diagram: under Flume, the same open("/hr/LayoffPlans", O_RDONLY) goes through Flume's libc to the user-level Flume reference monitor, which checks labels before asking the Linux kernel to open the Layoff Plans file]
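Flume's actual monitor is C code behind a modified libc; the sketch below only illustrates the delegation pattern, with a hypothetical check_open and label store standing in for the reference monitor's real endpoint rules:

  import os

  FILE_LABELS = {"/hr/LayoffPlans": frozenset({"HR"})}   # hypothetical label store

  def lookup_file_label(path):
      return FILE_LABELS.get(path, frozenset())

  def check_open(process_label: frozenset, path: str, flags: int) -> bool:
      # Treat the file as one endpoint of the communication rule:
      # writing sends to the file, reading receives from it.
      file_label = lookup_file_label(path)
      if flags & (os.O_WRONLY | os.O_RDWR):
          return process_label <= file_label   # write: Sp ⊆ S_file
      return file_label <= process_label       # read:  S_file ⊆ Sp

  def delegated_open(process_label, path, flags):
      # The interposed libc forwards the call to the monitor instead of
      # trapping straight into the kernel; the monitor opens on its behalf.
      if not check_open(process_label, path, flags):
          raise PermissionError(path)
      return os.open(path, flags)

  # An unlabeled process may not read the {HR}-labeled plans:
  assert not check_open(frozenset(), "/hr/LayoffPlans", os.O_RDONLY)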

  21. System Calls
  • IPC: read, write, select
  • Process management: fork, exit, getpid
  • DIFC: create tag, change label, fetch label, send capabilities, receive capabilities

  22. How To Prove Security
  • Property: Noninterference

  23. Noninterference [Goguen & Meseguer ’82]
  Experiment #1: HIGH process p (Sp = {a}) runs alongside LOW process q (Sq = {}).
  Experiment #2: p is replaced by a different HIGH process p' (Sp' = {a}).
  Noninterference holds if q's LOW-level observations are identical in both experiments.
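One standard way to state the property the two experiments test (formulations vary; this is a purge-style definition in the spirit of Goguen–Meseguer, not quoted from the slides):

  for every trace t of inputs:  obs_LOW(run(t)) = obs_LOW(run(purge_HIGH(t)))

i.e., deleting (or arbitrarily replacing) all HIGH inputs in a trace must leave every LOW-observable output unchanged; equivalently, swapping p for p' above cannot change anything q sees.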

  24. How To Prove Security (cont.)
  • Property: Noninterference
  • Process-algebra model for a DIFC OS
    • Communicating Sequential Processes (CSP)
  • Proof that the model fits the definition

  25. Proving noninterference
  • Formalism: Communicating Sequential Processes (CSP)
  • Consider any possible system (with arbitrary user applications)
  • Induction over all possible sequences of moves a system can make (i.e., traces)
  • At each step, case-by-case analysis of all system calls in the interface
  • Prove no “interference”

  26. Results on Flume
  Example app: MoinMoin Wiki
  • Allows adoption of Unix software?
    • 1,000 LOC launcher/declassifier
    • 1,000 out of 100,000 LOC in MoinMoin changed
    • Python interpreter, Apache unchanged
  • Solves security vulnerabilities?
    • Without our knowing, we inherited two ACL-bypass bugs from MoinMoin
    • Neither is exploitable in Flume's MoinMoin
  • Performance?
    • Within a factor of 2 of the original on read and write benchmarks

  27. Flume Limitations
  • Bigger TCB than HiStar / Asbestos
    • Linux stack (kernel + glibc + linker)
    • Reference monitor (~22 kLOC)
  • Covert channels via disk quotas
  • Confined processes like MoinMoin don't get the full POSIX API:
    • spawn() instead of fork() & exec()
    • flume_pipe() instead of pipe()

  28. DIFC vs. Traditional MAC

  29. What are we trusting?
  • Proof of noninterference
  • Platform
    • Hardware
    • Virtual machine manager
    • Kernel
    • Reference monitor
  • Declassifier
  • Human decisions
  • Covert channels
  • Physical access
  • User authentication

  30. Practical barriers to deployment of Information Flow Control systems
  • Programming model confuses programmers
  • Lack of support:
    • Applications
    • Libraries
    • Operating systems
  • People don't care about security until it's too late

  31. Quantitative information flow control
  • Quantify how much information (# of bits) leaks from (secret) inputs to outputs
  • Approach: dynamic analysis (extended tainting) followed by flow-graph analysis
  • Example: Battleship online game (see the sketch below)
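A crude illustration of the underlying measure (the lecture's approach builds a flow graph from an execution trace; this sketch only shows the counting idea, leakage bounded by log2 of the number of distinguishable outputs):

  import math

  # How many bits can the output f(secret, public) reveal about the secret?
  def leaked_bits(f, secrets, publics):
      worst = 0.0
      for pub in publics:
          # Outputs distinguishable by the observer for this public input:
          distinct = {f(s, pub) for s in secrets}
          worst = max(worst, math.log2(len(distinct)))
      return worst

  # Battleship-style example: the referee answers hit/miss for one shot.
  secrets = range(16)                  # hidden ship position, 4 bits
  publics = range(16)                  # attacker-chosen shot
  print(leaked_bits(lambda s, p: s == p, secrets, publics))   # 1.0 bit per shot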
