Higher-Order π-RAT: A Calculus for Trusted Computing
Andrew Cirillo, joint work with James Riely
DePaul University, CTI, Chicago, IL USA
Trustworthy Global Computing 2007
Trust: “The expectation that a device will behave in a particular manner for a specific purpose.” - TCG Specification Architecture Overview
Example
Alice holds privacy-sensitive data; BobsTickets.com (Bob) holds data with monetary value.
Alice asks: Can I trust this server with my sensitive data?
Bob asks: Is the customer hacked, or a robot?
Alice expects that Bob:
• Complies with a particular privacy policy
• Is running server software at the latest patch level
Bob expects that Alice:
• Has no spyware to intercept e-tickets
• Is making the request as a human user
Trust, Behavior and Static Analysis • Security Depends on the Behavior of Others • Trust = Expectation that Other Will Behave According to X • Trustworthy = Other Guaranteed to Behave According to X • Behavioral Specifications Include: • Type/Memory Safety, Non-Interference • Compliance with MAC, DAC or Ad-Hoc Policies • Static Analysis Used to Guarantee Behavior • Type Systems (Maybe False Negatives) • Bounded Model Checking (Maybe False Positives) • In Open Distributed Systems: • Safety Depends on Properties of Remote Systems • Need to Authenticate Code
Very Brief Overview of Remote Attestation • Integrity Measurement • Metric for Identifying Platform Characteristics • E.g. SHA-1 Hash of Loaded Executable Files (#) • Platform Configuration Register (PCR) • Protected Registers for Storing Measurements • Segmented into 5 Levels (0-4) • Levels 3 and 4 Protected by Hardware • Each Level Protected from Subsequent Levels • Measurements Stored In Registers [#(BIOS)|#(TSS)|#(OS)|#(App.EXE)] • Attestation • Contents of PCR + Arbitrary Payload, Signed by TPM Key
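As a rough picture of the data involved (a sketch only, not taken from the TCG specification or from the calculus), a PCR can be thought of as a measurement chain and an attestation as that chain plus an arbitrary payload, signed by the TPM key; all names below are hypothetical:

```ocaml
(* Hypothetical OCaml sketch of attestation data: a PCR as a chain of
   measurements, an attestation as PCR contents + payload + TPM signature. *)
type measurement = string            (* e.g. a SHA-1 hash of an executable *)
type pcr = measurement list          (* [#(BIOS); #(TSS); #(OS); #(App.EXE)] *)

type 'payload attestation = {
  pcr_contents : pcr;                (* the measurement chain *)
  payload      : 'payload;           (* arbitrary attested data *)
  signature    : string;             (* signature by the TPM key over the above *)
}

(* extending the chain before loading the next executable *)
let extend (p : pcr) (m : measurement) : pcr = p @ [m]
```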
Hypothesis: We can use attestation to solve trust issues in open distributed systems. • Solution: Enforce access control based on behavioral properties established through static analysis.
Our Solution: HOπ-RAT • Distributed Higher-Order Pi Calculus • Locations Identify Executable(s) • Access Control Logic Based on Code Identity • Includes Primitive Operations for Loading Code and Building Attestations • Focus Is on Concepts Relating to Code Identity; Abstract w.r.t. the Attestation Protocol
Our Solution: HOπ-RAT
Starting Point: Distributed HOπ with Pairs
Terms M,N ::= n | x | (M,N) | (x)P
Processes P,Q ::= 0 | M!N | M?N | new n; P | P|Q | split (x,y) = M; P | M N
Configurations G,H ::= l[P] | new n; G | G|H
Reductions:
split (x,y) = (M,N); P → P{x := M}{y := N}
((x)P) N → P{x := N}
Structural Rule: l[P|Q] ≡ l[P] | l[Q]
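To make the grammar above concrete, here is a rough OCaml encoding of the syntax (names as strings); the constructors are my own and the encoding is illustrative only, not taken from the paper:

```ocaml
(* Illustrative AST for the core HOπ-RAT starting point. *)
type term =
  | Name of string                          (* n *)
  | Var  of string                          (* x *)
  | Pair of term * term                     (* (M,N) *)
  | Abs  of string * proc                   (* (x)P *)

and proc =
  | Nil                                     (* 0 *)
  | Out   of term * term                    (* M!N *)
  | In    of term * term                    (* M?N *)
  | New   of string * proc                  (* new n; P *)
  | Par   of proc * proc                    (* P|Q *)
  | Split of string * string * term * proc  (* split (x,y) = M; P *)
  | App   of term * term                    (* M N *)

type config =
  | Loc  of string list * proc              (* l[P], l a compound identity *)
  | NewC of string * config                 (* new n; G *)
  | ParC of config * config                 (* G|H *)
```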
Our Solution: HOπ-RAT • Interpretation of Locations (l) • Physical Addresses (Dπ, Distributed Join Calculus, …) • Principals (Fournet/Gordon/Maffeis, DaISy, …) • Code Identity (This Talk) • Processes Located at Measurements: P running on a host with [tss|myos|widget] in the PCR is written (tss|myos|widget)[P] • On a well-functioning trustworthy system, this means: • widget = #(M) for some executable M • P is a residual of M
Access Control: Overview • Access Control Logic • Code Identities (Represent Hashes of Executables) • Security Classes (Represent Static Properties) • Compound Principals • Policy Consists of: • Dynamic: Map Identities to Properties • Static: Security Annotations on Channels • Partial Order (=>) Ranks Principals by Trustedness, à la Abadi, Burrows, Lampson, Plotkin ’93 (hereafter ABLP)
Access Control: Principals and Types
Principals A,B ::= a | α (identities / security classes)
              | 0 | any (bottom / top)
              | A|B (quoting)
              | A∧B | A∨B (and / or)
Processes P,Q ::= ... | a => α (new: policy statement)
Policies Σ = a1 => α1 | ... | an => αn
Types T,S ::= Un | T×S | T→Prc (Un / pairs / abstractions)
           | Ch‹A,B›(T) (read-write)
           | Wr‹A,B›(T) (write-only)
Identities a encode measurements; the principals, policy statements and annotated channel types are new relative to HOπ.
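Continuing the illustrative encoding, principals, types and policies might be rendered as follows (again a hypothetical sketch, not the paper's formalization):

```ocaml
(* Illustrative encoding of principals, types and policies. *)
type principal =
  | Id    of string                  (* a : identity (measurement) *)
  | Cls   of string                  (* α : security class *)
  | Bot                              (* 0 *)
  | Any                              (* any *)
  | Quote of principal * principal   (* A|B *)
  | And   of principal * principal   (* A∧B *)
  | Or    of principal * principal   (* A∨B *)

type ty =
  | Un                                     (* untrusted data *)
  | Prod  of ty * ty                       (* T×S *)
  | Arrow of ty                            (* T→Prc *)
  | Ch    of principal * principal * ty    (* Ch‹A,B›(T), read-write *)
  | Wr    of principal * principal * ty    (* Wr‹A,B›(T), write-only *)

(* a policy Σ maps identities to the security classes they satisfy *)
type policy = (string * string) list       (* pairs a => α *)
```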
Access Control: Channel Types
Authorizations are specified in type annotations: new n : Ch‹A,B›(T) declares the content type T together with the authorized readers and writers.
Indirection via the ABLP-style calculus, e.g. Σ ├─ a => α implies Σ ├─ a => α ∨ β
Subtyping uses the principal calculus, e.g. Σ ├─ Wr‹A,B›(T) <: Wr‹A′,B′›(T) if Σ ├─ A => A′ and Σ ├─ B′ => B
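A very partial sketch of how the entailment Σ ├─ A => B and the write-only subtyping rule might be checked, reusing the types from the previous sketch; `entails` covers only reflexivity, policy lookup and the ∧/∨ cases used in these examples, nothing like the full ABLP-style calculus:

```ocaml
(* Partial, illustrative entailment Σ ├─ A => B. *)
let rec entails (sigma : policy) (a : principal) (b : principal) : bool =
  a = b
  || b = Any
  || a = Bot
  || (match a, b with
      | Id x, Cls y     -> List.mem (x, y) sigma      (* a => α recorded in the policy *)
      | _, Or (b1, b2)  -> entails sigma a b1 || entails sigma a b2
      | _, And (b1, b2) -> entails sigma a b1 && entails sigma a b2
      | And (a1, a2), _ -> entails sigma a1 b || entails sigma a2 b
      | _               -> false)

(* The write-only subtyping rule above, as a boolean check. *)
let sub_wr (sigma : policy) (t1 : ty) (t2 : ty) : bool =
  match t1, t2 with
  | Wr (a, b, t), Wr (a', b', t') ->
      entails sigma a a' && entails sigma b' b && t = t'
  | _ -> false
```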
Access Control: Example
Ex. Writers must have both the prop1 and prop2 properties: new n : Wr‹(prop1∧prop2), any›(T)
Then, if we have (...|widget)[n!M], it should be the case that Σ ├─ widget => (prop1∧prop2)
Access Control: Runtime Error
Processes P,Q ::= ... | wr-scope N is C | rd-scope N is C (new)
Runtime Errors:
Σ ► A[wr-scope n is C] | B[n!M] if not Σ ├─ B => C
Σ ► A[rd-scope n is C] | B[n?(x)M] if not Σ ├─ B => C
Distinction between possession and use
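With the same sketch, the write-scope condition becomes a simple predicate: output on n by a location with identity B is an error when the policy cannot derive B => C for the declared scope C (read-scope is symmetric). For the earlier example this amounts to checking widget against prop1∧prop2:

```ocaml
(* Write-scope check, reusing the illustrative entails/policy sketch above. *)
let write_error (sigma : policy) (writer : principal) (scope : principal) : bool =
  not (entails sigma writer scope)

(* e.g. the widget example:
   write_error sigma (Id "widget") (And (Cls "prop1", Cls "prop2")) *)
```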
Our Solution: HOπ-RAT (new: loading and attestation)
Terms M,N ::= ... | [(x)P] | {M:T @ a}
Processes P,Q ::= ... | load M N | let x = attest(N:T); M | check {x:T} = N; M
Reductions:
host[load [(x)P] N] → (host|a)[P{x := N}] if a = #([(x)P])
a[let x = attest(N:T); M] → a[M{x := {N:T @ a}}]
b[a => cert] | b[check {x:T} = {N:T @ a}; M] → b[a => cert] | b[M{x := N}]
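As a rough, self-contained illustration of attest and check (hypothetical names, a simplification of the reduction rules above): an attested package carries a value, its claimed type and the attesting identity, and checking only unpacks it when the local policy grants that identity the cert class:

```ocaml
(* Illustrative attest/check, abstracting away the calculus syntax. *)
type 'a attested = { value : 'a; claimed_ty : string; attester : string }

(* the attesting location a packages N with its claimed type T *)
let attest (a : string) (n : 'a) (t : string) : 'a attested =
  { value = n; claimed_ty = t; attester = a }

(* checking succeeds only if the checker's policy maps the attester to cert *)
let check (sigma : (string * string) list) (pkg : 'a attested) : 'a option =
  if List.mem (pkg.attester, "cert") sigma then Some pkg.value else None
```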
Type System: Overview • The cert Security Class Indicates • Type Annotations in Attested Messages are Accurate • Will Not Expose Secret-Typed Data to Attackers • Will Not Write/Read Channels Without Authorization • Main Components: • Classify Data with Kinds (PUB/PRV/TNT/UN) • Subtyping • Constraints on Well-Formed Policies • Correspondence Assertions • A la Gordon/Jeffrey 2003 and Haack/Jeffrey 2004
Type System: Attacker Model
Processes P,Q ::= ... | spoof B; P | let x1,…,xn = fn(M); P (new, attackers only)
Reductions:
a[spoof b; P] → (a|b)[P]
a[let x = fn([(y)n!unit]); P] → a[P{x := n}]
• Attacker Model:
• Create Attestations with Bad Type Annotations
• Falsify Subsequent Measurements*
• Extract Names from Executables*
• Spy on (i.e. Debug) Running Child Processes
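A toy rendering of the spoof reduction, reusing the illustrative config type from earlier: spoofing lets an attacker continue at the compound location a|b, modelled here by appending the claimed identity to its own measurement list:

```ocaml
(* Illustrative spoof step: a[spoof b; P] → (a|b)[P]. *)
let spoof (location : string list) (claimed : string) (p : proc) : config =
  Loc (location @ [claimed], p)
```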
Type System: Results
Definition: A configuration is a Σ-initial attacker if it is of the form A1[P1] | … | An[Pn], where no Ai is mapped to cert by Σ and no Pi contains an attestation.
Definition: A configuration G is robustly Σ-safe if the evaluation of G|H can never cause a runtime error relative to Σ, for an arbitrary Σ-initial attacker H.
Theorem: Let Δ be a type environment in which every term is a channel of kind UN. If Σ;Δ ├─ G, then G is robustly Σ-safe.
Conclusions • We Have: • Proposed a New Extension to HOπ for Modeling Trusted Computing • Enabled Access Control Based on Static Properties of Code • Developed a Type System for Robust Safety • For Future Work We Are Considering: • Internalizing Program Analysis (e.g. Modeling Certifying Compilers) • Using Attestations to Sign the Output of Certifiers • Exploring an Implementation for Web Services
Thanks! See tech. rep. at http://reed.cs.depaul.edu/acirillo (next week)
Static Analysis for Open Distributed Systems • Heterogeneous/Open Systems • Components under the Control of Different Parties • Different Trust Requirements • Who’s Analyzing Who? • Problems for Hosts • Safety Depends on Code Received From Outside • Code Distributed in Compiled Format, Analysis Intractable • Solution: Bytecode Verification or Proof-Carrying Code • Problems for Remote Parties • Safety Depends on Properties of Code on Remote System • Need to Authenticate the Remote Code • Solution: Trusted Computing with Remote Attestation