Cryptographic Implementation of Confidentiality and Integrity Properties
Work in progress…
Cédric Fournet, Tamara Rezk (INRIA-MSR Joint Centre)
Dagstuhl Seminar, February 2007
Motivation • Need for simple programming-language abstractions for confidentiality and integrity – and for their robust cryptographic implementation • Relate high-level security goals to the use of cryptographic protocols
Our Goal • We want to define a compiler such that the cryptographic and distribution issues of the implementation remain transparent to the programmer • The programmer specifies a security policy (confidentiality and integrity of data) • If the source program is typable under a given policy, our compiler generates low-level, well-typed cryptographic code
Confidentiality and Integrity in Language-based Security • Confidentiality and integrity policies are specified using labels from a security lattice (SL, ≤), e.g. labels pairing confidentiality (Readers) with integrity (Writers) – pictured on the slide as the four-point lattice HL, HH, LL, LH • The assignment x := y is safe if Label(y) ≤ Label(x) • Confidentiality (who can read): data in y can be read at least by the readers of x • Integrity (who can modify): data in y is more "trusted" (of higher integrity) than data in x
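As an illustration (not from the slides), a minimal sketch of this flow check, assuming labels are simply pairs of a reader set (confidentiality) and a writer set (integrity); all names below are hypothetical:

# A minimal sketch of label-based flow checking, assuming labels are
# (readers, writers) pairs; representation and names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    readers: frozenset   # who may read the data (confidentiality)
    writers: frozenset   # who may have influenced the data (integrity)

def leq(a, b):
    """a ≤ b : data labelled a may flow into a variable labelled b."""
    # Confidentiality: everyone who can read b must be allowed to read a.
    # Integrity: b may have been influenced by at least the writers of a.
    return b.readers <= a.readers and a.writers <= b.writers

def check_assignment(x, y):
    """x := y is safe iff Label(y) ≤ Label(x)."""
    return leq(y, x)

alice = frozenset({"alice"})
everyone = frozenset({"alice", "adversary"})
secret_trusted = Label(readers=alice, writers=alice)      # high confidentiality, high integrity
public_trusted = Label(readers=everyone, writers=alice)   # low confidentiality, high integrity

assert check_assignment(x=secret_trusted, y=public_trusted)       # public data into a secret variable: allowed
assert not check_assignment(x=public_trusted, y=secret_trusted)   # secret data into a public variable: rejected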
Simple Confidentiality and Integrity • The view of low-confidentiality data does not depend on secret data, and high-integrity data cannot be affected by data that the adversary can manipulate
Interaction with the adversary • Input/output observation (passive case): if s1 ~ s2 then P(s1) ~ P(s2) • Adversary interacts with the system (active case): for every adversary A, if s1 ~ s2 then P[A](s1) ~ P[A](s2) [Robust Declassification, Zdancewic & Myers 01]
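Spelled out with explicit quantifiers (writing ~ for low-equivalence of initial states, and P[A] for P with its untrusted holes filled by the adversary A, as in the cited robust-declassification setting):

∀ s1, s2. s1 ~ s2 ⟹ P(s1) ~ P(s2)    (passive)
∀ A. ∀ s1, s2. s1 ~ s2 ⟹ P[A](s1) ~ P[A](s2)    (active)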
Implementation with shared memory • Private communication: h1, h2 have compatible confidentiality and integrity policies • Source program: h1 := h2 • Naive implementation over shared memory – sender: l_1 := h2 – receiver: h := l_2 • All communication is through shared memory (the adversary has access to shared memory)
Implementation with shared memory • Private communication: h1, h2 have compatible confidentiality and integrity policies • Source program: h1 := h2 • Cryptographic implementation over shared memory – key generation: ks,kv := Gs; ke,kd := Ge – sender: l_1 := Enc(h2, ke); l_2 := S(ks, l_1) – receiver: try { h := V(kv, l_2); h_1 := Dec(h, kd) } catch { skip } • All communication is through shared memory (the adversary has access to shared memory)
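A minimal executable sketch of this encrypt-then-sign translation, using the pyca/cryptography RSA primitives as a stand-in for the abstract (Ge, Enc, Dec) and (Gs, S, V), and a plain dict standing in for the adversary-accessible shared memory. All names are illustrative, not the compiler's actual output; the signature here is detached, so the receiver verifies l_2 over l_1 and then decrypts l_1.

# Sketch of the translated protocol; pyca/cryptography replaces the abstract primitives.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Key generation (trusted setup), playing the role of Gs and Ge
ks = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # signing key
kv = ks.public_key()                                                   # verification key
kd = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # decryption key
ke = kd.public_key()                                                   # encryption key

shared = {}   # shared memory: low variables, readable and writable by the adversary

def sender(h2):
    l1 = ke.encrypt(h2, OAEP)                              # l_1 := Enc(h2, ke)
    shared["l_1"] = l1
    shared["l_2"] = ks.sign(l1, PSS, hashes.SHA256())      # l_2 := S(ks, l_1)

def receiver():
    try:
        l1, sig = shared["l_1"], shared["l_2"]
        kv.verify(sig, l1, PSS, hashes.SHA256())           # h := V(kv, l_2); raises if forged
        return kd.decrypt(l1, OAEP)                        # h_1 := Dec(h, kd)
    except (KeyError, InvalidSignature, ValueError):
        return None                                        # catch { skip }

sender(b"secret payload")
assert receiver() == b"secret payload"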
Our contribution (coming soon…) • Simple typed language with information-flow security, for both integrity and secrecy • Target language with crypto primitives; all communication is on shared memory • We equip this language with a type system for checking its usage of cryptography • Against adaptive chosen-ciphertext attacks and adaptive chosen-message attacks • We give a typed translation from the simple language to the target language.
The language • An imperative language with shared memory and probabilistic (polytime) functions: x := f(x_1, …, x_n) – including Ge, E, D, Gs, V, S • Special command: try h := V(h1, k) catch c
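As an illustration, the command syntax might be represented along these lines (a sketch; only the assignment and the try/catch appear on the slide, the sequencing constructor is assumed from the usual imperative setting):

# A hypothetical abstract syntax for the target language, as Python dataclasses.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class Assign:                 # x := f(x_1, ..., x_n), f a probabilistic polytime function
    x: str
    f: str                    # e.g. "Ge", "E", "D", "Gs", "V", "S"
    args: Tuple[str, ...]

@dataclass
class Seq:                    # c1 ; c2 (assumed)
    first: "Cmd"
    second: "Cmd"

@dataclass
class TryVerify:              # try h := V(h1, k) catch c
    h: str
    h1: str
    k: str
    handler: "Cmd"

Cmd = Union[Assign, Seq, TryVerify]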
The semantics as Markov chains • Prob(s1, s2): probability of stepping from configuration s1 to configuration s2, with s1, s2 drawn from a set S of configurations • The semantics is a distribution transformer distTransformer : Distr(S) → Distr(S) with distTransformer(dist)(s2) = ∑s1 Prob(s1, s2) · dist(s1)
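Concretely, a distribution transformer of this shape is just a pushforward through the transition probabilities; a minimal sketch, with a hypothetical two-configuration chain:

# Minimal sketch of distTransformer(dist)(s2) = sum over s1 of Prob(s1, s2) * dist(s1)
# for a toy Markov chain over configurations {"s0", "s1"}.
def dist_transformer(prob, dist):
    """prob[s1][s2] is Prob(s1, s2); dist maps configurations to probabilities."""
    out = {}
    for s1, p1 in dist.items():
        for s2, p in prob[s1].items():
            out[s2] = out.get(s2, 0.0) + p * p1
    return out

# Hypothetical transition probabilities, e.g. a probabilistic assignment flipping a bit.
prob = {
    "s0": {"s0": 0.5, "s1": 0.5},
    "s1": {"s1": 1.0},
}
dist = {"s0": 1.0}                    # initial configuration s0 with probability 1
print(dist_transformer(prob, dist))   # {'s0': 0.5, 's1': 0.5}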
Indistinguishable ensembles • Two ensembles D1, D2 are indistinguishable when no polytime distinguisher wins the following game with probability significantly better than 1/2: b := {0,1}; if b = 1 then v := D1(n) else v := D2(n); g := Distinguisher(v); return g = b
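A toy version of this game, with hypothetical ensembles and an empirical estimate of the distinguisher's success probability:

# Sample b, draw v from D1 or D2, let the distinguisher guess b, estimate Pr[g = b].
import random

def D1(n):   # toy ensemble 1: n uniformly random bits
    return [random.randint(0, 1) for _ in range(n)]

def D2(n):   # toy ensemble 2: bits biased towards 1 (so the two are distinguishable)
    return [1 if random.random() < 0.6 else 0 for _ in range(n)]

def distinguisher(v):
    # If the sample looks biased towards 1, guess it came from D2 (i.e. b = 0).
    return 0 if sum(v) > len(v) / 2 else 1

def game(n=64):
    b = random.randint(0, 1)
    v = D1(n) if b == 1 else D2(n)
    g = distinguisher(v)
    return int(g == b)

wins = sum(game() for _ in range(10_000))
print("Pr[g = b] ~", wins / 10_000)   # noticeably above 1/2: these D1, D2 are distinguishable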
Indistinguishable ensembles (passive adversaries) • Computational Non-Interference, first introduced by Peeter Laud at ESOP'01: b := {0,1}; if b = 1 then initial := D1 else initial := D2; final := P(initial); g := A(low(final)); return g = b
Computational NI for Active Adversaries • P1, …, Pn: polynomial commands • A: polynomial-time algorithm with control of the scheduler and access to low memory b := {0,1}; if b = 1 then initial := D1 else initial := D2; A[P1, …, Pn]; return g = b (g is the guess output by A)
Cryptographic assumptions (also written in our language) • Assumption: the encryption scheme (Ge, E, D) provides indistinguishability under adaptive chosen-ciphertext attacks (IND-CCA) CCA game: b := {0,1}; e,d := Ge; A; return g = b (g is the guess output by A) • A has access to the security parameter, to e, and to m • A can also call the two oracle commands: – EO(v0, v1): if b then m := E(v0, e) else m := E(v1, e); log := log + m – DO(m): if (m in log) then l := 0 else l := D(m, d)
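The shape of this game as a sketch in which the scheme and the adversary are parameters; the interface (keygen, enc, dec) and the adversary's calling convention are assumptions for illustration, not from the slides:

# Skeleton of the IND-CCA game: the adversary gets the public key and two oracles,
# EO (left-or-right encryption, logging challenge ciphertexts) and DO (decryption,
# refusing logged ciphertexts), and must guess the bit b.
import secrets

def cca_game(scheme, adversary):
    b = secrets.randbits(1)
    e, d = scheme.keygen()                       # e,d := Ge
    log = []

    def EO(v0, v1):                              # encryption oracle
        m = scheme.enc(v0 if b else v1, e)       # if b then E(v0,e) else E(v1,e)
        log.append(m)                            # log := log + m
        return m

    def DO(m):                                   # decryption oracle
        return 0 if m in log else scheme.dec(m, d)

    g = adversary(e, EO, DO)                     # A; return g = b
    return int(g == b)

# IND-CCA security: for every polytime adversary, Pr[cca_game(...) = 1] stays
# negligibly close to 1/2.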
Cryptographic assumptions (also written in our language) • Assumption: the signature scheme (Gs, S, V) is secure against forgery under adaptive chosen-message attack (CMA) CMA game: s,v := Gs; A; if m in log then return 0 else try m' := V(x, v) catch return 0; return m' = m • A has access to v (but not to s) • A can also call an oracle command for signing: – S(m): log := log + m; S(s, m)
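The corresponding forgery game in the same style; the scheme interface and the convention that verification returns the signed message (as on the slide) are kept as assumptions:

# Skeleton of the CMA (existential forgery) game: the adversary sees the verification
# key and a signing oracle, and wins by producing (m, x) such that x verifies to m
# although m was never submitted to the oracle.
def cma_game(scheme, adversary):
    s, v = scheme.keygen()                       # s,v := Gs
    log = []

    def SO(m):                                   # signing oracle: log := log + m; S(s, m)
        log.append(m)
        return scheme.sign(m, s)

    m, x = adversary(v, SO)                      # A outputs a candidate forgery
    if m in log:                                 # if m in log then return 0
        return 0
    try:
        m_prime = scheme.verify(x, v)            # try m' := V(x, v) catch return 0
    except scheme.VerificationError:
        return 0
    return int(m_prime == m)                     # return m' = m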
Types to keep integrity • In a source program, a variable x has a label l to express confidentiality and integrity • In the target, x will have type (l, Data), and variables for keys will have additional types given by T: T ::= Data | EK T K | DK T K | VK (l,T) | SK (l,T) | Enc T K | Sgn T
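For concreteness, the type grammar can be transcribed as a small datatype (a sketch; the reading of each constructor follows the slide, e.g. EK T K for a key encrypting payloads of type T, with labels l and key indices K kept as strings):

# Transcription of T ::= Data | EK T K | DK T K | VK (l,T) | SK (l,T) | Enc T K | Sgn T
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Data:                   # plain data
    pass

@dataclass(frozen=True)
class EK:                     # encryption key for payloads of type T, key index K
    payload: "T"
    key: str

@dataclass(frozen=True)
class DK:                     # matching decryption key
    payload: "T"
    key: str

@dataclass(frozen=True)
class VK:                     # verification key for values of label l and type T
    label: str
    payload: "T"

@dataclass(frozen=True)
class SK:                     # matching signing key
    label: str
    payload: "T"

@dataclass(frozen=True)
class Enc:                    # ciphertext of a T under key K
    payload: "T"
    key: str

@dataclass(frozen=True)
class Sgn:                    # signed value of type T
    payload: "T"

T = Union[Data, EK, DK, VK, SK, Enc, Sgn]

# For instance, l_2 in the translation of h1 := h2 would get roughly the type
# Sgn(Enc(Data(), "ke")): a signed encryption of the transferred data.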
Typability Preserving Compiler • Theorem (Typability Preservation). Let c be a source program (communication on private channels). If c is typable, then its translation to the target language (communication on shared memory) is typable. • Corollary (Computational Soundness). Let c be a source program (communication on private channels). If c is typable, then its translation satisfies computational non-interference for active adversaries.
Conclusion and Further Work • More efficient protocols for the translation: – Can we use fewer keys? – Fewer signatures? (types should then be more expressive) – Shared keys? (protocols to establish keys) • Cryptographic back-end for Jif Split? • Mechanization of proofs • Concurrency
Related Work • Non-interference: Goguen & Meseguer 82, Bell & LaPadula 76, Denning 76 • Declassification: Principles and Dimensions, Sabelfeld & Sands 05; Robust Declassification, Zdancewic & Myers 01; Enforcing Robust Declassification, Myers, Sabelfeld & Zdancewic 04 • Secure information flow and cryptography: Laud 01, Backes & Pfitzmann 02, 03 • Secure implementations: Jif Split, Myers & Zheng et al. 01; Cryptographic DLM, Vaughan & Zdancewic 07