Introduction to the Bounded-Retrieval Model Stefan Dziembowski University of Rome La Sapienza Warsaw University
The main idea Bounded-Retrieval Model: Construct cryptographic protocols where the secrets are so large that they cannot be efficiently stolen.
D. Dagon, W. Lee, R. J. Lipton: Protecting Secret Data from Insider Attacks. Financial Cryptography 2005
G. Di Crescenzo, R. Lipton and S. Walfish: Perfectly Secure Password Protocols in the Bounded Retrieval Model. TCC 2006
S. Dziembowski: Intrusion-Resilience via the Bounded-Storage Model. TCC 2006
D. Cash, Y. Z. Ding, Y. Dodis, W. Lee, R. Lipton and S. Walfish: Intrusion-Resilient Authenticated Key Exchange in the Bounded Retrieval Model without Random Oracles. TCC 2007
S. Dziembowski: On Forward-Secure Storage. CRYPTO 2006
Plan • Introduction to the Bounded Retrieval Model • Motivation • An entity-authentication protocol • Connections to the BSM • Forward-Secure Storage
The problem Computers can be infected by malware! The attacker installs a virus and retrieves some data. The virus can: • take control over the machine, • steal some secrets stored on the machine. Can we run any crypto on such machines?
Is there any remedy? If the virus can download all the data stored on the machine then the situation looks hopeless. Idea: Assume that he cannot do it!
The general model The adversary repeatedly installs a virus and retrieves some data; these attacks are interleaved with virus-free periods. The total amount of retrieved data is bounded!
Our goal Try to preserve as much security as possible (assuming the scenario from the previous slide). Of course as long as the virus is controlling the machine nothing can be done. Therefore we care about the periods when the machine is free of viruses.
Two variants How does the virus decide what to retrieve? Variant 1 [D06a, D06b, CDDLLW07]: He can compute whatever he wants on the victim’s machine. Variant 2 [CLW06, …]: He can only access some individual bits on the victim’s machine (“slow memory”).
An example: entity authentication We solve the following problem: how can the bank verify the authenticity of the user?
Entity authentication – the solution The user and the bank share a huge random key R (e.g. 0001101001110100100110101110…). The bank sends a random challenge Y; the user answers with X = f(Y, R), and the bank verifies the answer. Example of f: Y = {y1, …, ym} is a set of indices in R, and f(Y, (R1, …, Rt)) = (Ry1, …, Rym).
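A minimal sketch of this challenge-response flow, using the simple bit-selection f from the slide (the parameter sizes are illustrative only; the real key R would be far larger):

```python
import secrets

def f(Y, R):
    """f(Y, R) = (R[y1], ..., R[ym]): select the bits of R at the challenge indices Y."""
    return [R[y] for y in Y]

# Both the user and the bank store the same huge random key R
# (only a few kilobits here, for illustration).
R = [secrets.randbelow(2) for _ in range(8 * 1024)]

# The bank sends a random challenge Y = {y1, ..., ym}.
m = 128
Y = [secrets.randbelow(len(R)) for _ in range(m)]

# The user answers with X = f(Y, R) ...
X = f(Y, R)

# ... and the bank checks the answer against its own copy of R.
assert X == f(Y, R)
```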
Security of the authentication protocol Theorem [D06, CDDLLW07] An adversary that “retrieved” a constant fraction of R is not able to impersonate the user. (This of course holds in the periods when the virus is not on the machine.)
A related concept: the Bounded-Storage Model This is related to the Bounded-Storage Model (BSM) [Maurer 1992]. In the BSM the security of the protocols is based on the assumption that one can broadcast more bits than the adversary can store. In the BSM the computing power of the adversary may be unlimited.
The Bounded-Storage Model (BSM) – an introduction A huge randomizer R (e.g. 000110100111010010011010111001110111…) is broadcast. The honest parties, who share a short initial key K, compute X = f(K, R). The adversary Eve can perform any computation on R, but the result U = h(R) has to be much smaller than R. After the randomizer disappears, Eve knows only U = h(R), and she shouldn’t be able to distinguish X from random.
How is the BSM related to our model? It seems that the assumptions are opposite:
BSM vs. BRM Bounded-Storage Model: R comes from a satellite; the adversary stores a value U. Bounded-Retrieval Model: R is stored on a computer; the adversary retrieves a value U.
Consider again the authentication protocol Recall the flow: the bank sends a random challenge Y, the user replies with X = f(Y, R), and the bank verifies. Observation In the authentication protocol one could use a BSM-secure function f.
Overview of the results • An entity authentication protocol • A session-key exchange protocol • in the Random Oracle Model [D06a] • in the plain model [CDDLLW07] • Forward Secure Storage [D06b] – “an encryption scheme secure in the BRM”
Plan • Forward-Secure Storage • IT-secure • computationally-secure • a scheme with a conjectured hybrid security • Connections with the theory of Harnik and Naor
Forward Secure Storage (FSS) - the motivation The user encrypts a message M with a key K and stores the ciphertext C = E(K, M). The adversary installs a virus and retrieves C. Later one of the following happens: • the key K leaks to the adversary, or • the adversary breaks the scheme. In either case the adversary can compute M.
The idea Design an encryption scheme such that the ciphertext C = Encr(K, M) is so large that the adversary cannot retrieve it completely.
Forward-Secure Storage – a more detailed view The adversary is allowed to compute an arbitrary function h of C. The ciphertext C = Encr(K, M) has length t; the retrieved value U = h(C) has length s << t. Later, even if the adversary learns the key K, the value U should give him no information about M.
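To fix the syntax of this game, here is a hedged sketch of the security experiment as described on the slide (the function names and the exact interface are mine, not from the papers):

```python
def fss_experiment(Encr, adv_retrieve, adv_guess, K, M, s):
    """Sketch of the FSS game: the adversary first retrieves U = h(C) with |U| <= s << |C|,
    and only later (in the worst case, even knowing K) tries to learn M."""
    C = Encr(K, M)
    U = adv_retrieve(C)      # retrieval phase: an arbitrary function h of C ...
    assert len(U) <= s       # ... whose output must be short
    return adv_guess(U, K)   # later phase: should reveal (essentially) nothing about M
```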
Computational power of the adversary We consider the following variants: • computational: the adversary is limited to poly-time • information-theoretic: the adversary is infinitely powerful • hybrid: the adversary gains infinite power after he has computed the function h. This models the fact that in the future the current cryptosystems may be broken!
Information-theoretic solution – a wrong idea Let f be a function secure in the BSM and let K be the key. Choose a random R, compute X = f(K, R), and set Y = X xor M. The ciphertext is (R, Y); this is essentially an encryption in the BSM. By the Shannon theorem this cannot work!
What exactly goes wrong? Suppose the adversary has some information about M. He can see (R, W), where W = f(K,R) xor M. So he can solve the equation W = f(K,R) xor M for K. If he has enough information about M, and K is short, he will succeed! Idea: “blind” the message M!
A better idea The key is a pair (K, Z). Choose a random R, compute X = f(K, R), and set Y = X xor Z xor M. The ciphertext is again (R, Y).
Why does it work? Intuition The adversary can compute any function h of (R, Y), where Y = f(K,R) xor M xor Z. Y is of no use to him, since it is xor-ed with a random string Z! So if this FSS scheme can be broken, then the BSM function f can also be broken (by an adversary that uses the same amount of memory).
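A toy sketch of this information-theoretic scheme, with a simple stand-in for the BSM-secure function f (in a real instantiation f must actually be BSM-secure and R much larger; the code only fixes the syntax of the key (K, Z) and the ciphertext (R, Y)):

```python
import secrets

def f(K, R):
    # Stand-in for a BSM-secure function: K is a list of indices into R.
    return [R[k % len(R)] for k in K]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def encr_it(key, M):
    K, Z = key                                        # the key is a pair (K, Z), with |Z| = |M|
    R = [secrets.randbelow(2) for _ in range(4096)]   # huge random string
    Y = xor(xor(f(K, R), Z), M)                       # Y = f(K, R) xor Z xor M
    return (R, Y)                                     # the (large) ciphertext is (R, Y)

def decr_it(key, C):
    K, Z = key
    R, Y = C
    return xor(xor(Y, f(K, R)), Z)

M = [1, 0, 1, 1]
key = ([secrets.randbelow(4096) for _ in range(len(M))],   # K: indices into R
       [secrets.randbelow(2) for _ in range(len(M))])      # Z: random blinding string
assert decr_it(key, encr_it(key, M)) == M
```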
Problem with the information-theoretic scheme The secret key needs to be larger than the message! What if we want the key to be shorter? We need to switch to the computational setting...
Computational FSS (with a short key) (Encr, Decr) – an IT-secure FSS; (E, D) – a standard encryption scheme. Encr1(K, M) = ( Encr(K, K'), E(K', M) ), where the first component is large, the second is small, and K' is a random key for the standard encryption scheme. Intuition: when the adversary learns K he has no idea about K' and therefore no idea about M.
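A sketch of how the two components are combined. Encr/Decr and E/D are passed in as parameters, since the slide only assumes that they exist (an IT-secure FSS and any standard encryption scheme); the concrete length of K' is an arbitrary choice for illustration:

```python
import secrets

def encr1(Encr, E, K, M):
    """Encr1(K, M) = ( Encr(K, K'), E(K', M) ): a large part and a small part."""
    K_prime = [secrets.randbelow(2) for _ in range(128)]  # fresh random key for (E, D)
    big = Encr(K, K_prime)     # large: IT-secure FSS encryption of K' under the FSS key K
    small = E(K_prime, M)      # small: standard encryption of M under K'
    return (big, small)

def decr1(Decr, D, K, C):
    big, small = C
    K_prime = Decr(K, big)     # recover K' from the large part ...
    return D(K_prime, small)   # ... then decrypt M from the small part
```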
Hybrid security What about hybrid security? Recall the scenario: the adversary retrieves U = h(C) from the ciphertext C = Encr(K, M), and later, with unlimited computing power, tries to learn M.
Is this scheme secure in the hybrid model? Recall that Encr1(K, M) = ( Encr(K, K'), E(K', M) ). The adversary retrieves only the second part, E(K', M), which is small! Later, when he gets infinite computing power, he can recover the message M! Thus, the scheme is not secure in the hybrid model!
A scheme (Encr2, Decr2) Does there exist an FSS scheme with hybrid security (and a short key)? Idea: generate the long key pseudorandomly! (Encr, Decr) – an IT-secure FSS; G – a cryptographic PRG. Encr2(K, M) = Encr(G(K), M).
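A sketch of Encr2, again treating the IT-secure FSS (Encr, Decr) as a parameter. The SHA-256-in-counter-mode expansion below is only a stand-in for a cryptographic PRG G, not the construction from the paper:

```python
import hashlib

def G(seed: bytes, out_len: int) -> bytes:
    """Stand-in PRG: expand a short seed into a long pseudorandom string."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def encr2(Encr, K: bytes, M, long_key_len: int):
    """Encr2(K, M) = Encr(G(K), M): the long FSS key is derived from the short seed K."""
    return Encr(G(K, long_key_len), M)

def decr2(Decr, K: bytes, C, long_key_len: int):
    return Decr(G(K, long_key_len), C)
```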
Is the scheme from the previous slide secure? It cannot be IT-secure, but is it • computationally secure? • secure in the hybrid model? It looks secure, but we leave this as an open problem. We can show the following: very informally, it is secure if one-way functions cannot be used to construct Oblivious Transfer.
Computational security of Encr2 (1/2) We show that if there exists an adversary A that breaks the (Encr2, Decr2) scheme, then one can construct an Oblivious Transfer protocol with: • an unconditional privacy of the Sender, • privacy of the Receiver based on the security of the PRG G.
Computational security of Encr2 (2/2) Simplification: assume that |M| = 1 and that the adversary can guess it with probability 1. We construct an honest-but-curious Rabin OT. The sender's input is M. One of two cases is set up: either X := G(K) for a random seed K, or X := random. The sender sends Encr(X, M), and the receiver runs the adversary A on it, keeping only A's memory U. If X = G(K), then the adversary (once it is given K) outputs M. A computationally-limited sender cannot distinguish these cases! If X is random, then the receiver learns nothing about M (this follows from the IT-security of Encr).
How to interpret this result? Which PRGs G are safe to use in this protocol? In some sense: “those that cannot be used to construct OT”. But maybe there exist “wrong” PRGs... (see: S. Dziembowski and U. Maurer, On Generating the Initial Key in the Bounded-Storage Model, EUROCRYPT '04)
Hybrid security of Encr2 • The argument for the hybrid security is slightly weaker. • We can construct only an OT-protocol with a computationally-unbounded algorithm for the Receiver... • This is because the receiver has to simulate an unbounded adversary.
A complexity-theoretic view Suppose the adversary wants to know if a given C is a ciphertext of some message M. NP-language: L = {C : there exists K such that C = Encr(K, M)}. Can we compress C to some U, s.t. |U| << |C|, so that later we can decide if C is in L based on U, using infinite computing power? (The question can be asked both for standard encryption and for FSS.)
The theory of Harnik and Naor This question was recently studied in: Danny Harnik, Moni Naor On the Compressibility of NP Instances and Cryptographic Applications FOCS 2006 See also: Bella Dubrov, Yuval Ishai On the Randomness Complexity of Efficient Sampling STOC 2006
Compressibility of NP Instances Informally, an NP language L is compressible if there exists an efficient algorithm that compresses every string X to a shorter string U, in such a way that an infinitely-powerful solver can decide if X is in L based only on U. Proving that some language is incompressible (from standard assumptions) is an open problem. This is why showing an FSS scheme provably secure in the hybrid model may be hard!