Foundations of Cryptography


  1. Foundations of Cryptography Lecture 7: Message Authentication in the Manual Channel Model Lecturer: Gil Segev

  2. Diffie-Hellman Key Agreement • Alice and Bob wish to agree on a secret key • Alice sends g^x, Bob sends g^y • Both parties compute K_A,B = g^xy • DDH assumption: {(g, g^x, g^y, g^xy)} ≈c {(g, g^x, g^y, g^c)} for random x, y and c (computational indistinguishability)

  3. Diffie-Hellman Key Agreement • Alice and Bob wish to agree on a secret key • Alice sends g^x, Bob sends g^y, and both parties compute K_A,B = g^xy • DDH assumption: K_A,B is as good as a random secret • Secure against passive adversaries • Eve is only allowed to read the sent messages • Can now use K_A,B as a one-time pad for sending a message z from Alice to Bob
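
A minimal sketch of the exchange above in Python. The toy prime and generator are illustrative assumptions only (a real deployment would use a standardized group and a vetted library); the point is simply that both sides end up with the same K_A,B = g^xy.

```python
# Toy Diffie-Hellman key agreement (illustrative parameters, not secure).
import secrets

P = 2**127 - 1   # a Mersenne prime, chosen only for illustration
G = 3            # illustrative generator

def keygen():
    """Pick a secret exponent x and return (x, g^x mod p)."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

x, gx = keygen()          # Alice sends g^x
y, gy = keygen()          # Bob sends g^y

k_alice = pow(gy, x, P)   # Alice computes (g^y)^x = g^xy
k_bob   = pow(gx, y, P)   # Bob computes (g^x)^y = g^xy
assert k_alice == k_bob   # both hold the shared key K_A,B
```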

  4. KA,E z KE,B z Alice Eve Bob Diffie-Hellman Key Agreement gx gy • Suppose now that Eve is an active adversary • “man-in-the-middle” attacker Alice Eve Bob ga gb KA,E = gxa KE,B = gby • Completely insecure: • Eve can decrypt z, and then re-encrypt it

  5. Diffie-Hellman Key Agreement • Suppose now that Eve is an active adversary, a “man-in-the-middle” attacker with K_A,E = g^xa and K_E,B = g^by • Solution - Message authentication: Alice and Bob authenticate g^x and g^y • Problem - Authentication requires setup, such as: • Shared secret key • Public key infrastructure
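
A small continuation of the same toy setup, showing why the passive-security guarantee collapses against an active Eve: she answers each party with her own exponent, derives K_A,E and K_E,B, and can decrypt and re-encrypt the payload. The XOR "one-time pad" with a SHA-256-derived keystream is purely an illustrative stand-in, not part of the lecture.

```python
# Toy man-in-the-middle attack on unauthenticated Diffie-Hellman.
import hashlib
import secrets

P, G = 2**127 - 1, 3   # same illustrative toy group as above

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def otp(key_int, msg):
    """XOR msg with a short keystream derived from the DH key (toy one-time pad)."""
    stream = hashlib.sha256(str(key_int).encode()).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

x, gx = keygen()   # Alice's g^x, intercepted by Eve
y, gy = keygen()   # Bob's g^y, intercepted by Eve
a, ga = keygen()   # Eve's exponent towards Alice
b, gb = keygen()   # Eve's exponent towards Bob

k_AE   = pow(ga, x, P)   # Alice: K_A,E = g^xa (she believes she talks to Bob)
k_EB   = pow(gb, y, P)   # Bob:   K_E,B = g^by
eve_AE = pow(gx, a, P)   # Eve shares K_A,E with Alice ...
eve_EB = pow(gy, b, P)   # ... and K_E,B with Bob

z  = otp(k_AE, b"pair with me")       # Alice encrypts for "Bob"
z2 = otp(eve_EB, otp(eve_AE, z))      # Eve decrypts and re-encrypts
assert otp(k_EB, z2) == b"pair with me"
```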

  6. Practical Scenario

  7. Pairing of Wireless Devices Scenario: • Buy a new wireless camera • Want to establish a secure channel for the first time • E.g., Diffie-Hellman key agreement (exchange g^x and g^y)

  8. Pairing of Devices • Cable pairing: • Simple • Cheap • Authenticated channel • “I thought this was a wireless camera…”

  9. Pairing of Wireless Devices Wireless pairing Problem: Active adversaries (“man-in-the-middle”)

  10. Pairing of Wireless Devices • Wireless pairing • Problem: Active adversaries (“man-in-the-middle”): Eve replaces g^x and g^y with her own g^a and g^b

  11. Message Authentication • Assure the receiver of a message m that it has not been changed to some m̂ by an active adversary sitting between Alice and Bob

  12. Pairing of Wireless Devices • With Eve in the middle, the message to authenticate is m = g^x || g^a, while the other side holds m̂ = g^b || g^y

  13. Message Authentication • Assure the receiver of a message m that it has not been changed to some m̂ by an active adversary • Without additional setup: Impossible !! • Public key: signatures • Problem: No trusted PKI • Solution: Manual Channel

  14. The Manual Channel • The devices exchange g^x and g^y (possibly replaced by Eve with g^a and g^b), and each displays a short string (e.g., “141”) • User can compare two short strings

  15. Manual Channel Model • Alice wants to authenticate a message m to Bob • Insecure communication channel • Low-bandwidth auxiliary channel: enables Alice to “manually” authenticate one short string s (interactive or non-interactive) • Adversarial power: • Choose the input message m • Insecure channel: full control • Manual channel: read, delay (delivery timing)

  16. Manual Channel Model • Insecure communication channel • Low-bandwidth auxiliary channel: enables Alice to “manually” authenticate one short string s (interactive or non-interactive) • Goal: Minimize the length of the manually authenticated string

  17. Manual Channel Model • No trusted infrastructure, such as: • Public key infrastructure • Shared secret key • Common reference string • ... • Suitable for ad hoc networks: • Pairing of wireless devices (Wireless USB, Bluetooth) • Secure phones (AT&T, PGP, Zfone) • Many more...

  18. Why Is This Model Reasonable? • Implementing the manual channel: • Compare two strings displayed by the devices 141 141

  19. Why Is This Model Reasonable? • Implementing the manual channel: • Compare two strings displayed by the devices • Type a string, displayed by one device, into the other device 141 141

  20. Why Is This Model Reasonable? • Implementing the manual channel: • Compare two strings displayed by the devices • Type a string, displayed by one device, into the other device • Visual hashing

  21. Why Is This Model Reasonable? • Implementing the manual channel: • Compare two strings displayed by the devices • Type a string, displayed by one device, into the other device • Visual hashing • Voice channel 141 141

  22. The Naive Solution • Alice sends m over the insecure channel (Eve may change it to m̂) and manually authenticates H(m) • H - collision-resistant hash function (e.g., SHA-256) • No efficient algorithm can find m̂ ≠ m s.t. H(m̂) = H(m) with noticeable probability • Any adversary that forges a message can be used to find a collision for H

  23. The Naive Solution • Alice sends m and manually authenticates H(m) • H - collision-resistant hash function (e.g., SHA-256) • No efficient algorithm can find m̂ ≠ m s.t. H(m̂) = H(m) with noticeable probability • Any adversary that forges a message can be used to find a collision for H • Are we done? • No. The output length of SHA-256 is too long (256 bits) • Cannot be easily compared or typed by humans
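
A minimal sketch of this naive solution (the function names are my own illustration): the message travels over the insecure channel and only its SHA-256 digest is authenticated manually. The final line makes the slide's complaint concrete: the manually compared string is 256 bits long.

```python
# Naive manual-channel authentication: manually authenticate H(m).
import hashlib

def alice_send(m: bytes):
    digest = hashlib.sha256(m).hexdigest()  # goes over the manual channel
    return m, digest                        # m itself goes over the insecure channel

def bob_accept(received_m: bytes, manual_digest: str) -> bool:
    # Bob accepts only if the received message hashes to the manual digest;
    # forging a different accepted message means finding a collision in H.
    return hashlib.sha256(received_m).hexdigest() == manual_digest

m, d = alice_send(b"g^x || device certificate")
assert bob_accept(m, d)                          # no interference: accept
assert not bob_accept(b"g^a || forged data", d)  # forgery needs a collision
print(len(d) * 4, "bits to compare by hand")     # 256 bits: far too long
```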

  24. Previous Work • [Rivest & Shamir `84]: The “Interlock” protocol • Mutual authentication of public keys • No trusted infrastructure • AT&T, PGP, …, Zfone • [Vaudenay `05]: • Formal model • Computationally secure protocol for arbitrarily long messages • log(1/ε) manually authenticated bits for forgery probability ε - optimal! • [LAN `05, DDN `00]: Can be based on any one-way function (non-malleable commitments) • Efficient implementations rely on a random oracle or assume a common reference string [DIO `98, DKOS `01]

  25. Previous Work • [Rivest & Shamir `84]: The “Interlock” protocol • Mutual authentication of public keys • No trusted infrastructure • AT&T, PGP, …, Zfone • [Vaudenay `05]: • Formal model • Computationally secure protocol for arbitrarily long messages • log(1/ε) manually authenticated bits for forgery probability ε - optimal! • [LAN `05, DDN `00]: Can be based on any one-way function (non-malleable commitments) • Efficient implementations rely on a random oracle or assume a common reference string [DIO `98, DKOS `01] • All of these rely on computational assumptions!! Are those really necessary?

  26. Our Results - Tight Bounds • Setting: an n-bit message m, an ℓ-bit manually authenticated string s, forgery probability ε, and no setup or computational assumptions • Upper bound: Constructed a log*n-round protocol in which ℓ = 2log(1/ε) + O(1) - only twice as many bits as [V05] • Matching lower bound: n ≥ 2log(1/ε) implies ℓ ≥ 2log(1/ε) - 2 • One-way functions are necessary (and sufficient) for breaking the lower bound in the computational setting

  27. Unconditional Security • Some advantages over computational security: • Security against unbounded adversaries • Exact evaluation of error probabilities • Protocols are often easier to compose and more efficient (e.g., key agreement protocols)

  28. Our Results - Tight Bounds • ℓ ≥ 2log(1/ε): unconditional security is possible • log(1/ε) ≤ ℓ < 2log(1/ε): computational security only, requires one-way functions • ℓ < log(1/ε): impossible

  29. Outline • Security definition • Our results • The protocol • Lower bound • One-way functions are necessary for breaking the lower bound • Conclusions

  30. Security Definition • Unconditionally secure (n, ℓ, k, ε)-authentication protocol: • n-bit input message • ℓ manually authenticated bits • k rounds • Completeness: with no interference, Bob accepts m (with high probability) • Unforgeability: for every m, Pr[ Bob accepts m̂ ≠ m ] ≤ ε

  31. Outline • Security definition • Our results • The protocol • Lower bound • One-way functions are necessary for breaking the lower bound • Conclusions

  32. The Protocol (simplified) • Based on the [GN93] hashing technique • In each round, the parties: • Cooperatively choose a hash function • Reduce to authenticating a shorter message • A short message is manually authenticated • Preliminaries: For m = m1 ... mk ∈ GF[Q]^k and x ∈ GF[Q], let m(x) = Σ_{i=1}^{k} m_i x^i • Then, for any m ≠ m̂ and for any c, ĉ ∈ GF[Q], Pr_{x ←R GF[Q]} [ m(x) + c = m̂(x) + ĉ ] ≤ k/Q

  33. The Protocol (simplified) • Preliminaries: For m = m1 ... mk ∈ GF[Q]^k and x ∈ GF[Q], let m(x) = Σ_{i=1}^{k} m_i x^i • Then, for any m ≠ m̂ and for any c, ĉ ∈ GF[Q], Pr_{x ←R GF[Q]} [ m(x) + c = m̂(x) + ĉ ] ≤ k/Q • We hash m to x || m(x) + c, where one party chooses x and the other party chooses c
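
A small sketch of this hash family over a prime field (treating GF[Q] as the integers mod a prime Q is an assumption made here for simplicity). One party's random evaluation point x and the other party's random mask c compress m into the short string x || m(x) + c; the k/Q bound holds because m - m̂ is a nonzero polynomial of degree at most k and so has at most k roots.

```python
# Polynomial evaluation hash: m(x) = sum_{i=1..k} m_i * x^i over GF(Q).
import secrets

Q = 2**31 - 1   # a prime (Mersenne), chosen only for illustration

def poly_hash(coeffs, x, q):
    """Evaluate m(x) = m_1*x + m_2*x^2 + ... + m_k*x^k mod q."""
    return sum(m_i * pow(x, i, q) for i, m_i in enumerate(coeffs, start=1)) % q

m = [17, 42, 5]              # m = m_1 m_2 m_3 in GF(Q)^3, so k = 3
x = secrets.randbelow(Q)     # one party's random evaluation point
c = secrets.randbelow(Q)     # the other party's random mask

# m is reduced to the much shorter string  x || m(x) + c.
short = (x, (poly_hash(m, x, Q) + c) % Q)
print(short)
```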

  34. The Protocol (simplified) • Alice sends m together with a1 ←R GF[Q1]; Bob replies with b1 ←R GF[Q1] and b2 ←R GF[Q2] • Both parties set m0 = m and m1 = b1 || m0(b1) + a1; Alice picks a2 ←R GF[Q2] and computes m2 = a2 || m1(a2) + b2 • Alice manually authenticates m2 (two GF[Q2] elements); Bob accepts iff m2 is consistent with his view • Parameters: Q1 ≈ n/ε, Q2 ≈ log(n)/ε • This gives 2log(1/ε) + 2loglog(n) + O(1) manually authenticated bits • With k rounds, the 2loglog(n) term is reduced to 2log^(k-1)(n)
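
A rough Python rendering of the simplified protocol above. The concrete field sizes, the encoding of the message into field elements, and the helper names are all illustrative assumptions of this sketch; the point is the shape of the three flows and that only the two GF[Q2] elements of m2 ever touch the manual channel.

```python
# Sketch of the simplified manual-channel protocol (illustrative parameters).
import secrets

Q1 = 2**31 - 1   # roughly n/epsilon   (illustrative primes)
Q2 = 2**13 - 1   # roughly log(n)/epsilon

def poly_eval(coeffs, x, q):
    # m(x) = sum_{i>=1} m_i x^i over GF(q)
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs, start=1)) % q

def encode(value, q, length):
    # Split an integer into `length` base-q digits, i.e. an element of GF(q)^length.
    return [(value // q**i) % q for i in range(length)]

# Flow 1: Alice sends the n-bit message m together with a1 <-R GF(Q1).
m = secrets.randbits(256)
m0 = encode(m, Q1, 12)
a1 = secrets.randbelow(Q1)

# Flow 2: Bob replies with b1 <-R GF(Q1) and b2 <-R GF(Q2).
b1, b2 = secrets.randbelow(Q1), secrets.randbelow(Q2)

# Flow 3: both sides compute m1 = b1 || m0(b1) + a1; Alice picks a2 <-R GF(Q2)
# and manually authenticates m2 = a2 || m1(a2) + b2 (two GF(Q2) elements).
m1 = encode(b1, Q2, 4) + encode((poly_eval(m0, b1, Q1) + a1) % Q1, Q2, 4)
a2 = secrets.randbelow(Q2)
m2 = (a2, (poly_eval(m1, a2, Q2) + b2) % Q2)

# Bob recomputes m1 and m2 from the values he saw on the insecure channel
# and accepts the message iff his m2 matches the manually authenticated one.
```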

  35. Security Analysis • Must consider all generic man-in-the-middle attacks • Three attacks in our case, differing in how Eve schedules her interaction with the two parties • Attack #1: Alice sends m, a1 and Eve delivers m̂, â1 to Bob; Bob replies b1, b2 and Eve delivers b̂1, b̂2 to Alice; the manually authenticated m2 is then delivered to Bob

  36. Security Analysis • Must consider all generic man-in-the-middle attacks • Three attacks in our case • Attack #2: Eve first sends m̂, â1 to Bob and receives his b1, b2; only then does she take m, a1 from Alice and reply with b̂1, b̂2; the manually authenticated m2 is then delivered to Bob

  37. Security Analysis • Must consider all generic man-in-the-middle attacks • Three attacks in our case • Attack #3: Eve first completes the protocol with Alice (receiving m, a1, replying with b̂1, b̂2, and reading the manual string m2, which she delays); she then runs the protocol with Bob using m̂, â1 and his b1, b2, and finally releases m2 to Bob

  38. Security Analysis – Attack #1 • m0,A = m and m̂0,B = m̂ • m1,A = b̂1 || m0,A(b̂1) + a1 and m̂1,B = b1 || m̂0,B(b1) + â1 • m2,A = a2 || m1,A(a2) + b̂2 and m̂2,B = a2 || m̂1,B(a2) + b2 • Pr[ m0,A ≠ m̂0,B and m2,A = m̂2,B ] ≤ Pr[ m1,A = m̂1,B ] + Pr[ m1,A ≠ m̂1,B and m2,A = m̂2,B ] ≤ ε/2 + ε/2

  39. Security Analysis – Attack #1 • m0,A = m and m̂0,B = m̂ • m1,A = b̂1 || m0,A(b̂1) + a1 and m̂1,B = b1 || m̂0,B(b1) + â1 • Claim: Pr[ m1,A = m̂1,B ] ≤ ε/2 • If Eve chooses b̂1 ≠ b1, then m1,A ≠ m̂1,B • If Eve chooses b̂1 = b1, then Pr[ m0,A(b1) + a1 = m̂0,B(b1) + â1 ] ≤ ε/2
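
A quick exhaustive sanity check (with toy parameters of my own choosing) of the key step behind this claim: when Eve substitutes m̂, â1 but keeps b̂1 = b1, the two masked hashes agree for at most k values of b1, so the collision probability over Bob's random choice of b1 is at most k/Q1.

```python
# Count the points b1 where m(b1) + a1 = m_hat(b1) + a1_hat mod Q1.
import secrets

Q1, k = 10_007, 3   # toy prime field and message length, illustration only

def poly_eval(coeffs, x, q):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs, start=1)) % q

m     = [secrets.randbelow(Q1) for _ in range(k)]   # Alice's message m0,A
m_hat = [secrets.randbelow(Q1) for _ in range(k)]   # Eve's substituted message
a1, a1_hat = secrets.randbelow(Q1), secrets.randbelow(Q1)

collisions = sum(
    (poly_eval(m, b1, Q1) + a1) % Q1 == (poly_eval(m_hat, b1, Q1) + a1_hat) % Q1
    for b1 in range(Q1)
)
print(f"{collisions} collisions out of {Q1} possible b1 values (bound: k = {k})")
```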

  40. Outline • Manual channel model • Our results • The protocol • Lower bound • One-way functions are necessary for breaking the lower bound • Conclusions

  41. Lower Bound • Alice sends m, x1; Bob replies x2; Alice manually authenticates s • m ←R {0,1}^n • M, X1, X2, S are well-defined random variables

  42. Lower Bound • Goal: H(S) ≥ 2log(1/ε) • Basic information theory: • Shannon entropy: H(X) = -Σ_x p(x) log p(x) • Conditional entropy: H(X | Y) = E_y H(X | Y=y) • Mutual information: I(X ; Y) = H(X) - H(X | Y) • Conditional mutual information: I(X ; Y | Z) = H(X | Z) - H(X | Y,Z)

  43. Lower Bound • Goal: H(S) ≥ 2log(1/ε) • Evolving intuition: • The parties must use at least log(1/ε) random bits • Each party must use at least log(1/ε) random bits • Each party must independently reduce H(S) by log(1/ε) bits • H(S) = [H(S) - H(S | M, X1)] + [H(S | M, X1) - H(S | M, X1, X2)] + H(S | M, X1, X2) = I(S ; M, X1) + I(S ; X2 | M, X1) + H(S | M, X1, X2)
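
The decomposition on this slide is a pure chain-rule identity, so it can be checked numerically on any joint distribution. The toy distribution below is an illustrative invention (not from the lecture), used only to confirm that H(S) splits exactly into Alice's term I(S; M, X1), Bob's term I(S; X2 | M, X1), and the residual H(S | M, X1, X2).

```python
# Numerical check of H(S) = I(S;M,X1) + I(S;X2|M,X1) + H(S|M,X1,X2).
from collections import defaultdict
from itertools import product
from math import log2

# Toy joint distribution over (M, X1, X2, S): uniform inputs, S a noisy
# function of the other three (purely illustrative).
joint = defaultdict(float)
for m, x1, x2 in product(range(2), repeat=3):
    f = (m + x1 * x2) % 2
    for s in range(2):
        joint[(m, x1, x2, s)] += (0.75 if s == f else 0.25) / 8

M, X1, X2, S = 0, 1, 2, 3   # coordinate indices into the outcome tuples

def H(idx):
    """Shannon entropy of the marginal on the given coordinates."""
    marg = defaultdict(float)
    for outcome, p in joint.items():
        marg[tuple(outcome[i] for i in idx)] += p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

I_S_MX1    = H([S]) + H([M, X1]) - H([S, M, X1])                           # I(S; M, X1)
I_S_X2_MX1 = H([S, M, X1]) + H([M, X1, X2]) - H([M, X1]) - H([S, M, X1, X2])
H_S_rest   = H([S, M, X1, X2]) - H([M, X1, X2])                            # H(S | M, X1, X2)

assert abs(H([S]) - (I_S_MX1 + I_S_X2_MX1 + H_S_rest)) < 1e-9
```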

  44. Lower Bound • Goal: H(S) ≥ 2log(1/ε) • Evolving intuition: • The parties must use at least log(1/ε) random bits • Each party must use at least log(1/ε) random bits • Each party must independently reduce H(S) by log(1/ε) bits • H(S) = I(S ; M, X1) + I(S ; X2 | M, X1) + H(S | M, X1, X2), where the first term captures Alice's randomness and the second term Bob's randomness

  45. Lower Bound • Goal: H(S) ≥ 2log(1/ε) • H(S) = I(S ; M, X1) + I(S ; X2 | M, X1) + H(S | M, X1, X2) • Lemma 1 (Alice's randomness): I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε) • Lemma 2 (Bob's randomness): I(S ; X2 | M, X1) ≥ log(1/ε)

  46. Proof of Lemma 1 • Consider the following attack, in which Eve acts as follows: • Chooses m ←R {0,1}^n and m̂ ←R {0,1}^n • Receives m, x1 from Alice and sends m̂, x̂1 to Bob; receives x2 from Bob • Eve wants Alice to manually authenticate ŝ • Samples x̂2 from the distribution of X2 given m, x1 and ŝ, and sends it to Alice • If Pr[ ŝ | m, x1 ] = 0, Eve quits and hopes that ŝ = s • Forwards the manual string s

  47. Proof of Lemma 1 • Claim: Pr[ ŝ = s ] ≥ 2^-{ I(S ; M, X1) + H(S | M, X1, X2) } • By the protocol requirements: ε ≥ Pr[ ŝ = s and m̂ ≠ m ] ≥ Pr[ ŝ = s ] - 2^-n • Since n ≥ log(1/ε), we get 2ε ≥ Pr[ ŝ = s ], which implies I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε) - 1

  48. Lower Bound • Goal: H(S) ≥ 2log(1/ε) - 2 • H(S) = I(S ; M, X1) + I(S ; X2 | M, X1) + H(S | M, X1, X2) • Lemma 1 (Alice's randomness): I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε) - 1 • Lemma 2 (Bob's randomness): I(S ; X2 | M, X1) ≥ log(1/ε) - 1

  49. Outline • Manual channel model • Our results • The protocol • Lower bound • One-way functions are necessary for breaking the lower bound • Conclusions

  50. One-Way Functions • Theorem: One-way functions are necessary for breaking the 2log(1/ε) lower bound in the computational setting • If one-way functions do not exist, then the attacks of the lower bound can be carried out by a poly-time adversary
