
Trends in MPC for Information Assurance Prof. C. Pandu Rangan (IIT Madras)


Presentation Transcript


  1. Trends in MPC for Information Assurance Prof. C. Pandu Rangan (IIT Madras)

  2. Outline of the Talk • Information Assurance • Shannon’s Model • PRMT and PSMT • Secret Sharing and Variants • Hashing • Proxy Re-Encryption

  3. Information Assurance (IA): Definition • IA is the practice of managing information-related risks. • IA practitioners seek to protect the confidentiality, integrity and availability of data and their delivery systems. • These goals are relevant whether the data are in storage, processing or transit, and whether threatened by malice or accident. IA in a nutshell: IA is the process of ensuring that the right people get the right information at the right time.

  4. Information Assurance (IA): Definition (contd.) • IA's broader connotation also includes reliability and emphasizes strategic risk management over tools and tactics. • IA includes other corporate governance issues such as privacy, compliance, audits, business continuity and disaster recovery. • IA is interdisciplinary and draws from multiple fields, including fraud examination, forensic science, military science, management science, systems engineering, security engineering, and criminology, in addition to computer science.

  5. Foundation Concept

  6. Adversary with Unbounded Computing Power

  7. Perfectly Reliable Message Transmission (PRMT) [Figure: a communication network over players P1, …, Pn, with sender S and receiver R] • n players form the vertices of a communication network. • The underlying network is connected and synchronous. • All links are assumed to be secure. • A sender S has to send a message m (an element of a finite field F) reliably to a receiver R.

  8. Perfectly Reliable Message Transmission (PRMT) [Figure: the same network, now with some players marked corrupt] • n players form the vertices of a communication network. • The underlying network is connected and synchronous. • All links are assumed to be secure. • A sender S has to send a message m (an element of a finite field F) reliably to a receiver R. • Some of the players may be corrupt.
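
A minimal folklore-style PRMT sketch (not taken from these slides): if at most t players are Byzantine-corrupt and S and R are connected by 2t+1 vertex-disjoint paths, S can send m along every path and R can take the majority of the received values. The function names and the pessimistic "corrupt path delivers an arbitrary value" model are illustrative assumptions.

```python
from collections import Counter

def prmt_send(m, paths, corrupt):
    """Sender pushes m down every vertex-disjoint path.

    A path containing any corrupt player is assumed, in the worst case,
    to deliver an adversarially substituted value instead of m.
    """
    received = []
    for path in paths:
        if any(node in corrupt for node in path):
            received.append("garbage")  # worst-case adversarial substitution
        else:
            received.append(m)
    return received

def prmt_receive(received):
    """Receiver outputs the majority value.

    Correct whenever fewer than half the disjoint paths contain a
    corrupt player, i.e. with 2t+1 paths and at most t corruptions.
    """
    value, _ = Counter(received).most_common(1)[0]
    return value

# t = 1 corruption tolerated over 2t + 1 = 3 vertex-disjoint paths
paths = [["P1", "P2"], ["P3", "P4"], ["P5", "P6"]]
out = prmt_receive(prmt_send("m", paths, corrupt={"P3"}))  # recovers "m"
```

Note that this achieves reliability only; the adversary on path P3-P4 still sees m in honest runs, which is why PSMT (next slide) needs more than plain replication.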

  9. Perfectly Secure Message Transmission (PSMT) [Figure: the same network with corrupt players] PSMT = PRMT + the adversary should get no information about m. The above should hold even if the adversary has unbounded computing power.

  10. Modeling Corruption The behaviour of corrupt players is modeled using a centralized adversary. The adversary is classified based on: • Computational power: bounded (cryptographic) or unbounded (information-theoretic) • Extent of corruption: passive, active (Byzantine) or mixed • Type of corruption: static, adaptive or mobile • Corruption capacity: threshold or non-threshold

  11. Information-Theoretic (Shannon) Security (Strongest Notion of Security) Security = Privacy + Correctness • Privacy: given any adversarial model, the corrupt players should get no information about the transmitted message even if they collude. • Correctness: the message transmitted by the sender should correctly reach the receiver.
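
The one-time pad is the classic example of Shannon-style privacy (not discussed on this slide, but it is the standard illustration): with a uniformly random key as long as the message, the ciphertext is statistically independent of the plaintext, so even an unbounded adversary learns nothing.

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """XOR a message with a one-time key.

    With a uniformly random key used once, the ciphertext distribution is
    uniform regardless of the message: perfect (Shannon) secrecy.
    XOR is its own inverse, so the same function decrypts.
    """
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh uniform key, never reused
ct = otp(msg, key)
pt = otp(ct, key)                    # pt == msg
```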

  12. Multiparty Computation • General framework for describing computation between parties who do not trust each other • Example: elections • N parties, each one has a “Yes” or “No” vote • Goal: determine whether the majority voted “Yes”, but no voter should learn how other people voted • Example: auctions • Each bidder makes an offer • Offer should be committing! (can’t change it later) • Goal: determine whose offer won without revealing losing offers

  13. More Examples of MPC • Example: database privacy • Evaluate a query on the database without revealing the query to the database owner • Evaluate a statistical query on the database without revealing the values of individual entries • Many variations

  14. A Couple of Observations • In all cases, we are dealing with distributed multi-party protocols • A protocol describes how parties are supposed to exchange messages on the network • All of these tasks can be easily computed by a trusted third party • The goal of secure multi-party computation is to achieve the same result without involving a trusted third party
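
As a toy illustration of replacing the trusted third party in the election example, here is additive secret sharing of votes (a standard MPC building block, not a protocol from these slides; the ballot values and party layout are hypothetical). Each voter splits a 0/1 vote into n random shares summing to the vote; each party only ever sees one share per ballot, yet the shares can be combined into the tally.

```python
import random

P = 2**31 - 1  # public modulus for share arithmetic

def share(vote, n):
    """Split a 0/1 vote into n additive shares modulo P.

    Any n-1 shares are uniformly random, so they reveal nothing
    about the vote; all n shares sum to the vote mod P.
    """
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

votes = [1, 0, 1, 1, 0]  # hypothetical ballots
n = len(votes)
all_shares = [share(v, n) for v in votes]

# In a real protocol the i-th share of every ballot would be sent to party i;
# each party publishes only the sum of the shares it holds.
partials = [sum(col) % P for col in zip(*all_shares)]
tally = sum(partials) % P       # total "Yes" votes, here 3
majority_yes = tally > n // 2   # the desired output, with no vote revealed
```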

  15. Secret Sharing Protocols [Shamir 79, Blakley 79] • Set of players P = {P1 , P2, … ,Pn}, dealer D (e.g., D = P1). • Two phases • Sharing phase • Reconstruction phase • Sharing Phase • D initially holds s and each player Pi finally holds some private information vi. • Reconstruction Phase • Each player Pi reveals his private information v’i on which a reconstruction function is applied to obtain s = Rec(v’1, v’2, …, v’n).

  16. Secret Sharing (cont’d) [Figure: in the sharing phase the dealer splits secret s into shares v1, v2, v3, …, vn] Fewer than t+1 players have no information about the secret.

  17. Secret Sharing (cont’d) [Figure: in the reconstruction phase, shares v1, v2, v3, …, vn are pooled to recover secret s] At least t+1 players can reconstruct the secret. Players are assumed to give their shares honestly.
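
A compact sketch of Shamir's (t, n) scheme described above: the dealer hides the secret as the constant term of a random degree-t polynomial over a prime field; any t+1 shares recover it by Lagrange interpolation at 0, while t or fewer shares reveal nothing. The field prime and parameter choices below are illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field F_P

def share(secret, t, n):
    """Dealer: pick a random degree-t polynomial f with f(0) = secret;
    player i's share is (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Any t+1 shares: Lagrange interpolation of f at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=2, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

Note this matches the honest-reconstruction assumption on the slide: nothing here detects a player who reveals a wrong share, which is exactly what VSS (slide 18) adds.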

  18. Verifiable Secret Sharing (VSS) [Chor, Goldwasser, Micali 85] • Extends secret sharing to the case of active corruptions (corrupted players, including the dealer, may not follow the protocol) • Up to t corrupted players • Adaptive adversary • Reconstruction phase: each player Pi reveals (some of) his private information v’i, on which a reconstruction function is applied to obtain s’ = Rec(v’1, v’2, …, v’n).

  19. VSS Requirements • Privacy • If D is honest, adversary has no Shannon information about s during the Sharing phase. • Correctness • If D is honest, the reconstructed value s’ = s. • Commitment • After Sharing phase, s’ is uniquely determined.
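
Feldman's VSS is one concrete way to get the correctness/commitment requirements above (it is not the scheme on these slides, and it gives only computational, not Shannon, privacy): the dealer publishes commitments g^{a_j} to the polynomial coefficients, and each player checks its share against them. The tiny group parameters below are deliberately insecure toy values.

```python
import random

# Toy group: the subgroup of prime order q = 11 inside Z_23*, generator g = 2.
p, q, g = 23, 11, 2

def deal(secret, t, n):
    """Dealer: Shamir-share `secret` over Z_q and publish coefficient
    commitments C_j = g^{a_j} mod p."""
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t)]
    shares = [(i, sum(c * i**j for j, c in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    commitments = [pow(g, c, p) for c in coeffs]
    return shares, commitments

def verify(i, s_i, commitments):
    """Player i checks g^{s_i} == prod_j C_j^{i^j} (mod p), i.e. that
    s_i really is f(i) for the committed polynomial f."""
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, i**j, p) % p
    return pow(g, s_i, p) == rhs

shares, comms = deal(7, t=1, n=3)
all_valid = all(verify(i, s, comms) for i, s in shares)        # honest dealer
forged = verify(1, (shares[0][1] + 1) % q, comms)              # altered share fails
```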

  20. Game Theory in Cryptography • Work on distributed computing and on cryptography has assumed agents are either honest or dishonest: honest agents follow the protocol and dishonest agents do all they can to subvert it • Game theory assumes all agents are rational and try to maximize their utility • Both views make sense in different contexts, but their combination is more appropriate to practical situations

  21. Rational Secret Sharing • Players are assumed to be rational • Each player’s preferences are such that getting the secret is better than not getting it; secondarily, the fewer of the other agents that get it, the better • The problem: no player wants to send his share! Rational secret sharing has applications in highly competitive real-world scenarios, where players are modeled as selfish.

  22. Rational Secret Sharing (contd.) Preferences and payoffs: for any player pi, let w1, w2, w3, w4 be the payoffs obtained in the following scenarios: w1 − pi gets the secret, the others do not; w2 − pi gets the secret, the others also get it; w3 − pi does not get the secret, and neither do the others; w4 − pi does not get the secret, but the others do. The preferences of pi are specified by w1 > w2 > w3 > w4.
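
The preference ordering can be checked mechanically for the one-shot game; the numeric payoffs below are hypothetical values respecting w1 > w2 > w3 > w4. With exactly t+1 players present, a player who withholds while everyone else sends ends up the only one able to reconstruct (payoff w1) versus w2 for cooperating, which is the deviation behind the Halpern-Teague impossibility result cited two slides below.

```python
# Hypothetical payoff values respecting the ordering w1 > w2 > w3 > w4.
w1, w2, w3, w4 = 4, 3, 2, 1

# Final round of reconstruction with exactly t+1 rational players:
# - if the others all send and pi withholds, pi alone holds t+1 shares
#   and reconstructs while the rest hold only t shares -> payoff w1
# - if pi also sends, everyone reconstructs -> payoff w2
payoff_withhold = w1
payoff_cooperate = w2

# If some other player withholds, pi cannot reconstruct either way (w3/w4),
# so withholding is never worse and sometimes strictly better:
deviating_is_better = payoff_withhold > payoff_cooperate
```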

  23. Rational Secret Sharing (contd.) Underlying assumptions: • At each step, a player receives all the messages that were sent to him by other players in the previous step • The system is synchronous and message delivery takes a fixed delay • Communication is guaranteed • At each step, all the players send their shares simultaneously

  24. Rational Secret Sharing (contd.) • STOC ’04: Halpern and Teague [1] proved the impossibility of a deterministic mechanism for rational secret sharing and proposed a randomized protocol for achieving it • SCN ’06: Gordon and Katz [3] improved the randomized protocol (the dealer’s interference is minimized) • PODC ’06: Abraham et al. [4] analyzed rational secret sharing in a setting where players form coalitions • CRYPTO ’06: Lysyanskaya and Triandopoulos [5] analyzed the problem in the presence of a few malicious players

  25. Rational Secret Sharing (contd.) The intuition: • Suppose the players repeatedly play the game (repeatedly share the secret) • If a player does not cooperate by withholding his share in the current game, the other players refuse to send him their shares in future games (grim trigger strategy) • Hence every player, fearing that he will receive no shares from the others in future games, cooperates in the current game • This punishment strategy acts as an incentive for a player to cooperate in the current game

  26. Adversary with Polynomial Time Computing Power

  27. Hashing • A hash algorithm is used to condense messages of arbitrary length to a smaller, fixed-length message digest • Federal Information Processing Standard (FIPS) 180-3, the Secure Hash Standard (SHS), specifies five approved hash algorithms: SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 • Secure hash algorithms are typically used with other cryptographic algorithms

  28. Hashing Properties • Collision resistance: it is computationally infeasible to find two different inputs to the hash function that have the same hash value • Preimage resistance: given a randomly chosen hash value hash_value, it is computationally infeasible to find an x such that hash(x) = hash_value; this is also called the one-wayness property • Second preimage resistance: given an input, it is computationally infeasible to find a second input that has the same hash value

  29. Strength of a Hash Function • The work factor of an attack is the number of hash operations it performs • If the work factor is 2^x, then the strength is defined to be x bits
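
The work-factor notion can be made concrete by brute-forcing a deliberately weakened hash (a sketch, not an attack on any approved algorithm): truncating SHA-256 to 16 bits leaves roughly 16 bits of preimage-resistance strength, so a preimage search takes about 2^16 hash operations on average. The function and input below are illustrative.

```python
import hashlib

def truncated_hash(data: bytes, bits: int = 16) -> int:
    """First `bits` bits of SHA-256: a toy hash with work factor ~2^bits,
    i.e. roughly `bits` bits of strength against preimage search."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

target = truncated_hash(b"some secret input")

# Exhaustive preimage search: expected ~2^16 = 65536 hash operations.
tries = 0
for candidate in range(2**20):  # generous cap; success is overwhelmingly likely
    tries += 1
    preimage = str(candidate).encode()
    if truncated_hash(preimage) == target:
        break
```

Doubling the truncation to 32 bits would multiply the expected work by 2^16, which is why the approved functions' 224-512-bit outputs put brute force far out of reach.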

  30. Strength of Approved Hash Functions

  31. Random Oracle Model and Hash Function • Provable Security

  32. Proxy Re-encryption • In 2005, the digital rights management (DRM) of Apple's iTunes was partially compromised because an untrusted party (i.e., the client's resource) could obtain the plaintext during a naive decrypt-and-encrypt operation. This flaw could have been prevented by using a secure proxy re-encryption scheme.

  33. Proxy Re-encryption The previous scenario can be generalized as follows: [Figure: the server stores data encrypted with its own secret, decrypts it and re-encrypts it with user A's public key, and user A decrypts; a malicious user at the server can get the decrypted data.]

  34. Proxy Re-encryption [Figure: the server hands the ciphertext to a proxy, which re-encrypts it to user A without learning any information about the data; user A decrypts.]

  35. Proxy Re-encryption Allows a semi-trusted proxy to convert ciphertexts for an entity B (delegator) into ciphertexts for C (delegatee), such that: • The proxy cannot see the underlying plaintext • The delegatee alone cannot decrypt the delegator’s ciphertext
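
A toy sketch in the style of the Blaze-Bleumer-Strauss '98 ElGamal-based scheme (one of the ancestors of the works in the reference slide, not the signcryption scheme presented next): the proxy holds a re-encryption key rk = b/a mod q and transforms the second ciphertext component from g^{ak} to g^{bk} without ever seeing the plaintext. The group parameters are deliberately tiny and insecure, and computing rk here uses both secrets directly, whereas real schemes derive it without exposing them to one party.

```python
import random

# Toy group: subgroup of prime order q = 11 in Z_23*, generator g = 2.
p, q, g = 23, 11, 2

def keygen():
    a = random.randrange(1, q)
    return a, pow(g, a, p)                 # secret a, public g^a

def encrypt(pk, m):
    k = random.randrange(1, q)
    return (m * pow(g, k, p) % p,          # c1 = m * g^k
            pow(pk, k, p))                 # c2 = g^{ak}

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)        # (g^{ak})^{1/a} = g^k
    return c1 * pow(gk, p - 2, p) % p      # m = c1 / g^k

def rekey(sk_from, sk_to):
    return sk_to * pow(sk_from, -1, q) % q # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))            # (g^{ak})^{b/a} = g^{bk}

a, pk_a = keygen()                         # delegator
b, pk_b = keygen()                         # delegatee
m = pow(g, 7, p)                           # message encoded in the subgroup
ct_a = encrypt(pk_a, m)                    # ciphertext for the delegator
ct_b = reencrypt(rekey(a, b), ct_a)        # proxy converts it; sees only group elements
```

The proxy's view, (c1, g^{bk}) plus rk, never includes g^k alone, which is the intuition behind "the proxy cannot see the underlying plaintext" above.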

  36. Applications • Organizational e-mail forwarding • Secure distribution of authentic content • Digital rights management

  37. Signcryption with Proxy Re-encryption: Scheme

  38. Signcryption with Proxy Re-encryption: Scheme (contd..)

  39. Signcryption with Proxy Re-encryption: Scheme (contd..)

  40. Signcryption with Proxy Re-encryption: Scheme (contd..)

  41. Signcryption with Proxy Re-encryption: Scheme (contd..)

  42. Signcryption with Proxy Re-encryption: Scheme (contd..)

  43. References • Matthew Green and Giuseppe Ateniese, “Identity-Based Proxy Re-encryption”, Applied Cryptography and Network Security, LNCS 4521, pp. 288-306, Springer-Verlag, 2007. • Giuseppe Ateniese, Kevin Fu, Matthew Green, and Susan Hohenberger, “Improved Proxy Re-encryption Schemes with Applications to Secure Distributed Storage”, ACM TISSEC, 9(1), pp. 1-30, Feb 2006. • Toshihiko Matsuo, “Proxy Re-encryption Systems for Identity-Based Encryption”, Pairing 2007, LNCS 4575, pp. 247-267. • Ran Canetti and Susan Hohenberger, “Chosen-Ciphertext Secure Proxy Re-encryption”, CCS 2007: Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 185-194, 2007. Also available at http://eprint.iacr.org/2007/171.pdf • Tony Smith, “DVD Jon: buy DRM-less Tracks from Apple iTunes”, March 18, 2005. Available at http://www.theregister.co.uk/2005/03/18/itunes_pymusique.
