Universally Composable MPC using Tamper-Proof Hardware Jonathan Katz
Overview I • Goal: construct protocols that are secure when run concurrently alongside arbitrary other protocols • Specifically, within the UC framework [C01] • Unfortunately… this is impossible in the “plain” network model [CF01] • What to do??!
Overview II • One idea: introduce setup assumptions • Previous suggestions seem to require some trusted parties (think: CRS) • Is this inherent? • This talk: physical assumptions as a possible alternative to trusted setup • Specifically, the existence of tamper-proof hardware
Outline of the talk • MPC, UC framework, impossibility results, existing setup assumptions, … • Physical assumptions (e.g., tamper-proof hardware) as a new direction • Potential advantages • Quick intuition as to why tamper-proof hardware helps • UC-MPC from tamper-proof hardware
Secure multi-party computation • Parties P1, …, Pn holding x1, …, xn • Want to compute f = (f1(x1, …, xn), …, fn(x1, …, xn)), while maintaining • Privacy • Correctness • Independence of inputs • … • Formalize using simulation paradigm
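As a point of reference, the ideal world implicit in the simulation paradigm can be pictured as a trusted party that privately collects the inputs and hands each party only its own output. The sketch below is purely illustrative; the function names and the summing example are assumptions, not part of the talk.

```python
# A minimal sketch of the ideal world behind the simulation paradigm:
# a hypothetical trusted party collects x1..xn and returns fi(x1..xn) to Pi.
def ideal_world(f_components, inputs):
    # Each party Pi receives only its own output fi(x1, ..., xn).
    return [f_i(*inputs) for f_i in f_components]

# Illustrative example: two parties learn the sum of their inputs, nothing more.
f_sum = [lambda x1, x2: x1 + x2,   # output for P1
         lambda x1, x2: x1 + x2]   # output for P2
print(ideal_world(f_sum, [3, 4]))  # -> [7, 7]
```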
Settings • Stand-alone setting • Major results from mid-80s establish that secure computation of any function is possible [Y82,GMW87,RB89,…] • Concurrent setting • Situation is less clear… • Focus of much recent research • Let’s see the problem
Zero knowledge, stand-alone — [protocol diagram: a prover holding x ∈ L and a witness w sends a commitment com to the verifier, receives a challenge c (or c’), and opens; a simulator, by rewinding the verifier, produces an indistinguishable transcript]
Zero knowledge, concurrent — [diagram: many interleaved executions (com, c, c’, open, …); rewinding-based simulation suffers an exponential blowup!]
Handling concurrent executions • For ZK, security in a concurrent setting is possible [RK99,KP01,PRS02] • What about other functions? • How to manage the complexity of this setting, in general?
UC framework [C01]: simplified overview — [diagram: an interactive distinguisher (aka “environment”) interacts with either the real world or the ideal world and tries to tell which one it is in]
(Simplified) overview • A key feature of the model is that the environment cannot be rewound • A protocol proven secure in the UC framework remains secure under general concurrent composition • Analyze a protocol in isolation; conclude that it is secure in a concurrent setting
Impossibility results • In the “plain model” with no honest majority, UC-computation of any “interesting” function is impossible [CF01,CKL03] • In some sense, this is an inherent limitation of concurrency • I.e., not just an artifact of the UC framework [L03,L04]
“Setup assumptions” • That is, augment the plain model • E.g., a common reference/random string (CRS) available to all parties • This idea has a long history [BFM88], and was the first setup assumption proposed in the UC setting [CF01] • Most commonly-used setup assumption • All feasibility results recovered [CLOS02]
How to generate a CRS? • Rely on a trusted party to generate it? • What if we are unwilling to assume any trusted parties? • Anyone who generates the CRS can potentially “attack” any protocol. • Use naturally-occurring random sources? • Unclear… (need correct distribution, synchronization, “privacy”[CDPW07])
Other setup assumptions? • Public-key registration services [BCNP04,CDPW07]: • Very strong requirements • E.g., must check all parties’ secret keys to make sure keys are well-formed • Again, requires a high degree of trust
Other setup assumptions? • “Signature cards” [HM-QU05]: • Government issued cards carrying out a specific functionality • Again, requires trust
An alternative? • Existing setup assumptions all appear to require some set of trusted parties • Perhaps physical assumptions can be used to circumvent existing impossibility results? • This might potentially lead to an approach that entirely avoids the need for trust!
Physical assumptions?! • Not as crazy as it might (at first) sound: • Information-theoretic bounds on secret communication can be circumvented if noisy channels are assumed [Wyner75, CK78, Maurer93] • Quantum key exchange [BB84] • Broadcast with a dishonest majority using physical multicast channels [FM00] • Network timing [DNS98, KLP05]
This talk: tamper-proof hardware • Assume the existence of tamper-proof hardware tokens • Users can create tokens implementing any functionality • No guarantee for tokens created by dishonest users • Dishonest users can do no more than observe input/output of honest tokens • Given the above, what can be done?
(Possible) advantages? • Elimination of trust • Users can produce their own tokens… • Reduction or distribution of trust • Each user can choose their own hardware vendor (not possible with other setup assumptions) • Accountability/falsifiability • Demonstrate hardware not tamper proof
Two (philosophical) caveats • Is the assumption reasonable? • Perhaps not today, perhaps someday… • Weaker assumptions (tamper-evident hardware?) may suffice… • Has the assumption been modeled appropriately? • You tell me • This is always a concern…
Inspiration from prior work • Using tamper-proof hardware to obtain more efficient/more secure protocols is not new • Inspired by the work on “observers” in the context of e-cash [CP, Brands, CP] • This is the first time it has been suggested for concurrent security or general MPC
Concurrent zero knowledge (verifier) — [diagram: the problem in the plain model is the dependence among the different executions (com, open, …); with each proof run against a token, this dependency is eliminated!]
Details… • Functionality of token: • Run ZK protocol; sign statement being proven plus a bit indicating acceptance • Protocol (for prover): • Obtain token; run ZK protocol with the token; upon completion, send the final output of the token to the verifier
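A rough rendering of this flow is sketched below. Schnorr’s proof of knowledge of a discrete logarithm stands in for the ZK protocol, HMAC stands in for the verifier’s signature scheme, and the group is toy-sized; all of these substitutions are assumptions made only for the sketch.

```python
import hashlib, hmac, secrets

# Toy group parameters (a Mersenne prime and a fixed base); real use needs
# a standardized prime-order group.
p = 2**127 - 1
g = 3

class Token:
    """Token created by the verifier: runs the verifier side of a Schnorr
    proof of knowledge of log_g X, then signs (statement, accept bit)."""
    def __init__(self, sign_key):
        self.k = sign_key                       # key shared with the verifier

    def msg1(self, statement, a):               # receive X = g^w and commitment a
        self.stmt, self.a = statement, a
        self.c = secrets.randbelow(p - 1)       # fresh random challenge
        return self.c

    def msg2(self, z):                          # receive response, verify, sign
        ok = pow(g, z, p) == (self.a * pow(self.stmt, self.c, p)) % p
        tag = hmac.new(self.k, f"{self.stmt}|{ok}".encode(), hashlib.sha256).hexdigest()
        return ok, tag

# Verifier creates the token (with an embedded signing key) and sends it over.
sign_key = secrets.token_bytes(32)
token = Token(sign_key)

# Prover: knows w with X = g^w; runs the ZK protocol with the token and
# forwards the token's final output to the verifier.
w = secrets.randbelow(p - 1); X = pow(g, w, p)
r = secrets.randbelow(p - 1); a = pow(g, r, p)
c = token.msg1(X, a)
z = (r + c * w) % (p - 1)
ok, tag = token.msg2(z)

# Verifier checks the token's signature on the statement plus the accept bit.
expected = hmac.new(sign_key, f"{X}|{ok}".encode(), hashlib.sha256).hexdigest()
assert ok and hmac.compare_digest(tag, expected)
```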
Security analysis (informal) • When the verifier is honest • Soundness of ZK proof system + security of signature scheme imply soundness • When the prover is honest • Zero knowledge of each proof system (in stand-alone sense) means that the view of each token can be simulated • Does not matter if the token runs incorrect protocol, or if there is a covert channel
Note on the proof… • The token can be rewound • (Intuition:) rewinding token ok since it is “isolated” from the network • This will follow from the formal model • Parties still cannot be rewound • In particular, MPC not trivial to achieve • (Think ZK proofs of knowledge…)
Modeling tamper-proof hardware (simplified) The ideal functionality Fwrap: • On input (create, P, P’, M) from P (where M is an interactive Turing machine): • Send (create, P, P’) to P’ • Store (P, P’, M) • On input (run, P, msg) from P’, do as expected: • Choose random coins for M and run an execution of M with incoming message msg • (Maintain state across invocations)
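One possible rendering of this simplified functionality in code is below; the class and method names are illustrative, not taken from the paper, and the machine supplies its own randomness rather than having Fwrap choose its coins.

```python
# A minimal sketch of the (simplified) F_wrap functionality described above.
class Fwrap:
    def __init__(self):
        self.tokens = {}   # (creator, holder) -> {"machine": M, "state": ...}

    def create(self, P, P_prime, M):
        """On (create, P, P', M) from P: store the token code M, notify P'."""
        self.tokens[(P, P_prime)] = {"machine": M, "state": None}
        return ("create", P, P_prime)          # message delivered to P'

    def run(self, P, P_prime, msg):
        """On (run, P, msg) from P': run M on msg, maintaining state."""
        token = self.tokens[(P, P_prime)]
        reply, token["state"] = token["machine"](msg, token["state"])
        return reply

# Example token program: an interactive machine that counts its queries.
def counter_machine(msg, state):
    state = (state or 0) + 1
    return f"seen {state} message(s), last was {msg!r}", state

F = Fwrap()
F.create("P", "P'", counter_machine)     # P creates a token held by P'
print(F.run("P", "P'", "hello"))         # only P' may query it
print(F.run("P", "P'", "again"))
```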
Observation • Implicit in Fwrap are the following assumptions: • A party creating a token “knows” the code the token will run • A token is completely tamper-proof, and has access to a source of randomness (this can be relaxed to some extent with a PRG) • The token cannot communicate directly with the external network
UC computation using Fwrap • Using results of [CLOS02], it suffices to realize the (multiple) commitment functionality • Notation: (g, h, G, H) • Is a Diffie-Hellman tuple if log_g G = log_h H • Is a random tuple otherwise • Com_(g,h,G,H)(b) = (g^x h^y, G^x H^y g^b) • Perfectly hiding if (g, h, G, H) is a random tuple • Extractable if it is a DH tuple and log_g G is known
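A toy rendering of this dual-mode commitment is sketched below. The group (Z_p^* with a Mersenne prime and a fixed base) and the helper names are assumptions made for illustration; a real instantiation would use a proper prime-order group.

```python
import secrets

p = 2**127 - 1            # toy Mersenne prime; real use needs a prime-order group
g = 3

def rexp():
    return secrets.randbelow(p - 1)

def commit(params, b):
    """Com_(g,h,G,H)(b) = (g^x h^y, G^x H^y g^b) for random x, y."""
    g_, h, G, H = params
    x, y = rexp(), rexp()
    c1 = (pow(g_, x, p) * pow(h, y, p)) % p
    c2 = (pow(G, x, p) * pow(H, y, p) * pow(g_, b, p)) % p
    return (c1, c2)

def extract(params, com, alpha):
    """If (g,h,G,H) is a DH tuple with alpha = log_g G = log_h H known,
    then c2 = c1^alpha * g^b, so g^b (and hence b) can be recovered."""
    c1, c2 = com
    gb = (c2 * pow(pow(c1, alpha, p), p - 2, p)) % p   # c2 / c1^alpha mod p
    return 0 if gb == 1 else 1

# Random tuple: commitments hide b (nothing can be extracted).
rand_tuple = (g, pow(g, rexp(), p), pow(g, rexp(), p), pow(g, rexp(), p))
_ = commit(rand_tuple, 1)

# DH tuple with known alpha: commitments are extractable.
alpha, h = rexp(), pow(g, rexp(), p)
dh_tuple = (g, h, pow(g, alpha, p), pow(h, alpha, p))
assert extract(dh_tuple, commit(dh_tuple, 1), alpha) == 1
assert extract(dh_tuple, commit(dh_tuple, 0), alpha) == 0
```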
The protocol I • S creates the following token: • Receive (g, h) • Choose random G1, H1 and commit to them using a perfectly-hiding scheme • Receive (G2, H2); set G=G1G2 and H=H1H2 • Decommit to G1, H1 and set tSR=(g,h,G,H) • Output a signature on tSR • R creates a token symmetrically, and the parties exchange tokens
The protocol II • S and R each interact with the token sent by the other party, and then exchange tSR/tRS and signatures • At the end of this step, both parties hold tSR and tRS • (or abort)
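The coin-flipping carried out with each token (Protocols I/II above) can be pictured as follows; a hash-based commitment stands in for the token’s perfectly-hiding commitment, and all parameters are illustrative only.

```python
import hashlib, secrets

p = 2**127 - 1
g, h = 3, 5                                   # toy bases sent to the token

def relem():
    return pow(g, secrets.randbelow(p - 1), p)

# Token (created by S): chooses G1, H1 first and commits to them.
# (The real token would also output a signature on the final tuple t_SR.)
G1, H1 = relem(), relem()
nonce = secrets.token_bytes(16)
com = hashlib.sha256(nonce + f"{G1}|{H1}".encode()).digest()

# R contributes G2, H2, chosen after seeing only the commitment.
G2, H2 = relem(), relem()

# Token decommits; both sides check the opening and compute the combined tuple.
assert com == hashlib.sha256(nonce + f"{G1}|{H1}".encode()).digest()
G, H = (G1 * G2) % p, (H1 * H2) % p
t_SR = (g, h, G, H)     # a random tuple as long as either contribution is random
```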
Proof intuition I • Say P honest and P’ malicious • What can we argue about tPP’? • Since rewinding of P’ is not allowed, there is no way for a simulator to “force” the value of tPP’ • Nevertheless, with all but negligible probability tPP’ will be a random tuple
Proof intuition II • (P honest and P’ malicious) • What about tP’P? • When P’ sends M to Fwrap, the simulator obtains it • Although we cannot rewind P’, we can rewind the extracted M (i.e., the token) • Can “force” tP’P to be a Diffie-Hellman tuple with known discrete logarithm • (Indistinguishable from a random tuple)
The protocol III • To commit to b, S does: • Commit to b using standard commitment C • Compute commitment Com to b using tSR • Send both commitments to R and give witness indistinguishable proof that either • Commitments are to same value • Or, tRS is a Diffie-Hellman tuple
The protocol IV • To decommit, S does: • Send b • Give witness indistinguishable proof that either • Commitments C and Com were to b • Or, tRS is a Diffie-Hellman tuple
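For reference, writing tRS = (g', h', G', H'), the two WI proofs above amount to OR-statements of the following shape (just a restatement of the bullets, not additional protocol content):

```latex
% Commit phase: S sends C, Com and gives a WI proof of
\bigl(\exists\, b,\omega,x,y:\ C = \mathsf{C}(b;\omega)\ \wedge\ \mathsf{Com} = \mathrm{Com}_{t_{SR}}(b;x,y)\bigr)
\ \lor\ \bigl(\exists\,\alpha:\ G' = (g')^{\alpha}\ \wedge\ H' = (h')^{\alpha}\bigr)

% Decommit phase: S sends b and gives a WI proof of
\bigl(\exists\,\omega,x,y:\ C = \mathsf{C}(b;\omega)\ \wedge\ \mathsf{Com} = \mathrm{Com}_{t_{SR}}(b;x,y)\bigr)
\ \lor\ \bigl(\exists\,\alpha:\ G' = (g')^{\alpha}\ \wedge\ H' = (h')^{\alpha}\bigr)
```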
Proof intuition III • Proof is now straightforward… • Say S is honest • tSR random tuple; tRS Diffie-Hellman tuple • Simulation: • Commit to garbage; give WI proof using tRS • Send correct bit b; give WI proof using tRS • Crucial that Com is perfectly hiding (since it is impossible to “force” the value of tSR)
Proof intuition IV • Say R is honest • tRS random tuple; tSR Diffie-Hellman tuple • S sends C and Com + WI proof • Since tRS random tuple, this means that C and Com are commitments to the same value • Extract from Com using known discrete logarithm of tSR • In decommitment phase, WI proof can only be given successfully for the same bit
Conclusions and future directions • UC multi-party computation is impossible without extending the “plain model” • A natural goal is to find extensions that are both useful and realistic • Here, we suggest physical assumptions and tamper resistance in particular • Future work • General assumptions, more efficient protocols • Weaker models of tamper resistance • Other setup assumptions? Characterization?