Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson (UT Austin)
MIT, November 20, 2018
Joint work with Guy Rothblum (on the arXiv soon)
Quantum Decoder Slide
Mixed State: ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|, a D×D Hermitian matrix, positive semidefinite, with Tr(ρ) = 1. Represents a distribution where each superposition |ψ_i⟩ occurs with probability p_i.
POVM Measurement: positive semidefinite matrices E_1,…,E_k summing to the identity, which returns outcome i with probability Tr(E_i ρ) if applied to ρ.
Trace Distance: ‖ρ−σ‖_tr = ½ Tr|ρ−σ|, the quantum generalization of variation distance.
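A minimal numerical illustration of these definitions (the toy states and POVM elements below are chosen arbitrarily, not from the talk):

    import numpy as np

    # A single-qubit mixed state: Hermitian, positive semidefinite, trace 1
    rho = np.array([[0.75, 0.25],
                    [0.25, 0.25]])
    sigma = np.eye(2) / 2                       # the maximally mixed state

    # A 2-outcome POVM: positive semidefinite elements summing to the identity
    E1 = np.diag([0.9, 0.2])
    E2 = np.eye(2) - E1
    print("Pr[outcome 1 on rho]:", np.trace(E1 @ rho).real)

    # Trace distance ||rho - sigma||_tr = (1/2) * sum of |eigenvalues| of rho - sigma
    print("trace distance:", 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum())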
Gentle Measurement
Measurements in QM are famously destructive … but not always! Information/disturbance tradeoff: if the outcome is predictable given knowledge of ρ, then nobody needs to get hurt.
Given a quantum measurement M, let’s call M α-gentle on a set of states S if one can implement it so that for every ρ ∈ S and possible outcome y of M, the post-measurement state ρ_{|y} satisfies ‖ρ_{|y} − ρ‖_tr ≤ α.
Typical choice for S: Product states ρ = ρ_1 ⊗ … ⊗ ρ_n on n registers.
Example: Measuring the total Hamming weight of n unentangled qubits could be extremely destructive.
But measuring the Hamming weight plus noise of order ≫ √n is much safer! A good choice is Laplace noise: add an integer noise term η with Pr[η] ∝ e^{−|η|/σ}.
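A minimal numerical sketch of this example (not from the talk), assuming the canonical implementation of a noisy measurement that, conditioned on outcome y, reweights each amplitude by the square root of y’s likelihood; the state |+⟩^⊗n and all parameter values below are illustrative:

    import numpy as np

    n = 8
    dim = 2 ** n
    amps = np.full(dim, 2 ** (-n / 2))                        # product state |+>^n
    weights = np.array([bin(x).count("1") for x in range(dim)])

    def avg_damage(sigma):
        """Expected trace distance between |psi> and the post-measurement state,
        when measuring Hamming weight + discretized Laplace noise of scale sigma."""
        ys = np.arange(-int(10 * sigma) - 2 * n, int(10 * sigma) + 3 * n)
        lik = np.exp(-np.abs(ys[:, None] - weights[None, :]) / sigma)
        lik /= lik.sum(axis=0, keepdims=True)                 # Pr[outcome y | weight w]
        damage = 0.0
        for i in range(len(ys)):
            pr_y = np.sum(np.abs(amps) ** 2 * lik[i])         # Pr[outcome y]
            if pr_y < 1e-15:
                continue
            post = amps * np.sqrt(lik[i])                     # reweight by sqrt(likelihood)
            post /= np.linalg.norm(post)
            fid = abs(np.vdot(amps, post)) ** 2
            damage += pr_y * np.sqrt(max(1 - fid, 0.0))       # trace distance for pure states
        return damage

    print("noise scale 0.5         ->", avg_damage(0.5))              # nearly exact: big damage
    print("noise scale 10*sqrt(n)  ->", avg_damage(10 * np.sqrt(n)))  # >> sqrt(n): small damage

With nearly exact measurement of the weight the product state is heavily damaged, while noise of scale well above √n leaves it almost untouched.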
Differential Privacy
A recent subfield of classical CS that wants to protect you. Actually used now by Apple and Google…
Given an algorithm A that queries a database X=(x_1,…,x_n), we call A ε-DP if for every two databases X and X′ that differ in only a single x_i, and every possible output y of A, Pr[A(X) = y] ≤ e^ε · Pr[A(X′) = y].
Bad: How many of these patients have prostate cancer?
Better: Return the number of patients with prostate cancer, plus Laplace noise of average magnitude σ.
Why it’s (1/σ)-DP: changing a single patient’s record changes the true count by at most 1, and shifting the center of a Laplace distribution of scale σ by 1 changes each output’s probability by a factor of at most e^{1/σ}.
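A tiny sketch of this Laplace mechanism (the toy records and names below are hypothetical), together with a numerical check of the e^{1/σ} likelihood-ratio bound:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_count(database, predicate, sigma):
        """Laplace mechanism: true count plus Laplace noise of average magnitude sigma."""
        true_count = sum(1 for row in database if predicate(row))
        return true_count + rng.laplace(scale=sigma)

    X = [1, 0, 1, 1, 0, 0, 1, 0]        # toy records: 1 = has the condition
    sigma = 10.0
    print("noisy count:", noisy_count(X, lambda r: r == 1, sigma))

    # Why it's (1/sigma)-DP: neighboring databases have true counts c, c' with |c - c'| <= 1,
    # and the Laplace density satisfies  p(y - c) <= exp(|c - c'| / sigma) * p(y - c').
    c, c2 = 4, 5
    ys = np.linspace(-30, 40, 2001)
    ratio = np.exp(-np.abs(ys - c) / sigma) / np.exp(-np.abs(ys - c2) / sigma)
    print("max likelihood ratio:", ratio.max(), "  e^(1/sigma):", np.exp(1 / sigma))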
Quantum Differential Privacy
“Protecting the Privacy Rights of Quantum States”
Given a quantum measurement M on n registers, let’s call M ε-DP on a set of states S if for every ρ, ρ′ ∈ S that differ by a channel acting on only 1 register, and every possible outcome y of M, Pr[M(ρ) = y] ≤ e^ε · Pr[M(ρ′) = y].
Typical choice for S: Product states on n registers
Example: Once again, measuring the total Hamming weight of n unentangled qubits plus a Laplace noise term. Hmmmm….
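A quick numerical check of this definition (not from the talk) for the noisy-Hamming-weight example on product states: changing one register’s measurement statistics arbitrarily changes each outcome’s probability by a factor of at most about e^{1/σ}. The setup and parameters below are illustrative.

    import numpy as np

    def weight_dist(ps):
        """Hamming-weight distribution of independent bits with Pr[bit i = 1] = ps[i]."""
        dist = np.array([1.0])
        for p in ps:
            dist = np.convolve(dist, [1 - p, p])
        return dist

    def noisy_outcome_dist(ps, sigma, ys):
        w = weight_dist(ps)
        lap = np.exp(-np.abs(ys[:, None] - np.arange(len(w))[None, :]) / sigma)
        lap /= lap.sum(axis=0, keepdims=True)      # discretized Laplace noise on the weight
        return lap @ w                             # Pr[noisy Hamming weight = y]

    n, sigma = 20, 5.0
    ys = np.arange(-10 * sigma, n + 10 * sigma)
    ps1 = np.full(n, 0.3)                          # product state: each qubit measures to 1 w.p. 0.3
    ps2 = ps1.copy(); ps2[0] = 0.9                 # neighbor: only register 0 is changed

    d1 = noisy_outcome_dist(ps1, sigma, ys)
    d2 = noisy_outcome_dist(ps2, sigma, ys)
    print("max outcome ratio:", max((d1 / d2).max(), (d2 / d1).max()),
          "  e^(1/sigma):", np.exp(1 / sigma))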
Our DP ⇔ Gentleness Theorem
(1) M is α-gentle for small α ⇒ M is O(α)-DP.
(2) M is ε-DP on product states, and consists of a classical DP algorithm applied to the results of separate POVMs on each register ⇒ M is O(ε√n)-gentle on product states.
Notes:
Both directions are asymptotically tight.
Restriction to product states is essential for part (2) (without it we only get O(εn)-gentleness).
Part (2) preserves efficiency, as long as the DP algorithm’s output distribution can be efficiently QSampled.
Related Work
Dwork et al. (2014) made a striking connection between differential privacy and adaptive data analysis: can we safely reuse the same dataset for many scientific studies?
They showed that the answer is yes, if we’re careful to access the dataset using DP algorithms only! I.e., they connected DP to “classical Bayesian gentleness”
Of course, damage to a distribution D is purely internal and mental, whereas damage to a state ρ is often noticeable even by others measuring ρ…
Gentleness ⇒ DP
The easy direction! Has nothing to do with product states or even quantum mechanics.
Lemma: If M is α-gentle on all states, then it’s O(α)-DP.
For consider the contrapositive: if Pr[M outputs y] is very different on ρ and ρ′, then conditioning on M outputting y will badly “damage” the mixture (ρ+ρ′)/2, by Bayes’ Theorem!
Now just apply the lemma separately to each register of our product state.
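To make the Bayes step concrete, here is the worked classical instance (the “classical Bayesian gentleness” of the previous slide), with a uniform prior over two neighboring databases X, X′; the paper’s quantum argument is more general.

    \[
      p = \Pr[A(X) = y], \qquad p' = \Pr[A(X') = y].
    \]
    After seeing output $y$, Bayes' Theorem updates the prior $(\tfrac12, \tfrac12)$ to the posterior
    \[
      \Bigl( \tfrac{p}{p+p'},\; \tfrac{p'}{p+p'} \Bigr),
    \]
    whose variation distance from the prior is
    \[
      \Bigl| \tfrac{p}{p+p'} - \tfrac12 \Bigr| \;=\; \frac{|p - p'|}{2(p + p')}
      \;\ge\; \frac{e^{\varepsilon} - 1}{2(e^{\varepsilon} + 1)}
      \quad \text{whenever } p \ge e^{\varepsilon} p'.
    \]
    So bounding the damage by $\alpha$ forces
    $e^{\varepsilon} \le \tfrac{1 + 2\alpha}{1 - 2\alpha}$, i.e. $\varepsilon = O(\alpha)$ for small $\alpha$.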
DP ⇒ Gentleness for Product States
The harder direction; known only for measurements that apply separate POVMs to the n registers.
First step: Let A be an ε-DP classical algorithm. Then for any product distribution D = D_1 × … × D_n and output y of A, the conditioned distribution (D | A outputs y) is O(ε√n)-close to D in variation distance.
Next step: Given the “QSampled” version of D, |ψ⟩ = |ψ_1⟩ ⊗ … ⊗ |ψ_n⟩, and any output y of A, the post-measurement state is O(ε√n)-close to |ψ⟩ in trace distance.
Can then generalize to POVMs and mixed states.
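One standard fact behind the QSampling step (the paper’s sharper O(ε√n) bound additionally exploits the product structure): QSamples of two distributions overlap by the Bhattacharyya coefficient, so multiplicative closeness, which is exactly what DP supplies, translates into high fidelity and small trace distance.

    \[
      |\psi_D\rangle = \sum_x \sqrt{D(x)}\,|x\rangle, \qquad
      \langle \psi_D | \psi_{D'} \rangle = \sum_x \sqrt{D(x)\,D'(x)}, \qquad
      \bigl\| |\psi_D\rangle\langle\psi_D| - |\psi_{D'}\rangle\langle\psi_{D'}| \bigr\|_{\mathrm{tr}}
      = \sqrt{1 - \langle \psi_D | \psi_{D'} \rangle^{2}}.
    \]
    In particular, if $e^{-\varepsilon} \le D'(x)/D(x) \le e^{\varepsilon}$ for all $x$, then
    $\langle \psi_D | \psi_{D'} \rangle \ge e^{-\varepsilon/2}$ and the trace distance is at most
    $\sqrt{1 - e^{-\varepsilon}} \le \sqrt{\varepsilon}$.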
Separating Examples
Measure the Hamming weight plus Laplace noise of magnitude ~n/10: 10/n-DP, yet not even 1/3-gentle on non-product states like (|0…0⟩ + |1…1⟩)/√2.
Measure the Hamming weight plus Laplace noise of magnitude ~√n: ~1/√n-DP, yet not o(1)-gentle on product states.
Theorem: There exist measurements that are 1/exp(n)-DP (indeed, nearly trivial) on product states, yet far from DP on entangled states.
0-DP (i.e., completely trivial) on product states does imply 0-DP on all states, but only because amplitudes are complex numbers!
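A quick numerical check (not from the talk) of the first example: with Laplace noise of magnitude n/10 the measurement is only (10/n)-DP, yet its canonical implementation still inflicts far more than 1/3 damage on the cat state, whose two branches have Hamming weights 0 and n.

    import numpy as np

    n = 100
    sigma = n / 10                         # Laplace noise of magnitude ~ n/10, i.e. (10/n)-DP
    ys = np.arange(-10 * sigma, n + 10 * sigma)

    def lik(w):                            # discretized Laplace likelihood Pr[outcome y | weight w]
        l = np.exp(-np.abs(ys - w) / sigma)
        return l / l.sum()

    L0, Ln = lik(0), lik(n)                # the cat state's branches have weights 0 and n
    pr_y = (L0 + Ln) / 2                   # outcome distribution on (|0...0> + |1...1>)/sqrt(2)

    # Post-measurement state given y: amplitudes proportional to sqrt of the branch likelihoods.
    fid = (np.sqrt(L0) + np.sqrt(Ln)) ** 2 / (2 * (L0 + Ln))    # |<psi|psi_y>|^2 for each y
    damage = np.sum(pr_y * np.sqrt(1 - np.minimum(fid, 1.0)))   # expected trace distance
    print("expected damage to the cat state:", damage)          # well above 1/3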
Application: Shadow Tomography
The Task (A. 2016): Let ρ be an unknown D-dimensional mixed state. Let E_1,…,E_M be known 2-outcome POVMs. Estimate Pr[E_i accepts ρ] to within ±ε for all i ∈ [M] (the “shadows” that ρ casts on E_1,…,E_M), with high probability, by measuring as few copies of ρ as possible.
Clearly k=O(D²) copies suffice (do ordinary tomography).
Clearly k=O(M) copies suffice (apply each E_i to separate copies).
But what if we wanted to know, e.g., the behavior of an n-qubit state on all accept/reject circuits with n² gates? Could we do it with poly(n) copies?
Theorem (A., STOC’2018): Shadow tomography is possible using only poly(log M, log D, 1/ε) copies.
My protocol combined:
• The multiplicative weights update method (i.e., start with the “maximally stupid hypothesis,” ρ_0 = I/D, and then repeatedly look for opportunities to update, via postselecting on Tr(E_i ρ_t) being noticeably far from Tr(E_i ρ) for some i)
• The “Quantum OR Bound” (A. 2006, Harrow et al. 2017), which repeatedly picks out an informative measurement from E_1,…,E_M in a gentle way
New Result: We can do shadow tomography using poly(log M, log D, 1/ε) copies of ρ, via a procedure that’s also online and gentle (and simpler than my previous one, and probably more amenable to experimental implementation).
How it works: we take a known procedure from DP, Private Multiplicative Weights (Hardt-Rothblum 2010), which decides whether to update our current hypothesis on each query E_i using a threshold measurement with Laplace noise added.
We give a quantum analogue, QPMW. Since each iteration of QPMW is DP (and applies separate POVMs to each register), it’s also gentle on product states, so we can safely apply all M of the iterations in sequence.
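For orientation, here is a heavily simplified classical sketch of the Private Multiplicative Weights loop that QPMW quantizes; it omits the sparse-vector accounting the real algorithm needs for its privacy guarantee, and all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def pmw_sketch(data_hist, queries, eta=0.1, noise_scale=0.02, threshold=0.05):
        """Simplified Private Multiplicative Weights loop (in the style of Hardt-Rothblum 2010).
        data_hist : normalized histogram of the true database over a finite universe
        queries   : 0/1 vectors over the universe (counting queries)"""
        h = np.full_like(data_hist, 1.0 / len(data_hist))    # start with the uniform hypothesis
        answers = []
        for q in queries:
            true_ans = q @ data_hist
            hyp_ans = q @ h
            # Noisy threshold test: is the current hypothesis already accurate on this query?
            if abs(true_ans + rng.laplace(scale=noise_scale) - hyp_ans) <= threshold:
                answers.append(hyp_ans)                       # "lazy" round: answer from h, no update
            else:
                noisy_ans = true_ans + rng.laplace(scale=noise_scale)
                sign = 1.0 if noisy_ans > hyp_ans else -1.0
                h = h * np.exp(sign * eta * q)                # multiplicative weights update
                h /= h.sum()
                answers.append(noisy_ans)
        return answers

    # Toy usage over a hypothetical universe of 16 items
    data_hist = rng.dirichlet(np.ones(16))
    queries = [rng.integers(0, 2, 16).astype(float) for _ in range(100)]
    print(pmw_sketch(data_hist, queries)[:5])

In QPMW, the noisy threshold test becomes a measurement on the registers holding the copies of ρ; its DP (and hence gentleness on product states) is what allows all M iterations to run on the same copies.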
Open Problems
Prove a fully general DP ⇒ gentleness theorem for product states? Or even a near-triviality ⇒ gentleness theorem for product states?
In shadow tomography, does the number of copies of ρ need to have any dependence on log D? Best lower bound we can show: k = Ω(ε^{-2} · min{D, log M}). But for gentle or online shadow tomography, we can use known lower bounds from DP to show that k = Ω(√(log D)) is needed.
Composition of quantum DP algorithms?
Use quantum to say something new about classical DP?