Presentation Transcript


1. Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson (University of Texas at Austin)
SQuInT, Albuquerque, NM, February 12, 2019
Joint work with Guy Rothblum (on the arXiv soon). To appear in STOC'2019.

2. Gentle Measurement
Measurements in QM are famously destructive… but not always!
Information/disturbance tradeoff: if the outcome is predictable given knowledge of ρ, then nobody needs to get hurt.
Given a quantum measurement M, let's call M α-gentle on a set of states S if one can implement it so that for every ρ ∈ S and every possible outcome y of M, the post-measurement state ρ|y satisfies ‖ρ|y − ρ‖tr ≤ α.
Typical choice for S: product states on n registers.
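
To make "destructive" concrete, here is a minimal numerical sketch (mine, not from the slides, using numpy): a computational-basis measurement on a |+⟩ qubit, whose outcome is maximally unpredictable, leaves the conditioned post-measurement state far from the original in trace distance.

# Minimal numerical illustration of (non-)gentleness.
# We measure a single qubit in the computational basis and compare the
# post-measurement state (conditioned on outcome 0) with the original state.
import numpy as np

def trace_distance(rho, sigma):
    """(1/2) * trace norm of rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.abs(eigs).sum()

plus = np.array([1, 1]) / np.sqrt(2)          # |+> state: outcome is unpredictable
rho = np.outer(plus, plus.conj())             # density matrix |+><+|

P0 = np.diag([1.0, 0.0])                      # projector onto |0>
p0 = np.trace(P0 @ rho).real                  # probability of outcome 0
rho_given_0 = P0 @ rho @ P0 / p0              # post-measurement state given outcome 0

print(trace_distance(rho, rho_given_0))       # ~0.707: far from gentle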

3. Example: Measuring the total Hamming weight of n unentangled qubits could be extremely destructive. But measuring the Hamming weight plus noise of order ≫ √n is much safer! A good choice is Laplace noise, with density proportional to e^(−|x|/σ).
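
A small numerical sketch of this tradeoff (mine, not from the talk), assuming the square-root implementation of the noisy measurement, whose post-measurement state given outcome y is sqrt(M_y)|ψ⟩ / ‖sqrt(M_y)|ψ⟩‖ with M_y = Σ_w Lap_σ(y − w) · (projector onto Hamming weight w). For |+⟩^⊗n, the damage given a typical outcome shrinks once the Laplace noise scale σ grows past √n.

# Damage to |+>^n from the noisy Hamming-weight measurement, as a function of sigma.
import numpy as np
from math import comb

def damage_given_outcome(n, sigma, y):
    """Trace distance between |+>^n and the post-measurement state given outcome y."""
    w = np.arange(n + 1)
    weights = np.array([comb(n, k) for k in w]) / 2.0**n   # Pr[Hamming weight = w]
    lap = np.exp(-np.abs(y - w) / sigma) / (2 * sigma)     # Laplace density at y - w
    overlap = (weights * np.sqrt(lap)).sum() / np.sqrt((weights * lap).sum())
    return np.sqrt(max(0.0, 1.0 - overlap**2))

n = 100
for sigma in [1, np.sqrt(n), 10 * np.sqrt(n)]:
    print(f"sigma = {sigma:7.1f}  damage at y = n/2: {damage_given_outcome(n, sigma, n/2):.3f}")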

4. Differential Privacy
A recent subfield of classical CS that wants to protect you.
Given an algorithm A that queries a database X = (x1,…,xn), we call A ε-DP if for every two databases X and X' that differ in only a single xi, and every possible output y of A, Pr[A(X) = y] ≤ e^ε · Pr[A(X') = y]. Actually used now by Apple and Google…
Bad: "How many of these patients have prostate cancer?"
Better: Return the number of patients with prostate cancer, plus Laplace noise of average magnitude σ.
Why it's (1/σ)-DP: changing one patient changes the true count by at most 1, which changes the probability of any given output by a factor of at most e^(1/σ).
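
A runnable sketch of the Laplace mechanism described above (the noise scale σ and variable names are mine), together with a numerical check that the probability ratio between neighboring databases never exceeds e^(1/σ):

# Laplace mechanism: release (true count) + Laplace(sigma), which is (1/sigma)-DP.
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(database, sigma):
    """database: 0/1 array (e.g., has_cancer flags). Returns a (1/sigma)-DP count."""
    return database.sum() + rng.laplace(loc=0.0, scale=sigma)

def laplace_density(x, sigma):
    return np.exp(-np.abs(x) / sigma) / (2 * sigma)

sigma = 5.0
release = noisy_count(np.array([1, 0, 1, 1, 0, 1]), sigma)   # a single DP release

# Numerical check of the privacy ratio for two neighboring databases:
count_X, count_Xprime = 42, 43          # true counts differing by one patient
outputs = np.linspace(0, 100, 1001)
ratio = laplace_density(outputs - count_X, sigma) / laplace_density(outputs - count_Xprime, sigma)
print(ratio.max(), "<=", np.exp(1 / sigma))   # max ratio never exceeds e^(1/sigma)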

5. Quantum Differential Privacy: "Protecting the Privacy Rights of Quantum States"
Given a quantum measurement M on n registers, let's call M ε-DP on a set of states S if for every ρ, ρ' ∈ S that differ by a channel acting on only 1 register, and every possible outcome y of M, Pr[M(ρ) = y] ≤ e^ε · Pr[M(ρ') = y].
Typical choice for S: product states on n registers.
Example: Once again, measuring the total Hamming weight of n unentangled qubits, plus a Laplace noise term. Hmmmm….
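
Filling in the reasoning behind the "Hmmmm…": a short argument (my elaboration, writing Lap_σ for the Laplace density) for why the noisy Hamming-weight measurement is (1/σ)-DP on product states.

\[
\Pr[M(\rho) = y] \;=\; \sum_{w=0}^{n} \Pr_{\rho}[\,W = w\,]\,\mathrm{Lap}_{\sigma}(y - w),
\qquad
\mathrm{Lap}_{\sigma}(x) \;=\; \frac{1}{2\sigma}\,e^{-|x|/\sigma}.
\]

For a product state, the Hamming weight W is a sum of independent bits, and replacing the state of one register changes only one of those bits; under the obvious coupling this shifts W by at most 1. Since a shift of 1 changes the Laplace density by a factor of at most e^(1/σ),

\[
\Pr[M(\rho) = y] \;\le\; e^{1/\sigma}\,\Pr[M(\rho') = y]
\qquad \text{for every outcome } y.
\]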

6. Our DP ⇔ Gentleness Theorem
(1) M is α-gentle for small α ⇒ M is O(α)-DP.
(2) M is ε-DP on product states, and consists of a classical DP algorithm applied to the results of separate POVMs on each register ⇒ M is O(ε√n)-gentle on product states.
Notes:
Both directions are asymptotically tight.
The restriction to product states is essential for part (2) (without it we only get O(εn)-gentleness).
Part (2) preserves efficiency, as long as the DP algorithm's output distribution can be efficiently QSampled.

7. Related Work
Dwork et al. (2014) made a striking connection between differential privacy and adaptive data analysis. Can we safely reuse the same dataset for many scientific studies? They showed that the answer is yes, provided we're careful to access the dataset using DP algorithms only!
I.e., they connected DP to "classical Bayesian gentleness."
Of course, damage to a distribution D is purely internal and mental, whereas damage to a state ρ is often noticeable even by others measuring ρ…

8. Gentleness ⇒ DP
The easy direction! Has nothing to do with product states or even quantum mechanics.
Lemma: If M is α-gentle on all states, then it's O(α)-DP.
For consider the contrapositive: if Pr[M outputs y] is very different on ρ and σ, then conditioning on M outputting y will badly "damage" the mixture (ρ+σ)/2, by Bayes' Theorem!
Now just apply the lemma separately to each register of our product state.
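
Schematically (my rendering of the Bayes argument, with the error terms that the paper handles suppressed): write p = Pr_ρ[y] and q = Pr_σ[y], and consider measuring the mixture ξ = (ρ+σ)/2.

\[
\xi \;=\; \tfrac{1}{2}\rho + \tfrac{1}{2}\sigma
\qquad\Longrightarrow\qquad
\xi_{|y} \;=\; \frac{p}{p+q}\,\rho_{|y} \;+\; \frac{q}{p+q}\,\sigma_{|y}
\;\approx\; \frac{p}{p+q}\,\rho \;+\; \frac{q}{p+q}\,\sigma ,
\]

where the last step uses gentleness applied to ρ and σ themselves. If p ≫ q, the mixture weights have moved far from (1/2, 1/2), so ξ|y is far from ξ (whenever ρ and σ are distinguishable at all), contradicting α-gentleness for small α. Hence gentleness forces p ≤ e^(O(α)) q.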

9. DP ⇒ Gentleness for Product States
The harder direction; known only for measurements that apply separate POVMs to the n registers.
First step: Let A be an ε-DP classical algorithm. Then for any product distribution D = D1 × ··· × Dn and output y of A, the conditional distribution (D | A outputs y) remains close to D.
Next step: Given the "QSampled" version of D, |ψ⟩ = |ψ1⟩ ⊗ ··· ⊗ |ψn⟩, and any output y of A, the post-measurement state remains correspondingly close to |ψ⟩.
Can then generalize to POVMs and mixed states.
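
The single-coordinate calculation behind the first step (my elaboration; the aggregation over all n coordinates, which is where the √n in the theorem comes from, is done in the paper):

\[
\Pr[\,x_i = a \mid A(X) = y\,]
\;=\; \frac{\Pr[x_i = a]\;\Pr[A(X)=y \mid x_i = a]}{\Pr[A(X)=y]}
\;\in\; \bigl[e^{-\varepsilon},\, e^{\varepsilon}\bigr]\cdot \Pr[x_i = a],
\]

because, by ε-DP and the independence of the other coordinates, fixing xi to any particular value changes Pr[A(X)=y] by a factor of at most e^(±ε). So conditioning on the output moves each marginal Di by at most about ε in total variation distance.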

10. Application: Shadow Tomography
The Task (A. 2016): Let ρ be an unknown D-dimensional mixed state. Let E1,…,EM be known 2-outcome POVMs. Estimate Pr[Ei accepts ρ] to within ±ε for all i ∈ [M], the "shadows" that ρ casts on E1,…,EM, with high probability, by measuring as few copies of ρ as possible.
Clearly k = O(D²) copies suffice (do ordinary tomography).
Clearly k = O(M) copies suffice (apply each Ei to separate copies).
But what if we wanted to know, e.g., the behavior of an n-qubit state on all accept/reject circuits with n² gates? Could we do it with a number of copies polynomial in n?
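
To make the k = O(M) baseline concrete, here is a small classical simulation (mine; the random state, the rank-1 POVMs, and the per-measurement copy budget of log(M)/ε² are illustrative choices, not from the talk):

# Baseline: apply each E_i to its own batch of fresh copies and average.
import numpy as np

rng = np.random.default_rng(1)
D, M, eps = 2, 20, 0.1
copies_per_measurement = int(np.ceil(np.log(M) / eps**2))   # Chernoff-style budget

# Random mixed state rho and random 2-outcome POVMs E_i = |v_i><v_i|.
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
rho = A @ A.conj().T
rho /= np.trace(rho).real
E = []
for _ in range(M):
    v = rng.normal(size=D) + 1j * rng.normal(size=D)
    v /= np.linalg.norm(v)
    E.append(np.outer(v, v.conj()))

estimates, truths = [], []
for Ei in E:
    p = np.trace(Ei @ rho).real                         # Pr[E_i accepts rho]
    accepts = rng.random(copies_per_measurement) < p    # measure fresh copies
    estimates.append(accepts.mean())
    truths.append(p)

print("total copies used:", M * copies_per_measurement)
print("max estimation error:", np.max(np.abs(np.array(estimates) - np.array(truths))))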

11. Theorem (A., STOC'2018): Shadow tomography is possible using only poly(log M, log D, 1/ε) copies.
My protocol combined:
• The multiplicative weights update method (i.e., start with the "maximally stupid hypothesis," σ0 = I/D, and then repeatedly look for opportunities to update, via postselecting on Tr(Ei σt) differing noticeably from Tr(Ei ρ) for some i)
• The "Quantum OR Bound" (A. 2006, fixed by Harrow et al. 2017), which repeatedly picks out an informative measurement from E1,…,EM in a gentle way.
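
A classical simulation sketch of the multiplicative-weights core (mine; the gentle detection of a violated Ei, which in the real protocol uses the Quantum OR Bound on actual copies of ρ, is replaced here by cheating access to the exact values Tr(Ei ρ)):

# Matrix multiplicative weights: keep a hypothesis sigma_t, update whenever some
# Tr(E_i sigma_t) disagrees noticeably with Tr(E_i rho).
import numpy as np

rng = np.random.default_rng(2)
D, M, eps, eta = 4, 50, 0.1, 0.05

def random_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def random_projector(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def normalized_exp(H):
    """exp(H) / Tr exp(H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    w = np.exp(w - w.max())                     # subtract max for numerical stability
    return (V * w) @ V.conj().T / w.sum()

rho = random_state(D)
E = [random_projector(D) for _ in range(M)]

H = np.zeros((D, D), dtype=complex)             # sigma_0 = I/D ("maximally stupid")
sigma = normalized_exp(H)
updates = 0
for _ in range(1000):                           # safety cap; MW needs few updates
    gaps = [np.trace(Ei @ (rho - sigma)).real for Ei in E]
    i = int(np.argmax(np.abs(gaps)))
    if abs(gaps[i]) <= eps:                     # every shadow is already eps-accurate
        break
    H += eta * np.sign(gaps[i]) * E[i]          # matrix multiplicative-weights update
    sigma = normalized_exp(H)
    updates += 1

print("updates needed:", updates, "(theory: O(log D / eps^2) updates suffice)")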

12. New Result: We can do shadow tomography using only poly(log M, log D, 1/ε) copies of ρ, via a procedure that's also online and gentle (and simpler than my previous one, and probably more amenable to experimental implementation).
How it works: we take a known procedure from DP, Private Multiplicative Weights (Hardt-Rothblum 2010), which decides whether to update our current hypothesis on each query Ei using a threshold measurement with Laplace noise added.
We give a quantum analogue, QPMW. Since each iteration of QPMW is DP (and applies separate POVMs to each register), it's also gentle on product states, so we can safely apply all M of the iterations in sequence.
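
For intuition, here is a schematic classical skeleton of the Private Multiplicative Weights idea being quantized (mine; the parameters, the simplified update rule, and the toy counting queries are illustrative, not Hardt-Rothblum's actual calibration): answer each query from the current hypothesis unless a noisy threshold test says the hypothesis is off, in which case nudge the hypothesis in the indicated direction.

import numpy as np

rng = np.random.default_rng(3)

def private_mw(true_answer, queries, threshold=0.1, noise_scale=0.02, eta=0.1):
    """true_answer(q): exact value of counting query q on the sensitive data
    (a probability vector).  The public hypothesis h is another probability
    vector, and q @ h is the hypothesis's answer to query q."""
    dim = len(queries[0])
    h = np.ones(dim) / dim                        # start with the uniform hypothesis
    answers, updates = [], 0
    for q in queries:
        hyp = float(q @ h)
        gap = true_answer(q) - hyp + rng.laplace(scale=noise_scale)
        if abs(gap) <= threshold:                 # noisy threshold test: hypothesis is fine
            answers.append(hyp)                   # answer without touching the data further
        else:                                     # update step (this is what costs privacy)
            h *= np.exp(eta * np.sign(gap) * q)   # multiplicative-weights update
            h /= h.sum()
            answers.append(float(q @ h))
            updates += 1
    return answers, updates

# Toy run: the sensitive "database" is a distribution over 8 items.
p = rng.dirichlet(np.ones(8))
queries = [rng.integers(0, 2, size=8).astype(float) for _ in range(200)]
answers, updates = private_mw(lambda q: float(q @ p), queries)
errors = [abs(a - float(q @ p)) for a, q in zip(answers, queries)]
print("updates:", updates, " max error:", max(errors))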

13. Open Problems
Prove a fully general DP ⇒ gentleness theorem for product states? Or even a near-triviality ⇒ gentleness theorem for product states?
In shadow tomography, does the number of copies of ρ need to have any dependence on log D? Best lower bound we can show: k = Ω(min{D, log M} / ε²). But for gentle or online shadow tomography, we can use known lower bounds from DP to show that some dependence on log D is needed.
Composition of quantum DP algorithms?
Use quantum to say something new about classical DP?
