BosonSampling Scott Aaronson (MIT) Talk at SITP, February 21, 2014
The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine. Shor's Theorem: Quantum Simulation has no efficient classical algorithm, unless Factoring does also.
So the ECT is false … what more evidence could anyone want? • Building a QC able to factor large numbers is damn hard! After 16 years, no fundamental obstacle has been found, but who knows? • Can't we "meet the physicists halfway," and show computational hardness for quantum systems closer to what they actually work with now? • Factoring might have a fast classical algorithm! At any rate, it's an extremely "special" problem • Wouldn't it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
BosonSampling (A.-Arkhipov 2011) A rudimentary type of quantum computing, involving only non-interacting photons. Classical counterpart: Galton's Board. Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip.
In general, we consider a network of beamsplitters, with n input "modes" (locations) and m>>n output modes. n identical photons enter, one per input mode. Assume for simplicity they all leave in different modes—there are C(m, n) possibilities. The beamsplitter network defines a column-orthonormal matrix A ∈ C^(m×n), such that Pr[S] = |Per(A_S)|², where A_S is the n×n submatrix of A corresponding to S, and Per is the matrix permanent. For simplicity, I'm ignoring outputs with ≥2 photons per mode.
Example: For the Hong-Ou-Mandel experiment, Pr[the two photons exit in different modes] = |Per([1/√2, 1/√2; 1/√2, −1/√2])|² = |−1/2 + 1/2|² = 0. In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the "tiny residue" left over? Answer: #P-complete, even for constant-factor approximation (Contrast with nonnegative permanents!)
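To make the cancellation concrete, here's a minimal Python sketch (mine, not from the talk) that evaluates an n×n permanent exactly with Ryser's inclusion-exclusion formula, in O(2ⁿ·poly(n)) time, and checks the Hong-Ou-Mandel dip for the 50/50 beamsplitter matrix:

```python
import itertools

import numpy as np

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * poly(n))."""
    n = a.shape[0]
    total = 0
    for subset in itertools.product([0, 1], repeat=n):
        if not any(subset):
            continue  # the empty column subset contributes nothing
        row_sums = a @ np.array(subset)  # per row: sum over the chosen columns
        total += (-1) ** sum(subset) * np.prod(row_sums)
    return (-1) ** n * total

# Hong-Ou-Mandel: the permanent of the 2x2 50/50 beamsplitter unitary vanishes,
# so the two photons never exit in different modes (the HOM dip).
bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(abs(permanent(bs)) ** 2)  # ~0.0
```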
So, Can We Use Quantum Optics to Solve a #P-Complete Problem? That sounds way too good to be true… Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we'd generally need to repeat the optical experiment exponentially many times.
Better idea: Given A ∈ C^(m×n) as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from—the one defined by Pr[S] = |Per(A_S)|². Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP. Upshot: Compared to (say) Shor's factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard. Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it's possible to estimate Per(X), with high probability over a Gaussian random matrix X. We conjecture that the above problem is already #P-complete. If it is, then a fast classical algorithm for approximate BosonSampling would already have the consequence that P^#P = BPP^NP.
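For intuition about what the problem asks, here's a brute-force classical sampler for D_A (a sketch of mine, reusing the `permanent` helper above): it enumerates all C(m, n) collision-free outputs and computes each permanent, so it takes exponential time, which is exactly what the theorem suggests you can't avoid.

```python
import itertools

import numpy as np

def boson_sample(a, rng=None):
    """Brute-force sample from D_A, where Pr[S] ~ |Per(A_S)|^2 over the
    C(m, n) collision-free outputs S. Exponential time; for intuition only.
    Assumes the permanent() helper from the sketch above."""
    rng = rng or np.random.default_rng()
    m, n = a.shape
    outputs = list(itertools.combinations(range(m), n))
    probs = np.array([abs(permanent(a[list(s), :])) ** 2 for s in outputs])
    probs /= probs.sum()  # renormalize: outputs with >=2 photons/mode ignored
    return outputs[rng.choice(len(outputs), p=probs)]
```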
Related Work Valiant 2001, Terhal-DiVincenzo 2002, "folklore": A QC built of noninteracting fermions can be efficiently simulated by a classical computer. Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC. Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(A) for nonnegative A. Gurvits 2002: Fast classical randomized algorithm to approximate n-photon amplitudes to ±ε additive error.
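A sketch of that last algorithm, as I understand it (the function name and sample count are mine; it uses Glynn's formula for the permanent): each random sample below is an unbiased estimator of Per(A), and for a sub-unitary A each sample has absolute value at most 1 by AM-GM, so averaging O(1/ε²) samples gives a ±ε additive approximation with high probability.

```python
import numpy as np

def gurvits_estimate(a, num_samples=100_000, rng=None):
    """Randomized additive-error estimate of Per(a) (Gurvits 2002 / Glynn).
    Each sample is unbiased; for sub-unitary a each sample lies in the unit
    disk, so the mean of O(1/eps^2) samples is within +/- eps whp."""
    rng = rng or np.random.default_rng()
    n = a.shape[0]
    total = 0j
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)  # uniform sign vector
        total += np.prod(x) * np.prod(a @ x)
    return total / num_samples
```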
BosonSampling Experiments Last year, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3- and 4-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 and 4×4 permanents. # of experiments ≥ # of photons!
Obvious Challenges for Scaling Up: • Reliable single-photon sources (optical multiplexing?) • Minimizing losses • Getting high probability of n-photon coincidence. Goal (in our view): Scale to 10-30 photons. Don't want to scale much beyond that—both because (1) you probably can't without fault-tolerance, and (2) a classical computer probably couldn't even verify the results! Theoretical Challenge: Argue that, even with photon losses and messier initial states, you're still solving a classically-intractable sampling problem.
Scattershot BosonSampling Wonderful new idea, proposed by several experimental groups, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like Spontaneous Parametric Downconversion (SPDC) crystals The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeeded, and use those to define your BosonSampling instance! Complexity analysis goes through essentially without change
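A toy sketch of the idea (names and parameters are mine, not from the talk): each heralded SPDC source fires independently with some small probability, the heralds reveal which input modes contain photons, and the corresponding columns of the interferometer's unitary form the matrix A for that run's instance.

```python
import numpy as np

def scattershot_instance(u, fire_prob=0.1, rng=None):
    """Scattershot sketch: u is the m x m interferometer unitary, with one
    heralded SPDC source per input mode. Returns which sources fired and
    the m x n matrix A (columns of u at the fired modes) that defines this
    run's BosonSampling instance."""
    rng = rng or np.random.default_rng()
    m = u.shape[0]
    fired = np.flatnonzero(rng.random(m) < fire_prob)  # heralded input modes
    return fired, u[:, fired]                          # n = len(fired) photons
```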
Verifying BosonSampling Devices • As mentioned before, even verifying the output of a claimed BosonSampling device would presumably take exp(n) time, in general! • Recently underscored by [Gogolin et al. 2013] (alongside specious claims…) • Our responses: • Who cares? Take n=30 • If you do care, we can show how to distinguish the output of a BosonSampling device from all sorts of specific “null hypotheses”
Theorem (A. 2013): Let A ∈ C^(m×n) be Haar-random, where m>>n. Then there's a classical polytime algorithm C(A) that distinguishes the BosonSampling distribution D_A from the uniform distribution U (whp over A, and using only O(1) samples). Strategy: Let A_S be the n×n submatrix of A corresponding to output S. Let P be the product of squared 2-norms of A_S's rows. If P > E[P], then guess S was drawn from D_A; otherwise guess S was drawn from U. Recent realization: You can also use the number of multi-photon collisions to distinguish D_A from D_A', the same distribution but with classical distinguishable particles. [Plot: P under the uniform distribution, a lognormal random variable, vs. P under a BosonSampling distribution]
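A sketch of the distinguishing test (my code and naming): compute the statistic P for each observed output and compare it against a threshold near E[P] under the uniform distribution, which could for instance be estimated empirically from uniformly random outputs.

```python
import numpy as np

def row_norm_stat(a, s):
    """P = product of squared 2-norms of the rows of A_S, for output S."""
    a_s = a[list(s), :]
    return np.prod(np.sum(np.abs(a_s) ** 2, axis=1))

def classify(a, samples, threshold):
    """Guess D_A when P > threshold (a value near E[P] under the uniform
    distribution, e.g. estimated empirically), else guess U."""
    return ["D_A" if row_norm_stat(a, s) > threshold else "U" for s in samples]
```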
Turning the Logic Around [A., Proc. Roy. Soc. 2011] Arkhipov and I used the #P-completeness of the permanent—a great discovery of CS theory from the 1970s—to argue that bosonic sampling is hard for a classical computer. Later, I realized that one can also go in the reverse direction! Using the power of postselected linear-optical quantum computing—shown by [Knill-Laflamme-Milburn 2001]—and the connection between LOQC and the permanent, I gave a new, arguably-simpler proof that the permanent is #P-complete.
Open Problems Prove that approximating the permanent of an i.i.d. Gaussian matrix is #P-hard! Can our linear-optics model solve a classically-intractable problem for which a classical computer can efficiently verify the answer? Similar hardness results for other natural quantum systems (besides linear optics)? Bremner, Jozsa, Shepherd 2010: Another system for which exact classical simulation would collapse PH