BosonSampling
Scott Aaronson (MIT)
Based on joint work with Alex Arkhipov
November 20, 2014 | arXiv:1011.3245
The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine.
Shor's Theorem: Quantum Simulation has no efficient classical algorithm, unless Factoring does also.
So the ECT is false … what more evidence could anyone want?
• Building a QC able to factor large numbers is damn hard! After 20 years, no fundamental obstacle has been found, but who knows?
• Can't we "meet the physicists halfway," and show computational hardness for quantum systems closer to what they actually work with now?
• Factoring might have a fast classical algorithm! At any rate, it's an extremely "special" problem
• Wouldn't it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving only non-interacting photons.
Classical counterpart: Galton's Board. Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip.
In general, we consider a network of beamsplitters, with n input "modes" (locations) and m >> n output modes. n identical photons enter, one per input mode. Assume for simplicity they all leave in different modes—there are (m choose n) possibilities. The beamsplitter network defines a column-orthonormal matrix A ∈ ℂ^{m×n}, such that

Pr[photons exit in output modes S] = |Per(A_S)|²,

where A_S is the n×n submatrix of A corresponding to S, and Per is the matrix permanent: Per(X) = Σ_{σ∈S_n} Π_{i=1}^n x_{i,σ(i)}.
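As an illustration of the formula above, here is a minimal Python sketch (mine, not from the talk): it builds a Haar-random beamsplitter network and computes Pr[S] = |Per(A_S)|² for one outcome, using a naive O(n!·n) permanent.

```python
import itertools
import numpy as np

def permanent(X):
    """Naive permanent: sum over all n! permutations (fine for small n)."""
    n = X.shape[0]
    return sum(np.prod([X[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

m, n = 8, 3                      # m >> n output modes, n photons
rng = np.random.default_rng(0)

# Haar-random m x m unitary via QR decomposition of a complex Gaussian matrix
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

A = U[:, :n]                     # column-orthonormal A in C^{m x n}
S = (0, 2, 5)                    # one possible set of occupied output modes
A_S = A[list(S), :]              # n x n submatrix for outcome S
print("Pr[S] =", abs(permanent(A_S)) ** 2)
```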
Example: For the Hong-Ou-Mandel experiment,

Pr[the two photons exit in different modes] = |Per( [1/√2, 1/√2; 1/√2, −1/√2] )|² = |1/2 − 1/2|² = 0.

In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the "tiny residue" left over? Answer (Valiant 1979): #P-complete (meaning: as hard as any combinatorial counting problem). Contrast with nonnegative permanents!
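A quick numerical check of the Hong-Ou-Mandel cancellation (a sketch, not from the talk): the 2×2 permanent of the 50/50 beamsplitter matrix vanishes, even though each of its two terms has magnitude 1/2.

```python
import numpy as np

B = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # 50/50 beamsplitter unitary

# 2x2 permanent: Per(B) = B[0,0]*B[1,1] + B[0,1]*B[1,0]
per = B[0, 0] * B[1, 1] + B[0, 1] * B[1, 0]
print(abs(per) ** 2)                   # ~0: the HOM dip
```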
So, Can We Use Quantum Optics to Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we'd generally need to repeat the optical experiment exponentially many times.
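To see this decay empirically, here is a small Monte Carlo sketch (my illustration, not from the talk): it averages |Per|² over the top-left n×n corner of Haar-random unitaries, for growing n.

```python
import itertools
import numpy as np

def permanent(X):
    n = X.shape[0]
    return sum(np.prod([X[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(1)

def haar_unitary(m):
    Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    Q, R = np.linalg.qr(Z)
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

# |Per|^2 of an n x n submatrix of a Haar-random unitary shrinks rapidly with n
for n in range(2, 7):
    m = n * n                     # stay in the m >> n regime
    vals = [abs(permanent(haar_unitary(m)[:n, :n])) ** 2 for _ in range(50)]
    print(f"n={n}: mean |Per|^2 ~ {np.mean(vals):.2e}")
```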
Better idea: Given A ∈ ℂ^{m×n} as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from—the one defined by Pr[S] = |Per(A_S)|².

Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP.

Upshot: Compared to (say) Shor's factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard.

Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it's possible to estimate |Per(X)|², with high probability over a Gaussian random matrix X ~ N(0,1)_ℂ^{n×n}.

We conjecture that the above problem is already #P-complete. If it is, then even a fast classical algorithm for approximate BosonSampling would have the consequence that P^#P = BPP^NP.
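To make the sampling problem concrete, here is a brute-force classical reference sampler (a sketch, not from the talk): it enumerates all (m choose n) collision-free outcomes, computes Pr[S] = |Per(A_S)|² for each via Ryser's formula, and samples from the resulting distribution. Its running time is exponential in n—which is exactly the point.

```python
import itertools
import numpy as np

def ryser_permanent(X):
    """Ryser's formula: exact permanent in O(2^n * n^2) time."""
    n = X.shape[0]
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):
        cols = [j for j in range(n) if subset[j]]
        if not cols:
            continue
        rowsums = X[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(rowsums)
    return (-1) ** n * total

def boson_sample(A, num_samples, rng):
    """Exact (exponential-time) sampler over collision-free outcomes."""
    m, n = A.shape
    outcomes = list(itertools.combinations(range(m), n))
    probs = np.array([abs(ryser_permanent(A[list(S), :])) ** 2
                      for S in outcomes])
    probs /= probs.sum()   # renormalize: collision outcomes are ignored here
    idx = rng.choice(len(outcomes), size=num_samples, p=probs)
    return [outcomes[i] for i in idx]

rng = np.random.default_rng(2)
m, n = 7, 3
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
A = (Q @ np.diag(np.diag(R) / np.abs(np.diag(R))))[:, :n]
print(boson_sample(A, 5, rng))
```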
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, "folklore": A QC built of noninteracting fermions can be efficiently simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(X) for nonnegative X
Gurvits 2002: O(n²/ε²) classical randomized algorithm to approximate an n-photon amplitude to within ±ε additive error (also, to compute k-mode marginal distributions in n^O(k) time)
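Gurvits's algorithm fits in a few lines; the sketch below (my reconstruction, not from the talk) uses the Glynn-type estimator ∏_i x_i · ∏_j ⟨x, a_j⟩ with x uniform in {±1}ⁿ, which is an unbiased estimator of Per(A). When ‖A‖ ≤ 1 each sample has modulus at most 1, so averaging O(1/ε²) samples, each costing O(n²), gives Per(A) to within ±ε additive error.

```python
import itertools
import numpy as np

def gurvits_estimate(A, num_samples, rng):
    """Glynn/Gurvits estimator: unbiased for Per(A); each sample is O(n^2).
    For ||A|| <= 1, each sample has modulus <= 1, so O(1/eps^2) samples
    suffice for +-eps additive error."""
    n = A.shape[0]
    est = 0.0
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)     # uniform random sign vector
        est += np.prod(x) * np.prod(x @ A)      # prod_i x_i * prod_j <x, a_j>
    return est / num_samples

rng = np.random.default_rng(3)
n = 4
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q, _ = np.linalg.qr(Z)          # a unitary test matrix, so ||Q|| = 1
print("estimate:", gurvits_estimate(Q, 200_000, rng))

exact = sum(np.prod([Q[i, s[i]] for i in range(n)])
            for s in itertools.permutations(range(n)))
print("exact Per:", exact)
```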
OK, so why is it hard to sample the distribution over photon numbers classically?

Given any matrix X ∈ ℂ^{n×n}, we can construct an m×m unitary U (where m ≤ 2n) that contains εX as its top-left n×n block, for a suitable ε > 0.

Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability of observing |I⟩ again is

p := |Per(εX)|² = ε^{2n} |Per(X)|².
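Here is one standard way to realize such an embedding numerically (a sketch under my own choice of construction; the talk doesn't specify one): scale X so that ‖εX‖ ≤ 1, then complete it to a 2n×2n unitary using matrix square roots.

```python
import numpy as np
from scipy.linalg import sqrtm

def dilate(X, eps):
    """Embed eps*X as the top-left block of a 2n x 2n unitary.
    Requires ||eps*X|| <= 1 (operator norm)."""
    n = X.shape[0]
    Y = eps * X
    I = np.eye(n)
    B = sqrtm(I - Y @ Y.conj().T)
    C = sqrtm(I - Y.conj().T @ Y)
    return np.block([[Y, B],
                     [C, -Y.conj().T]])

rng = np.random.default_rng(4)
n = 3
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
eps = 0.9 / np.linalg.norm(X, 2)      # ensure ||eps*X|| < 1
U = dilate(X, eps)
print(np.allclose(U @ U.conj().T, np.eye(2 * n)))   # True: U is unitary
```

Since the top-left n×n block of U is exactly εX, the formula Pr[S] = |Per(A_S)|² gives p = |Per(εX)|² = ε^{2n}|Per(X)|² for the all-ones outcome.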
Claim 1: p is #P-complete to estimate (up to a constant factor). This follows from Valiant's famous result.

Claim 2: Suppose we had a fast classical algorithm for BosonSampling. Then we could estimate p in BPP^NP—that is, using a randomized algorithm with an oracle for NP-complete problems. This follows from a classical result of Goldwasser-Sipser.

Conclusion: Suppose we had a fast classical algorithm for BosonSampling. Then P^#P = BPP^NP.
Unfortunately, this argument hinged on the hardness of estimating a single, exponentially-small probability p. As such, it's not robust to realistic experimental error.

Showing that a noisy BosonSampling device still samples a classically-intractable distribution is a much more complicated problem. As mentioned, we can do it, but only under an additional assumption (that estimating Gaussian permanents is #P-complete).

A first step toward proving that conjecture would simply be to understand the distribution of |Per(X)|² for Gaussian X. Is it (as we conjecture) approximately lognormal?
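One can probe the lognormality conjecture empirically (a small sketch, not from the talk): draw i.i.d. complex Gaussian matrices, compute log |Per(X)|², and check whether the sample looks approximately normal.

```python
import itertools
import numpy as np
from scipy import stats

def permanent(X):
    n = X.shape[0]
    return sum(np.prod([X[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(5)
n, trials = 6, 500
logs = []
for _ in range(trials):
    X = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    logs.append(np.log(abs(permanent(X)) ** 2))

# If |Per(X)|^2 is approximately lognormal, then log|Per(X)|^2 is
# approximately normal: skewness and excess kurtosis should be near 0.
logs = np.array(logs)
print("skewness:", stats.skew(logs), " excess kurtosis:", stats.kurtosis(logs))
```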
BosonSampling Experiments
In 2012, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents.
# of experiments > # of photons!
Obvious Challenges for Scaling Up:
• Reliable single-photon sources (optical multiplexing?)
• Minimizing losses
• Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons. Don't want to scale much beyond that—both because
(1) you probably can't without fault-tolerance, and
(2) a classical computer probably couldn't even verify the results!
Scattershot BosonSampling Exciting recent idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! Complexity analysis turns out to go through essentially without change
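A toy simulation of the scattershot idea (my own illustration; the source count and firing probability are just the slide's example numbers): each heralded source fires independently, and whichever input modes receive a photon select the columns of this shot's BosonSampling matrix.

```python
import numpy as np

rng = np.random.default_rng(6)
m, p_fire = 100, 0.1            # 100 heralded sources, ~10 fire on average

# Haar-random m x m interferometer
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

# Herald which sources actually produced a photon on this shot
fired = np.nonzero(rng.random(m) < p_fire)[0]
n = len(fired)

# The fired input modes select n columns of U: this shot's BosonSampling
# instance, an m x n column-orthonormal matrix A
A = U[:, fired]
print(f"{n} photons this shot; instance A has shape {A.shape}")
```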
Polynomial-Time Verification of BosonSampling Devices?

Idea 1: Let A_S be the n×n submatrix of A corresponding to output S. Let P_S be the product of squared 2-norms of A_S's rows. Check whether the observed distribution over P_S is consistent with BosonSampling. [Figure: distribution of P_S under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution]

Idea 2: Let the scattering matrix U be a discrete Fourier transform. Then because of cancellations in the permanent, a ~1/n fraction of outcomes S should have probability 0. Check that these never occur.
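A sketch of the Idea 1 statistic (my own illustration): compute P_S for samples drawn uniformly over the collision-free outcomes, which traces out the roughly lognormal baseline the slide mentions; a genuine BosonSampling device should skew toward larger P_S.

```python
import numpy as np

def row_norm_statistic(A, S):
    """P_S: product of squared 2-norms of the rows of A_S."""
    return np.prod(np.sum(np.abs(A[list(S), :]) ** 2, axis=1))

rng = np.random.default_rng(7)
m, n = 30, 5
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
A = (Q @ np.diag(np.diag(R) / np.abs(np.diag(R))))[:, :n]

# P_S under the uniform distribution over outcomes; compare the same
# statistic on a device's claimed samples against this baseline
uniform_samples = [rng.choice(m, size=n, replace=False) for _ in range(5000)]
log_P = np.array([np.log(row_norm_statistic(A, S)) for S in uniform_samples])
print("mean log P_S:", log_P.mean(), " std:", log_P.std())
```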
Using Quantum Optics to Prove that the Permanent is #P-Complete [A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes
Open Problems
• Prove that Gaussian permanent approximation is #P-hard (first step: understand the distribution of Gaussian permanents)
• Can the BosonSampling model solve classically-hard decision problems? With verifiable answers?
• Can one efficiently sample a distribution that can't be efficiently distinguished from BosonSampling?
• Similar hardness results for other natural quantum systems (besides linear optics)? Bremner, Jozsa, Shepherd 2010: Another system for which exact classical simulation would collapse the polynomial hierarchy