A joint work by Scott Aaronson and Alex Arkhipov provides evidence that quantum mechanics cannot be efficiently simulated on classical computers, highlighting the potential capabilities of quantum computers. The research focuses on the hardness of quantum simulation and its connection to the collapse of the polynomial hierarchy. The findings suggest that linear-optics circuits with single-photon inputs and nonadaptive multimode photon-detection measurements can sample from distributions that no classical algorithm, even one with an oracle for the entire polynomial hierarchy, can efficiently sample, unless that hierarchy collapses. The talk also surveys the relevant complexity classes and the implications for simulating quantum computers, with consequences for the development of quantum computers and their potential applications.
New Evidence That Quantum Mechanics Is Hard to Simulate on Classical Computers
Scott Aaronson (MIT)
Joint work with Alex Arkhipov
Computer Scientist / Physicist Nonaggression Pact
You tolerate these complexity classes: P, NP, BPP, BQP, #P, PH
And I don’t inflict these on you: AM, AWPP, BQP/qpoly, MA, P/poly, PSPACE, QCMA, QIP, QMA, SZK, YQP
In 1994, something big happened in the foundations of computer science, whose meaning is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: Even for reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for one of the central computational problems of modern science: Quantum Simulation
[Figure: Use of DoE supercomputers by area (from a talk by Alán Aspuru-Guzik)]
Shor’s Theorem: Quantum Simulation is not in probabilistic polynomial time, unless Factoring is also
Today: A new kind of hardness result for simulating quantum mechanics
Advantages:
• Based on a more “generic” complexity assumption than the hardness of Factoring
• Gives evidence that QCs have capabilities outside the entire polynomial hierarchy
• Only involves linear optics! (With single-photon Fock-state inputs, and nonadaptive multimode photon-detection measurements)
Disadvantages:
• Applies to relational problems (problems with many possible valid outputs) or sampling problems, not to decision problems
• Harder to convince a skeptic that your QC is indeed solving the relevant hard problem
• Less relevant for the NSA
Before We Go Further, A Bestiary of Complexity Classes…
[Figure: inclusion diagram of the complexity classes P, BPP, NP, PH, BQP, and P^#P, annotated with example problems: Factoring and 3SAT for NP, Counting and the Permanent for P^#P]
How complexity theorists say “such-and-such is damn unlikely”: “If such-and-such is true, then PH collapses to a finite level”
Example of a PH problem: “For all n-bit strings x, does there exist an n-bit string y such that for all n-bit strings z, φ(x,y,z) holds?”
Just as they believe P ≠ NP, complexity theorists believe that PH is infinite
So if you can show “such-and-such is true ⇒ PH collapses to a finite level,” it’s damn good evidence that such-and-such is false
Our Results
• Suppose the output distribution of any linear-optics circuit can be efficiently sampled classically (e.g., by Monte Carlo). Then the polynomial hierarchy collapses (indeed P^#P = BPP^NP).
• Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, the polynomial hierarchy still collapses.
• Suppose the output distribution of any linear-optics circuit can even be approximately sampled efficiently classically. Then in BPP^NP, one can nontrivially approximate the permanent of a matrix of independent N(0,1) Gaussian entries (with high probability over the choice of matrix).
• “Permanent-of-Gaussians Conjecture” (PGC): The above problem is #P-complete (i.e., as hard as the worst-case Permanent)
If the PGC is true, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from, unless the polynomial hierarchy collapses
Related Work
Knill, Laflamme, Milburn 2001: Linear optics with adaptive measurements yields universal QC
Valiant 2002, Terhal–DiVincenzo 2002: Noninteracting fermions can be simulated in P
Aaronson 2004: Quantum computers with postselection on unlikely measurement outcomes can solve hard counting problems (PostBQP = PP)
Shepherd–Bremner 2009: “Instantaneous quantum computing” can solve sampling problems that might be hard classically
Bremner, Jozsa, Shepherd 2010: Efficient simulation of instantaneous quantum computing would collapse PH
Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS
Their transition amplitudes are given respectively by the permanent and the determinant:
\[
\operatorname{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)},
\qquad
\operatorname{Det}(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}
\]
All I can say is, the bosons got the harder job
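To make the contrast concrete, here is a minimal Python sketch (ours, not the talk’s; function names are illustrative) evaluating both sums directly. The determinant also has an O(n³) algorithm via Gaussian elimination, while no polynomial-time algorithm for the permanent is known:

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    """Per(A): sum over all permutations of prod_i A[i, sigma(i)], no signs.
    Brute force over n! permutations; no poly-time algorithm is known."""
    n = a.shape[0]
    return sum(
        np.prod([a[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

def determinant(a: np.ndarray) -> complex:
    """Det(A): the same sum weighted by permutation signs; computable in
    O(n^3) time via Gaussian elimination (here, delegated to numpy)."""
    return np.linalg.det(a)

a = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
print("Per(A) =", permanent(a))
print("Det(A) =", determinant(a))
```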
Linear Optics for Dummies (or computer scientists)
Computational basis states have the form |S⟩ = |s₁,…,s_m⟩, where s₁,…,s_m are nonnegative integers such that s₁+…+s_m = n
n = # of identical photons, m = # of modes; for us, m > n
Starting from a fixed initial state, say |I⟩ = |1,…,1,0,…,0⟩, you get to choose any m×m mode-mixing unitary U
U induces a unitary φ(U) on n-photon states, defined by
\[
\langle S|\varphi(U)|T\rangle=\frac{\operatorname{Per}(U_{S,T})}{\sqrt{s_1!\cdots s_m!\,t_1!\cdots t_m!}}
\]
Here U_{S,T} is an n×n matrix obtained by taking s_i copies of the i-th row of U and t_j copies of the j-th column, for all i,j
Then you get to measure φ(U)|I⟩ in the computational basis
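A short sketch of the same formula in code, reusing the brute-force permanent from the previous block (U_ST and amplitude are our illustrative names):

```python
import itertools, math
import numpy as np

def permanent(a):
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def U_ST(u, s, t):
    """Build U_{S,T}: take s_i copies of row i and t_j copies of column j."""
    rows = [i for i, si in enumerate(s) for _ in range(si)]
    cols = [j for j, tj in enumerate(t) for _ in range(tj)]
    return u[np.ix_(rows, cols)]

def amplitude(u, s, t):
    """<S|phi(U)|T> = Per(U_{S,T}) / sqrt(s_1!...s_m! t_1!...t_m!)."""
    norm = math.sqrt(math.prod(math.factorial(x) for x in s + t))
    return permanent(U_ST(u, s, t)) / norm

# Example: a 50/50 beamsplitter acting on two single photons |1,1>:
bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude(bs, [1, 1], [1, 1]))   # ~0: Hong-Ou-Mandel suppression
```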
Upper Bounds on the Power of Linear Optics
Theorem (Feynman 1982, Abrams–Lloyd 1996): Linear-optics computation can be simulated in BQP
Proof Idea: Decompose the m×m unitary U into a product of O(m²) elementary “linear-optics gates” (beamsplitters and phaseshifters), then simulate each gate using polylog(n) standard qubit gates
Theorem (Bartlett–Sanders et al.): If the inputs are Gaussian states and the measurements are homodyne, then linear-optics computation can be simulated in P
Theorem (Gurvits): There exist classical algorithms to approximate ⟨S|φ(U)|T⟩ to additive error ±ε in randomized poly(n, 1/ε) time, and to compute the marginal distribution on photon numbers in k modes in n^O(k) time
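Gurvits’ additive-error algorithm is simple enough to sketch. The version below uses the Glynn-style identity Per(A) = E_x[∏ᵢ xᵢ · ∏ⱼ (xᵀA)ⱼ] over uniform x ∈ {−1,1}ⁿ; since each sample is bounded in magnitude by ‖A‖ⁿ ≤ 1 when A is a submatrix of a unitary, O(1/ε²) samples suffice for additive error ±ε. A sketch under those assumptions, not the talk’s own code:

```python
import numpy as np

def gurvits_estimate(a: np.ndarray, num_samples: int = 100_000) -> complex:
    """Estimate Per(a) as the empirical mean of prod_i(x_i) * prod_j((x^T a)_j)
    over uniform x in {-1,1}^n (the Glynn/Gurvits estimator).
    Each sample has magnitude at most ||a||^n, so for submatrices of a
    unitary the mean converges to within +/- eps in O(1/eps^2) samples."""
    n = a.shape[0]
    total = 0j
    for _ in range(num_samples):
        x = np.random.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(x @ a)
    return total / num_samples
```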
By contrast, exactly sampling the distribution over all n photons is extremely hard! Here’s why…
Given any matrix A ∈ ℂ^{n×n}, we can construct an m×m mode-mixing unitary U (where m = 2n) that contains A/‖A‖ as its top-left n×n block.
Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply φ(U), and measure. Then the probability of observing |I⟩ again is
\[
p := |\langle I|\varphi(U)|I\rangle|^{2} = \frac{|\operatorname{Per}(A)|^{2}}{\|A\|^{2n}}
\]
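One standard way to realize such an embedding is a unitary dilation; the sketch below is our illustration, not necessarily the talk’s exact construction:

```python
import numpy as np
from scipy.linalg import sqrtm

def dilate_to_unitary(a: np.ndarray) -> np.ndarray:
    """Embed A = a/||a|| as the top-left block of the 2n x 2n unitary
    U = [[A, sqrt(I - A A*)], [sqrt(I - A* A), -A*]]."""
    A = a / np.linalg.norm(a, 2)          # scale largest singular value to 1
    n = A.shape[0]
    I = np.eye(n)
    U = np.block([[A,                         sqrtm(I - A @ A.conj().T)],
                  [sqrtm(I - A.conj().T @ A), -A.conj().T]])
    assert np.allclose(U @ U.conj().T, np.eye(2 * n), atol=1e-6)
    return U
```

Since Per(A/c) = Per(A)/cⁿ, the rescaling changes the permanent only by the known factor ‖A‖ⁿ, which is why p determines |Per(A)|² up to a computable constant.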
Claim 1: p is #P-complete to estimate (up to a constant factor)
Idea: Valiant proved that the Permanent is #P-complete. Can use known (classical) reductions to go from a multiplicative approximation of |Per(A)|² to Per(A) itself.
Claim 2: Suppose we had a fast classical algorithm for linear-optics sampling. Then we could estimate p in BPP^NP
Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate p = Pr_r[M(r) outputs |I⟩]
Conclusion: Suppose we had a fast classical algorithm for linear-optics sampling. Then P^#P = BPP^NP.
High-Level Idea
Estimating a sum of exponentially many positive or negative numbers: #P-complete
Estimating a sum of exponentially many nonnegative numbers: still hard, but known to be in BPP^NP ⊆ PH
If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent, thereby placing #P in PH and collapsing PH
So why aren’t we done? Because real quantum experiments are subject to noise
Would an efficient classical algorithm that sampled from a noisy distribution still collapse the polynomial hierarchy?
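In symbols, the dichotomy (our paraphrase, in the notation above):

```latex
% #P-complete: exponentially many terms of arbitrary sign/phase
\operatorname{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}
% In BPP^NP (Stockmeyer approximate counting): exponentially many
% nonnegative terms -- the probability that a classical sampler M,
% on random bits r, outputs the basis state S
\Pr_r\left[\,M(r) = S\,\right] = \frac{\#\{r : M(r) = S\}}{2^{|r|}}
```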
Main Result: Take a system of n identical photons with m = O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U.
Let D be the distribution that results from measuring the photons. Suppose there’s a fast classical algorithm that takes U as input, and samples any distribution even 1/poly(n)-close to D in variation distance. Then in BPP^NP, one can estimate the permanent of a matrix A of i.i.d. N(0,1) complex Gaussians, to additive error ±√(n!)/poly(n), with high probability over A.
Permanent-of-Gaussians Conjecture (PGC): This problem is #P-complete
PGC ⇒ Hardness of Linear-Optics Sampling
• Idea: Given a Gaussian random matrix A, we’ll “smuggle” A into the unitary transition matrix U on m = O(n²) modes, in such a way that ⟨S|φ(U)|I⟩ = Per(A) for some basis state |S⟩
• Useful fact we rely on: given a Haar-random m×m unitary matrix, an n×n submatrix looks approximately Gaussian
• Then the classical sampler has “no way of knowing” which submatrix of U we care about, so even if it has 1/poly(n) error, with high probability it will return |S⟩ with probability ≈ |Per(A)|²
• Then, just like before, we can use approximate counting to estimate Pr[|S⟩] ≈ |Per(A)|² in BPP^NP, and thereby solve a #P-complete problem
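The “useful fact” can be checked numerically. Below is a sketch (standard QR-based Haar sampling; names are ours) showing that a scaled n×n corner of a Haar-random m×m unitary has entry statistics close to i.i.d. complex N(0,1):

```python
import numpy as np

def haar_unitary(m: int) -> np.ndarray:
    """Sample an m x m Haar-random unitary: QR-decompose a complex Gaussian
    matrix, then fix the phases of R's diagonal (Mezzadri's recipe)."""
    z = (np.random.randn(m, m) + 1j * np.random.randn(m, m)) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

m, n = 900, 30                     # m = O(n^2) modes, n photons
u = haar_unitary(m)
sub = np.sqrt(m) * u[:n, :n]       # scaled n x n submatrix
# For i.i.d. complex N(0,1) entries, the real and imaginary parts each
# have standard deviation 1/sqrt(2) ~ 0.707:
print(np.std(sub.real), np.std(sub.imag))
```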
Problem: Bosons like to pile on top of each other!
Call a basis state S = (s₁,…,s_m) good if every s_i is 0 or 1 (i.e., there are no collisions between photons), and bad otherwise
If bad basis states dominated, then our sampling algorithm might “work,” without ever having to solve a hard Permanent instance
Furthermore, the “bosonic birthday paradox” is even worse than the classical one! With 2 photons in 2 modes, the probability that both land in the same mode is 2/3, rather than ½ as with classical particles
Fortunately, we show that with n bosons and m ≥ kn² modes (for a suitable constant k), the probability of a collision is still at most (say) ½
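A quick sanity check on these numbers, under the idealized model where the n bosons land uniformly over the C(m+n−1, n) multisets of modes (the actual distribution depends on U, but this reproduces the 2/3-vs-½ example and the m = Θ(n²) scaling):

```python
from math import comb, perm

def collision_probs(n: int, m: int):
    """(bosonic, classical) collision probabilities for n particles, m modes.
    Bosonic: uniform over multisets, Pr[no collision] = C(m,n)/C(m+n-1,n).
    Classical: balls into bins,      Pr[no collision] = m!/(m-n)! / m^n."""
    bosonic = 1 - comb(m, n) / comb(m + n - 1, n)
    classical = 1 - perm(m, n) / m**n
    return bosonic, classical

print(collision_probs(2, 2))      # (0.666..., 0.5): the example above
print(collision_probs(30, 2000))  # with m a few times n^2, both below 1/2
```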
Experimental Prospects
What would it take to implement the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
• Reliable single-photon sources (Fock states, not coherent states)
• Photodetector arrays that can reliably distinguish 0 vs. 1 photon
• But crucially, no nonlinear optics or postselected measurements!
Our Proposal: Concentrate on (say) n ≈ 30 photons and m ≈ 1000 modes, so that classical simulation is difficult but not impossible
Open Problems
• What are the exact resource requirements? E.g., can our experiment be done using a log(n)-depth linear-optics circuit?
• Prove the Permanent-of-Gaussians Conjecture! Would imply that even approximate classical simulation of linear-optics circuits would collapse PH
• Are there other quantum systems for which approximate classical simulation would collapse PH?
• Do a linear-optics experiment that solves a classically-intractable sampling problem!