Exploring the Limits of the Efficiently Computable
Research Directions in Computational Complexity and Physics That I Find Exciting
Scott Aaronson (MIT)
Papers & slides at www.scottaaronson.com
Quantum Mechanics in One Slide
Probability theory: linear transformations that conserve the 1-norm of probability vectors, i.e. stochastic matrices
Quantum mechanics: linear transformations that conserve the 2-norm of amplitude vectors, i.e. unitary matrices
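A minimal numerical sketch of the contrast (my addition, not from the slides; assumes NumPy): a random stochastic matrix preserves the 1-norm of a probability vector, while a random unitary matrix preserves the 2-norm of an amplitude vector.

```python
# Illustrative sketch (not from the talk): stochastic vs. unitary evolution.
import numpy as np

rng = np.random.default_rng(0)

# A random column-stochastic matrix: nonnegative entries, each column sums to 1.
S = rng.random((4, 4))
S /= S.sum(axis=0)

# A random unitary matrix, via QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(Z)

p = np.array([0.5, 0.25, 0.125, 0.125])     # probability vector (1-norm = 1)
psi = np.array([0.5, 0.5j, -0.5, 0.5j])     # amplitude vector (2-norm = 1)

print(np.sum(S @ p))             # 1-norm preserved (prints ~1.0)
print(np.linalg.norm(U @ psi))   # 2-norm preserved (prints ~1.0)
```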
Quantum Computing
A general entangled state of n qubits, |ψ⟩ = Σ_{x ∈ {0,1}^n} α_x |x⟩, requires ~2^n amplitudes to specify
This presents an obvious practical problem when using conventional computers to simulate quantum mechanics
Feynman 1981: So then why not turn things around, and build computers that themselves exploit superposition?
Could such a machine get any advantage over a classical computer with a random number generator? If so, it would have to come from interference between amplitudes
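To make the practical problem concrete, a back-of-the-envelope sketch (my addition, assuming 16 bytes per double-precision complex amplitude):

```python
# Rough memory needed to store all 2^n amplitudes of an n-qubit state
# at double precision (16 bytes per complex number). Illustration only.
for n in (10, 20, 30, 40, 50):
    n_amplitudes = 2 ** n
    gib = n_amplitudes * 16 / 2 ** 30
    print(f"n = {n}: 2^{n} = {n_amplitudes:,} amplitudes, ~{gib:.6g} GiB")
```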
Interesting
BQP (Bounded-Error Quantum Polynomial-Time): the class of problems solvable efficiently by a quantum computer, defined by Bernstein and Vazirani in 1993
Shor 1994: Factoring integers is in BQP
[Diagram: the conjectured relationships among P, BQP, NP, and the NP-complete problems, with Factoring lying in both NP and BQP but believed to lie outside P]
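For a sense of why factoring is in BQP, here is a hedged classical sketch of the reduction Shor uses: factoring N reduces to finding the multiplicative order r of a modulo N, which is the step a quantum computer performs efficiently. Below, brute force stands in for that quantum step; only the classical pre- and post-processing is faithful.

```python
# Toy sketch of the classical scaffolding around Shor's algorithm.
# The order-finding step is done by brute force here; on a quantum
# computer it would be done efficiently via the quantum Fourier transform.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    if gcd(a, N) > 1:                 # lucky: a already shares a factor with N
        return gcd(a, N), N // gcd(a, N)
    r = order(a, N)                   # <-- the quantumly-easy step
    if r % 2 == 1:
        return None                   # bad choice of a; try another
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                   # another bad case
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p > 1 and q > 1 and p * q == N:
        return p, q
    return None

print(shor_classical_part(15, 7))     # (3, 5)
```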
Examples of My Past Work
Quantum lower bound for the collision problem [A. 2002]
Quantum (+ classical!) lower bound for local search [A. 2004]
First direct product theorem for quantum search [A. 2004]
PostBQP = PP [A. 2004]
BQP vs. the polynomial hierarchy: black-box relation problems in BQP but not in BPP^PH [A. 2009]
Publicly-verifiable quantum money [A.-Christiano 2012]
BQP/qpoly ⊆ QMA/poly, learnability of quantum states [A.-Drucker 2010, A. 2004, A. 2006]
Algebrization [A.-Wigderson 2008]
This Talk: Three Recent Directions
1. Meeting Experimentalists Halfway: using complexity theory to find quantum advantage in systems of current experimental interest (e.g. linear-optical networks), which fall short of universal quantum computers
2. Computational Complexity and Black Holes: an amazing role for complexity theory in the recent “firewall” debate and the AdS/CFT correspondence
3. Physical Universality: when Turing-universality isn’t enough; the complexity and realizability of physical transformations
BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving only non-interacting photons
Classical counterpart: Galton’s board
Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip
In general, we consider a network of beamsplitters, with n input “modes” (locations) and m >> n output modes
n identical photons enter, one per input mode
Assume for simplicity they all leave in different modes; there are C(m, n) possibilities
The beamsplitter network defines a column-orthonormal matrix A ∈ C^{m×n}, such that Pr[photons land in the set of output modes S] = |Per(A_S)|^2, where A_S is the n×n submatrix of A corresponding to S
Amazing connection between permanents and physics, which even leads to a simpler proof of Valiant’s famous result that the permanent is #P-complete [A. 2011]
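To make the formula concrete, a small sketch (my addition; assumes NumPy, and the brute-force permanent takes exponential time, which is exactly the point): it builds a random column-orthonormal A, picks one collision-free output set S, and evaluates Pr[S] = |Per(A_S)|^2.

```python
# Sketch: the BosonSampling outcome probability Pr[S] = |Per(A_S)|^2.
# Brute-force permanent over all n! permutations -- fine for tiny n,
# and exactly what is believed to be hard to do at scale.
from itertools import permutations
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

rng = np.random.default_rng(1)
m, n = 8, 3                                    # m output modes, n photons

# Column-orthonormal A in C^{m x n}: first n columns of a random m x m unitary.
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, _ = np.linalg.qr(Z)
A = Q[:, :n]

S = (0, 2, 5)                                  # one collision-free outcome: which modes fire
A_S = A[list(S), :]                            # n x n submatrix (rows = occupied modes)
print(abs(permanent(A_S)) ** 2)                # Pr[photons land in modes S]
```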
So, Can We Use Quantum Optics to Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: if X is sub-unitary, then |Per(X)|^2 will usually be exponentially small. So to get a reasonable estimate of |Per(X)|^2 for a given X, we’d generally need to repeat the optical experiment exponentially many times
Better idea: given A ∈ C^{m×n} as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from, namely the one defined by Pr[S] = |Per(A_S)|^2
Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP
Upshot: compared to (say) Shor’s factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard
Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it’s possible to estimate Per(X), with high probability over a Gaussian random matrix X
We conjecture that the above problem is already #P-complete. If it is, then a fast classical algorithm for approximate BosonSampling would already have the consequence that P^#P = BPP^NP
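A brute-force classical “sampler” for D_A, restricted (as above) to collision-free outcomes, just to show what the hardness claim is about: it has to enumerate all C(m, n) outcomes and evaluate a permanent for each, so its running time blows up exponentially. (My sketch, assumes NumPy; the permanent helper from the previous snippet is repeated so this runs standalone.)

```python
# Naive classical BosonSampling "sampler": enumerate every collision-free
# outcome S, compute |Per(A_S)|^2, normalize over that subspace, and sample.
# The exponential cost of this enumeration is the point of the theorem.
from itertools import combinations, permutations
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

rng = np.random.default_rng(1)
m, n = 8, 3
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
A = np.linalg.qr(Z)[0][:, :n]                  # column-orthonormal A in C^{m x n}

outcomes = list(combinations(range(m), n))     # all C(m, n) collision-free outcomes
probs = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outcomes])
probs /= probs.sum()                           # renormalize within this subspace

sample = outcomes[rng.choice(len(outcomes), p=probs)]
print(sample)                                  # e.g. (1, 4, 6): which modes fired
```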
Challenges
Prove #P-completeness for natural average-case approximation problems (like permanents of Gaussian matrices), thereby yielding hardness for approximate BosonSampling. As a first step, understand the distribution of Per(X) for Gaussian X
Early experimental implementations have been done (Rome, Brisbane, Vienna, Oxford)! But so far with just 3-4 photons. For scaling, it will be crucial to understand the complexity of BosonSampling when a constant fraction of photons are lost
Can the BosonSampling model solve hard “conventional” problems?
How do we verify that a BosonSampling device is working correctly? [A.-Arkhipov 2014, A.-Nguyen 2014]
BosonSampling with thermal states: is there a fast classical algorithm to approximate Per(X) when X is positive semidefinite?
Most striking application so far of complexity to fundamental physics?
Hawking 1970s: black holes radiate. The radiation seems thermal (uncorrelated with whatever fell in); but if quantum mechanics is true, then it can’t be
Susskind et al. 1990s: “black-hole complementarity.” In string theory / quantum gravity, the Hawking radiation should just be a scrambled re-encoding of the same quantum states that are also inside the black hole
The Firewall Paradox [Almheiri et al. 2012]
If the black hole interior is “built” out of the same qubits coming out as Hawking radiation, then why can’t we do something to those Hawking qubits (after waiting ~10^70 years for enough to come out), then dive into the black hole, and see that we’ve completely destroyed the spacetime geometry in the interior?
“Entanglement among Hawking photons detected!”
Harlow-Hayden 2013: Sure, there’s some unitary transformation that Alice could apply to the Hawking radiation that would generate a “firewall” inside the event horizon. But how long would it take her to apply it?
They showed: a natural formalization of Alice’s decoding task is QSZK-hard (QSZK = Quantum Statistical Zero Knowledge)
My 2002 collision lower bound suggests that QSZK ⊄ BQP. In that case, decoding would presumably take time exponential in the number of qubits of the black hole, so the black hole would’ve evaporated before Alice had even made a dent!
R = faraway Hawking radiation
B = just-emitted Hawking radiation
H = interior of the “old” black hole
The HH Decoding Problem
Given a description of a quantum circuit C that prepares a state |ψ⟩ on the registers R, B, H
Promised that, by acting only on R (the “Hawking radiation part”), it’s possible to distill an EPR pair between R and B
Problem: distill such an EPR pair, by applying a unitary transformation U_R to the qubits in R
My strengthening: Harlow-Hayden decoding is as hard as inverting an arbitrary one-way function f
R: “old” Hawking photons / B: photons just coming out / H: still in black hole
B is maximally entangled with the last qubit of R. But in order to see that B and R are even classically correlated, one would need to learn x·s (a “hardcore bit” of f), and therefore invert f (a toy classical sketch follows below)
With realistic dynamics, the decoding task seems like it should only be “harder” than in this model case (though it’s open how to formalize that)
Is computational intractability the only “armor” protecting the geometry of spacetime inside the black hole?
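A purely classical toy analogue of the hardcore-bit point (my illustration, not the Harlow-Hayden construction itself; SHA-256 stands in for a one-way function): given f(x) and s, the Goldreich-Levin bit x·s is believed to look like a fair coin unless you can invert f.

```python
# Classical toy: the "correlation bit" is the hardcore bit x.s (inner
# product mod 2). Anyone who could predict it from f(x) and s alone
# could effectively invert f -- which is exactly what's assumed to be hard.
import hashlib
import secrets

def f(x: bytes) -> bytes:
    """SHA-256 as a stand-in for a one-way function (an assumption)."""
    return hashlib.sha256(x).digest()

def inner_product_bit(x: bytes, s: bytes) -> int:
    """x.s mod 2: parity of the bitwise AND of the two strings."""
    return bin(int.from_bytes(x, "big") & int.from_bytes(s, "big")).count("1") % 2

x = secrets.token_bytes(16)      # hidden input (think: information behind the horizon)
s = secrets.token_bytes(16)      # public random string
public_view = (f(x), s)          # what the decoder gets to see

# With x in hand the bit is trivial; from public_view alone it is believed
# to be indistinguishable from a coin flip without inverting f.
print(inner_product_bit(x, s))
```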
Quantum Circuit Complexity and Wormholes [A.-Susskind, in progress]
The AdS/CFT correspondence relates anti-de Sitter quantum gravity in D spacetime dimensions to conformal field theories (without gravity) in D-1 dimensions
But the mapping is extremely nonlocal!
Susskind’s Conjecture: the quantum circuit complexity of a CFT state can encode information about the geometry of the dual AdS. Not clear if it’s true, but it has survived some nontrivial tests
It was recently found that an expanding wormhole, on the AdS side, maps to a collection of qubits on the CFT side that just seems to get more and more “complex”: think of a state |ψ_t⟩ = U^t |ψ_0⟩, obtained by repeatedly applying a fixed unitary U to a simple initial state
Theorem: Suppose U implements (say) a computationally-universal cellular automaton. Then after t = exp(n) iterations, |ψ_t⟩ has superpolynomial quantum circuit complexity unless PSPACE ⊆ PP/poly
Four Related Questions
1. For every n-qubit unitary U, is there a Boolean function f such that U can be implemented in BQP^f?
2. Which n-qubit unitaries could we efficiently implement if P = PSPACE?
3. Can every n-qubit unitary be implemented by a quantum circuit with poly(n) depth (but maybe exp(n) ancilla qubits)?
4. Could we prove, unconditionally and with today’s technology, that exponentially many gates are needed to implement some n-qubit unitary U? Generalizations of the Natural Proofs barrier?
A Grand Challenge
Can we classify all possible sets of quantum gates acting on qubits, in terms of which unitary transformations they approximately generate?
“Quantum Computing’s Classification of Finite Simple Groups”
Warmup: classify all the possible Hamiltonians / Lie algebras. Even just on 1 and 2 qubits!
A.-Bouland 2014: Every nontrivial two-mode beamsplitter is universal
A baby case that already took lots of representation theory…
The Classical Case
A.-Grier-Schaeffer 2015: classified all sets of reversible gates in terms of which reversible transformations F: {0,1}^n → {0,1}^n they generate (assuming swaps and ancilla bits are free)
[Gate diagrams: CNOT, Fredkin, Toffoli]
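A small sketch (mine, not from the paper) of the objects being classified: each reversible gate is a bijection on bit tuples, i.e. a permutation of {0,1}^k, which the check below verifies for the three gates pictured.

```python
# The three classic reversible gates as maps on bit tuples, plus a check
# that each one is a bijection on {0,1}^k (i.e. genuinely reversible).
from itertools import product

def cnot(a, b):                 # flips b iff a = 1
    return (a, a ^ b)

def toffoli(a, b, c):           # flips c iff a = b = 1
    return (a, b, c ^ (a & b))

def fredkin(a, b, c):           # swaps b and c iff a = 1
    return (a, c, b) if a else (a, b, c)

for gate, k in ((cnot, 2), (toffoli, 3), (fredkin, 3)):
    inputs = list(product((0, 1), repeat=k))
    outputs = {gate(*x) for x in inputs}
    print(gate.__name__, "reversible:", len(outputs) == len(inputs))
```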
Cellular Automata Beyond Turing-Universality
Schaeffer 2014: the first known “physically universal” cellular automaton (able to implement any transformation in any bounded region, by suitably initializing the complement of that region)
The Coffee Automaton
A., Carroll, Mohan, Ouellette, Werness 2015: detailed study of the rise and fall of “complex organization” in a reversible cellular automaton that models the thermodynamic mixing of cream into coffee
We prove that, under coarse-graining, the Kolmogorov complexity of the automaton’s state (pictured on the slide) has a rising-falling pattern; a rough numerical sketch follows below
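A very rough numerical sketch of the kind of measurement involved (my own toy, not the model from the paper): “cream” and “coffee” cells mix by random adjacent swaps, and at each time the grid is coarse-grained into blocks and scored by zlib-compressed size, a crude stand-in for Kolmogorov complexity (which is uncomputable). Whether the rise-and-fall pattern shows up depends on the parameters; this only illustrates the measurement.

```python
# Toy "coffee automaton": cream (1s) and coffee (0s) mix by random adjacent
# swaps. Coarse-grain into blocks and use zlib-compressed size as a crude
# proxy for Kolmogorov complexity. Parameters are arbitrary; sketch only.
import zlib
import numpy as np

rng = np.random.default_rng(0)
N, BLOCK = 64, 8
grid = np.zeros((N, N), dtype=np.uint8)
grid[:, : N // 2] = 1                      # cream on the left, coffee on the right

def step(grid, n_swaps=2000):
    """Swap random horizontally adjacent cells (a crude mixing dynamics)."""
    for _ in range(n_swaps):
        i = rng.integers(N)
        j = rng.integers(N - 1)
        grid[i, j], grid[i, j + 1] = grid[i, j + 1], grid[i, j]

def coarse_complexity(grid):
    """Compressed size of the block-averaged picture, quantized to 8 levels."""
    blocks = grid.reshape(N // BLOCK, BLOCK, N // BLOCK, BLOCK).mean(axis=(1, 3))
    quantized = np.round(blocks * 7).astype(np.uint8)
    return len(zlib.compress(quantized.tobytes(), 9))

for t in range(21):
    if t:
        step(grid)
    print(f"t = {t:2d}  coarse complexity proxy = {coarse_complexity(grid)}")
```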
Summary
Quantum computing established a remarkable intellectual bridge between computer science and physics
That’s always been why I’ve cared! Actual devices would be a bonus
My research agenda: to see just how much weight this bridge can carry
Rebuilding physics in the language of computation won’t be nearly as easy as Stephen Wolfram thought! Not only does it require engaging our actual understanding of physics (QM, QFT, AdS/CFT…); it requires hard mathematical work, often making new demands on theoretical computer science
But sure, I think it’s ultimately possible