
EEL 4930 §6 / 5930 §5, Spring ‘06 Physical Limits of Computing



Presentation Transcript


  1. http://www.eng.fsu.edu/~mpf EEL 4930 §6 / 5930 §5, Spring ‘06: Physical Limits of Computing. Slides for a course taught by Michael P. Frank in the Department of Electrical & Computer Engineering

  2. Physical Limits of Computing: Course Outline. I. Course Introduction: Moore’s Law vs. Modern Physics. II. Foundations: Required Background Material in Computing & Physics. III. Fundamentals: The Deep Relationships between Physics and Computation. IV. Core Principles: The Two Revolutionary Paradigms of Physical Computation. V. Technologies Present and Future: Physical Mechanisms for the Practical Realization of Information Processing. VI. Conclusion. Currently I am working on writing up a set of course notes based on this outline, intended to someday evolve into a textbook. M. Frank, "Physical Limits of Computing"

  3. Part III. Fundamentals: The Deep Relationships Between Physics and Computation • Contents of Part III: • Chapter A. Interpreting Physics in Terms of Computation • 1. Entropy as (Nondecomputable) Physical Information • 2. Action as Physical Computational Effort • 3. Energy as Physical Computing Performance • 4. Temperature as Physical Computing Frequency • 5. Momentum as Directed Computing Density • 6. Relations Between Gravity and Information Density • Chapter B. Fundamental Physical Limits on Computing • 1. Information Propagation Velocity Limits • 2. Information Storage Density Limits • 3. Communication Bandwidth Limits • 4. Computational Energy Efficiency Limits • 5. Computational Performance Limits M. Frank, "Physical Limits of Computing"

  4. Chapter III.A. Physics as Computation: Reinterpreting Fundamental Physical Quantities in Computational Terms

  5. Physics as Computation • We will argue for making the following identities, among others: • Physical entropy is the part of the total information content of a physical system that cannot be reversibly decomputed (to a standard state). • This was already argued in the thermodynamics module. • Physical energy (of a given type) is essentially the rate at which (quantum) computational operations of a corresponding type are taking place (or at least, at which computational effort is being exerted) in a physical system. It is a measure of physical computing activity. • Rest mass is the rate of updating of internal state information in proper time. • Kinetic energy is closely related to the rate of updating of positional information. • Potential energy is the total rate at which operations that carry out the exchange of virtual particles implementing forces are taking place. • Heat is that part of the energy that is operating on information that is entropy. • (Generalized) Temperature is the rate of operations per unit of information. • I.e., ops/bit/time is effectively just the computational “clock frequency!” • Thermodynamic temperature is the temperature of information that is entropy. • The physical action of an energy measures the total number of quantum operations performed (or amount of quantum computational effort exerted) by that energy. An action is thus an amount of computation. • The action of the Lagrangian is the difference between the amount of “kinetic computation” vs. “potential computation.” • (Relativistic) physical momentum measures spatial-translation (“motional”) ops performed per unit distance traversed. • This definition makes this entire system consistent with special relativity. M. Frank, "Physical Limits of Computing"

  6. Physics as Computing (1 of 2) M. Frank, "Physical Limits of Computing"

  7. Physics as Computing (2 of 2) M. Frank, "Physical Limits of Computing"

  8. On the Interpretation of Energy as the Rate of Quantum Computing. Dr. Michael P. Frank, Dept. of Electrical & Computer Eng., FAMU-FSU College of Engineering. Summary of paper published in Quantum Information Processing, Springer, 4(4):283-334, Oct. 2005. First presented in the talk “Physics as Computing,” at the Quantum Computation for Physical Modeling Workshop (QCPM ’04), Martha’s Vineyard, Wednesday, September 15, 2004

  9. Abstract • Studying the physical limits of computing encourages us to think about physics in computational terms. • Viewing physics “as a computer” directly gives us limits on the computing capabilities of any machine that’s embedded within our physical world. • We want to understand what various physical quantities mean in a computational context. • Some answers so far: • Entropy = Unknown/incompressible information • Action = Amount of computational “work” • Energy = Rate of computing activity (today’s topic) • Generalized temperature = “Clock frequency” (activity per bit) • Momentum = “Motional” computation per unit distance M. Frank, "Physical Limits of Computing"

  10. Energy as Computing • Some history of the idea: • Earliest hints can be seen in the original Planck E=hν relation for light. • That is, an oscillation with a frequency of ν requires an energy of at least hν. • Also suggestive is the energy-time uncertainty principle ∆E∆t ≥ ℏ/2. • Relates average energy uncertainty ∆E to minimum time intervals ∆t. • Margolus & Levitin, Physica D 120:188-195 (1998). • Prove that a state of average energy E above the ground state takes at least time ∆t = h/4E to evolve to an orthogonal one. • Or (N−1)h/2NE, for a cycle of N mutually orthogonal states. • Lloyd, Nature 406:1047-1054, 31 Aug. 2000. • Uses that to calculate the maximum performance of a 1 kg “ultimate laptop.” • Levitin, Toffoli, & Walton, quant-ph/0210076. • Investigate minimum time to perform a CNOT + phase rotation, given E. • Giovannetti, Lloyd, Maccone, Phys. Rev. A 67, 052109 (2003), quant-ph/0210197; also see quant-ph/0303085. • Tighter limits on time to reduce fidelity to a given level, taking into account both E and ∆E, amount of entanglement, and number of interaction terms. • These kinds of results prompt us to ask: • Is there some valid sense in which we can say that energy is computing? • And if so, what is it, exactly? • We’ll see this also relates to action as computation. M. Frank, "Physical Limits of Computing"
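As a quick sanity check on the numbers quoted above, here is a minimal Python sketch (the function names are ours; scipy supplies the physical constants) of the Margolus-Levitin bounds, including the ops-per-second ceiling behind Lloyd's 1 kg "ultimate laptop":

    import scipy.constants as const

    h = const.h  # Planck's constant, ~6.626e-34 J*s

    def ml_min_time_orthogonal(E):
        """Minimum time (s) to reach an orthogonal state at average
        energy E (J) above the ground state: h / 4E."""
        return h / (4 * E)

    def ml_min_time_cycle_step(E, N):
        """Minimum time (s) per step for a cycle of N mutually
        orthogonal states: (N-1) h / 2NE."""
        return (N - 1) * h / (2 * N * E)

    E_laptop = 1.0 * const.c**2                 # E = m c^2 for a 1 kg mass
    print(1 / ml_min_time_orthogonal(E_laptop))  # ~5.4e50 ops/s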

  11. Energy as Rate of Phase Rotation • Consider any quantum system whatsoever. • And consider any eigenvector |E⟩ (with eigenvalue E) of its Hamiltonian H • i.e., the Hermitian operator H that is the quantum representation of the system’s Hamiltonian energy function. • Remember, the state |E⟩ is therefore also an eigenvector of the system’s unitary time-evolution operator U = e^{iHt/ℏ}. • Specifically, with the eigen-equation U|E⟩ = (e^{iEt/ℏ})|E⟩. • Thus, in any consistent wavefunction Ψ for this system, |E⟩’s amplitude at any time t is given by Ψ(|E⟩, t) = m·e^{iEt/ℏ}. • It is a complex number with some fixed norm |Ψ|=m, phase-rotating in the complex plane at the angular frequency ω = E/ℏ radians per unit time, i.e., at frequency f = E/h cycles per unit time. • In fact, the entirety of any system’s quantum dynamics can be summed up by saying that each of its energy eigenstates |E⟩ is just sitting there, phase-rotating at frequency E/h. • Thus we can say, energy is nothing other than a rate of phase rotation. • And, Planck’s constant h is just 1 cycle of phase rotation, while ℏ is 1 radian. • But, what happens to states other than energy eigenstates? • We’ll see that average energy still gives the average rate of rotation of the state coefficients in any basis. • We can say, energy is the rate of rotation of state vectors in Hilbert space! M. Frank, "Physical Limits of Computing"
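The claim is easy to verify numerically. A minimal sketch (ℏ = 1; we use the standard sign convention U = e^{−iHt}, which only reverses the rotation direction relative to the slides' convention):

    import numpy as np
    from scipy.linalg import expm

    E = 2.0
    H = np.diag([0.0, E])                 # eigenstates |g>, |x> with energies 0, E
    x = np.array([0.0, 1.0], complex)     # the eigenstate |x>

    t = 0.3
    U = expm(-1j * H * t)                 # time evolution for time t
    amp = (U @ x)[1]                      # amplitude of |x> after evolving
    print(np.angle(amp), -E * t)          # both -0.6: phase rotated by omega*t = E*t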

  12. A Simple Example • Consider a constant Hamiltonian with energy eigenstates |g⟩ (ground) and |x⟩ (excited), with eigenvalues 0, E. • That is, H|g⟩=0, H|x⟩=E|x⟩. • E.g., in a 2d Hilbert space, H = E(1+σz)/2. • Consider the initial state |ψ₀⟩ = (|g⟩+|x⟩)·2^{−1/2}. • c_|x⟩ phase-rotates at rate ω_|x⟩ = E/ℏ. • In time h/2E, it rotates by θ=π. • The area swept out by c_|x⟩(t) is: • a_|x⟩ = ½π(|c_|x⟩|²) = π/4. • This is just ½ of a circle with radius r_|x⟩ = 2^{−1/2}. • Meanwhile, c_|g⟩ is stationary. • Sweeps out zero area. • Total area: a = π/4. [Figure: c_|x⟩ traces a half-circle of radius r = 2^{−1/2} about the origin of the complex plane, sweeping area a = π/4 as θ goes to π; c_|g⟩ sits fixed.] M. Frank, "Physical Limits of Computing"

  13. Let’s Look at Another Basis • Define a new basis |0⟩, |1⟩ with: |0⟩=(|g⟩+|x⟩)·2^{−1/2}, |1⟩=(|g⟩−|x⟩)·2^{−1/2} • Use the same initial state |ψ₀⟩ = |0⟩. • Note the final state is |1⟩. • Coefficients c_|0⟩(t) and c_|1⟩(t) trace out the path shown to the right… • Note that the total area in this new basis is still π/4! • Area of a circle of radius ½. • Hmm, is this true for any basis? Yes!… • The rest of the paper shows why, and implications. [Figure: the paths of c_|0⟩ and c_|1⟩ in the complex plane; total swept area a = π/4.] M. Frank, "Physical Limits of Computing"
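Both areas can be checked numerically by accumulating da = ½ Im[v†dv] along the discretized trajectory, in the energy basis and in the rotated basis. A sketch (ℏ = 1; the helper swept_area is ours):

    import numpy as np

    E = 1.0
    T = np.pi / E                        # time for c_|x> to rotate by theta = pi
    ts = np.linspace(0.0, T, 200001)
    # trajectory of |psi(t)> = (|g> + e^{iEt}|x>)/sqrt(2) in the energy basis
    # (slides' sign convention e^{+iEt}; the area comes out the same either way):
    v = np.stack([np.full_like(ts, 1 / np.sqrt(2)) + 0j,
                  np.exp(1j * E * ts) / np.sqrt(2)])

    R = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # change to the |0>,|1> basis

    def swept_area(traj):
        """Accumulate a = integral of (1/2) Im[v-dagger dv] over all coefficients."""
        dv = np.diff(traj, axis=1)
        mid = (traj[:, :-1] + traj[:, 1:]) / 2
        return 0.5 * np.sum(np.imag(np.conj(mid) * dv))

    print(swept_area(v), swept_area(R @ v), np.pi / 4)   # all ~0.785398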

  14. Action: Some Terminology • A physical action is, most generally, the integral of an energy over time. • Or, along some “temporalizable” path. • Typical units of action: h or ℏ. • Correspond to angles of 1 circle and 1 radian, respectively. • Normally, the word “action” is reserved to refer specifically to the action of the Lagrangian L. • This is the action in Hamilton’s “least action” principle. • However, note that we can broaden our usage a bit and equally well speak of the “action of” any quantity that has units of energy. • E.g., the action of the Hamiltonian H = L + p·v = L + p²/m. • Warning: I will use the word “action” in this more generalized sense! M. Frank, "Physical Limits of Computing"

  15. Action as Computation • We will argue: Action is computation. • That is, an amount of action corresponds exactly to an amount of physical quantum-computational “effort.” • Defined in an appropriate sense. • The type of action corresponds to the type of computational effort exerted, e.g., • Action of the Hamiltonian = “All” computational effort. • Action of the Lagrangian = “Internal” computational effort. • Action of energy pv = “Motional” computational effort • We will show exactly what we mean by all this, mathematically… M. Frank, "Physical Limits of Computing"

  16. Action of a Time-Dependent Hamiltonian • Definition. Given any time-dependent Hermitian operator (Hamiltonian trajectory) H(t) defined over any continuous range of times including one labeled t=0, the cumulative action trajectory of H(t) defined over that range is the unique continuously time-dependent Hermitian operator A(t) such that A(0)=0 and U(t) = e^{iA(t)} satisfies the time-dependent operator form of the Schrödinger equation, namely (in units where ℏ=1, and using the opposite of the usual sign convention): dU(t)/dt = iH(t)U(t). • If H(t) commutes with its time-derivative everywhere throughout the given range (but not in general if it doesn’t), this definition can be simplified to A(t) = ∫₀ᵗ H(τ) dτ. • If H(t) = H = const., the definition simplifies further, to just A(t) = Ht. M. Frank, "Physical Limits of Computing"

  17. Some Nice Identities for A • Consider applying an action operator A=A(t) itself to any “initial” state v₀. • For any observable A, we’ll use shorthand like A[v₀] = ⟨v₀|A|v₀⟩. • It is easy to show that A[v₀] is equal to all of the following: • In any basis {v_i}: the quantum-average total phase-angle accumulation α of the coefficients c_i of v’s components in that basis, weighted by their instantaneous component probabilities: (1) α = ∫ Σ_i |c_i|² dφ_i. • Exactly twice the net area a swept out in the complex plane (relative to the complex origin) by v’s coefficients c_i. • The line integral, along v’s trajectory, of the magnitude of the imaginary part of the inner product ⟨v | v + dv⟩ between adjacent states: (2) ∫ |Im⟨v | v+dv⟩|. • Note that the value of A[v₀] therefore depends only on the specific trajectory v(t) that is taken by v₀ itself, • and not on any other properties of the complete Hamiltonian that was used to implement that trajectory! • For example, it doesn’t depend on the energies H[u] of other states u that are orthogonal to v. M. Frank, "Physical Limits of Computing"

  18. Area swept out in energy basis • For a constant Hamiltonian, consider the area swept out in the complex plane (about the origin) by a coefficient c_i of an energy basis vector v_i. • If r_i=|c_i|=1, the area swept out is ½ of the accumulated phase angle. • For r_i<1, note the area is this times r_i². • Sum over i = ½ avg. phase angle accumulated = ½ action of Hamiltonian. [Figure: the wedge swept between c_i(t) and c_i(t+dt) at radius r_i, with real and imaginary axes.] M. Frank, "Physical Limits of Computing"

  19. In other bases… • Both the phase and magnitude of each coefficient will change, in general… • The area swept out is no longer just a corresponding fraction of a circular disc. • It’s not immediately obvious that the sum of the areas swept out by all the c_j’s will still be the same in the new basis. • We’ll show that indeed it is. [Figure: c_j(t) moves to c_j(t+dt) with both a phase increment dθ_j and a radial increment dr_j.] M. Frank, "Physical Limits of Computing"

  20. Basis-Independence of a • Note that each c_j(t) trajectory is just a sum of circular motions… • Namely, a linear superposition of the c_i(t) motions… • Since each circular component motion is continuous and differentiable, so is their sum. • The trajectory is everywhere a smooth curve. • No sharp, undifferentiable corners. • Thus, in the limit of arbitrarily short time intervals, the path can always be treated as linear. • Area da_j approaches ½ the parallelogram area r_j r_j′ sin dθ = c_j × c_j′ • “Cross product” of complex numbers considered as vectors • Use a handy complex identity: a*b = a·b + i(a×b) • Implies that da_j = ½ Im[c_j* c_j′] • So, da = ½ Im[v†v′]. • So da is basis-independent, since the inner product v†v′ is! M. Frank, "Physical Limits of Computing"

  21. Proving the Identities in the Time-Dependent Case • Over infinitesimal intervals dt, we know that dα = ω dt = 2da = Im⟨ψ|ψ′⟩ = A′[ψ] = ⟨ψ|H|ψ⟩dt. • Thus, even if H(t) is not constant, all these quantities remain equal when integrated over an entire range of times from 0 to any arbitrary value of t. • But, is it still true that A(t)[ψ(0)] = α for large t? • Apparently yes, because the trajectory A(τ) from 0 to t can seemingly be continuously deformed into a linearized one A(τ)=Hτ for which we already know this identity holds, • while leaving the overall A(t) the same throughout this process. • Throughout this continuous deformation process, α mod 2π can’t change for any eigenstate of U(t) (since the overall U(t)=e^{iA(t)} stays the same), and α also can’t change discontinuously by a multiple of 2π, so it can never become at all unequal to its value in the linearized case. M. Frank, "Physical Limits of Computing"

  22. Computational Effort of a Hamiltonian Applied to a System • Suppose we’re given a time-dependent Hamiltonian H(t), a specific initial state v, and a time interval (t₀=0, t) • We can of course compute the operator A(t) from H. • We’ll call A[v] the computational effort exerted according to the specific action operator A (or “by” H acting from t₀ to t) on the initial state v. • Later we will see some reasons why this identification makes sense. • For now, take it as a definition of what we mean by “computational effort” • If we are given only a set V of possible initial vectors, • the (maximum, minimum) work of A (or H from t₀ to t) is (5) max_{v∈V} A[v] (resp. min_{v∈V} A[v]). • If we had a prob. dist. over V (or equivalently, a mixed state ρ), • we could instead discuss the expected work (6) E_{v∈V}[A[v]] of A acting on V. M. Frank, "Physical Limits of Computing"
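A minimal numerical sketch of these definitions (ℏ = 1; the Hamiltonian is arbitrary; for unit vectors, A[v] ranges over exactly the interval between the extreme eigenvalues of A, which gives the min/max work directly):

    import numpy as np

    H = np.array([[1.0, 0.2], [0.2, 0.5]])     # an arbitrary constant Hamiltonian
    t = 2.0
    A = H * t                                   # action operator, A = H t

    def effort(A, v):
        """A[v] = <v|A|v>, the computational effort exerted on initial state v."""
        return np.real(np.vdot(v, A @ v))

    v = np.array([1.0, 1.0j]) / np.sqrt(2)
    print("effort on v:", effort(A, v))

    w = np.linalg.eigvalsh(A)                   # extreme efforts over unit vectors
    print("min work:", w[0], " max work:", w[-1])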

  23. Computational Difficulty of Causing a Desired Change • If we are interested in taking v₀ to v₁, and we have a set 𝒜 of available action operators A (implied, perhaps, by a set of available Hamiltonians H(t)) • we define the minimum effort or difficulty of getting from v₀ to v₁ as (7) the minimum of A[v₀] over those A∈𝒜 that take v₀ to v₁. • Maximizing over 𝒜 isn’t very meaningful, since it may often yield ∞. • And if we have a desired unitary transform U that we wish to perform on any of a set of vectors V, given a set 𝒜 of available action operators, • then we can define the minimum (over 𝒜) worst-case (over V) effort to perform U, or “worst-case difficulty of U”: (8) min_{A∈𝒜 implementing U} max_{v∈V} A[v]. • Similarly, we could discuss the best-case effort to do U, • or (if we have vector probabilities) the minimum (over 𝒜) expected (over V) effort to do U, or “expected difficulty of U”: (9) min_{A∈𝒜 implementing U} E_{v∈V}[A[v]]. M. Frank, "Physical Limits of Computing"

  24. The Justification for All This… • Why do we insist on referring to these concepts as “computational effort” or “computational difficulty?” • One could imagine other possible terms, such as “amount of change,” “physical effort,” the original “action of the Hamiltonian,” etc. • What is so gosh-darned “computational” about this concept? • Answer: We can use these concepts to quantify the size or difficulty of, say, quantum logic-gate operations. • And by extension, classical reversible operations embedded in quantum operations… • And by extension, classical irreversible Boolean ops, embedded within classical reversible gates with disposable ancillas… • As well as larger computations composed from such primitives. • The difficulty of a given computational op (considered as a unitary U) is given by its effort minimized over 𝒜… • We can meaningfully discuss an operation’s minimum, maximum, or expected effort over a given space of possible input states. M. Frank, "Physical Limits of Computing"

  25. But, you say, Hamiltonian energy is only defined up to an additive constant… • Still, the difficulty of a given U can be a well-defined (and non-negative) quantity, IF… • We adopt an appropriate and meaningful zero of energy! • One useful convention: • Define the least eigenvalue (ground state energy) of H to be 0. • This ensures that energies are always positive. • However, we might want to do something different than this in some cases… • E.g., if the ground-state energy varies, and it includes energy that had to be explicitly transferred in from another subsystem… • Another possible convention: • We could count total gravitating mass-energy… • Anyway, let’s agree, at least, to just always make sure that all energies are positive, OK? • Then the action is always positive, and we don’t have to worry about trying to make sense of a negative “amount of computational work.” M. Frank, "Physical Limits of Computing"

  26. Energy as Computing • Given that “Action is computation,” • That is, amount of computation, • where the suffix “-ation” denotes a noun, • i.e., the act itself, • What, now, is energy? • Answer: Energy is computing. • By which I mean, “rate of computing activity.” • The suffix “-ing” denotes a verb, • the (temporal) carrying out of an action… • This should be clear, since note that H(t)[ψ(t)] = (dA/dt)[ψ(0)] = Im⟨ψ(t)|ψ(t+dt)⟩/dt = dα(t)/dt… • Thus, the instantaneous Hamiltonian energy of any given state is exactly the rate at which computational effort is being (or would be) exerted “on” (or “by,” if you prefer) that state. M. Frank, "Physical Limits of Computing"

  27. Applications of the Concept • How is all this useful? • It lets us calculate time/energy tradeoffs for performing operations of interest. • It can help us find (or define) lower bounds on the number of operations of a given type needed to carry out a desired computation. • It can tell us that a given implementation of some computation is optimal. M. Frank, "Physical Limits of Computing"

  28. Time/Energy Tradeoffs • Suppose you determine that the difficulty of a desired v₁→v₂ or U(V) (given the available actions 𝒜) is A. • For a multi-element state set V, this could be a minimum, maximum, or expected difficulty… • And, suppose the energy that is available to invest in the system in question is at most E. • This then tells you directly that the minimum/maximum/expected (resp.) time to perform the desired transformation will be t ≥ A/E. • To achieve equality might require varying the energy of the state over time, if the optimal available H(t) says to do so… • Conversely, suppose we wish to perform a transformation in at most time t. • This then immediately sets a scale-factor for the magnitude of the energy E that must be devoted to the system in carrying out the optimal Hamiltonian trajectory H(t); i.e., E ≥ A/t. M. Frank, "Physical Limits of Computing"
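A small sketch of this tradeoff in SI units (the function names are ours; the effort A is taken in units of ℏ, anticipating the gate difficulties computed in the following slides, where a NOT-like gate has worst-case difficulty πℏ = h/2):

    import math
    import scipy.constants as const

    hbar = const.hbar

    def min_time(A_hbar, E):
        """t >= A/E: minimum time (s) for effort A (units of hbar) at energy E (J)."""
        return A_hbar * hbar / E

    def min_energy(A_hbar, t):
        """E >= A/t: minimum energy (J) for effort A (units of hbar) in time t (s)."""
        return A_hbar * hbar / t

    # e.g., a NOT-like gate (A = pi, in hbar units) in one picosecond:
    print(min_energy(math.pi, 1e-12), "J")   # ~3.3e-22 J, about 2 meV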

  29. Single-Qubit Gate Scenario (Introduce basics of quantum computing first?) • Let’s first look at 2-state (1-qubit) systems. • Later we’ll consider larger systems. • Let U be any unitary operator in U(2). • I.e., any arbitrary 1-qubit quantum logic gate. • Let the vector set V consist of the “sphere” of all unit vectors in the Hilbert space ℋ₂. • Given this scenario, the minimum effort to do any U is always 0 (just let v be an eigenvector of U), and is therefore uninteresting. • Instead we’ll consider the maximum effort. • What about our space 𝒜 of available action operators? • Suppose for now, for simplicity, that all time-dependent Hermitian operators on ℋ₂ are available as Hamiltonians. • Really we only need the time-independent ones, however. • Thus, 𝒜 consists of all (constant) Hermitian operators. M. Frank, "Physical Limits of Computing"

  30. The Bloch Sphere (Image courtesy Wikipedia) • Gives a convenient way to visualize the projective (phase-free) 2D Hilbert space as a sphere in ordinary 3D space. • An angle of θ in Hilbert space corresponds to an angle of 2θ on the Bloch sphere. • Also reveals how real spin orientations of quantum particles in 3D space correspond to superpositions of |↑⟩ (up) and |↓⟩ (down) spins. • Relative magnitude of |↑⟩ and |↓⟩ ↔ “latitude” on sphere • Relative phase of |↑⟩ and |↓⟩ ↔ “longitude” on sphere M. Frank, "Physical Limits of Computing"

  31. Analysis of Maximum Effort • The worst-case difficulty to do U (in this scenario) arises from considering a “geodesic” trajectory in U(2). • All the worst-case state vectors just follow the “most direct” path along the unit sphere in Hilbert space to get to their destinations. • Other vectors “go along for the ride” on the necessary rotation. • The optimal unitary trajectory U(t₀,t) then amounts to a continuous rotation of the Bloch sphere around a certain axis in 3-space… • where the poles of the rotation axis are the eigenvectors of U. • Also, there’s a simultaneous (commuting) global phase-rotation. • If we also adopt the convention that the ground-state energy of H is defined to be 0, • then the global phase-rotation factor goes away, • and we are left with a total effort A that turns out to be exactly equal to θ (in units of ℏ), • where θ ∈ [0,π] is simply the (minimum) required angle of Bloch-sphere rotation to implement the given U. M. Frank, "Physical Limits of Computing"

  32. Some Special Cases • Pauli operators X,Y,Z (including X=NOT), as well as the Hadamard gate: • Bloch sphere rotation angle = π (rads) • Worst-case difficulty: h/2 • Square-root of NOT, also phase gate (square root of Z): • Rotation angle π/2, difficulty = h/4. • “π/8” gate (square root of phase gate): • Rotation angle π/4, difficulty = h/8. M. Frank, "Physical Limits of Computing"
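These numbers can be recovered mechanically: the Bloch-sphere rotation angle of a 1-qubit U is the spread between its eigenvalue phases, and (with the ground-eigenvalue-zero convention) the worst-case difficulty is ℏ times that angle. A sketch, reporting difficulty in units of h = 2πℏ:

    import numpy as np
    from scipy.linalg import sqrtm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    gates = {
        "X (NOT)":   X,
        "Hadamard":  np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
        "sqrt(NOT)": sqrtm(X),
        "S (phase)": np.diag([1, 1j]),
        "T (pi/8)":  np.diag([1, np.exp(1j * np.pi / 4)]),
    }

    for name, U in gates.items():
        p = np.angle(np.linalg.eigvals(U))
        theta = abs(p[0] - p[1]) % (2 * np.pi)      # Bloch-sphere rotation angle
        theta = min(theta, 2 * np.pi - theta)
        print(f"{name}: angle {theta/np.pi:.2f} pi,"
              f" difficulty {theta/(2*np.pi):.3f} h")   # h/2, h/2, h/4, h/4, h/8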

  33. Fidelity and Infidelity • The fidelity (“fraction of similarity”) between pure states u,v is defined as F(u,v) = |⟨u|v⟩|. • So, F² is the probability of conflating the two. • Define the infidelity between u,v as I(u,v) = (1 − F²)^{1/2}. • Thus, I² = 1 − F² is the probability that if state u is measured in a basis that includes v as a basis vector, it will project to a basis state other than v. • Infidelity is thus a distance metric between states… [Figure: any u lies at some angle θ relative to v, along a plane in Hilbert space defined by 0, v, and some vector w (w ⊥ v) that’s orthogonal to v; u = Fv + Iw, with F(u,v) = cos(θ) and I(u,v) = sin(θ).] M. Frank, "Physical Limits of Computing"

  34. Difficulty of Achieving Infidelity • Guess what, a Bloch-sphere rotation by an angle of θ gives a maximum (over V) infidelity of I⁺(θ) = sin(θ/2). • Meanwhile, the minimum fidelity is cos(θ/2)… • You’ll notice that F²+I²=1, as probabilities should. • Therefore, achieving an infidelity of I requires performing a U whose worst-case difficulty is at least A = 2ℏ·arcsin(I). • However, the specific initial states that actually achieve this infidelity under the optimal rotation are Bloch-sphere “equator” states • These are equal superpositions of high and low energy eigenstates; • They perform a quantum-average amount of computational work that is only half of the maximum effort. • Thus, the actual difficulty D required for a specific state to achieve infidelity of I is only half of the worst-case difficulty of U, or D = A/2 = ℏ·arcsin(I). • And so, a specific state that exerts an amount of computational effort D ≤ πℏ/2 can achieve an infidelity of at most I = sin(D/ℏ), while maintaining a fidelity of at least F = cos(D/ℏ)… • a nice simple relation… Especially if we use units where ℏ=1… M. Frank, "Physical Limits of Computing"
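A quick Monte Carlo check of the I⁺(θ) = sin(θ/2) claim, sampling random pure states under a Bloch-sphere Z-rotation by θ (a sketch; the worst cases found are indeed "equator" states, i.e., equal superpositions):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.8                                    # Bloch-sphere rotation angle
    U = np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])   # Z-rotation

    v = rng.normal(size=(2, 100000)) + 1j * rng.normal(size=(2, 100000))
    v /= np.linalg.norm(v, axis=0)                 # random pure states (columns)
    fid = np.abs(np.sum(np.conj(v) * (U @ v), axis=0))
    inf = np.sqrt(np.maximum(0.0, 1 - fid**2))
    print(inf.max(), np.sin(theta / 2))            # worst case ~ sin(theta/2)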

  35. Multi-Qubit Gates • Some multi-qubit gates are easy to analyze… • E.g., “controlled-U” gates that perform a unitary U on one qubit only when all of the other qubits are “1” • If the space of Hamiltonians is truly totally unconstrained, then (it seems) the effort of these will match that of the corresponding 1-qubit gates. • However, in reality we don’t have such fine-tailored Hamiltonians readily available. • A more thorough analysis would analyze the effort in terms of a Hamiltonian that’s expressible as a sum of realistically-available, 1- and 2-qubit controllable interaction terms. • We haven’t really thoroughly tried to do this yet… M. Frank, "Physical Limits of Computing"

  36. Conclusion • We can define a clear and stable measure of the “length” of any continuous state trajectory in Hilbert space. (Call it “computational effort.”) • It’s simply given by the action of the Hamiltonian. • It has a nice geometric interpretation in the complex plane. • From this, we can accordingly define the “size” (or “difficulty”) of any unitary transformation. • As the worst-case (or average-case) path length, minimized over the available Hamiltonians. • We can begin to quantify the difficulty of various quantum gates of interest… • From this, we can compute lower bounds on the time to implement them for states of given energy. M. Frank, "Physical Limits of Computing"

  37. Improvements to the Computational Interpretation of Energy: Operation Angles, RMS Action, Relationships to Energy Uncertainty

  38. Motivation for this Section • The previous section taught us that the Hamiltonian action taken by a quantum system can be identified with the amount of computational “effort” exerted by that system. • This can be thought of as its “potential” computational work • However, as in daily life, our “effort” quantity is something that can easily be wasted… • Exerting effort may not accomplish any “actual” computational work. • Example: Energy eigenstates merely rotate their phase angles; they don’t achieve any amount of infidelity. • They never change to a distinguishable state. • They never compute a “result” that’s measurably different from their “input” • Thus our “effort” quantity may overestimate the actual “usefulness” of a given physical computation. • And, we’ll see shortly that it can sometimes even underestimate it! • Can we define effective computational work in some way that avoids including the “useless work” that is associated with such wasted effort? • That is the goal of the following slides. M. Frank, "Physical Limits of Computing"

  39. Operation Angle Between Two Similar States • Consider any two normalized state vectors v, v′ that are very similar to each other • I.e., they differ by a small amount δv = v′−v → 0 • E.g. they could be “adjacent” points along some continuous state trajectory • Let’s define the operation angle between v and v′, written θ(v,v′), as: θ(v,v′) :≡ arcsin(Inf(v,v′)) = arccos(Fid(v,v′)) • As θ→0, (sin θ) → θ, so as δv → 0, the operation angle approaches the infidelity, θ → Inf(v,v′). • Theorem: As δv → 0, we also have θ → || Im⟨v|δv⟩·v + i·δv ||. • Thus, we can get away without taking sines or cosines! • Goal: Figure out what relationship the operation angle has to the Hamiltonian operator H. M. Frank, "Physical Limits of Computing"
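The theorem is easy to spot-check numerically for a random nearby pair of normalized states (a sketch in a 4-d Hilbert space; the displacement size 1e-5 is illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    v /= np.linalg.norm(v)

    vp = v + 1e-5 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    vp /= np.linalg.norm(vp)                  # a nearby normalized state v'
    dv = vp - v

    theta = np.arccos(min(1.0, np.abs(np.vdot(v, vp))))   # arccos(Fid(v, v'))
    approx = np.linalg.norm(np.imag(np.vdot(v, dv)) * v + 1j * dv)
    print(theta, approx)                      # agree to high precision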

  40. Relationship Between the Action and the Operation Angle • Let the displacement δv be generated by a short unitary U′ = e^{iHδt} → 1+iHδt. • I.e., close to the identity. • Consider now the action of the operator δA=Hδt on v. • We know that δv = i·δA(v). • We can decompose δv into iv·δφ + iw·δθ, where • δφ is the average phase angle accumulation of v’s coefficients in the energy basis, • i.e. the increment in effort φ or Hamiltonian action. • δθ is the operation angle, or angle in a direction towards a state w that’s orthogonal to v. • Based on this, we find that δθ = ∆H[v]δt. • We can also express δA(v) = Hv·δt as uδξ, where u is the unit vector Hv/|Hv| and δξ = |Hv|δt = (H²[v])^{1/2} δt • Note that δξ is the increment in RMS Hamiltonian action. • Note that this is always greater than the increment in average Hamiltonian action δφ = δA[v] = H[v]δt. • We also have the nice Pythagorean relation δφ² + δθ² = δξ². • This results from the fact that H[v]² + ∆H[v]² = H²[v]. • Is ξ perhaps a better measure of “computational effort” than the φ that we defined earlier? • After all, it upper-bounds both the amount of phase rotation and the infidelity accumulation. • Globally, it’s conserved like H, since its eigenstates are the same. [Figure: on the unit sphere in n-d Hilbert space (2n real dimensions), δv decomposes into legs iv·δφ and iw·δθ with hypotenuse iu·δξ, where w = ∆Hv/∆H[v].] M. Frank, "Physical Limits of Computing"

  41. Mean, Deviation, Variance and Standard Deviation Operators • For any vector v, abbreviate its squared norm as ‖v‖² = ⟨v|v⟩ = |v|². • For any operator A, we can define the mean value operator of A, written ⟨A⟩, as the (nonlinear) operator ⟨A⟩ :≡ λv. A[v]·v = λv. ⟨v|A|v⟩·v • Note: Every vector is an eigenvector of ⟨A⟩, with eigenvalue A[v]! (The average eigenvalue of A in the probability distribution over eigenvectors that is implied by state v.) • ⟨A⟩ is just an operator of multiplication by a scalar, but it’s a non-constant scalar that varies (nonlinearly) as a function of the operand v to which ⟨A⟩ is applied. • Note: Although ⟨A⟩ does not perform a linear transformation of a vector v that it’s applied to, the operator itself is still a linear function of the underlying operator A that it’s derived from. • Thus we can still add together different mean-value operators, and multiply them by scalars, etc. • We can then define the (nonlinear) deviation operator of A, written ∆A, as the operator ∆A :≡ A − ⟨A⟩. • Note: Although ∆A does not represent a linear transformation of the vector space (the expression ∆Av is not linear in v), ∆A itself does still functionally depend linearly on A. • Next, we define the variance operator of A, written ∆A², as ∆A² :≡ λv. |∆Av|²·v • It’s the squared length of the deviation vector resulting from applying ∆A to v. • Finally, we define the standard deviation operator of A, written ∆A, as ∆A :≡ λv. |∆Av|·v • With these definitions, ∆A[v] is exactly the standard deviation of the eigenvalues of A according to the probability distribution over the eigenstates of A that’s encoded by v: ∆A[v] = (Σᵢ pᵢ(aᵢ − ā)²)^{1/2}, where aᵢ = i-th eigenvalue of A, pᵢ = |⟨i|v⟩|² = probability of eigenstate i, ā = A[v] = average eigenvalue of A. M. Frank, "Physical Limits of Computing"

  42. Some Useful Properties of the Mean, Deviation & Variance Operators • For any operator A, • let the notation A(v) denote |Av|, the scalar length of Av. • Directly from the previous slide’s definitions, we have that ∆A(v) = |∆Av| • The standard deviation of A for any vector v is the length of the vector ∆Av that is generated by applying the deviation operator ∆A to v. • ∆A²(v) = |∆Av|² • The variance of A for any vector v is the squared length of the vector ∆Av • For any Hermitian operator H, we have that ⟨H²⟩(v) = |Hv|² • The mean value operator of the square of H for any vector v is the square of the length of the vector Hv that is generated by applying H to v. • Proof: (see notes) • ⟨H²⟩ = ∆H² + ⟨H⟩² • The mean value operator of the square of H is the sum of the variance operator of H and the square of the mean value operator of H. • Proof: (see notes) M. Frank, "Physical Limits of Computing"
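Since these "operators" act on a given v as multiplication by a scalar, they can be realized as plain functions of (H, v). A sketch that also spot-checks both identities above, ⟨H²⟩(v) = |Hv|² and ⟨H²⟩ = ∆H² + ⟨H⟩², on a random Hermitian H:

    import numpy as np

    def mean(H, v):
        """<H>[v] = <v|H|v> (real for Hermitian H, unit v)."""
        return np.real(np.vdot(v, H @ v))

    def deviation_vec(H, v):
        """(dH)v = Hv - <H>[v] v, the deviation vector."""
        return H @ v - mean(H, v) * v

    def std(H, v):
        """dH[v] = |(dH)v|, the standard deviation."""
        return np.linalg.norm(deviation_vec(H, v))

    rng = np.random.default_rng(2)
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = (M + M.conj().T) / 2                     # a random Hermitian operator
    v = rng.normal(size=3) + 1j * rng.normal(size=3)
    v /= np.linalg.norm(v)

    print(mean(H @ H, v), np.linalg.norm(H @ v)**2,   # <H^2>[v] = |Hv|^2
          std(H, v)**2 + mean(H, v)**2)               # = dH[v]^2 + <H>[v]^2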

  43. The Operation Angle Can be Greater than the Action! • Now, a natural question to ask is, • Is the increment in operation angle δθ always upper-bounded by the increment of Hamiltonian action δφ? • This would be natural if the Hamiltonian action truly represents the computational effort or maximum computational work. • It turns out that this is false! • Example: In a two-state system with energy eigenvalues 0 and 1, let the probability p of the high-energy state be ε≪1. • Then ⟨H⟩ = ε, while it turns out ∆H ≈ ε^{1/2} > ε. • In this situation, we have δθ/δφ ≈ ε^{−1/2}, • This ratio becomes arbitrarily large as ε becomes smaller! • Thus, in general (in the limit of states with small ε) the Hamiltonian action does not bound the instantaneous rate of “infidelitization” (motion towards orthogonal states) at all! • However, note that in this example, with a time-independent Hamiltonian, the total infidelity never gets to 100%. • The state merely orbits around the low-energy pole of the Hilbert sphere. • What if we consider time-dependent Hamiltonians? M. Frank, "Physical Limits of Computing"
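The ε example, numerically (a sketch; note ∆H = (ε(1−ε))^{1/2} ≈ ε^{1/2} exactly as claimed):

    import numpy as np

    H = np.diag([0.0, 1.0])                    # energy eigenvalues 0 and 1
    for eps in [1e-2, 1e-4, 1e-6]:
        v = np.array([np.sqrt(1 - eps), np.sqrt(eps)])
        Hbar = v @ H @ v                       # <H> = eps
        dH = np.linalg.norm(H @ v - Hbar * v)  # ~ sqrt(eps(1-eps)) ~ eps^(1/2)
        print(eps, Hbar, dH, dH / Hbar)        # the ratio grows like eps^(-1/2)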

  44. The Action of a Time-Dependent Hamiltonian Doesn’t Limit the Number of Orthogonal Transitions! • We saw that a typical state where ∆H ≫ ⟨H⟩ makes a small orbit around the ground state. • Thus it never achieves orthogonality with its original state. • But now, consider “linearizing” these small orbits by rotating for small angles about a series of different poles, as shown at right. • Such a trajectory could be implemented by rapidly switching through a sequence of different Hamiltonians. • With proper selection of poles, we can see that overall, the trajectory can proceed in almost a straight line, from the initial state v towards any desired state w ⊥ v. • Moreover, the rate at which infidelity is accumulated along this path is dθ/dt = ∆H, the rate at which Hamiltonian action is accumulated is dφ/dt = ⟨H⟩, and ∆H > ⟨H⟩ everywhere along the path! • Integrating along the path, we see that the total effort φ does not actually limit the total operation angle θ! • Hamiltonian action does not limit the time to reach an orthogonal state for time-dependent Hamiltonians! [Note: Need to work the example through in more detail to prove it works more rigorously.] M. Frank, "Physical Limits of Computing"

  45. Virtual Energy, Virtual Effort • We can declare the “virtual” total energy of any state v as being its RMS energy, R(v) = |Hv| = (H²[v])^{1/2}. • This gives the instantaneous rate at which RMS action ξ = ∫δξ = ∫R(v)δt gets accumulated as the state moves along its trajectory v(t). • So, define the virtual computational effort exerted to be the RMS action ξ! • And the total rate of exertion of virtual computational effort to be R. • Now, the virtual energy R is vectored (the vector is Hv) along the direction “towards” the state vector u = Hv/R. • Projecting this energy onto v’s own direction, R·⟨v|u⟩ = ⟨v|H|v⟩ = H[v], gives the Hamiltonian energy or rate of phase rotation of v. • Rate of phasal computation, rate of accumulation of phasal action. • Projecting onto the orthogonal direction w = ∆Hv/∆H[v] gives R·⟨w|u⟩ = ⟨v|∆H†H|v⟩/∆H[v] = ∆H[v] (check math), the rate of infidelitization of v. • Rate of effective (i.e., infidelitizing) computation, rate of accumulation of effective action. • Computation that actually moves probability mass towards neighboring states! • The variance of H in state v is the rate at which probability mass is flowing away from v. M. Frank, "Physical Limits of Computing"

  46. Virtual Effort, Phasal Effort, Effective Work • Along any continuous state trajectory v(t), • The virtual effort is the accumulated RMS action: ξ = ∫(H²[v])^{1/2} dt. • The phasal effort is the accumulated Hamiltonian action: φ = ∫H[v] dt. • The effective work is the accumulated operation angle: θ = ∫∆H[v] dt. • They are related, increment by increment, by the Pythagorean relation dφ² + dθ² = dξ². [Figure: a right triangle with legs p.e. (phasal effort) and e.w. (effective work) and hypotenuse v.e. (virtual effort).] M. Frank, "Physical Limits of Computing"

  47. Next: Dealing with Locality • In real physical systems, arbitrary Hamiltonians that would take us directly between any two states are not available! • Instead, physical Hamiltonians are constrained to be local. • Composed by summing up terms for the interactions between neighboring subsystems. • This is due to the fact that field-theory Lagrangians are given by integrating a Lagrangian density function over space • Or, integrating total Lagrangian action over spacetime • We would like to see how and whether our concepts such as • “effective computational work” (accumulated operation angle) θ • “amount of phasal computation” (accumulated Hamiltonian action) φ • “total computational effort” (accumulated RMS action) ξ • can be applied to these more restricted kinds of situations. • Eventually, we’d also like to understand how the computational interpretation of energy in local physics relates to relativistic effects such as time dilation, mass expansion, etc. M. Frank, "Physical Limits of Computing"

  48. Some terminology we’ll need… • A transformation is any unitary U which can be applied to the state of a quantum system. • A transformation is effective with respect to a given basis B if the basis states aren’t all eigenstates of U. • Thus, U rotates at least some of the states in B to other states that are not identical. • A local operation is a transformation applied only to a spatially compact, closed subsystem. • Keep in mind, any region could be considered closed under sufficiently short time intervals. • Thus the overall U of any local theory can be approached arbitrarily closely by compositions of local operations only. • A transformation trajectory is any decomposition of a larger transformation into a sequence of local operations. • Approximate, but exact in the limit as time-per-op → 0. M. Frank, "Physical Limits of Computing"

  49. Example: Discrete Schrödinger’s Equation • We can discretize space to an (undirected) graph of “neighboring” locations…. • Then, there is a discrete set of distinguishable states or configurations (in the position basis) consisting of a function from these locations to the number of quanta of each distinct type at that location: • For Fermions such as electrons: • For each distinct polarization state: 0 or 1 at each location. • For Bosons such as photons: • For each distinct polarization state: 0 or more at each location. • We can then derive an undirected graph of neighboring configurations. • Each link corresponds to the transfer of 1 quantum between two neighboring locations, • Or (optionally) the creation/annihilation of 1 quantum at a location. • The system’s Hamiltonian then takes the following form: • For each configuration c, there is a term H_c • Gives the “potential” energy (incl. total particle rest masses?) of that configuration. • For each link between two neighboring configurations c,d corresponding to the motion of a quantum with rest mass m, include a term H_cd: • Comes from the kinetic energy term in Schrödinger’s eq. • For each link corresponding to particle creation/annihilation, include a term based on fundamental coupling constants • Corresponding to strengths of fundamental forces. [Figure: a location graph, and the corresponding 1-fermion configuration graph with configurations 1000, 0100, 0010, 0001 linked by single-quantum transfers c↔d.] M. Frank, "Physical Limits of Computing"
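For the simplest case, a single quantum hopping on a location graph, the configurations are just the locations, so the Hamiltonian is a matrix with the H_c terms on the diagonal and one off-diagonal H_cd term per graph edge. A minimal sketch (ℏ = 1; the ring graph, potential values, and hopping strength are illustrative choices, not from the slides):

    import numpy as np
    from scipy.linalg import expm

    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # a 4-location ring graph
    potential = np.array([0.0, 0.5, 0.0, 0.5])   # H_c terms (illustrative)
    t_hop = 1.0                                  # H_cd kinetic coupling (illustrative)

    H = np.diag(potential)
    for i, j in edges:
        H[i, j] = H[j, i] = -t_hop               # one term per configuration link

    psi0 = np.zeros(n, dtype=complex)
    psi0[0] = 1.0                                # quantum starts at location 0
    psi = expm(-1j * H * 2.0) @ psi0             # evolve for time t = 2
    print(np.abs(psi)**2)                        # probability spread over locations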

  50. Operation Angles of Short Unitaries • For any short, local operation U = e^{iHδt} → 1+iHδt, • Where “short” here means close to the identity matrix, • Define U’s operation angle θ_U as the following: θ_U = max_v arcsin(Inf(v,Uv)) = max_v arccos(Fid(v,Uv)) • Where v ranges over all normalized (|v|=1) states of the local subsystem in question. • Or over a subset V of these that are considered “accessible.” • In other words, consider each possible “input” state v. • After transformation by U, it rotates to Uv. • The inner product with the original v is v†Uv. • The magnitude of the inner product (fidelity) is given by the cosine of the angle between the original (v) and final (Uv) vectors. • Just as with dot products between real-valued vectors. • We focus on the angle required to yield that fidelity. • Maximizing over the possible v’s gives us a definition of the operation angle of U that is independent of the actual state v. • The minimum would not be useful because it is always 0. • Since the eigenstates of U do not change in magnitude. M. Frank, "Physical Limits of Computing"
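For a 2-d subsystem this maximum can be estimated by direct sampling, and it should come out to half the spread between U's eigenvalue phases, achieved by an equal superposition of U's eigenvectors. A sketch (the step size 0.01 is illustrative):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (M + M.conj().T) / 2
    U = expm(1j * H * 0.01)                     # a short unitary, close to identity

    v = rng.normal(size=(2, 50000)) + 1j * rng.normal(size=(2, 50000))
    v /= np.linalg.norm(v, axis=0)              # random normalized input states
    fid = np.abs(np.sum(np.conj(v) * (U @ v), axis=0))
    theta_U = np.arccos(min(1.0, fid.min()))    # max_v arccos(Fid(v, Uv))

    p = np.angle(np.linalg.eigvals(U))
    print(theta_U, abs(p[0] - p[1]) / 2)        # agree to sampling accuracy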
