
Degradability and the quantum channel capacity


Presentation Transcript


  1. Degradability and the quantum channel capacity Graeme Smith (IBM TJ Watson Research Center) March 21, 2010

  2. Noisy Channel [Diagram: input X enters the noisy channel N, described by the conditional distribution p(y|x), producing output Y]

  3. Noisy Channel Coding [Diagram: the encoder maps a message m to inputs X1(m), …, Xn(m); each passes through an independent copy of N, giving outputs Y1(m), …, Yn(m); the decoder outputs m′ ≈ m]

  4. Noisy Channel Capacity [Diagram: X → N, p(y|x) → Y] Capacity: bits per channel use, in the limit of many channel uses.

  5. Noisy Channel Capacity [Diagram: X → N, p(y|x) → Y] Capacity: bits per channel use, in the limit of many channel uses. C = max_X I(X;Y), where I(X;Y) is the mutual information. C is additive: C(W1 × W2) = C(W1) + C(W2).
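To make the maximization concrete, here is a minimal numerical sketch (not from the slides) that computes C = max_X I(X;Y) with the Blahut-Arimoto algorithm; the binary symmetric channel and its flip probability eps are chosen purely for illustration:

```python
import numpy as np

def blahut_arimoto(W, iters=300):
    """Classical capacity C = max_p I(X;Y) of a channel W[y, x] = p(y|x)."""
    nx = W.shape[1]
    p = np.full(nx, 1.0 / nx)               # start from the uniform input

    def divergences(p):
        q = W @ p                            # induced output distribution q(y)
        ratio = np.where(W > 0, W / q[:, None], 1.0)
        return np.sum(W * np.log(ratio), axis=0)   # D(W(.|x) || q), in nats

    for _ in range(iters):
        p = p * np.exp(divergences(p))       # multiplicative Blahut-Arimoto update
        p /= p.sum()
    return float(np.sum(p * divergences(p)) / np.log(2))   # capacity in bits

eps = 0.11                                   # illustrative flip probability
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])               # binary symmetric channel, W[y, x]
print(blahut_arimoto(W))                     # ~ 1 - H(0.11) ~ 0.50
```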

  6. Quantum Capacity [Diagram: encoder E, n parallel uses of N, decoder D] • Want to encode qubits so they can be recovered after experiencing noise. • Quantum capacity is the maximum rate, in qubits per channel use, at which this can be done. • We’d like a formula for Q(N) in terms of N.

  7. Quantum Capacity: What we know [Diagram: isometric extension U of N, with input state φ and ancilla |0⟩, outputs B and E] • Coherent Information: Q1(N) = max_φ S(B) − S(E) • Q(N) ≥ Q1(N) (Lloyd-Shor-Devetak) • Q(N) = lim_{n→∞} (1/n) Q1(N⊗…⊗N) • Q(N) can exceed Q1(N) (DiVincenzo-Shor-Smolin ’98)
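As a concrete illustration (not from the slides), S(B) − S(E) can be evaluated directly from a channel’s Kraus operators, using ρ_B = Σ_i K_i ρ K_i† and (ρ_E)_ij = Tr[K_i ρ K_j†]; maximizing over input states ρ would give Q1. The amplitude damping example at the end is illustrative:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def coherent_info(kraus, rho):
    """S(B) - S(E) for one input rho: rho_B = sum_i K_i rho K_i^dag,
       (rho_E)_{ij} = Tr[K_i rho K_j^dag]."""
    rho_B = sum(K @ rho @ K.conj().T for K in kraus)
    rho_E = np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus]
                      for Ki in kraus])
    return entropy(rho_B) - entropy(rho_E)

# Illustrative: amplitude damping with gamma = 0.2, maximally mixed input
g = 0.2
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.array([[0, np.sqrt(g)], [0, 0]])
print(coherent_info([K0, K1], np.eye(2) / 2))   # positive for this channel
```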

  8. Outline • Degradable channels: Definition and Examples • Strong Subadditivity and monotonicity of mutual information • Additivity of coherent information for degradable channels • Capacities of some channels • Relation to noisy storage cryptography

  9. Degradable Channels (Devetak and Shor ’03) [Diagram: isometric extension U maps input A, with ancilla |0⟩, to outputs B (Bob) and E (environment)]

  10. Degradable Channels [Diagram: tracing out E from the isometric extension U leaves the channel N: A → B, i.e., N(ρ) = Tr_E UρU†]

  11. Degradable Channels [Diagram: isometric extension U with input A and outputs B and E]

  12. Degradable Channels: “Complementary Channel” [Diagram: tracing out B instead leaves the complementary channel N^c: A → E, i.e., N^c(ρ) = Tr_B UρU†]

  13. Degradable Channels: Bob can simulate Eve [Diagram: U with input A and outputs B and E]

  14. Degradable Channels: Bob can simulate Eve [Diagram: U with input A and outputs B and E] • N is degradable if N^c is “noisier” than N: Eve’s output can be obtained by further processing Bob’s.

  15. Degradable Channels: Bob can simulate Eve [Diagram: U with input A and outputs B and E] • N is degradable if N^c is noisier than N: N^c = D ∘ N for some “degrading channel” D.

  16. Degradable Channels: Bob can simulate Eve [Diagram: applying the degrading channel D to Bob’s output B reproduces the environment output E′ ≈ E] • N is degradable if N^c = D ∘ N for some “degrading channel” D.
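The picture on slides 9-16 translates directly into code. A sketch (conventions mine): build the Stinespring isometry V from the Kraus operators, apply it, then trace out E to get N or trace out B to get N^c:

```python
import numpy as np

def isometric_extension(kraus):
    """Stinespring isometry V|psi> = sum_e (K_e|psi>) tensor |e>_E (ordering B x E)."""
    dB, dA = kraus[0].shape
    dE = len(kraus)
    V = np.zeros((dB * dE, dA), dtype=complex)
    for e, K in enumerate(kraus):
        V[e::dE, :] = K                     # fills rows (b, e) for b = 0..dB-1
    return V

def channel_and_complement(kraus, rho):
    """Apply U rho U^dag, then trace out E (giving N) or B (giving N^c)."""
    dB, dE = kraus[0].shape[0], len(kraus)
    V = isometric_extension(kraus)
    T = (V @ rho @ V.conj().T).reshape(dB, dE, dB, dE)
    return np.einsum('iaja->ij', T), np.einsum('iaib->ab', T)

# Illustrative: amplitude damping, gamma = 0.3
g = 0.3
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
         np.array([[0, np.sqrt(g)], [0, 0]])]
rho = np.array([[0.6, 0.2], [0.2, 0.4]])
rho_B, rho_E = channel_and_complement(kraus, rho)
print(np.trace(rho_B).real, np.trace(rho_E).real)   # both 1.0
```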

  17. Degradable Channels: Erasure Channel • Erases with probability p, outputting an orthogonal erasure flag |e⟩: N_p(ρ) = (1 − p) ρ + p |e⟩⟨e|.

  18. Degradable Channels: Erasure Channel • Erases with probability p: N_p(ρ) = (1 − p) ρ + p |e⟩⟨e|. • Environment gets the state when it is erased: N_p^c(ρ) = p ρ + (1 − p) |e⟩⟨e| = N_{1−p}(ρ).

  19. Degradable Channels: Erasure Channel • Erases with probability p: N_p(ρ) = (1 − p) ρ + p |e⟩⟨e|. • Environment gets the state when it is erased: N_p^c = N_{1−p}. • As long as p ≤ ½, we can degrade: the degrading map D just throws away the state with probability q = (1 − 2p)/(1 − p), turning N_p into N_{1−p}.
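A quick numerical check of this degrading map (not from the slides): the degrading erasure probability q = (1 − 2p)/(1 − p) comes from matching the overall keep probability (1 − p)(1 − q) to p, and the flag |e⟩ is modeled as a third level:

```python
import numpy as np

def erasure(rho2, p):
    """Qubit erasure: keep rho with prob 1-p, else output the flag |e> = |2>."""
    out = np.zeros((3, 3), dtype=complex)
    out[:2, :2] = (1 - p) * rho2
    out[2, 2] = p * np.trace(rho2)
    return out

def erase_again(rho3, q):
    """Degrading map D: a further erasure on the (qubit + flag) output space."""
    out = (1 - q) * rho3
    out[2, 2] += q * np.trace(rho3)
    return out

p = 0.3                            # illustrative, p <= 1/2
q = (1 - 2 * p) / (1 - p)          # degrading erasure probability
rho = np.array([[0.7, 0.3j], [-0.3j, 0.3]])
lhs = erase_again(erasure(rho, p), q)    # D composed with N_p
rhs = erasure(rho, 1 - p)                # complement N_p^c = N_{1-p}
print(np.allclose(lhs, rhs))             # True
```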

  20. Degradable Channels: Amplitude Damping • Acts like K0 = |0⟩⟨0| + √(1−γ)|1⟩⟨1| and K1 = √γ |0⟩⟨1|. Models relaxation from the excited to the ground state. • The complementary channel is another amplitude damping channel with γ replaced by 1 − γ (up to a unitary). • So, if γ ≤ ½, we can just damp more and simulate the complementary channel.
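The “damp more” step can be checked numerically. With the Kraus convention below, dampings compose by multiplying keep probabilities, (1 − γ_tot) = (1 − γ)(1 − γ′), so γ′ = (1 − 2γ)/(1 − γ) turns N_γ into N_{1−γ}, which in this basis coincides with the complementary channel (a sketch under my conventions):

```python
import numpy as np

def ad_kraus(g):
    """Kraus operators of the amplitude damping channel N_gamma."""
    return [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
            np.array([[0, np.sqrt(g)], [0, 0]])]

def apply(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def complement(kraus, rho):
    """(N^c(rho))_{ij} = Tr[K_i rho K_j^dag]."""
    return np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus]
                     for Ki in kraus])

g = 0.2                            # illustrative, gamma <= 1/2
gp = (1 - 2 * g) / (1 - g)         # extra damping: (1-g)(1-gp) = g
rho = np.array([[0.55, 0.2 - 0.1j], [0.2 + 0.1j, 0.45]])
lhs = apply(ad_kraus(gp), apply(ad_kraus(g), rho))   # damp more: D after N_g
rhs = complement(ad_kraus(g), rho)                   # N_g^c = N_{1-g} in this basis
print(np.allclose(lhs, rhs))       # True (no extra unitary needed here)
```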

  21. Degradable Channels: A few more • Half of the qubit channels with two Kraus operators (the other half are reverse-degradable) • Pure-loss bosonic Gaussian channel • Channels whose complement is entanglement breaking (“Hadamard channels”)

  22. Monotonicity of Mutual Information and Strong Subadditivity • Mutual Information: I(A;B) = S(A) + S(B) − S(AB) [Diagram: three systems A, B, C; I(A;BC)]

  23. Monotonicity of Mutual Information and Strong Subadditivity • Mutual Information: I(A;B) = S(A) + S(B) − S(AB) [Diagram: A, B, C; I(A;BC)] What happens if we trace out C?

  24. Monotonicity of Mutual Information and Strong Subadditivity • Mutual Information: I(A;B) = S(A) + S(B) − S(AB) • Monotonicity: I(A;BC) ≥ I(A;B) [Diagram: A, B; I(A;B)] What happens if we trace out C?

  25. Monotonicity of Mutual Information and Strong Subadditivity • Mutual Information: I(A;B) = S(A) + S(B) − S(AB) • Monotonicity: I(A;BC) ≥ I(A;B) • Equivalently: S(BC) + S(AB) ≥ S(ABC) + S(B) [Diagram: A, B; I(A;B)] What happens if we trace out C?

  26. Monotonicity of Mutual Information and Strong Subadditivity • Mutual Information: I(A;B) = S(A) + S(B) − S(AB) • Monotonicity: I(A;BC) ≥ I(A;B) • Equivalently: S(BC) + S(AB) ≥ S(ABC) + S(B) • This tells us mutual information can only decrease under local processing [Diagram: A, B; I(A;B)]
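Monotonicity is also easy to sanity-check numerically (not from the slides): generate random three-qubit states and compare I(A;BC) with I(A;B). The helpers below are illustrative:

```python
import numpy as np

def entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def partial_trace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep` (ascending indices)."""
    n = len(dims)
    T = rho.reshape(dims + dims)
    cur = n
    for s in sorted(set(range(n)) - set(keep), reverse=True):
        T = np.trace(T, axis1=s, axis2=s + cur)
        cur -= 1
    d = int(np.prod([dims[k] for k in keep]))
    return T.reshape(d, d)

def random_density(d, rng):
    """Random mixed state via a Ginibre matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(7)
dims = [2, 2, 2]                                   # qubits A, B, C
for _ in range(100):
    rho = random_density(8, rng)
    S = lambda keep: entropy(partial_trace(rho, dims, keep))
    I_ABC = S([0]) + S([1, 2]) - entropy(rho)      # I(A;BC)
    I_AB = S([0]) + S([1]) - S([0, 1])             # I(A;B)
    assert I_ABC >= I_AB - 1e-9                    # monotonicity holds
print("I(A;BC) >= I(A;B) held on all 100 samples")
```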

  27. Additivity of Coherent Information for degradable channels [Diagram: isometric extension U with input φ and outputs B, E] • The capacity is given by Q(N) = lim_{n→∞} (1/n) Q1(N⊗…⊗N) • If we could show Q1(N⊗N) ≤ 2 Q1(N) for any degradable channel, we’d have Q(N) = Q1(N) for degradable channels. Recall that Q1 = max_φ S(B) − S(E), where φ is a mixed state on the input.

  28. Additivity of Coherent Information for degradable channels • Say we have ρ12, a mixed state on A1A2, the input of N⊗N, with Q1(N⊗N) = S(B1B2) − S(E1E2) (entropies evaluated on the state of B1B2E1E2 given by (U⊗U) ρ12 (U†⊗U†), where U is the isometric extension of N). • This gives us two input states to try on a single use of the channel: ρ1 = Tr2 ρ12 and ρ2 = Tr1 ρ12. Let’s evaluate how much coherent information the two of these give us. • ρ1 gives us S(B1) − S(E1), evaluated on the state U ρ1 U† of B1E1, which equals Tr_{B2E2} of the state of B1B2E1E2; similarly, ρ2 gives S(B2) − S(E2). Note that these entropies are evaluated on (marginals of) the same state of B1B2E1E2! • Since S(B1) − S(E1) ≤ Q1(N), and similarly for ρ2, now we just have to show that S(B1B2) − S(E1E2) ≤ S(B1) − S(E1) + S(B2) − S(E2). • This is equivalent to showing S(E1) + S(E2) − S(E1E2) = I(E1;E2) ≤ I(B1;B2) = S(B1) + S(B2) − S(B1B2). • We can degrade B1 to E1, and B2 to E2, so this follows by monotonicity of mutual information!
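The same argument as one chain of (in)equalities, transcribed from the bullets above:

```latex
\begin{align*}
Q_1(\mathcal{N}\otimes\mathcal{N})
  &= S(B_1B_2) - S(E_1E_2)\\
  &= \bigl[S(B_1) + S(B_2) - I(B_1;B_2)\bigr]
   - \bigl[S(E_1) + S(E_2) - I(E_1;E_2)\bigr]\\
  &\le S(B_1) - S(E_1) + S(B_2) - S(E_2)\\
  &\le 2\,Q_1(\mathcal{N}).
\end{align*}
```

The inequality in the third line is the point of the previous slides: each B_i can be degraded to E_i, so I(E1;E2) ≤ I(B1;B2) by monotonicity under local processing.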

  29. The Capacity of Some Channels • Erasure channel: Q = 1 − 2p for p ≤ ½, and zero above. [Plot: Q versus erasure probability p, falling linearly from 1 at p = 0 to 0 at p = ½]

  30. The Capacity of Some Channels • Amplitude Damping Channel • Q = max_t [H((1−γ)t) − H(γt)], where H is the binary entropy. [Plot: Q versus γ, falling from 1 at γ = 0 to 0 at γ = ½]
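A small sketch (not from the slides) that evaluates this one-parameter maximization numerically; H is the binary entropy, and the sample values of γ are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def H(x):
    """Binary entropy in bits."""
    if x <= 0 or x >= 1:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def Q_amp_damp(g):
    """Q = max_t [H((1-g) t) - H(g t)] for amplitude damping, g <= 1/2."""
    res = minimize_scalar(lambda t: -(H((1 - g) * t) - H(g * t)),
                          bounds=(0.0, 1.0), method='bounded')
    return -res.fun

for g in (0.0, 0.1, 0.25, 0.5):
    print(g, round(Q_amp_damp(g), 4))    # Q(0) = 1, Q(1/2) = 0
```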

  31. Connections to noisy storage • Recall from Stephanie’s talk that for some special noise models, we can get secure OT as long as C_N·ν ≤ ½, where C_N is the classical capacity of N and ν is the number of uses of N per qubit sent in the protocol. • It’s pretty weird that the classical capacity shows up here, for a couple of reasons.

  32. Connections to noisy storage • First, probably the most natural attack for a dishonest Bob would be to use his noisy storage together with error correction to store some qubits. After Alice reveals the basis information, he can just measure them. At least for this attack, the quantum capacity is more relevant… • Plus, think about a completely dephasing channel: it certainly isn’t any help to a dishonest Bob, but it has C_N = 1, which makes the condition C_N·ν ≤ ½ hard to satisfy. I don’t care how much classical memory you’ve got, it’s not going to help.

  33. Challenges • Probably the easiest memory model to understand is the erasure channel. It might even be relevant! The simplest attack for Bob is just to store the states Alice sends him and measure after she reveals. When p < ½, this seems to help… • If the quantum capacity is the right thing, it seems we’ll need a strong converse to the quantum capacity (i.e., fidelity vanishing exponentially with channel uses if we try to code above capacity), and we don’t have this, even for degradable channels.

  34. Feynman Reference “If you can’t get the 2’s right, you don’t know nothing!”

  35. Feynman Reference “If you can’t get the ___ right, you don’t know nothing!” [The blank on the slide is filled by the dephasing and erasure channels]

  36. Summary • Capacities tell us the ultimate limits for error correction over a noisy quantum channel. The quantum capacity tells us about quantum error correcting codes. • For some special channels (e.g., degradable ones) we can actually show that the regularized formula “single-letterizes”: the capacity has a simple formula. • Some pretty interesting channels are actually degradable: erasure, amplitude damping, pure-loss Gaussian bosonic. • The quantum capacity of a channel is related to a simple attack on noisy-storage cryptography, but we don’t know if this attack is optimal. Analyzing the erasure channel is probably a good step, but we’d still need a strong converse.

  37. How to check if a channel is degradable • Let’s say we have a channel N with complementary channel N^c. How do we know if it’s degradable? • Well, N is a linear map on density matrices, so (when it is invertible as a linear map) it has an inverse, N^{-1}. Generally, this is not a CPTP map. • If there’s a degrading map D, it’ll have to be equal to N^c ∘ N^{-1}, so check whether this is CPTP.
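This recipe can be sketched in code (conventions and the test channel are mine): represent N and N^c as transfer matrices on row-major vectorized density matrices, form the candidate D = N^c ∘ N^{-1} with a pseudo-inverse (since N need not be invertible as a linear map), and test complete positivity via the Choi matrix plus trace preservation:

```python
import numpy as np

def unit(a, b, d):
    """Matrix unit |a><b| in dimension d."""
    E = np.zeros((d, d), dtype=complex); E[a, b] = 1.0; return E

def transfer(channel, d_in):
    """Matrix of a linear map acting on row-major vec(rho)."""
    cols = [channel(unit(a, b, d_in)).reshape(-1)
            for a in range(d_in) for b in range(d_in)]
    return np.array(cols).T

def is_cptp(T, d_in, d_out, tol=1e-9):
    """Check trace preservation and Choi positivity for transfer matrix T."""
    choi = np.zeros((d_in * d_out,) * 2, dtype=complex)
    tp = True
    for a in range(d_in):
        for b in range(d_in):
            out = T[:, a * d_in + b].reshape(d_out, d_out)   # D(|a><b|)
            choi += np.kron(unit(a, b, d_in), out)
            tp = tp and abs(np.trace(out) - (a == b)) < tol
    cp = np.linalg.eigvalsh((choi + choi.conj().T) / 2).min() > -tol
    return bool(tp and cp)

# Illustrative test channel: amplitude damping with gamma = 0.2
g = 0.2
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
         np.array([[0, np.sqrt(g)], [0, 0]])]
N = lambda r: sum(K @ r @ K.conj().T for K in kraus)
Nc = lambda r: np.array([[np.trace(Ki @ r @ Kj.conj().T) for Kj in kraus]
                         for Ki in kraus])
TN, TNc = transfer(N, 2), transfer(Nc, 2)
TD = TNc @ np.linalg.pinv(TN)      # candidate degrading map D = N^c o N^{-1}
print(is_cptp(TD, 2, 2))           # True: this channel is degradable
```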
