
Raptor Codes





Presentation Transcript


  1. Amin Shokrollahi EPFL Raptor Codes

  2. BEC(p1) BEC(p2) BEC(p3) BEC(p4) BEC(p5) BEC(p6) Communication on Multiple Unknown Channels

  3. Example: Popular Download

  4. Example: Peer-to-Peer

  5. Example: Satellite

  6. The erasure probabilities are unknown. Want to come arbitrarily close to capacity on each of the erasure channels, with minimum amount of feedback. Traditional codes don’t work in this setting since their rate is fixed. Need codes that can adapt automatically to the erasure rate of the channel.

  7. Fountain Codes Sender sends a potentially limitless stream of encoded bits. Receivers collect bits until they are reasonably sure that they can recover the content from the received bits, and send STOP feedback to the sender. Automatic adaptation: receivers with a larger loss rate need longer to receive the required information. We want each receiver to be able to recover the content from the minimum possible amount of received data, and to do so efficiently.
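The receiver-side recovery described above can be sketched as the classical peeling (BP) erasure decoder. This Python sketch is illustrative and not taken from the slides: symbol values are modeled as small integers, and each received symbol is assumed to carry its set of input-symbol indices, as a header would.

```python
def peel_decode(k, received):
    """Peeling (BP) erasure decoder for fountain codes over the BEC.
    `received` is a list of (input-index set, XOR value) pairs; returns
    the k recovered input symbols, or None if decoding stalls."""
    symbols = [None] * k
    eqs = [(set(idx), val) for idx, val in received]
    progress = True
    while progress:
        progress = False
        for n, (idx, val) in enumerate(eqs):
            # substitute already-recovered symbols into this equation
            for j in [j for j in idx if symbols[j] is not None]:
                idx.discard(j)
                val ^= symbols[j]
            eqs[n] = (idx, val)
            if len(idx) == 1:  # a degree-1 equation releases a symbol
                j = next(iter(idx))
                if symbols[j] is None:
                    symbols[j] = val
                    progress = True
    return symbols if all(s is not None for s in symbols) else None

# four input symbols, four received equations: decoding succeeds
recovered = peel_decode(4, [({0}, 3), ({0, 1}, 6), ({1, 2}, 2), ({2, 3}, 14)])
# recovered == [3, 5, 7, 9]
```

The decoder stops as soon as no degree-1 equation remains; a receiver in the fountain setting would then simply keep collecting output symbols and retry.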

  8. Universality and Efficiency [Universality] We want sequences of Fountain Codes for which the overhead is arbitrarily small. [Efficiency] We want per-symbol encoding to run in close to constant time, and decoding to run in time linear in the number of output symbols.

  9. LT-Codes Invented by Michael Luby in 1998. The first class of universal and almost efficient Fountain Codes. The output distribution has a very simple form. Encoding and decoding are very simple.

  10. LT-Codes LT-codes use a restricted distribution on F_2^k: fix a distribution (Ω_1, ..., Ω_k) on {1, ..., k}; the distribution on F_2^k is then given by Ω(v) = Ω_w / C(k, w), where w is the Hamming weight of v. The parameters of the code are (k, Ω).
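The formula Ω(v) = Ω_w / C(k, w) can be evaluated directly; the sketch below is illustrative (the function name `lt_prob` and the toy weight distribution `omega` are my own, not from the slides).

```python
from math import comb

def lt_prob(v, omega):
    """Probability of output vector v (a 0/1 tuple of length k) under the
    LT distribution: Omega(v) = Omega_w / C(k, w), with w = Hamming weight.
    `omega` maps a weight w to Omega_w; unlisted weights get probability 0."""
    k, w = len(v), sum(v)
    return omega.get(w, 0.0) / comb(k, w) if w > 0 else 0.0

# the C(k, w) vectors of weight w share Omega_w uniformly, so a toy
# distribution on weights {1, 2} spreads over all such vectors for k = 3
omega = {1: 0.2, 2: 0.8}
total = sum(lt_prob(v, omega)
            for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1),
                      (1, 1, 0), (1, 0, 1), (0, 1, 1)])
# total == 1.0 (up to floating-point rounding)
```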

  11. The LT Coding Process Choose a weight w according to the weight table (weight 1: 0.055, weight 2: 0.3, weight 3: 0.1, weight 4: 0.08, ..., weight 100000: 0.0004); choose w random input symbols (here, 2); XOR them; insert a header, and send.
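One encoding step of this process can be sketched as follows; this is a minimal illustration, not the slides' own code, and the truncated weight table is the partial one shown on the slide (`random.choices` renormalizes the probabilities).

```python
import random

def lt_encode_symbol(inputs, weight_table, rng):
    """One LT encoding step: draw a weight w from the weight table, pick w
    distinct input symbols uniformly at random, and XOR them. Returns the
    chosen indices and the encoded value; a real sender would put the
    indices (or the PRNG seed that generates them) into the header."""
    ws, ps = zip(*weight_table.items())
    w = rng.choices(ws, weights=ps)[0]  # choices() normalizes the weights
    idx = rng.sample(range(len(inputs)), w)
    val = 0
    for j in idx:
        val ^= inputs[j]
    return idx, val

table = {1: 0.055, 2: 0.3, 3: 0.1, 4: 0.08}  # truncated table from the slide
inputs = [3, 5, 7, 9, 11]
idx, val = lt_encode_symbol(inputs, table, random.Random(0))
```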

  12. Raptor Codes The input symbols are first encoded with a traditional pre-code; the resulting intermediate symbols are then encoded with a light LT-code. The LT-code only has to recover all but a d-fraction of the intermediate symbols; the pre-code corrects the remaining d-fraction of erasures.
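The two-stage pipeline can be sketched as below. The pre-code here is a deliberately crude stand-in (one XOR parity per block), labeled as such in the code; a real Raptor code uses a high-rate erasure code, not this toy.

```python
import random

def toy_precode(inputs, block=4):
    """Toy stand-in for the pre-code: append one XOR parity symbol per
    block of `block` input symbols. A real Raptor code uses a high-rate
    erasure code (e.g. LDPC-style), not this toy parity code."""
    out = list(inputs)
    for i in range(0, len(inputs), block):
        parity = 0
        for s in inputs[i:i + block]:
            parity ^= s
        out.append(parity)
    return out

def raptor_encode_symbol(inputs, weight_table, rng):
    """Raptor encoding: pre-code first, then one (light) LT step on the
    intermediate symbols."""
    inter = toy_precode(inputs)
    ws, ps = zip(*weight_table.items())
    w = rng.choices(ws, weights=ps)[0]
    idx = rng.sample(range(len(inter)), w)
    val = 0
    for j in idx:
        val ^= inter[j]
    return idx, val

# block [1, 2, 3, 4] gains the parity symbol 1 ^ 2 ^ 3 ^ 4 == 4
intermediate = toy_precode([1, 2, 3, 4])
# intermediate == [1, 2, 3, 4, 4]
```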

  13. Further Content How do Raptor Codes designed for the BEC perform on other symmetric channels (with BP decoding)? Information-theoretic bounds, and the fraction of nodes of degrees one and two in capacity-achieving Raptor Codes. Some examples. Applications.

  14. A Raptor Code with parameters (k, C, Ω(x)) on a given channel has overhead ε if decoding is possible from (1 + ε)k output symbols. Measure the residual error probability as a function of the overhead for a given channel.

  15. Incremental Redundancy Codes Raptor codes are true incremental redundancy codes. A sender can generate as many output bits as necessary for successful decoding. Suitably designed Raptor codes are close to the Shannon capacity for a variety of channel conditions (from very good to rather bad). Raptor codes are competitive with the best LDPC codes under different channel conditions.

  16. Sequences Designed for the BEC Pre-code type: left-regular of degree 4, right Poisson, rate 0.98. Simulations done on AWGN(σ) for various σ.

  17. Sequences Designed for the BEC [Table: best designs (so far) compared with turbo codes, listing decoding thresholds and normalized SNR Eb/N0.]

  18. Sequences Designed for the BEC Not too bad, but quality decreases when the amount of noise on the channel increases. Need to design better distributions. Idea: adapt the Gaussian approximation technique of Chung, Richardson, and Urbanke.

  19. Gaussian Approximation Assume that the messages sent from input to output nodes are Gaussian. Track the mean of these Gaussians from one round to another. Degree distributions can be designed using this approximation. However, they don’t perform that well. Anything else we can do with this?

  20. Nodes of Degree 2 Use equality in the capacity condition, differentiate, and compare values at 0. It can be rigorously proved that the resulting condition is necessary for the error probability of BP to converge to zero.

  21. Nodes of Degree 2 Use the graph induced on the input symbols by the output symbols of degree 2.

  22. Nodes of Degree 2: BEC

  23. Nodes of Degree 2: BEC A new output node of degree 2 whose neighbors already lie in the same component adds no new information: information loss!

  24. Fraction of Nodes of Degree 2 If there exists a component of linear size (i.e., a giant component), then the next output node of degree 2 has constant probability of being useless. Therefore, the graph should not have a giant component. This means that capacity-achieving degree distributions must have a degree-2 fraction Ω_2 of at most 1/2 asymptotically. On the other hand, if Ω_2 is smaller than 1/2, then the algorithm cannot start successfully. So, for capacity-achieving codes: Ω_2 tends to 1/2.
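The giant-component criterion used here is the classical random-graph threshold: with m random edges on k nodes, a linear-size component appears once the average degree 2m/k exceeds 1. A minimal union-find sketch (illustrative names and parameters, not from the slides):

```python
import random

def largest_component(k, edges):
    """Size of the largest connected component of a graph on k nodes,
    via union-find. The nodes model input symbols; each received output
    symbol of degree 2 contributes one edge."""
    parent = list(range(k))
    size = [1] * k
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:  # union by size
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
    return max(size[find(i)] for i in range(k))

rng = random.Random(1)
k = 10000
# average degree 0.6 (below the threshold 1): only small components
sub = [(rng.randrange(k), rng.randrange(k)) for _ in range(3000)]
# average degree 1.5 (above the threshold): a giant component emerges
sup = [(rng.randrange(k), rng.randrange(k)) for _ in range(7500)]
```

Running `largest_component` on `sub` gives a component of only a few dozen nodes, while on `sup` it covers a constant fraction of all k nodes, matching the threshold behavior the slide relies on.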

  25. The q-ary symmetric channel (large q) Double verification decoding (Luby-Mitzenmacher): if two output symbols sharing an input symbol are correct, then they verify that input symbol. Remove all of them from the graph and continue. It can be shown that the number of correct output symbols needs to be at least a constant times the number of input symbols. (Joint work with Karp and Luby.)

  26. The q-ary symmetric channel (large q) More sophisticated algorithms: use the induced graph! If two input symbols are connected by a correct output symbol, and each of them is connected to a correct output symbol of degree one, then the input symbols are verified. Remove them from the graph.

  27. The q-ary symmetric channel (large q) More sophisticated algorithms: use the induced graph! More generally: if there is a path consisting of correct edges, and the two terminal nodes are connected to correct output symbols of degree one, then the input symbols on the path get verified. (More complex algorithms.)

  28. The q-ary symmetric channel (large q) Limiting case: a giant component consisting of correct edges, with two correct output symbols of degree one that “poke” the component. So, the ideal distribution “achieves” capacity.

  29. Nodes of Degree 2 What is the fraction of nodes of degree 2 for capacity-achieving Raptor Codes? It turns out that, in the limit, the necessary condition on the degree-2 fraction must hold with equality; the condition is expressed in terms of Z, the LLR of the channel.

  30. General Symmetric Channels: Mimic Proof The proof is information theoretic: if the fraction of nodes of degree 2 is larger by a constant, then: • the expectation of the hyperbolic tangent of the messages passed from input to output symbols at a given round of BP is larger than a constant; • this bounds the achievable rate away from capacity; • so the code cannot achieve capacity.

  31. General Symmetric Channels: Mimic Proof Output nodes of degree one are noisy observations of the input symbols they are connected to. Counting these observations yields the fraction of nodes of degree one for capacity-achieving Raptor Codes.

  32. A Better Gaussian Approximation Uses an adaptation of a method of Ardakani and Kschischang. Heuristic: messages passed from input to output symbols are Gaussian, but not vice versa. Find a recursion for the means of these Gaussians, and apply linear programming.

  33. Raptor Codes for any Channel From the formula for the fraction of nodes of degree 2 we know that it is not possible to design truly universal Raptor codes. However, how does a Raptor code designed for one binary-input memoryless symmetric channel perform on some other channel of this type? The question has been answered (joint work with O. Etesami): if a code achieves capacity on the BEC, then its overhead on any binary-input memoryless symmetric channel is not larger than log2(e) - 1 = 0.4427…

  34. Conclusions For LT- and Raptor codes, some decoding algorithms can be phrased directly in terms of subgraphs of the graphs induced by output symbols of degree 2. This leads to a simpler analysis without the use of the tree assumption. For the BEC and for the q-ary symmetric channel (large q), we obtain essentially the same limiting capacity-achieving degree distribution, using the giant-component analysis. An information-theoretic analysis gives the optimal fraction of output nodes of degree 2 for general memoryless symmetric channels. A graph analysis reveals very good degree distributions, which perform very well experimentally.
