
LT Codes

LT Codes. Paper by Michael Luby, FOCS ’02. Presented by Ashish Sabharwal, Feb 26, 2003, CSE 590vg.


Presentation Transcript


  1. LT Codes Paper by Michael Luby FOCS ‘02 Presented by Ashish Sabharwal Feb 26, 2003 CSE 590vg

  2. Binary Erasure Channel • Code distance d ⇒ can decode d−1 erasures • Probabilistic model: bits get erased with prob p • (Shannon) Capacity of BEC = 1 − p. In particular, p > 1/2 is decodable! [Figure: Input 00101 → encode → Codeword 10100101 → BEC (“packet loss”) → Received 10?001?? → decode → Input 00101]

  3. LT Codes: Encoding • Choose degree d from a distribution • Pick d neighbors uniformly at random • Compute XOR [Figure: input bits 1 1 0 1; one code bit of degree d = 2 with value 1 XOR 0 = 1]
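The three encoding steps above can be sketched in code (an illustrative sketch, not the paper's code; the function name, the `degree_dist` mapping of degree to probability, and the toy input are all made up here):

```python
import random

def lt_encode_symbol(input_bits, degree_dist, rng):
    """One LT code bit: sample a degree d from the distribution,
    pick d distinct input positions uniformly at random, XOR them."""
    k = len(input_bits)
    degrees, weights = zip(*degree_dist.items())
    d = rng.choices(degrees, weights=weights)[0]   # choose degree d
    neighbors = rng.sample(range(k), d)            # d distinct neighbors, uniform
    value = 0
    for i in neighbors:
        value ^= input_bits[i]                     # compute XOR
    return value, neighbors

# Toy run mirroring the slide: a degree-2 code bit over 4 input bits.
rng = random.Random(0)
bits = [1, 1, 0, 1]
val, nbrs = lt_encode_symbol(bits, {2: 1.0}, rng)
```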

  4. LT Codes: Encoding [Figure: input bits 1 0 1 1 connected to code bits 1 1 1 1 0 1 1 1 … of the growing codeword]

  5. LT Codes: Decoding • Identify a code bit of remaining degree 1 • Recover the corresponding input bit [Figure: codeword with erased positions (?) connected to partially known input bits]

  6. LT Codes: Decoding • Update the neighbors of this input bit • Delete its edges • Repeat [Figure: recovered value substituted into a neighbor, 1 = 0 XOR 1]

  7. LT Codes: Decoding [Figure: decoding continues on the remaining graph]

  8. LT Codes: Decoding [Figure: last substitution, 0 = 1 XOR 1] Decoding unsuccessful!
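Slides 5–8 walk through the peeling decoder; here is a minimal sketch (a hypothetical helper, not the paper's code) that succeeds when a code bit of remaining degree 1 is always available and reports failure otherwise:

```python
def lt_decode(k, symbols):
    """Peeling decoder.  `symbols` is a list of (value, neighbor_indices)
    pairs.  Repeatedly find a code bit of remaining degree 1, recover its
    input bit, and XOR that bit out of every code bit containing it."""
    symbols = [(v, set(nbrs)) for v, nbrs in symbols]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idx, (v, nbrs) in enumerate(symbols):
            if len(nbrs) == 1:                    # remaining degree 1
                (i,) = nbrs
                if i not in recovered:
                    recovered[i] = v              # recover input bit i
                for j, (w, ns) in enumerate(symbols):
                    if i in ns:                   # update neighbors, delete edges
                        symbols[j] = (w ^ recovered[i], ns - {i})
                progress = True
                break
    return recovered if len(recovered) == k else None  # None = unsuccessful

# x0=1, x1=0, x2=1 encoded as x0, x0^x1, x1^x2:
recovered = lt_decode(3, [(1, [0]), (1, [0, 1]), (1, [1, 2])])  # -> {0: 1, 1: 0, 2: 1}
```

With no degree-1 code bit remaining, the process stalls, matching the "decoding unsuccessful" case on slide 8.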

  9. LT Codes: Features • Binary, efficient • Bits can arrive in any order • Probabilistic model • No preset rate: generate as many or as few code bits as required by the channel • Almost optimal (Reed–Solomon codes are inefficient; Tornado codes are optimal and linear time, but have a fixed rate)

  10. Larger Encoding Alphabet Why? Less overhead. • Partition input into m-bit chunks • Encoding symbol is the bit-wise XOR of its neighbors • We’ll think of these as binary codes

  11. Caveat: Transmitting the Graph • Send degree + list of neighbors, or: • Associate a key with each code bit • Encoder and decoder apply the same function to the key to compute neighbors • Share a random seed for a pseudo-random generator
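The key trick can be sketched as follows (illustrative only; the slide does not fix a particular generator, so seeding Python's `random` with the key stands in for the shared pseudo-random generator):

```python
import random

def neighbors_from_key(key, k, degree_dist):
    """Derive (degree, neighbor set) deterministically from a per-symbol
    key, so encoder and decoder compute the same graph without ever
    transmitting degrees or neighbor lists."""
    rng = random.Random(key)                       # shared seed: the key
    degrees, weights = zip(*degree_dist.items())
    d = rng.choices(degrees, weights=weights)[0]
    return sorted(rng.sample(range(k), d))

# Encoder and decoder agree exactly, given the same key:
a = neighbors_from_key(42, 10, {1: 0.2, 2: 0.8})
b = neighbors_from_key(42, 10, {1: 0.2, 2: 0.8})
```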

  12. Outline • The Goal • All 1’s distribution: Balls and Bins case • LT Process; Probabilistic machinery • Ideal Soliton Distribution • Robust Soliton Distribution

  13. The Goal Construct a degree distribution s.t. • Few encoding bits required for recovery = small t • Few bit operations needed = small sum of degrees = small s

  14. All 1’s distribution: Balls and Bins All encoding degrees are 1: t unerased code bits, k-bit input • t balls thrown into k bins • Pr [can’t recover input] ≤ Pr [some bin is empty] ≤ k · (1 − 1/k)^t ≈ k e^(−t/k) • Pr [failure] ≤ δ guaranteed if t ≥ k ln(k/δ)

  15. All 1’s distribution: Balls and Bins • t = k ln(k/δ): BAD, too much overhead (k + √k ln²(k/δ) suffices) • s = k ln(k/δ): GOOD, optimal!

  16. Why is s = k ln(k/δ) optimal? k-bit input, s edges • s balls thrown into k bins • Some empty bin (uncovered input bit) ⇒ can’t recover input • Pr [some bin is empty] ≤ k · (1 − 1/k)^s ≈ k e^(−s/k) • Pr [failure] ≤ δ if s ≥ k ln(k/δ) NOTE: This line of reasoning is not quite right for a lower bound! Use a coupon-collector type argument.
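Plugging numbers into the bound k·e^(−t/k) ≤ δ shows why this overhead is considered too large (k = 1000 and δ = 0.01 are arbitrary example values, and the helper name is made up):

```python
import math

def required_symbols_all_ones(k, delta):
    """Smallest t with k * exp(-t/k) <= delta, i.e. t = ceil(k ln(k/delta))."""
    return math.ceil(k * math.log(k / delta))

k, delta = 1000, 0.01
t = required_symbols_all_ones(k, delta)
assert k * math.exp(-t / k) <= delta
# t comes out around 11.5 * k: more than 11x overhead just from degree-1 bits.
```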

  17. The LT Process [Figure: code bits c1–c6, input bits a1–a5] STATE: released = { }, covered = { }, processed = { }, ripple = { } ACTION: Init: release c2, c4, c6

  18. The LT Process STATE: released = {c2,c4,c6}, covered = {a1,a3,a5}, processed = { }, ripple = {a1,a3,a5} ACTION: Process a1

  19. The LT Process STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1}, ripple = {a3,a5} ACTION: Process a3

  20. The LT Process STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1,a3}, ripple = {a5} ACTION: Process a5

  21. The LT Process STATE: released = {c2,c4,c6,c1,c5}, covered = {a1,a3,a5,a4}, processed = {a1,a3,a5}, ripple = {a4} ACTION: Process a4

  22. The LT Process STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4}, ripple = {a2} ACTION: Process a2

  23. The LT Process STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4,a2}, ripple = { } ACTION: Success!
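The walk through slides 17–23 can be replayed mechanically. The sketch below is hypothetical: the edge sets are reverse-engineered from the released/covered traces on the slides and are only one graph consistent with them (a1–a5 map to indices 0–4, c1–c6 to list positions 0–5):

```python
def lt_process(k, code_neighbors):
    """Run the LT process.  code_neighbors[p] is the neighbor set of code
    bit c_{p+1}.  A code bit is *released* when exactly one of its
    neighbors is unprocessed; it then *covers* that neighbor, which sits
    in the *ripple* until processed."""
    unprocessed = set(range(k))
    covered, released, processed = set(), set(), []
    neigh = [set(ns) for ns in code_neighbors]

    def release():
        for p, ns in enumerate(neigh):
            if p not in released and len(ns & unprocessed) == 1:
                released.add(p)
                covered.update(ns & unprocessed)   # cover the last neighbor

    release()                                      # init: release degree-1 bits
    while True:
        ripple = covered & unprocessed
        if not ripple:
            break                                  # ripple empty: stop
        a = min(ripple)                            # process one ripple bit per step
        processed.append(a)
        unprocessed.discard(a)
        release()                                  # newly released code bits
    return len(processed) == k                     # True iff all input bits processed

# Graph consistent with the slides: c2, c4, c6 have degree 1.
ok = lt_process(5, [{0, 2}, {0}, {1, 3}, {2}, {3, 4}, {4}])
```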

  24. The LT Process: Properties • Corresponds to decoding • When a code bit cp is released: • the step at which this happens is independent of the other cq’s • the input bit cp covers is independent of the other cq’s

  25. Ripple size • Desired property of the ripple: • Not too large: redundant covering • Not too small: might die prematurely • GOAL: “good” degree distribution • Ripple doesn’t grow or shrink • 1 input bit added per step. Why??

  26. Degree Distributions • Degrees of code bits chosen independently • ρ(d) = Pr [degree = d] • All 1’s distribution: ρ(1) = 1, ρ(d) = 0 for d ≠ 1 • Initial ripple = all input bits: the “All-At-Once distribution”

  27. Machinery: q(d,L), r(d,L), r(L) • L = |unprocessed|, running k, k−1, …, 1 • q(d,L) = Pr [cp is released at L | deg(cp) = d] • r(d,L) = Pr [cp is released at L, deg(cp) = d] = ρ(d) q(d,L) • r(L) = Pr [cp is released at L] = Σd r(d,L) • r(L) controls the ripple size

  28. q(d,L) [Formula slide; the closed-form expression for q(d,L) was not captured in the transcript]

  29. Ideal Soliton Distribution, ρ(.) “Soliton wave”: dispersion balances refraction • ρ(1) = 1/k; ρ(d) = 1/(d(d−1)) for d = 2, …, k • Expected degree = H(k) ≈ ln k • r(L) = 1/k for all L = k, …, 1
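The Ideal Soliton distribution can be checked numerically: the telescoping sum makes it a probability distribution, and the expected degree is exactly the harmonic number H(k) ≈ ln k (the function name and k = 1000 are illustrative):

```python
import math

def ideal_soliton(k):
    """rho(1) = 1/k; rho(d) = 1/(d*(d-1)) for d = 2..k."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

k = 1000
rho = ideal_soliton(k)
total = sum(rho.values())                            # telescopes to exactly 1
expected_degree = sum(d * p for d, p in rho.items()) # equals H(k), about ln k
```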

  30. Expected Behavior • Choose t = k • Exp(s) = t · Exp(deg) = k ln k: optimal • Exp(initial ripple size) = t r(1) = 1 • Exp(# code bits released per step) = t r(L) = 1 ⇒ Exp(ripple size) = 1

  31. We expect too much… • What if the ripple vanishes too soon? • In fact, very likely! • FIX: Robust Soliton Distribution • Higher initial ripple size ≈ √k ln(k/δ) • Expected change still 0

  32. Robust Soliton Distribution, μ(.) • R = c √k ln(k/δ) • μ(d) = (ρ(d) + τ(d)) / β, where • τ(d) = R/(dk) for d = 1, …, k/R − 1 • τ(k/R) = R ln(R/δ)/k • τ(d) = 0 for d > k/R • β = Σd (ρ(d) + τ(d)) • t = kβ

  33. Robust Soliton Distribution, μ(.) • t is small: t = kβ ≤ k + O(√k ln²(k/δ)) • Exp(s) is small: Exp(s) = t Σd d·μ(d) = O(k ln(k/δ))
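Slides 32–33 can be made concrete with a small sketch following the paper's definitions of τ and β. The parameters c = 0.1 and δ = 0.5 are arbitrary example values, and the spike position k/R is rounded to the nearest integer here for simplicity:

```python
import math

def robust_soliton(k, c, delta):
    """mu(d) = (rho(d) + tau(d)) / beta with R = c * sqrt(k) * ln(k/delta)."""
    R = c * math.sqrt(k) * math.log(k / delta)
    pivot = round(k / R)                  # spike position d = k/R (rounded)
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = {d: R / (d * k) for d in range(1, pivot)}
    tau[pivot] = R * math.log(R / delta) / k          # the spike at d = k/R
    beta = sum(rho.values()) + sum(tau.values())      # normalizer
    mu = {d: (rho[d] + tau.get(d, 0.0)) / beta for d in range(1, k + 1)}
    return mu, beta

k, c, delta = 10000, 0.1, 0.5
mu, beta = robust_soliton(k, c, delta)
t = math.ceil(k * beta)                   # number of encoding bits, t = k * beta
```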

  34. Robust Soliton Distribution, μ(.) • Initial ripple size is not too small: Exp(initial ripple size) = t μ(1) ≈ R ≈ √k ln(k/δ) • Ripple unlikely to vanish: ripple size is a random walk of length k • Deviates from its mean by √k ln(k/δ) with prob ≤ δ

  35. Robust Release Probability • t r(L) ≥ L / (L − θR) for L ≥ R, const θ > 0 • t ΣL=R..2R r(L) ≥ γ R ln(R/δ) for const γ > 0 Proofs on board…

  36. Pessimistic Filtering • Let Z = ripple size when L bits are unprocessed • Let h = Pr [released code bit covers an input bit not in the ripple]; h should be around (L − Z) / L • If h is lowered to any value ≤ (L − Z)/L, then Pr [success] doesn’t increase

  37. Pessimistic Filtering • Applied to the robust release probability: t r(L) ≥ L/(L − θR) turns into t r(L) = L/(L − θR) for worst-case analysis • Will use pessimistic filtering again later

  38. Main Theorem: Pr [success] ≥ 1 − δ Idea: ripple size is like a random walk of length k with mean R ≈ √k ln(k/δ) • Initial ripple size ≥ θR/2 with prob ≥ 1 − δ/3 (Chernoff bound on # of code bits of degree 1) • Ripple does not vanish for L ≥ R with prob ≥ 1 − δ/3 • Last R input bits are covered by the τ(k/R) spike with prob ≥ 1 − δ/3

  39. Ripple does not vanish for L ≥ R • Let XL = |{code bits released at L}|, so Exp(XL) = L / (L − θR) • Let YL = 0-1 random variable with Pr [YL = 0] = (L − θR) / L • Let I = any end interval of {R, …, k−1} starting at L • RippleSizeL = θR/2 + (ΣL′∈I XL′ YL′) − (k − L), where θR/2 is the filtered-down initial ripple size

  40. Ripple does not vanish for L ≥ R |ΣL′∈I XL′ YL′ − (k − L)| ≤ |ΣL′∈I (XL′ YL′ − Exp(XL′) YL′)| + |ΣL′∈I (Exp(XL′) YL′ − Exp(XL′) Exp(YL′))| + |ΣL′∈I Exp(XL′) Exp(YL′) − (k − L)| The first two terms are each ≥ θR/4 with prob ≤ δ/(6k); the third term = 0, since Exp(XL′) Exp(YL′) = 1 and |I| = k − L. Hence Pr [|ΣL′∈I XL′ YL′ − (k − L)| ≥ θR/2] ≤ δ/(3k)

  41. Ripple does not vanish for L ≥ R • Recall RippleSizeL = θR/2 + ΣL′∈I XL′ YL′ − (k − L) • There are k − R intervals I • Pr [summation deviates by ≥ θR/2 for some I] ≤ δ/3 ⇒ 0 < RippleSizeL < θR with prob ≥ 1 − δ/3: the ripple doesn’t vanish!

  42. Main Theorem: Pr [success] ≥ 1 − δ Idea: ripple size is like a random walk of length k with mean R ≈ √k ln(k/δ) • Initial ripple size ≥ θR/2 with prob ≥ 1 − δ/3 (Chernoff bound on # of code bits of degree 1) • Ripple does not vanish for L ≥ R with prob ≥ 1 − δ/3 • Last R input bits are covered by the τ(k/R) spike with prob ≥ 1 − δ/3

  43. Last R input bits are covered • Recall t ΣL=R..2R r(L) ≥ γ R ln(R/δ) • By an argument similar to Balls and Bins, Pr [last R input bits not covered] ≤ δ/3

  44. Main Theorem With the Robust Soliton Distribution, the LT Process succeeds with prob ≥ 1 − δ, using t = k + O(√k ln²(k/δ)) encoding bits and s = O(k ln(k/δ)) bit operations
