LT Codes Paper by Michael Luby, FOCS '02 Presented by Ashish Sabharwal Feb 26, 2003 CSE 590vg
Binary Erasure Channel • Code distance d ⇒ can decode d-1 erasures • Probabilistic model: bits get erased with prob p • (Shannon) Capacity of BEC = 1 - p. In particular, even p > 1/2 is decodable! • Example: Input 00101 → encode → Codeword 10100101 → BEC → Received 10?001?? → decode → Input 00101 ("packet loss")
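The erasure model is easy to simulate: each transmitted bit is independently replaced by an erasure mark with probability p, and the decoder sees where the erasures are. A minimal sketch (the name `bec_transmit` and the use of `None` as the erasure mark are illustrative choices, not from the talk):

```python
import random

def bec_transmit(bits, p, seed=0):
    """Binary erasure channel: each bit is independently
    erased (replaced by None) with probability p."""
    rng = random.Random(seed)
    return [None if rng.random() < p else b for b in bits]

codeword = [1, 0, 1, 0, 0, 1, 0, 1]
received = bec_transmit(codeword, p=0.25)
# The decoder knows *which* positions were erased; that is what
# makes the BEC easier than a bit-flip channel.
```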
LT Codes: Encoding • Choose degree d from a distribution • Pick d neighbors uniformly at random • Compute XOR of those neighbors (figure: a code bit of degree d = 2 with neighbor values 1 and 0 gets value 1 XOR 0 = 1)
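The three encoding steps can be sketched directly. The degree distribution passed in below is an arbitrary placeholder; the right choice is the Soliton distribution discussed later, and the function name is made up for illustration:

```python
import random

def lt_encode_symbol(input_bits, degree_dist, rng):
    """One LT code bit: sample a degree d from degree_dist
    (a {degree: probability} dict), pick d distinct input
    positions uniformly at random, and XOR those input bits."""
    degrees = list(degree_dist.keys())
    weights = list(degree_dist.values())
    d = rng.choices(degrees, weights=weights)[0]
    neighbors = rng.sample(range(len(input_bits)), d)
    value = 0
    for i in neighbors:
        value ^= input_bits[i]
    return value, neighbors

rng = random.Random(1)
input_bits = [1, 1, 0, 1, 0]
# placeholder distribution over degrees 1..3
code_bit, neighbors = lt_encode_symbol(input_bits, {1: 0.2, 2: 0.5, 3: 0.3}, rng)
```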
LT Codes: Encoding (figure: bipartite graph between input bits and codeword bits; each code bit is the XOR of its input-bit neighbors)
LT Codes: Decoding (figure: some input bits still unknown, marked '?') • Identify a code bit of remaining degree 1 • Recover the corresponding input bit
LT Codes: Decoding (figure: recovered input bit 1 = 0 XOR 1) • Update the neighbors of this input bit • Delete its edges • Repeat
LT Codes: Decoding (figure: another peeling step)
LT Codes: Decoding (figure: 0 = 1 XOR 1) Decoding unsuccessful! No code bit of remaining degree 1 is left, but some input bits are still unrecovered.
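The peeling procedure from the last few slides (find a code bit of remaining degree 1, recover its input bit, XOR it out of its other neighbors, delete edges, repeat) can be sketched as follows. The name `lt_decode` and the (value, neighbor-set) symbol representation are illustrative choices:

```python
def lt_decode(k, code_symbols):
    """Peeling decoder. code_symbols is a list of
    (value, set of input-bit indices) pairs. Repeatedly find a
    symbol of remaining degree 1, recover its input bit, and XOR
    that bit out of every symbol that still covers it."""
    recovered = [None] * k
    syms = [(v, set(nbrs)) for v, nbrs in code_symbols]
    progress = True
    while progress:
        progress = False
        for v, nbrs in syms:
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                recovered[i] = v
                for j, (w, nb) in enumerate(syms):
                    if i in nb:          # delete edge, update value
                        nb.remove(i)
                        syms[j] = (w ^ recovered[i], nb)
                progress = True
                break
    return recovered  # None entries = decoding stalled on those bits

# toy instance: b0 = 1; b0 XOR b1 = 1 gives b1 = 0; b1 XOR b2 = 1 gives b2 = 1
recovered = lt_decode(3, [(1, {0}), (1, {0, 1}), (1, {1, 2})])
```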
LT Codes: Features • Binary, efficient • Bits can arrive in any order • Probabilistic model • No preset rate • Generate as many or as few code bits as required by the channel • Almost optimal • (RS is inefficient; Tornado codes are optimal and linear time, but have fixed rate)
Larger Encoding Alphabet • Why? Less overhead • Partition input into m-bit chunks • Encoding symbol is the bit-wise XOR of its neighbor chunks • We'll think of these as binary codes
Caveat: Transmitting the Graph • Send degree + list of neighbors • Associate a key with each code bit • Encoder and decoder apply the same function to the key to compute neighbors • Share random seed for pseudo-random generator
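The key trick can be shown in a few lines: encoder and decoder seed the same PRNG with the shared key, so they derive identical neighbor sets without the graph ever being transmitted. The function name is made up, and the uniform degree choice is a placeholder for the real degree distribution:

```python
import random

def neighbors_from_key(key, k, max_degree=4):
    """Derive (degree, neighbor list) deterministically from a shared key
    by seeding a PRNG with it. Uniform degree here is a placeholder; a
    real LT code draws the degree from the Soliton distribution."""
    rng = random.Random(key)
    d = rng.randint(1, max_degree)
    return sorted(rng.sample(range(k), d))

# Encoder and decoder agree without exchanging the neighbor list:
assert neighbors_from_key(42, k=100) == neighbors_from_key(42, k=100)
```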
Outline • The Goal • All 1’s distribution: Balls and Bins case • LT Process; Probabilistic machinery • Ideal Soliton Distribution • Robust Soliton Distribution
The Goal Construct a degree distribution s.t. • Few encoding bits required for recovery = small t • Few bit operations needed = small sum of degrees = small s
All 1's distribution: Balls and Bins All encoding degrees are 1: t unerased code bits, k input bits ⇒ t balls thrown into k bins • Pr[can't recover input] = Pr[some input bit uncovered] ≤ k · (1 - 1/k)^t ≈ k · e^(-t/k) • Pr[failure] ≤ δ guaranteed if t ≥ k ln(k/δ)
All 1's distribution: Balls and Bins • t = k ln(k/δ): BAD, too much overhead (t = k + √k ln²(k/δ) suffices) • s = k ln(k/δ): GOOD, optimal!
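The balls-and-bins bound above is easy to sanity-check numerically: with t = k ln(k/δ) degree-1 code bits, the union bound k·e^(-t/k) drops to exactly δ. The function name and the example values k = 1000, δ = 0.01 are illustrative:

```python
import math

def uncovered_bound(k, t):
    """Union bound on Pr[some input bit uncovered] when t balls
    (degree-1 code bits) land uniformly in k bins (input bits):
    k * (1 - 1/k)**t <= k * exp(-t/k)."""
    return k * math.exp(-t / k)

k, delta = 1000, 0.01
t = math.ceil(k * math.log(k / delta))   # t = k ln(k/δ)
assert uncovered_bound(k, t) <= delta    # failure prob driven below δ
```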
Why is s = k ln(k/δ) optimal? • s edges = s balls thrown into k bins • some input bit uncovered ⇒ can't recover input • Pr[some input bit uncovered] ≤ k · (1 - 1/k)^s ≈ k · e^(-s/k) • Pr[failure] ≤ δ if s ≥ k ln(k/δ) • NOTE: This line of reasoning is not quite right for a lower bound! Use a coupon-collector type argument.
The LT Process (figure: code bits c1…c6 on one side, input bits a1…a5 on the other) STATE: released = { }, covered = { }, processed = { }, ripple = { } ACTION: Init: release c2, c4, c6
The LT Process STATE: released = {c2,c4,c6}, covered = {a1,a3,a5}, processed = { }, ripple = {a1,a3,a5} ACTION: Process a1
The LT Process STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1}, ripple = {a3,a5} ACTION: Process a3
The LT Process STATE: released = {c2,c4,c6,c1}, covered = {a1,a3,a5}, processed = {a1,a3}, ripple = {a5} ACTION: Process a5
The LT Process STATE: released = {c2,c4,c6,c1,c5}, covered = {a1,a3,a5,a4}, processed = {a1,a3,a5}, ripple = {a4} ACTION: Process a4
The LT Process STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4}, ripple = {a2} ACTION: Process a2
The LT Process STATE: released = {c2,c4,c6,c1,c5,c3}, covered = {a1,a3,a5,a4,a2}, processed = {a1,a3,a5,a4,a2}, ripple = { } ACTION: Success!
The LT Process: Properties • Corresponds to decoding • When a code bit cp is released: • The step at which this happens is independent of the other cq's • Which input bit cp covers is independent of the other cq's
Ripple size • Desired property of ripple • Not too large: redundant covering • Not too small: might die prematurely • GOAL: “Good” degree distribution • Ripple doesn’t grow or shrink • 1 input bit added per step Why??
Degree Distributions • Degrees of code bits chosen independently • ρ(d) = Pr[degree = d] • All 1's distribution: ρ(1) = 1, ρ(d) = 0 for d > 1 • initial ripple = all input bits • "All-At-Once distribution"
Machinery: q(d,L), r(d,L), r(L) • L = |unprocessed| = k, k-1, …, 1 • q(d,L) = Pr[cp is released at L | deg(cp) = d] • r(d,L) = Pr[cp is released at L, deg(cp) = d] = ρ(d) q(d,L) • r(L) = Pr[cp is released at L] = Σd r(d,L) • r(L) controls ripple size
Ideal Soliton Distribution, ρ(·) "Soliton wave": dispersion balances refraction • ρ(1) = 1/k; ρ(d) = 1/(d(d-1)) for d = 2, …, k • Expected degree ≈ ln k • r(L) = 1/k for all L = k, …, 1
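With the Ideal Soliton density ρ(1) = 1/k, ρ(d) = 1/(d(d-1)) for d = 2, …, k (the form given in Luby's paper), the stated properties are easy to check numerically: the probabilities telescope to 1, and the expected degree is 1/k + H(k-1) ≈ ln k. A small verification sketch:

```python
import math

def ideal_soliton(k):
    """Ideal Soliton distribution: rho(1) = 1/k,
    rho(d) = 1/(d*(d-1)) for d = 2..k."""
    rho = {1: 1.0 / k}
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

k = 1000
rho = ideal_soliton(k)
# 1/k + sum_{d=2}^{k} 1/(d(d-1)) telescopes to exactly 1
assert abs(sum(rho.values()) - 1.0) < 1e-9
# expected degree = 1/k + H(k-1), within a constant of ln k
expected_degree = sum(d * p for d, p in rho.items())
assert abs(expected_degree - math.log(k)) < 1.0
```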
Expected Behavior • Choose t = k (optimal) • Exp(s) = t · Exp(deg) ≈ k ln k • Exp(initial ripple size) = t · ρ(1) = 1 • Exp(# code bits released per step) = t · r(L) = 1 ⇒ Exp(ripple size) stays 1
We expect too much… • What if the ripple vanishes too soon? • In fact, very likely! • FIX: Robust Soliton Distribution • Higher initial ripple size ≈ √k ln(k/δ) • Expected change still 0
Robust Soliton Distribution, μ(·) • R = c √k ln(k/δ) • μ(d) = (ρ(d) + τ(d)) / β, where • τ(d) = R/(dk) for d = 1, …, k/R - 1 • τ(k/R) = R ln(R/δ)/k • τ(d) = 0 for d > k/R • β = Σd (ρ(d) + τ(d)) • t = kβ
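Under these definitions the distribution can be built and sanity-checked in a few lines: add the spike τ(·) to the Ideal Soliton ρ(·) and renormalize by β. The values c = 0.1 and δ = 0.5 below are arbitrary example parameters, not from the talk:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton distribution mu(.): Ideal Soliton rho(.)
    plus tau(.), which boosts low degrees and adds a spike at
    degree ~ k/R, normalized by beta = sum(rho + tau)."""
    R = c * math.log(k / delta) * math.sqrt(k)
    # Ideal Soliton, indexed by degree (index 0 unused)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # tau: R/(d k) below the pivot, spike R ln(R/delta)/k at the pivot
    pivot = int(round(k / R))
    tau = [0.0] * (k + 1)
    for d in range(1, pivot):
        tau[d] = R / (d * k)
    tau[pivot] = R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)
    mu = [(rho[d] + tau[d]) / beta for d in range(k + 1)]
    return mu, beta

mu, beta = robust_soliton(1000)
assert abs(sum(mu) - 1.0) < 1e-9   # proper distribution
assert beta > 1.0                  # overhead factor: t = k * beta
```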
Robust Soliton Distribution, μ(·) • t is small: t = kβ ≤ k + O(√k ln²(k/δ)) • Exp(s) is small: Exp(s) = t Σd d·μ(d) = O(k ln(k/δ))
Robust Soliton Distribution, μ(·) • Initial ripple size is not too small: Exp(initial ripple size) = t·μ(1) ≈ R ≈ √k ln(k/δ) • Ripple unlikely to vanish: ripple size = random walk of length k • Deviates from its mean by √k ln(k/δ) with prob ≤ δ
Robust Release Probability • t·r(L) ≥ L / (L - θR) for L ≥ R, const θ > 0 • t ΣL=R..2R r(L) ≥ γ R ln(R/δ) for const γ > 0 • Proofs on board…
Pessimistic Filtering • Let Z = ripple size when L bits unprocessed • Let η = Pr[released code bit covers an input bit not in the ripple]; η should be around (L - Z)/L • If η is lowered to any value ≤ (L - Z)/L, then Pr[success] doesn't increase
Pessimistic Filtering • Applying to the robust release probability: t·r(L) ≥ L/(L - θR) turns into t·r(L) = L/(L - θR) for the worst-case analysis • Will use pessimistic filtering again later
Main Theorem: Pr[success] ≥ 1 - δ Idea: ripple size is like a random walk of length k with mean R ≈ √k ln(k/δ) • Initial ripple size ≥ θR/2 with prob ≥ 1 - δ/3 (Chernoff bound on # of code bits of degree 1) • Ripple does not vanish for L ≥ R with prob ≥ 1 - δ/3 • Last R input bits are covered by the τ(k/R) spike with prob ≥ 1 - δ/3
Ripple does not vanish for L ≥ R • Let XL = |{code bits released at L}| • Exp(XL) = L / (L - θR) • Let YL = 0-1 random variable with Pr[YL = 1] = (L - θR) / L • Let I = any end interval of {R, …, k-1} starting at L • RipplesizeL = θR/2 + (ΣL'∈I XL' YL') - (k - L), where θR/2 is the filtered-down initial ripple size
Ripple does not vanish for L ≥ R
| ΣL'∈I XL' YL' - (k-L) |
  ≤ | ΣL'∈I (XL' YL' - Exp(XL') YL') |   [≥ θR/4 with prob ≤ δ/(6k)]
  + | ΣL'∈I (Exp(XL') YL' - Exp(XL') Exp(YL')) |   [≥ θR/4 with prob ≤ δ/(6k)]
  + | ΣL'∈I Exp(XL') Exp(YL') - (k-L) |   [= 0]
⇒ Pr[ | ΣL'∈I XL' YL' - (k-L) | ≥ θR/2 ] ≤ δ/(3k)
Ripple does not vanish for L ≥ R • Recall: RipplesizeL = θR/2 + ΣL'∈I XL' YL' - (k-L) • There are k-R intervals I • Pr[summation deviates by ≥ θR/2 for some I] ≤ δ/3 • ⇒ 0 < RipplesizeL < θR with prob ≥ 1 - δ/3 • Ripple doesn't vanish!
Main Theorem: Pr[success] ≥ 1 - δ (recap) Idea: ripple size is like a random walk of length k with mean R ≈ √k ln(k/δ) • Initial ripple size ≥ θR/2 with prob ≥ 1 - δ/3 (Chernoff bound on # of code bits of degree 1) • Ripple does not vanish for L ≥ R with prob ≥ 1 - δ/3 • Last R input bits are covered by the τ(k/R) spike with prob ≥ 1 - δ/3
Last R input bits are covered • Recall: t ΣL=R..2R r(L) ≥ γ R ln(R/δ) • By an argument similar to Balls and Bins, Pr[last R input bits not all covered] ≤ δ/3
Main Theorem With the Robust Soliton Distribution, the LT Process succeeds with prob ≥ 1 - δ • t = k + O(√k ln²(k/δ)) • s = O(k ln(k/δ))
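Putting the pieces together, Robust Soliton sampling plus XOR encoding plus peeling decoding gives a small end-to-end sketch. It is self-contained, so it repeats the distribution construction; the parameters (k = 100, c = 0.1, δ = 0.5) and the generous 2k symbol budget are arbitrary illustrative choices, not the theorem's bound:

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton: Ideal Soliton rho plus tau spike, normalized."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    pivot = int(round(k / R))
    tau = [0.0] * (k + 1)
    for d in range(1, pivot):
        tau[d] = R / (d * k)
    tau[pivot] = R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]

def encode(bits, n_symbols, mu, rng):
    """Generate n_symbols LT code symbols as (value, neighbor set)."""
    k = len(bits)
    degrees = list(range(1, k + 1))
    out = []
    for _ in range(n_symbols):
        d = rng.choices(degrees, weights=mu[1:])[0]
        nbrs = set(rng.sample(range(k), d))
        v = 0
        for i in nbrs:
            v ^= bits[i]
        out.append((v, nbrs))
    return out

def decode(k, symbols):
    """Peeling decoder; None entries mean the bit was not recovered."""
    recovered = [None] * k
    syms = [(v, set(n)) for v, n in symbols]
    progress = True
    while progress:
        progress = False
        for v, nbrs in syms:
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                recovered[i] = v
                for j, (w, nb) in enumerate(syms):
                    if i in nb:
                        nb.remove(i)
                        syms[j] = (w ^ recovered[i], nb)
                progress = True
                break
    return recovered

rng = random.Random(0)
k = 100
bits = [rng.randint(0, 1) for _ in range(k)]
mu = robust_soliton(k)
symbols = encode(bits, 2 * k, mu, rng)   # generous overhead for small k
recovered = decode(k, symbols)
# any bit the peeling process does recover must match the input
assert all(r is None or r == b for r, b in zip(recovered, bits))
```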