A study of improving network throughput and robustness against adversarial errors using network coding.
Correction of Adversarial Errors in Networks
Sidharth Jaggi, Michael Langberg, Tracey Ho, Michelle Effros
Submitted to ISIT 2005
• Greater throughput
• Robust against random errors
• Aha! Network Coding!!!
[Figure: Xavier transmits to Yvonne over a network of links; Zorba controls an unknown subset of them]
Background
• Noisy channel models (Shannon, …)
• Binary Symmetric Channel
[Plot: capacity C = 1 − H(p) vs. noise parameter p]
Background
• Noisy channel models (Shannon, …)
• Binary Symmetric Channel
• Binary Erasure Channel
[Plot: capacity C = 1 − p vs. noise parameter p]
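For concreteness, a small Python sketch of the two capacity formulas behind these plots, C = 1 − H(p) for the binary symmetric channel and C = 1 − p for the binary erasure channel (the code and the sample values of p are illustrative, not from the slides):

```python
# Capacities of the two classical channel models above, as functions of
# the noise parameter p.  Purely illustrative values of p are printed.
import math

def bsc_capacity(p: float) -> float:
    """Binary Symmetric Channel: C = 1 - H(p), H the binary entropy."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1 - h

def bec_capacity(p: float) -> float:
    """Binary Erasure Channel: C = 1 - p."""
    return 1 - p

for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p={p:.2f}  BSC C={bsc_capacity(p):.3f}  BEC C={bec_capacity(p):.3f}")
```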
Background
• Adversarial channel models
• “Limited-flip” adversary (Hamming, Gilbert-Varshamov, McEliece et al., …)
• Shared randomness, private key, computationally bounded adversary, …
[Plot: capacity C vs. noise parameter p under these models]
Model 1
• |E| directed unit-capacity links from Xavier to Yvonne
• Zorba (hidden to Xavier/Yvonne) controls |Z| links Z; p = |Z|/|E|
• Xavier and Yvonne share no resources (private key, randomness)
• Zorba is computationally unbounded; Xavier and Yvonne can only perform “simple” computations
• Zorba knows the protocols and already knows almost all of Xavier’s message (except Xavier’s private coin tosses)
Model 1 – Xavier/Yvonne’s Goal
• Knowing |Z| but not Z, come up with an encoding/decoding scheme that allows a maximal rate of information to be decoded correctly with high probability.
• Rates are “normalized”: divided by the number of links |E|.
Model 1 – Results
[Plot: capacity C vs. noise parameter p = |Z|/|E|]
• For p ≥ 0.5 no scheme can help: Zorba can force a probability of error of 0.5, so the capacity is 0.
• Eureka: for p < 0.5 a rate of 1 − p is achievable (the curve drops linearly from C = 1 at p = 0 to C = 0.5 at p = 0.5).
Model 1 – Encoding
• Xavier’s message: |E| − |Z| packets x_1, …, x_{|E|−|Z|}, each a block of n(1 − ε) symbols over the finite field F_q.
• MDS code: an |E| × (|E| − |Z|) Vandermonde matrix maps the |E| − |Z| message packets to |E| packets T_1, …, T_{|E|}, one per link.
• Each transmitted packet has length n: n(1 − ε) code symbols plus nε symbols of “easy to use consistency information” (ε is a rate fudge-factor).
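A minimal Python sketch of this encoding step. The prime field, |E|, |Z| and the packet length are toy values chosen only for illustration; the structure (Vandermonde generator matrix, |E| − |Z| message packets mapped to |E| link packets) follows the slide:

```python
# MDS (Vandermonde) encoding over a prime field, with illustrative parameters.
import random
random.seed(1)

q = 2_147_483_647        # a prime, standing in for the field size of F_q
E = 7                    # number of unit-capacity links |E|
Z = 2                    # number of adversarial links |Z|
K = E - Z                # number of information packets |E| - |Z|
L = 10                   # packet length, playing the role of n(1 - eps)

# K source packets, each a vector of L symbols from F_q
x = [[random.randrange(q) for _ in range(L)] for _ in range(K)]

# |E| x K Vandermonde matrix: row i is (1, a_i, a_i^2, ..., a_i^(K-1)) with
# distinct evaluation points a_i, so any K rows are invertible (MDS property).
alphas = list(range(1, E + 1))
V = [[pow(a, k, q) for k in range(K)] for a in alphas]

# T_i(s) = sum_k V[i][k] * x_k(s) mod q : packet sent on link i
T = [[sum(V[i][k] * x[k][s] for k in range(K)) % q for s in range(L)]
     for i in range(E)]

print(len(T), "encoded packets of length", len(T[0]))
```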
Model 1 – Encoding
• The nε-symbol field on link i carries a random symbol r_i ∈ F_q (one of Xavier’s private coin tosses) together with hashes D_i1, …, D_i|E| of all |E| packets.
• D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1 − ε))·r_i^{n(1−ε)−1}, i.e. D_ij = Σ_k T_j(k)·r_i^{k−1}: packet T_j read as a polynomial and evaluated at link i’s point r_i.
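The consistency information is thus a polynomial hash, which can be evaluated with Horner’s rule. A small sketch with stand-in packets and an illustrative field size (none of the numbers come from the paper):

```python
# Polynomial-hash consistency information D_ij, computed per pair of links.
import random
random.seed(2)

q = 2_147_483_647                      # illustrative prime field size
E, L = 7, 10                           # |E| links, packets of L symbols
T = [[random.randrange(q) for _ in range(L)] for _ in range(E)]  # stand-in packets

def poly_hash(packet, r, q):
    """D = T(1) + T(2)*r + ... + T(L)*r^(L-1) over F_q, via Horner's rule."""
    acc = 0
    for symbol in reversed(packet):
        acc = (acc * r + symbol) % q
    return acc

r = [random.randrange(q) for _ in range(E)]         # private random point r_i per link
D = [[poly_hash(T[j], r[i], q) for j in range(E)]   # D[i][j] plays the role of D_ij
     for i in range(E)]
```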
Model 1 – Transmission
• Link i carries (T_i, r_i, D_i1, …, D_i|E|); Yvonne receives (T_i′, r_i′, D_i1′, …, D_i|E|′). (Values on the |Z| links controlled by Zorba may be altered arbitrarily.)
Model 1 – Decoding
• “Quick consistency check” between received links i and j:
  D_ij′ =? Σ_k T_j′(k)·(r_i′)^{k−1}  and  D_ji′ =? Σ_k T_i′(k)·(r_j′)^{k−1}
Model 1 – Decoding
• Consistency graph: the vertices are the |E| links; vertices i and j are joined when both checks D_ij′ and D_ji′ pass, i.e. edge i is consistent with edge j. (Self-loops are not important.)
[Figure: example consistency graph on five links, each labelled with its T, r and D values]
Model 1 – Decoding
• Detection: select the vertices connected to at least |E|/2 other vertices in the consistency graph.
• Decode using the T_i′s on the corresponding links.
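Putting the last three slides together, a sketch of the decoder side: recompute both cross-hashes from the received values, join links that agree, and keep the links consistent with at least |E|/2 others. Function and variable names (consistent, trusted_links, T_rx, r_rx, D_rx) are illustrative, not from the paper:

```python
# Consistency graph and detection rule at Yvonne, over a prime field of size q.

def poly_hash(packet, r, q):
    """D = T(1) + T(2)*r + ... + T(L)*r^(L-1) over F_q."""
    acc = 0
    for symbol in reversed(packet):
        acc = (acc * r + symbol) % q
    return acc

def consistent(i, j, T_rx, r_rx, D_rx, q):
    """Links i and j are consistent if both cross-hashes check out."""
    return (D_rx[i][j] == poly_hash(T_rx[j], r_rx[i], q) and
            D_rx[j][i] == poly_hash(T_rx[i], r_rx[j], q))

def trusted_links(T_rx, r_rx, D_rx, q):
    """Select vertices connected to at least |E|/2 others in the consistency graph."""
    E = len(T_rx)
    good = []
    for i in range(E):
        degree = sum(1 for j in range(E)
                     if j != i and consistent(i, j, T_rx, r_rx, D_rx, q))
        if degree >= E / 2:
            good.append(i)
    return good   # the MDS code is then decoded from the packets on these links
```

Honest links always pass each other's checks, so when |Z| < |E|/2 the selected set contains enough untampered packets to invert the Vandermonde code.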
Model 1 – Proof
• Suppose Zorba changes T_j to T_j′ ≠ T_j, while an honest link i still carries the original r_i and D_ij. The pair passes the consistency check only if
  D_ij = Σ_k T_j(k)·r_i^{k−1} = Σ_k T_j′(k)·r_i^{k−1}, i.e. Σ_k (T_j(k) − T_j′(k))·r_i^{k−1} = 0.
• This is a nonzero polynomial in r_i of degree less than n over F_q, and the value of r_i is unknown to Zorba.
• Probability of error < n/q ≪ 1.
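A small numeric check of this bound, with a toy field size and packet length chosen only so the effect is visible: a forgery fools the check exactly when the hidden point r is a root of the difference polynomial, so a degree-(L − 1) forgery survives for at most L − 1 of the q possible values of r.

```python
# Exhaustive check of the polynomial-identity argument over a tiny field.
import random
random.seed(3)

q = 101          # tiny prime field so collisions are visible
L = 10           # packet length, standing in for n(1 - eps)

def poly_hash(packet, r, q):
    acc = 0
    for symbol in reversed(packet):
        acc = (acc * r + symbol) % q
    return acc

def mul_linear(p, a, q):
    """Multiply polynomial p (coeffs low -> high) by (x - a) over F_q."""
    out = [0] * (len(p) + 1)
    for k, c in enumerate(p):
        out[k] = (out[k] - a * c) % q
        out[k + 1] = (out[k + 1] + c) % q
    return out

# Worst case for the check: a difference polynomial with L-1 distinct roots.
roots = random.sample(range(q), L - 1)
diff = [1]
for a in roots:
    diff = mul_linear(diff, a, q)

T  = [random.randrange(q) for _ in range(L)]
T2 = [(t + d) % q for t, d in zip(T, diff)]     # Zorba's forged packet

fooled = sum(1 for r in range(q) if poly_hash(T, r, q) == poly_hash(T2, r, q))
print(f"values of r that miss the forgery: {fooled}/{q}  (bound n/q = {L}/{q})")
```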
Variations – Feedback
[Plot: capacity C vs. p when feedback is available]
Variations – Know thy enemy
[Two plots: capacity C vs. p]
Variations – Random Noise
[Plot: capacities C_N and C vs. p, labelled “separation”]
Model 2 – Multicast
[Figure: multicast network, with the adversary on unknown links]
Model 2 – Results
• Source S multicasts to receivers R_1, …, R_|T|; Zorba controls |Z| links and p = |Z|/h, where h is the multicast capacity used to normalize the rate.
[Plot: capacity C (normalized by h) vs. p = |Z|/h; as in Model 1, the curve runs from C = 1 at p = 0 to C = 0.5 at p = 0.5 and is 0 beyond]
Model 2 – Sketch of Proof
• Lemma 1: there exists an easy random design of network codes such that, for any Z of size < h/2, if Z is known then each decoder can decode.
• Lemma 2 (the hard part): using consistency-check arguments similar to those in Model 1, Z can be detected.
[Figure: network with source S, receivers R_1, …, R_|T|, and additional nodes S′_1, …, S′_|Z|]
The End