
Correction of Adversarial Errors in Networks

A study on improving network throughput and resilience against errors using innovative coding techniques.



Presentation Transcript


  1. Correction of Adversarial Errors in Networks Sidharth Jaggi Michael Langberg Tracey Ho Michelle Effros Submitted to ISIT 2005

  2. Greater throughput Robust against random errors Aha! Network Coding!!!

  3. [Figure: a network with unidentified links]

  4. [Figure: network with source Xavier, sink Yvonne, and hidden adversary Zorba]

  5. Background • Noisy channel models (Shannon,…) • Binary Symmetric Channel [Plot: capacity C = 1 − H(p) against noise parameter p; C falls from 1 at p = 0 to 0 at p = 0.5]
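
The BSC capacity curve sketched on this slide is easy to reproduce; a minimal sketch (the function name and edge-case handling are my own choices):

```python
import math

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel: C = 1 - H(p),
    where H is the binary entropy function (log base 2)."""
    if p in (0.0, 1.0):
        return 1.0  # H(0) = H(1) = 0, so capacity is 1 bit per use
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1 - h

# Capacity is 1 at p = 0 and drops to 0 at the worst-case noise p = 0.5.
```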

  6. Background • Noisy channel models (Shannon,…) • Binary Symmetric Channel • Binary Erasure Channel [Plot: BEC capacity C = 1 − p against noise parameter p]

  7. Background • Adversarial channel models • “Limited-flip” adversary (Hamming, Gilbert-Varshamov, McEliece et al., …) • Shared randomness, private key, computationally bounded adversary… [Plot: capacity C against noise parameter p]

  8. Model 1 [Figure: network with source Xavier, sink Yvonne, adversary Zorba] |E| directed unit-capacity links. Zorba (hidden to Xavier/Yvonne) controls |Z| links Z; p = |Z|/|E|. Xavier and Yvonne share no resources (no private key, no shared randomness). Zorba is computationally unbounded; Xavier and Yvonne can only perform “simple” computations. Zorba knows the protocols and already knows almost all of Xavier’s message (everything except Xavier’s private coin tosses).

  9. Model 1 – Xavier/Yvonne’s Goal [Figure: same network] Knowing |Z| but not Z, come up with an encoding/decoding scheme that allows a maximal rate of information to be decoded correctly with high probability. Rates are “normalized” (divided by the number of links |E|).

  10. Model 1 - Results [Plot: capacity C against noise parameter p; axes marked at 0.5]

  11. Model 1 - Results [Plot: capacity C against noise parameter p; axes marked at 0.5]

  12. Model 1 - Results [Plot and network figure: capacity C against noise parameter p; probability of error = 0.5]

  13. Model 1 - Results [Plot: capacity C against noise parameter p; axes marked at 0.5]

  14. Model 1 - Results Eureka [Plot: capacity C against noise parameter p; axes marked at 0.5]

  15. Model 1 - Encoding [Diagram: |E| links carrying an encoding of |E| − |Z| message symbols]

  16. Model 1 - Encoding MDS code of block length n over a finite field Fq, generated by a Vandermonde matrix: message symbols x1 … x|E|−|Z| are encoded into |E| packets T1 … T|E|, one per link. Each packet carries n(1 − ε) data symbols from Fq plus nε symbols of “easy to use consistency information” (ε is a rate fudge-factor).
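
The Vandermonde encoding above can be sketched in a few lines. This is an illustration under assumptions of my own (a prime field, evaluation points 1 … |E|, and the helper name), not the paper's exact construction:

```python
# Vandermonde-based MDS encoding sketch (slide 16). Over a prime field
# F_Q for simplicity; Q and the evaluation points are my own choices.
Q = 7919  # a prime; any q > |E| gives |E| distinct evaluation points

def vandermonde_encode(message, num_edges):
    """Encode a length-k message into |E| symbols: the symbol for edge e
    is the message polynomial evaluated at point e + 1. Any k uncorrupted
    symbols determine the message (the MDS property)."""
    return [sum(m * pow(e + 1, i, Q) for i, m in enumerate(message)) % Q
            for e in range(num_edges)]

msg = [3, 1, 4]                         # k = |E| - |Z| message symbols
codeword = vandermonde_encode(msg, 5)   # one symbol per edge, |E| = 5
```

Any |E| − |Z| = 3 of the 5 output symbols suffice to recover `msg` by interpolation, which is why identifying enough uncorrupted links later is all the decoder needs.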

  17. Model 1 - Encoding [Diagram: packets T1 … T|E|, each with n(1 − ε) data symbols and nε symbols of “easy to use consistency information”]

  18. Model 1 - Encoding [Diagram: each link i carries its packet Ti, a random point ri, and consistency symbols Di1 … Di|E| in the nε tail] Dij = Tj(1)·1 + Tj(2)·ri + … + Tj(n(1 − ε))·ri^(n(1−ε)−1)

  19. Model 1 - Encoding [Diagram: link i highlighted, carrying ri and Dij] Dij = Tj(1)·1 + Tj(2)·ri + … + Tj(n(1 − ε))·ri^(n(1−ε)−1)
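
The consistency symbol Dij is just packet Tj read as a polynomial and evaluated at link i's random point ri. A minimal sketch (field size and function name are my own choices):

```python
# Consistency hash sketch (slides 18-19): D_ij hashes edge j's packet
# with edge i's random point. Q is my own choice of prime field size.
Q = 7919

def consistency_hash(packet, r):
    """D = packet[0] + packet[1]*r + ... + packet[k-1]*r^(k-1) mod Q,
    evaluated by Horner's rule."""
    acc = 0
    for symbol in reversed(packet):
        acc = (acc * r + symbol) % Q
    return acc
```

Each link i then transmits (Ti, ri, Di1 … Di|E|) with Dij = consistency_hash(Tj, ri), so the hash is tiny (one field symbol) compared to the n(1 − ε)-symbol packet it summarizes.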

  20. Model 1 - Transmission Each link i transmits (Ti, ri, Di1 … Di|E|); Yvonne receives possibly corrupted versions (Ti′, ri′, Di1′ … Di|E|′).

  21. Model 1 - Decoding “Quick consistency check”: does Dij′ = Tj′(1)·1 + Tj′(2)·ri′ + … + Tj′(n(1 − ε))·ri′^(n(1−ε)−1)?

  22. Model 1 - Decoding “Quick consistency check” in both directions: does Dij′ = Tj′(1)·1 + … + Tj′(n(1 − ε))·ri′^(n(1−ε)−1), and does Dji′ = Ti′(1)·1 + … + Ti′(n(1 − ε))·rj′^(n(1−ε)−1)?

  23. Model 1 - Decoding Consistency graph: edge i is consistent with edge j if both checks pass (Dij′ matches Tj′ evaluated at ri′, and Dji′ matches Ti′ evaluated at rj′).

  24. Model 1 - Decoding [Diagram: consistency graph on vertices 1 … 5; self-loops are not important] Edge i is consistent with edge j if both checks pass.

  25. Model 1 - Decoding [Diagram: consistency graph] Detection – select vertices connected to at least |E|/2 other vertices in the consistency graph. Decode using the Tis on the corresponding edges.
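
The detection rule above can be sketched directly. All names, the field size, and the pairwise-check layout below are my own illustration of the idea, assuming `Ds[i][j]` holds edge i's received hash of edge j's packet:

```python
# Detection sketch (slide 25): keep edges consistent with >= |E|/2 others.
def poly_eval(packet, r, q):
    """Evaluate the packet as a polynomial at r over F_q (Horner)."""
    acc = 0
    for symbol in reversed(packet):
        acc = (acc * r + symbol) % q
    return acc

def detect_good_edges(packets, rs, Ds, q):
    """packets[i] = T_i' as received, rs[i] = r_i', Ds[i][j] = D_ij'.
    Returns the indices whose consistency-graph degree is >= |E|/2."""
    E = len(packets)
    def consistent(i, j):
        # Both directions of the quick consistency check must pass.
        return (Ds[i][j] == poly_eval(packets[j], rs[i], q) and
                Ds[j][i] == poly_eval(packets[i], rs[j], q))
    return [i for i in range(E)
            if sum(1 for j in range(E) if j != i and consistent(i, j)) >= E / 2]
```

With one tampered packet among five, the tampered edge fails the reverse check against every honest edge (its packet no longer matches the honest edges' stored hashes of it), so it drops out while the four honest edges keep degree 3 ≥ |E|/2.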

  26. Model 1 - Proof [Diagram: consistency graph] Suppose Zorba replaces Tj by Tj′ but the check Dij = Tj′(1)·1 + Tj′(2)·ri + … + Tj′(n(1 − ε))·ri^(n(1−ε)−1) still passes. Then ∑k (Tj(k) − Tj′(k))·ri^(k−1) = 0: a nonzero polynomial in ri of degree less than n over Fq, with the value of ri unknown to Zorba. Probability of error < n/q << 1.
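
The counting behind "probability of error < n/q" can be checked exhaustively at toy scale: two distinct packets collide under the hash only at roots of their difference polynomial, and a degree-< n polynomial has fewer than n roots. The parameters below are my own small example:

```python
# Exhaustive check of the proof's root-counting argument (slide 26).
q = 101                 # small prime field so we can try every r
T  = [3, 1, 4, 1]       # original packet (n = 4 symbols)
T2 = [4, 1, 3, 1]       # tampered packet; difference polynomial is r^2 - 1

def h(packet, r):
    """Hash of the packet at point r: sum_k packet[k] * r^k mod q."""
    return sum(c * pow(r, k, q) for k, c in enumerate(packet)) % q

# Count the points r where the tampering goes undetected.
collisions = sum(1 for r in range(q) if h(T, r) == h(T2, r))
# Only the roots of r^2 - 1 (r = 1 and r = q - 1) collide, so a uniformly
# random r misses the tampering with probability 2/q < n/q.
```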

  27. Variations - Feedback [Plot: capacity C against noise parameter p]

  28. Variations – Know thy enemy [Two plots: capacity C against noise parameter p]

  29. Variations – Random Noise [Plot: capacities CN and C against noise parameter p, with a separation between the two curves]

  30. Model 2 - Multicast [Figure: multicast network]

  31. Model 2 - Results [Plot: capacity C (normalized by h) against p = |Z|/h; network figure with source S, receivers R1 … R|T|, adversary Z, and min-cut h]

  32. Model 2 - Results [Plot: capacity C (normalized by h) against p = |Z|/h; network figure with source S and receivers R1 … R|T|]

  33. Model 2 – Sketch of Proof Lemma 1 (easy): There exists a simple random design of network codes such that for any Z of size < h/2, if Z is known, each decoder can decode. Lemma 2 (hard): Using consistency-check arguments similar to those of Model 1, Z can be detected. [Diagram: source S with virtual sources S′1 … S′|Z|, receivers R1 … R|T|]

  34. THE END
