
Network Coding: Mixin’ it up

Presentation Transcript


  1. Network Coding: Mixin’ it up
  Sidharth Jaggi, Michelle Effros, Michael Langberg, Tracey Ho, Muriel Médard, Peter Sanders, Philip Chou, Kamal Jain, Ludo Tolhuizen, Sebastian Egner

  2. Network Coding
  • R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204-1216, 2000.
  • http://tesla.csl.uiuc.edu/~koetter/NWC/Bibliography.html: 131 papers as of last night (≈2 years)
  • NetCod Workshop, DIMACS working group, ISIT 2005 - 4+ sessions
  • Several patents, theses

  3. But . . . what is it? “The core notion of network coding is to allow and encourage mixing of data at intermediate network nodes.” (Network Coding homepage)

  4. Point-to-point flows
  • Min-cut Max-flow (Menger’s) Theorem [M27]
  • Ford-Fulkerson Algorithm [FF62]
  [Figure: single source s, single sink t, min-cut capacity C]
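
  As a side note, a minimal sketch of how the min-cut/max-flow quantity named on this slide can be computed in practice; the tiny graph, its capacities, and the use of the networkx library are illustrative assumptions, not part of the talk.

```python
# Point-to-point max flow from s to t; by the min-cut max-flow theorem this
# value equals the min-cut capacity C between s and t.
# Assumes the networkx library; the example graph is made up.
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=1)
G.add_edge("s", "b", capacity=1)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=1)
G.add_edge("b", "t", capacity=1)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)  # 2, the min-cut capacity C for this toy graph
```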

  5. Multicasting
  Applications: webcasting, P2P networks, sensor networks.
  [Figure: sources s1 … s|S| connected through a network to sinks t1 … t|T|]

  6. Justifications revisited - I: Throughput
  [Figure: the butterfly network of [ACLY00]. Source s must deliver bits b1 and b2 to both sinks t1 and t2; the bottleneck edge carries the coded bit b1+b2, so each sink recovers (b1,b2), which routing alone cannot achieve on this network.]
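
  A toy sketch of the coding gain this butterfly figure illustrates; the variable names and the assumption that the bottleneck carries the XOR of two bits follow the labels on the slide.

```python
# Butterfly-network intuition: the single bottleneck edge carries b1 XOR b2.
# Sink t1 also sees b1 directly and sink t2 also sees b2 directly, so both
# recover the pair (b1, b2), i.e. multicast rate 2 with coding.
b1, b2 = 1, 0
coded = b1 ^ b2                      # bit sent over the bottleneck edge

t1_view = (b1, coded)                # what t1 receives
t2_view = (b2, coded)                # what t2 receives

t1_decoded = (t1_view[0], t1_view[0] ^ t1_view[1])   # (b1, b2)
t2_decoded = (t2_view[0] ^ t2_view[1], t2_view[0])   # (b1, b2)
assert t1_decoded == (b1, b2) and t2_decoded == (b1, b2)
```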

  7. Gap Without Coding
  Example due to Sanders et al. (collaborators): coding capacity = h, routing capacity ≤ 2.
  [Figure: the example network, with source s]

  8. Multicasting
  Upper bound for multicast capacity C: C ≤ min{Ci}, where Ci is the min-cut to sink ti.
  • [ACLY00] - achievable!
  • [LYC02] - linear codes suffice!!
  • [KM01] - “finite field” linear codes suffice!!!
  [Figure: source s, network, sinks t1 … t|T| with min-cuts C1 … C|T|]

  9. Multicasting: F(2^m)-linear network [KM01]
  Source: group together m bits b1 … bm into one symbol.
  Every node: perform linear combinations of incoming symbols over the finite field F(2^m), with local coefficients β1, β2, …, βk.
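
  To make the per-node operation concrete, a minimal sketch assuming m = 8 and the common irreducible polynomial x^8+x^4+x^3+x+1 (0x11B); the field choice, function names, and coefficient values are illustrative, not from the talk.

```python
def gf_mul(a: int, b: int, poly: int = 0x11B, m: int = 8) -> int:
    """Multiply two F(2^m) elements represented as integers (carry-less, mod poly)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return result

def node_output(incoming_symbols, betas):
    """One coding node: output the F(2^m)-linear combination sum_k beta_k * symbol_k."""
    out = 0
    for sym, beta in zip(incoming_symbols, betas):
        out ^= gf_mul(sym, beta)     # addition in F(2^m) is bitwise XOR
    return out

# Two incoming 8-bit symbols combined with local coefficients beta1, beta2.
print(hex(node_output([0x57, 0x83], [0x01, 0x02])))
```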

  10. Multicasting
  Upper bound for multicast capacity C: C ≤ min{Ci}.
  • [ACLY00] - achievable!
  • [LYC02] - linear codes suffice!!
  • [KM01] - “finite field” linear codes suffice!!!
  • [JCJ03],[SET03] - polynomial time code design!!!!
  [Figure: source s, network, sinks t1 … t|T| with min-cuts C1 … C|T|]

  11. Thms: Deterministic Codes
  • For m ≥ log(|T|), there exists an F(2^m)-linear network code, and it can be designed in O(|E| |T| C (C + |T|)) time. [JCJ03],[SET03]
  • [JCJ03],[LL03]: there exist networks for which the minimum required m ≈ 0.5 log(|T|).

  12. Justifications revisited - II: Robustness/Distributed design
  [Figure: source s, sinks t1 and t2; one link breaks]

  13. Justifications revisited - II: Robustness/Distributed design
  [Figure: source s, sinks t1 and t2. Using finite field arithmetic, the coded packets b1+b2 and b1+2b2 are sent along different paths; each sink recovers (b1,b2) from the combinations it receives.]
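
  A toy sketch of the decoding this slide implies: from the two independent combinations b1+b2 and b1+2b2 a sink solves for (b1, b2). For brevity it works over a prime field (q = 257 is an arbitrary choice) rather than the F(2^m) of the talk.

```python
q = 257                      # illustrative prime field; the talk uses F(2^m)
b1, b2 = 42, 99

p1 = (b1 + b2) % q           # coded packet on one incoming path
p2 = (b1 + 2 * b2) % q       # coded packet on the other incoming path

# Solve the 2x2 system [[1, 1], [1, 2]] * (b1, b2)^T = (p1, p2)^T over GF(q).
rec_b2 = (p2 - p1) % q
rec_b1 = (p1 - rec_b2) % q
assert (rec_b1, rec_b2) == (b1, b2)
```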

  14. Thm: Random Robust Codes
  Original network: C = min{Ci}.
  [Figure: source s, sinks t1 … t|T| with min-cuts C1 … C|T|]

  15. Thm: Random Robust Codes
  Faulty network: C' = min{Ci'}. If the value of C' is known to s, the same code can achieve rate C'! (interior nodes oblivious)
  [Figure: source s, sinks t1 … t|T| with reduced min-cuts C1' … C|T|']

  16. Thm: Random Robust Codes
  For m sufficiently large and rate R < C: choose random local coefficients [β] at each node. Probability over [β] that the code works > 1 - |E| |T| 2^(-m(C-R)+|V|). [JCJ03] [HKMKE03] (different notions of linearity)
  Much “sparser” linear operations (O(m) instead of O(m^2)) [JCE06?]
  Decentralized design vs. probability of error - a necessary evil?
  [Figure: input symbols b1 … bm mapped to output symbols b'1 … b'm and b''1 … b''m]
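
  For a feel of the bound quoted on this slide (as transcribed above), a quick numerical evaluation; all parameter values are made up.

```python
# P(random code works) > 1 - |E| * |T| * 2^(-m(C - R) + |V|), as on the slide.
E, T, V = 30, 4, 20      # edges, sinks, nodes (hypothetical values)
C, R = 5, 4              # min-cut and target rate
m = 40                   # symbols live in F(2^m)

bound = 1 - E * T * 2.0 ** (-m * (C - R) + V)
print(bound)             # ~0.9999: the bound approaches 1 as m grows
```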

  17. Zero-error Decentralized Codes
  • No a priori network topological information available - information can only be percolated down links.
  • Desired: zero-error code design.
  • One additional resource: each node vi has a unique ID number i (GPS coordinates/IP address/…).
  • Need to use yet other types of linear codes. [JHE06?]

  18. Inter-relationships between notions of linearity [JEHM04]
  Notions compared: M = Multicast, G = General; Global vs. Local; Local I/O ≠ vs. Local I/O =; a = Acyclic; A = Algebraic; B = Block; C = Convolutional; "Does not exist"; Є = epsilon rate loss.
  [Table of relationships between these notions, from [JEHM04]]

  19. Justifications revisited - III: Security
  Evil adversary hiding in the network: eavesdropping, injecting false information. [JLHE05]
  [Figure: source s, sinks t1 and t2]

  20. Greater throughput, robust against random errors . . . Aha! Network Coding!!!

  21. [Figure: a network with unknown links, marked "? ? ?"]

  22. [Figure: the same network, now labeled with Xavier, Yvonne, and Zorba]

  23. Unicast
  Eureka! Who knows what (X = Xavier, Y = Yvonne, Z = Zorba):
  • Code (X,Y,Z)
  • Message (X,Z)
  • Bad links (Z)
  • Coin (X)
  • Transmission (Y,Z)
  • Decode correctly (Y)

  24. Unicast
  • |E| directed unit-capacity links between Xavier and Yvonne.
  • Zorba (hidden to Xavier/Yvonne) controls |Z| links Z; p = |Z|/|E|.
  • Xavier and Yvonne share no resources (private key, randomness).
  • Zorba is computationally unbounded; Xavier and Yvonne can only perform “simple” computations.
  • Zorba knows the protocols and already knows almost all of Xavier’s message (except Xavier’s private coin tosses).
  • Goal: transmit at “high” rate and w.h.p. decode correctly.

  25. Background
  • Noisy channel models (Shannon, …)
  • Binary Symmetric Channel
  [Plot: capacity C = 1 - H(p) versus noise parameter p]

  26. Background
  • Noisy channel models (Shannon, …)
  • Binary Symmetric Channel
  • Binary Erasure Channel
  [Plot: capacity C = 1 - p versus noise parameter p]
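
  For reference, a small sketch of the two capacity curves plotted on the last two slides, C = 1 - H(p) for the BSC and C = 1 - p for the BEC (standard formulas, not spelled out in the transcript).

```python
from math import log2

def binary_entropy(p: float) -> float:
    """H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p: float) -> float:
    return 1 - binary_entropy(p)     # Binary Symmetric Channel

def bec_capacity(p: float) -> float:
    return 1 - p                     # Binary Erasure Channel

for p in (0.0, 0.1, 0.5):
    print(p, round(bsc_capacity(p), 3), bec_capacity(p))
```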

  27. Background
  • Adversarial channel models
  • “Limited-flip” adversary (Hamming, Gilbert-Varshamov, McEliece et al., …)
  • Shared randomness, private key, computationally bounded adversary, …
  [Plot: capacity C versus noise parameter p]

  28. Unicast - Results
  [Plot: capacity C versus noise parameter p, showing the curve 1 - p]

  29. Unicast - Results
  [Plot: capacity C versus noise parameter p, marked with question marks]

  30. Unicast - Results
  (Just for this talk, Zorba is causal.)
  [Plot: capacity C versus noise parameter p]

  31. General Multicast Networks
  p = |Z|/h; slightly more intricate proof.
  [Plot: capacity C (normalized by h) versus p. Figure: source S with min-cut h, adversary Z, receivers R1 … R|T|]

  32. Unicast - Encoding
  [Figure: encoding layout with dimensions |E|-|Z| and |E|]

  33. Unicast - Encoding
  MDS code, block length n over the finite field Fq: the |E|-|Z| message rows x1 … x|E|-|Z| (each holding n(1-ε) symbols from Fq) are encoded by an |E| x (|E|-|Z|) Vandermonde matrix into |E| coded rows T1 … T|E| of length n; the extra nε symbols per row carry "easy to use consistency information" (ε is a rate fudge-factor).
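
  A minimal sketch of the Vandermonde/MDS encoding step as I read this slide; the prime field q = 257, the tiny dimensions, and the message values are illustrative assumptions (the consistency symbols of the next slide are omitted here).

```python
q = 257                   # illustrative prime field standing in for F_q
num_msg_rows = 2          # plays the role of |E| - |Z|
num_coded_rows = 4        # plays the role of |E|

# Message rows x_1 ... x_{|E|-|Z|}, each a vector of symbols from F_q.
X = [[10, 20, 30], [40, 50, 60]]

# Vandermonde matrix: row i is (1, i, i^2, ...) for distinct evaluation points i.
V = [[pow(i, j, q) for j in range(num_msg_rows)] for i in range(1, num_coded_rows + 1)]

# Coded rows T_1 ... T_|E| = V * X (over F_q).
T = [[sum(V[i][k] * X[k][col] for k in range(num_msg_rows)) % q
      for col in range(len(X[0]))]
     for i in range(num_coded_rows)]
print(T)  # MDS property: any |E|-|Z| uncorrupted coded rows determine X
```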

  34. Unicast - Encoding
  Each coded row Ti is followed (in its nε redundant symbols) by r and the hashes D1 … D|E|, where Di = Ti(1)·1 + Ti(2)·r + … + Ti(n(1-ε))·r^(n(1-ε)).

  35. Unicast - Transmission
  Each transmitted row (Ti, r, D1 … D|E|) may arrive corrupted as (Ti', r', D1' … D|E|').

  36. Unicast - Quick Decoding
  Choose the majority value of (r, D1, …, D|E|) over the received rows. For each i, check whether Di = Ti'(1)·1 + Ti'(2)·r + … + Ti'(n(1-ε))·r^(n(1-ε)); if so, accept Ti, else reject Ti. Use the accepted Tis to decode. A corrupted row is accepted only if ∑k (Ti(k) - Ti'(k))·r^k = 0, a polynomial in r of degree n over Fq whose variable r is unknown to Zorba, so the probability of error is < n/q << 1.
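
  A minimal sketch of the consistency check described on slides 34-36: a row is hashed with D = Σk T(k)·r^k for an r the adversary does not see, and a corrupted row passes the check only with probability < n/q. The prime field q and the random test data are illustrative assumptions.

```python
import random

q = 1_000_003                              # illustrative prime field size

def hash_row(row, r):
    """D = sum_k row[k] * r^k over F_q, evaluated by Horner's rule."""
    acc = 0
    for symbol in reversed(row):
        acc = (acc * r + symbol) % q
    return acc

r = random.randrange(1, q)                 # hidden from the adversary
row = [random.randrange(q) for _ in range(50)]
D = hash_row(row, r)                       # consistency information sent with the row

corrupted = list(row)
corrupted[7] = (corrupted[7] + 1) % q      # adversary tampers with one symbol

print(hash_row(row, r) == D)               # True: a clean row is accepted
print(hash_row(corrupted, r) == D)         # almost surely False (passes w.p. < n/q)
```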

  37. General Multicast Networks
  [Figure: a multicast network with unknown links, marked "? ? ?"]

  38. General Multicast Networks
  Observation: can treat adversaries as new sources.
  [Plot: capacity C (normalized by h) versus p = |Z|/h. Figure: source S, adversarial sources S'1 … S'|Z|, receivers R1 … R|T|]

  39. General Multicast Networks
  Each receiver Ri observes yi = Ti x, where x is the source’s transmission.
  [Figure: source S sends x; adversarial sources S'1 … S'|Z|; receivers R1 … R|T|]

  40. General Multicast Networks
  With the adversaries active, yi' = Ti x + Ti' ai. The source symbols (x(1), x(2), …, x(n)) form an R-dimensional subspace X, and the adversary's injected symbols (ai(1), ai(2), …, ai(n)) form a |Z|-dimensional subspace Ai. W.h.p. over the network code design, TX and TAi do not intersect (robust codes…). W.h.p. over x, (y(1), y(2), …, y(R+|Z|)) forms a basis for TX ⊕ TAi. But a basis for TX is already known, therefore a basis for TAi can be obtained.
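
  A loose numerical illustration of the two "w.h.p." claims on this slide, using numpy over the reals instead of the finite field of the talk; the dimensions and randomness are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, Z = 10, 4, 3                     # ambient dimension, rate, #bad links (made up)

TX = rng.standard_normal((n, R))       # basis of the message subspace (columns)
TA = rng.standard_normal((n, Z))       # basis of the adversarial subspace

# Trivial intersection of TX and TA_i  <=>  the stacked bases have rank R + |Z|.
print(np.linalg.matrix_rank(np.hstack([TX, TA])) == R + Z)   # True

# R + |Z| generic combinations drawn from TX (+) TA_i form a basis of that sum.
Y = np.hstack([TX, TA]) @ rng.standard_normal((R + Z, R + Z))
print(np.linalg.matrix_rank(Y) == R + Z)                     # True
```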

  41. Variations - Feedback
  [Plot: capacity C versus p]

  42. Variations – Know thy enemy
  [Plots: capacity C versus p, two cases]

  43. Variations – Omniscient but not Omnipresent
  Achievability: Gilbert-Varshamov, Algebraic Geometry codes. Converse: Generalized MRRW bound.
  [Plot: capacity C versus p]

  44. Variations – Random Noise
  Separation.
  [Plot: capacities CN and C versus p]

  45. Ignorant Zorba - Results
  [Plot: capacity C versus noise parameter p, rate 1 - 2p, with regions labeled Xp+Xs and Xs]

  46. Ignorant Zorba - Results
  MDS code: combinations a+b+c, a+2b+4c, a+3b+9c.
  [Plot: capacity C versus noise parameter p, rate 1 - 2p, with regions labeled Xp+Xs and Xs]

  47. Overview of results
  • Centralized design
    • Deterministic
  • Decentralized design
    • Randomized
    • Deterministic
  • Complexity
    • Lower bounds
    • Sparse codes
  • Types of linearity - interrelationships
  • Adversaries

  48. T H E E N D
