Data Persistence in Sensor Networks: Towards Optimal Encoding for Data Recovery in Partial Network Failures Abhinav Kamra, Jon Feldman, Vishal Misra and Dan Rubenstein DNA Research Group, Columbia University
Motivation and Model Typical Scenario of Sensor Networks • Large number of nodes deployed to "sense" the environment • Data collected periodically is pulled/pushed through a sink/gateway node • Nodes prone to failure (disaster, battery depletion, targeted attack) Want data to survive individual node failures: "Data Persistence"
Overview • Erasure codes • LT-Codes • Soliton distribution • Coding for failure-prone sensor networks • Major results • A brief sketch of proofs • A case study of failure-prone sensor networks
Erasure Codes [Figure: an n-block message is fed to the encoding algorithm, producing cn encoded blocks; after transmission, the decoding algorithm reconstructs the n-block message from the received blocks]
Luby Transform Codes • Simple Linear Codes • Improvement over “Tornado codes” • Rateless Codes
Erasure Codes: LT-Codes [Figure: file F split into n = 5 input blocks b1 … b5]
LT-Codes: Encoding • Pick degree d1 from a pre-specified distribution. (d1 = 2) • Select d1 input blocks uniformly at random. (Pick b1 and b4) • Compute their sum (XOR). • Output the sum together with the component block IDs [Figure: encoded symbol c1 = b1 ⊕ b4 added to E(F)]
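The four encoding steps on this slide can be sketched in Python (a minimal illustration, not the authors' implementation; the function name, the integer representation of blocks, and the `(degree, probability)` list format are assumptions for the example):

```python
import random

def lt_encode_symbol(blocks, degree_dist, rng=random):
    """Produce one LT-encoded symbol from the input blocks.

    blocks      -- input blocks, represented here as ints for simplicity
    degree_dist -- list of (degree, probability) pairs to sample from
    Returns (xor_sum, ids): the XOR of the chosen blocks and their IDs.
    """
    # Step 1: pick a degree d from the pre-specified distribution.
    r, d = rng.random(), 1
    for degree, p in degree_dist:
        r -= p
        if r <= 0:
            d = degree
            break
    # Step 2: select d distinct input blocks uniformly at random.
    ids = rng.sample(range(len(blocks)), d)
    # Step 3: compute their sum (XOR).
    xor_sum = 0
    for i in ids:
        xor_sum ^= blocks[i]
    # Step 4: output the sum together with the component block IDs.
    return xor_sum, sorted(ids)
```

With a degenerate distribution `[(2, 1.0)]` every symbol has degree 2, matching the d1 = 2 example on the slide.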
LT-Codes: Encoding [Figure: encoding repeated to produce symbols c1 … c7 of E(F) from input blocks b1 … b5]
LT-Codes: Decoding [Figure: animation of iterative decoding over E(F) = c1 … c7 — each degree-1 symbol reveals an input block, which is then XORed out of every other symbol containing it; repeating this peels off blocks b1, b2, b5, … of F]
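The iterative ("peeling") decoding illustrated on this slide can be sketched as follows (a minimal Python illustration under the same XOR-of-int-blocks representation as before; not the authors' code):

```python
def lt_decode(n, symbols):
    """Iterative (peeling) LT decoder.

    n       -- number of original blocks
    symbols -- list of (xor_value, iterable_of_block_ids) pairs
    Returns {block_id: value} for every block recovered.
    """
    symbols = [[v, set(ids)] for v, ids in symbols]
    recovered = {}
    progress = True
    while progress:
        progress = False
        for sym in symbols:
            if len(sym[1]) == 1:            # degree-1 symbol found
                (i,) = sym[1]
                if i not in recovered:
                    recovered[i] = sym[0]
                    progress = True
                # XOR the recovered block out of every other symbol
                for other in symbols:
                    if other is not sym and i in other[1]:
                        other[0] ^= recovered[i]
                        other[1].discard(i)
                sym[1].clear()              # symbol fully consumed
    return recovered
```

Decoding stops as soon as no degree-1 symbol remains, which is exactly why partial recovery (rather than all-or-nothing decoding) is the relevant metric later in the talk.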
Degree Distribution for LT-Codes • Ideal Soliton Distribution: ρ(1) = 1/N, ρ(d) = 1/(d(d−1)) for d = 2, …, N • Avg degree H(N) ≈ ln(N) • In expectation: exactly one degree-1 symbol in each round of decoding • Distribution very fragile in practice
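The Ideal Soliton distribution, ρ(1) = 1/N and ρ(d) = 1/(d(d−1)) for d = 2 … N, telescopes to total probability 1 and has mean ≈ ln N. A quick tabulation (an illustrative sketch; the function name is ours):

```python
def ideal_soliton(n):
    """Ideal Soliton distribution over degrees 1..n:
    rho(1) = 1/n, rho(d) = 1/(d*(d-1)) for d = 2..n."""
    rho = [0.0] * (n + 1)            # rho[0] unused
    rho[1] = 1.0 / n
    for d in range(2, n + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho
```

Because 1/(d(d−1)) = 1/(d−1) − 1/d, the probabilities sum to 1/N + (1 − 1/N) = 1 exactly.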
Failure-prone Sensor Networks • Earlier work asks: how many encoded symbols are needed to recover all original symbols (all-or-nothing decoding)? • Failure-prone networks ask: how many original symbols can be recovered from the encoded symbols that survive?
Iterative Decoder [Figure: 4 received encoded symbols fed to the iterative decoder, which outputs recovered symbols such as x2 and x5] • 5 original symbols x1 … x5 • 4 encoded symbols received • Each encoded symbol is the XOR of its component original symbols
Sensor Network Model • Encoded symbols remaining: k • Want to maximize r, the number of recovered original data symbols • No idea a priori what k will be
Coding is bad for small k [Plot: N = 128] • N original symbols • k encoded symbols received • If k ≤ 0.75N, no coding required
Proof Sketch Theorem: To recover the first N/2 symbols, it is best not to encode at all. Proof: a. Let C(i, j) = expected symbols recovered from i degree-1 symbols and j symbols of degree 2 or more. b. C(i, j) ≤ C(i+1, j−1) if C(i, j) ≤ N/2: • Sort the given symbols in decoding order • All degree-1 symbols are decoded before the other symbols • So the last symbol in decoding order has degree > 1 • Replace this symbol by a random degree-1 symbol • The new degree-1 symbol is more likely to be useful c. Hence more degree-1 symbols ⇒ better output, so no coding is best for recovering any first N/2 symbols d. All degree 1 ⇒ Coupon Collector's ⇒ ≈ 3N/4 symbols to recover N/2 distinct symbols
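The coupon-collector step in the proof sketch can be checked numerically (our illustration, not from the talk): with i distinct blocks already collected, the next new block takes N/(N−i) uniform draws in expectation, so collecting N/2 distinct blocks takes N(H_N − H_{N/2}) ≈ N ln 2 draws, on the order of the 3N/4 figure quoted above.

```python
def expected_draws(n, target):
    """Expected number of uniform draws (with replacement) from n
    coupons needed to collect `target` distinct ones."""
    # With i distinct coupons seen, a new one takes n/(n-i) draws
    # in expectation; sum over i = 0 .. target-1.
    return sum(n / (n - i) for i in range(target))
```

For N = 128 this gives roughly 88 draws to collect 64 distinct blocks.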
Ideal Degree Distribution Theorem: To recover r data units such that r < jN/(j+1), the optimal degree distribution has symbols of degree j or less only.
Lower degrees are better for small k [Plot: N = 128] • If k ≤ kj, use symbols of degree up to j • So use kj − kj−1 symbols of degree j to get close to the optimal distribution
Case Study: Single-sink Sensor Network Storage [Figure: sensor nodes 1–4 and a sink; the nodes exchange symbols with each other, and nodes 2 and 3 transfer new symbols to the sink]
Case Study: Single-sink Sensor Network [Figure: nodes 1–4 and the sink] • Network prone to failure • Nodes store unencoded symbols at first and higher-degree symbols as time goes on • Sink receives low-degree symbols first and higher degrees as time goes on
Distributed Simulation: Clique Topology • N = 128 nodes in a clique topology • Sink receives one symbol per unit time
Distributed Simulation: Chain Topology • N = 128 nodes in a chain topology [Figure: nodes 1 – 2 – 3 – … – N in a chain]
Related Work • Bulk Data Distribution: coding is useful • Tornado ("Efficient Erasure Correcting Codes" by M. Luby et al., IEEE Transactions on Information Theory, vol. 47, no. 2, 2001) • LT-Codes ("LT Codes" by M. Luby, FOCS 2002) • Reliable Storage in Sensor Networks • Decentralized erasure codes ("Ubiquitous Access to Distributed Data in Large-Scale Sensor Networks through Decentralized Erasure Codes" by A. Dimakis et al., IPSN 2005) • Random Linear Coding ("How Good is Random Linear Coding Based Distributed Networked Storage?" by M. Medard et al., NetCod 2005)