Distributed & Adaptive Data Compression in Wireless Sensor Networks Dr. Sudharman K. Jayaweera and Amila Kariyapperuma, ECE Department, University of New Mexico Presented by Ankur Sharma, Department of ECE, Indian Institute of Technology, Roorkee 5th July, 2007 Expand Your Engineering Skills (EYES), Summer Internship Program, 2007
Introduction • Wireless Sensor Networks (WSNs) consist of nodes that sense quantities such as • Temperature • Pressure • Light • Magnetic field • Infrared • Audio/Video, etc. • An ad hoc WSN may require inter-sensor communication.
Problem • Nodes • have small physical dimensions • are battery operated • The major concern is energy consumption • Failure of nodes due to energy depletion can lead to • partition of the sensor network • loss of critical information • The application/system requires that every node know the data of every other node.
Related Work • Energy-aware routing & efficient information processing [Shah and Rabaey, 2002] • Local compression & probabilistic estimation schemes [Luo, 2005] • Distributed compression & adaptive signal processing in sensor networks with a fusion center [Chou, 2003]
Our Approach • [Figure: four sensors (1–4) exchanging i-bit compressed messages with one another]
Proposed Algorithm • Sensor j predicts its own reading based on its past readings and the readings from other sensors. • Depending on the error between the predicted and actual values, sensor j calculates the number of compressed bits i using either the • Chebyshev's inequality method • Exact error method
Code Construction • A codebook to encode data X into i bits. • A single underlying codebook, shared by all sensors and never changed. • Supports multiple compression rates (see the tree structure below).
A Tree-based Codebook • [Figure: binary tree over the root codebook; each branch is labeled 0/1, so the i least significant bits of a codeword's index select one of 2^i subcodebooks]
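A minimal Python sketch of this structure (the 12-bit root codebook over [−128, 128] follows the simulation setup later in the deck; helper names are illustrative, not from the paper's code):

```python
# Tree-based codebook sketch: the i least significant bits of a codeword's
# index select one of 2**i subcodebooks with codeword spacing 2**i * STEP.
N_BITS = 12                      # root codebook resolution (A/D converter)
LO, HI = -128.0, 128.0           # dynamic range from the simulation setup
STEP = (HI - LO) / 2**N_BITS     # root quantization step (Delta)

def root_codeword(index):
    """Value of the root-codebook entry with the given n-bit index."""
    return LO + (index + 0.5) * STEP

def subcodebook(i, lsb_bits):
    """Indices of the level-i subcodebook selected by the i LSBs."""
    return [c for c in range(2**N_BITS) if c % 2**i == lsb_bits]
```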
Chebyshev's Inequality Method • To prevent decoding errors with i bits, the prediction error e must stay within half the codeword spacing of the level-i subcodebook: |e| < 2^(i−1)·Δ, where Δ is the root quantization step. • Chebyshev bound for the probability of decoding error: P(|e| ≥ 2^(i−1)·Δ) ≤ σe² / (2^(i−1)·Δ)². • Setting this bound to the allowed error probability Pe gives the required value of i: i = ⌈1 + log₂(σe / (Δ·√Pe))⌉ (a sketch follows).
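A minimal sketch of this rule under the reconstruction above (σe is an estimate of the prediction-error standard deviation; `step` is the quantization step from the codebook sketch):

```python
import math

def chebyshev_bits(sigma, p_e, step, n_bits=12):
    """Smallest i with P(|e| >= 2**(i-1)*step) <= p_e under Chebyshev's bound.

    Chebyshev: P(|e| >= t) <= sigma**2 / t**2, so we need
    t = 2**(i-1) * step >= sigma / sqrt(p_e).
    """
    if sigma <= 0:
        return 1
    i = math.ceil(1 + math.log2(sigma / (step * math.sqrt(p_e))))
    return min(max(i, 1), n_bits)   # clamp to a valid bit count
```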
Exact Error Method • To prevent decoding errors using i bits. • As we know the exact error e in the prediction of sensor data X, the number of bits is the smallest i with |e| < 2^(i−1)·Δ (see the sketch below). • Extra bits are also sent with each message, specifying the number of bits used.
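A sketch of the bit computation under the same decoding condition; the loop form is for clarity:

```python
def exact_error_bits(err, step, n_bits=12):
    """Smallest i such that |err| < 2**(i-1) * step, i.e. the level-i
    subcodebook is guaranteed to decode correctly."""
    i = 1
    while abs(err) >= 2**(i - 1) * step and i < n_bits:
        i += 1
    return i
```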
Encoder Sensors • X is stored as the closest representation from the 2^n values in the root codebook (A/D converter). • Mapping from X to the bits that specify the subcodebook at level i is done using f(X) = c mod 2^i, i.e. the i least significant bits of X's root-codebook index c (sketched below).
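A sketch of the encoder, reusing the constants from the codebook sketch (the mod-2^i coset mapping is the reconstruction assumed above):

```python
def encode(x, i):
    """Quantize x to the nearest root codeword and send the i LSBs of its index."""
    c = round((x - LO) / STEP - 0.5)     # nearest root-codebook index
    c = min(max(c, 0), 2**N_BITS - 1)    # keep within the A/D range
    return c % 2**i                      # f(x): selects the subcodebook
```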
Decoder Sensors • Decoders receive the i-bit code sequence f(X). • Traverse the tree starting from the LSB of the code sequence to find the appropriate subcodebook S. • Calculate the side information Y as the linear prediction of X from the readings already available (see Correlation Tracking). • Decode the side information Y to the closest value in S: X̂ = argmin over s ∈ S of |Y − s| (sketched below).
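A sketch of the decoder (linear search over the subcodebook for clarity; a real node would index the coset directly):

```python
def decode(f_x, i, y):
    """Pick the codeword closest to side information y in the
    subcodebook S selected by the received i bits f_x."""
    S = subcodebook(i, f_x)              # from the codebook sketch above
    best = min(S, key=lambda c: abs(root_codeword(c) - y))
    return root_codeword(best)
```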
Correlation Tracking • Linear prediction method • Analytically tractable • Optimal when readings can be modeled as jointly Gaussian random variables. • The first sensor always sends its data compressed w.r.t. its own past data. • Prediction of X is X̂ = hᵀu, where u collects the sensor's past readings and the readings received from other sensors, and h is the filter coefficient vector (sketch below).
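A sketch of the prediction step; the regressor layout (own past readings followed by the other sensors' readings) is an assumption consistent with the slide, not necessarily the paper's exact ordering:

```python
import numpy as np

def predict(h, past_own, received_others):
    """X_hat = h^T u, with u = [own past readings, other sensors' readings]."""
    u = np.concatenate([past_own, received_others])
    return float(h @ u), u               # u is reused by the RLS update
```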
Least-Squares Parameter Estimation • The prediction error is e_k = x_k − hᵀu_k. • Choose the filter coefficients to minimize the exponentially weighted least-squares error Σ_{m≤k} λ^(k−m) e_m². • The least-squares filter coefficient vector at time k is h_k = (U_kᵀ Λ_k U_k)^(−1) U_kᵀ Λ_k x_{1:k}, where U_k stacks the regressors u_mᵀ, x_{1:k} stacks the observed readings, and Λ_k = diag(λ^(k−1), …, λ, 1) (sketch below).
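A batch sketch of the weighted least-squares solution above (λ is the forgetting factor; names are illustrative):

```python
import numpy as np

def weighted_ls(U, x, lam=0.99):
    """h_k = (U^T L U)^-1 U^T L x with L = diag(lam**(k-1), ..., lam, 1).

    U: (k, p) matrix of regressor rows u_m^T; x: (k,) observed readings.
    """
    k = len(x)
    w = lam ** np.arange(k - 1, -1, -1)          # lam**(k-1), ..., lam, 1
    Uw = U * w[:, None]                          # apply L without forming it
    return np.linalg.solve(U.T @ Uw, Uw.T @ x)
```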
Recursive Least-Squares (RLS) Algorithm • The filter coefficient computation is performed adaptively using RLS: h_k = h_{k−1} + g_k (x_k − u_kᵀ h_{k−1}), where the gain is g_k = P_{k−1} u_k / (λ + u_kᵀ P_{k−1} u_k) and the inverse correlation matrix is updated as P_k = λ^(−1) (P_{k−1} − g_k u_kᵀ P_{k−1}) (sketch below). • For initialization, each sensor sends uncoded data samples. • In our approach the reference sensor updates the corresponding coefficients and sends them to all other sensors.
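A sketch of one step of the standard RLS recursion reconstructed above (initialize h = 0 and P = δI with a large δ):

```python
import numpy as np

def rls_update(h, P, u, x, lam=0.99):
    """One RLS step; returns updated coefficients h and inverse correlation P."""
    Pu = P @ u
    g = Pu / (lam + u @ Pu)              # gain vector g_k
    h = h + g * (x - u @ h)              # correct by the a priori error
    P = (P - np.outer(g, Pu)) / lam      # P_k = (P - g u^T P) / lam
    return h, P
```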
Decoding Errors • No decoding errors in the exact error method. • In Chebyshev's method, the number of encoding bits is chosen to meet a given probability of error and is updated every 100 samples. • This leads to a few decoding errors, but results in higher compression.
Implementation & Performance • Simulations were performed on humidity measurements. • We assumed a 12-bit A/D converter with a dynamic range of [−128, 128]. • Simulated about 18,000 samples for each sensor (90,000 in total). • Sensor orderings are randomized every 500 samples. • For RLS training, the first 25 samples of each sensor are transmitted without any compression. • Coefficients are updated and shared every 500 samples.
Exact Error Implementation • With each code sequence, an extra 4 bits specifying the number of bits are also sent. • Decoding error: 0 • Average energy saving: 43.34%
Chebyshev's Inequality Method • Encoding bits are specified every 100 samples • Case I: probability of error (Pe) = 0.5% • Average decoding error: 0.07% • Average energy saving: 45.74%
Chebyshev's Inequality Method • Case II: probability of error (Pe) = 1.0% • Average decoding error: 0.13% • Average energy saving: 49.74%
Chebyshev's Inequality Method • Case III: probability of error (Pe) = 1.5% • Average decoding error: 2.29% • Average energy saving: 52.27%
Comparison • Exact Error Method ("instantaneous approach") • ZERO probability of decoding error • Lower compression (due to the extra bit information) • Strict bound • Chebyshev's Method ("average approach") • Probability of decoding error within a required bound • Higher compression can be achieved by varying the required probability of error • Loose bound
For Temperature Data • Exact error method • Average energy saving: 56.66% • Average decoding error: 0 • Chebyshev's method (Pe = 0.01) • Average energy saving: 66.98% • Average decoding error: 0.61%
For Light Data • Exact error method • Average energy saving: 33.52% • Average decoding error: 0 • Chebyshev's method (Pe = 0.01) • Average energy saving: 19.29% • Average decoding error: 1.13%
Conclusions • The energy savings achieved in our simulations are conservative estimates of what can be achieved in practice. • Further work can be done on • better predictive models • tighter probability-of-error bounds • The scheme can be integrated with an energy-saving routing algorithm to increase the energy savings.
Thank You! Queries, please.