Distributed network signal processing
Sensorwebs group: http://basics.eecs.berkeley.edu/sensorwebs
Kannan Ramchandran (Pister, Sastry, Anantharam, Jordan, Malik)
Electrical Engineering and Computer Science, University of California at Berkeley
kannanr@eecs.berkeley.edu | http://www.eecs.berkeley.edu/~kannanr
DARPA Sensorwebs
• Creation of a fundamental unifying framework for real-time distributed/decentralized information processing, with applications to Sensor Webs, consisting of:
• MEMS (Pister)
• Distributed SP (Ramchandran)
• Distributed Control (Sastry)
• "Real-time" Information Theory (Anantharam)
• Distributed Learning Theory (Jordan)
Dense low-power sensor network attributes
• Dense clustering of sensors/embedded devices
• Highly correlated but spatially distributed data
• Limited system resources: energy, bandwidth
• Unreliable system components
• Wireless medium: dynamic SNR/interference
• End goal is key: tracking, detection, inference
[Figure: disaster-management application scenario]
Signal processing & comm. system challenges
• Distributed & scalable multi-terminal architectures for coding, clustering, tracking, estimation, detection
• Distributed sensor fusion based on statistical sensor data models
• Integration of layers in the network stack: joint source-network coding, joint coding/routing
• Reliability through diversity in representation & transmission
• Energy optimization, computation vs. transmission cost: ~100 nJ/bit vs. ~1 pJ/instruction (HW) & ~1 nJ/instruction (SW)
Roadmap
• Distributed compression: basics, new results
• Networking aspects: packet aggregation
• Reliability through diversity: multiple descriptions
• Distributed multimedia streaming: robust, scalable architecture
Real-world scenario: Blackouts project
• Near real-time room-condition monitoring using sensor motes in Cory Hall (Berkeley campus); data goes online.
• All sensors periodically route readings to a central (yellow) node.
• Strong multiplication of redundancy due to topology.
• http://blackouts.eecs.berkeley.edu
Distributed compression: basic ideas
• Suppose X, Y are correlated.
• Y is available at the decoder but not at the encoder.
• How can we compress X close to H(X|Y)?
• Key idea: discount I(X;Y), since H(X|Y) = H(X) - I(X;Y).
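Not from the slides: a minimal Python check of the identity H(X|Y) = H(X) - I(X;Y) on a toy joint distribution, with illustrative numbers.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a probability vector (zeros ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy joint pmf over binary (X, Y); rows index x, columns index y.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

H_XY = entropy(p_xy.ravel())       # H(X,Y)
H_X  = entropy(p_xy.sum(axis=1))   # H(X)
H_Y  = entropy(p_xy.sum(axis=0))   # H(Y)

H_X_given_Y = H_XY - H_Y           # chain rule
I_XY = H_X + H_Y - H_XY            # mutual information

assert np.isclose(H_X_given_Y, H_X - I_XY)
print(f"H(X) = {H_X:.3f} bits, H(X|Y) = {H_X_given_Y:.3f} bits")
```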
Information theory: binning argument, Slepian-Wolf ('73)
• Make a main codebook of all typical sequences: 2^{nH(X)} and 2^{nH(Y)} elements.
• Partition the X codebook into 2^{nH(X|Y)} bins.
• On observing X^n, transmit the index of the bin it belongs to.
• The decoder finds the member of that bin that is jointly typical with Y^n.
• Can extend to "symmetric cases".
[Figure: codebook of typical sequences partitioned into bins]
Symmetric case: joint binning
• Both codebooks are binned; product of bin sizes > 2^{nI(X;Y)}.
• Rates limited by: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y).
[Figure: jointly binned codebooks for X and Y, with the achievable rate region]
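A hedged sketch of the rate-region test above; the entropy values plugged in are illustrative, not from the slides.

```python
# Check whether a rate pair (Rx, Ry) lies in the Slepian-Wolf region.
def in_slepian_wolf_region(Rx, Ry, H_X_given_Y, H_Y_given_X, H_XY):
    """True iff Rx >= H(X|Y), Ry >= H(Y|X), and Rx + Ry >= H(X,Y)."""
    return Rx >= H_X_given_Y and Ry >= H_Y_given_X and Rx + Ry >= H_XY

# Hypothetical entropies: H(X|Y) = H(Y|X) = 0.72, H(X,Y) = 1.72 bits.
print(in_slepian_wolf_region(0.9, 0.9, 0.72, 0.72, 1.72))  # True
print(in_slepian_wolf_region(0.8, 0.8, 0.72, 0.72, 1.72))  # False: sum rate too low
```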
Simple binary example
System 1: Y available at both encoder and decoder.
• X and Y are length-3 binary words (all values equally likely).
• Correlation: the Hamming distance between X and Y is at most 1.
• Example: when X = [0 1 0], Y is one of [0 1 0], [0 1 1], [0 0 0], [1 1 0].
• So X+Y is one of 000, 001, 010, 100: need only 2 bits to index it.
System 2: Y available at the decoder only, not at the encoder.
• What is the best that one can do?
• The answer is still 2 bits! How?
• Group the 8 values of X into four cosets of the repetition code: Coset-1 = {000, 111}, Coset-2 = {001, 110}, Coset-3 = {010, 101}, Coset-4 = {100, 011}.
• Encoder -> sends the index of the coset containing X (2 bits).
• Decoder -> reconstructs X as the member of the given coset closest to Y.
• Note:
• Coset-1 -> the repetition code itself.
• Each coset -> a unique "syndrome".
• DIstributed Source Coding Using Syndromes (DISCUS)
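A minimal sketch of this 3-bit example, using the parity-check matrix of the repetition code {000, 111}. This illustrates the syndrome idea from the slides; it is not the DISCUS codebase.

```python
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])  # parity-check matrix of the code {000, 111}

def encode(x):
    """Encoder sends only the 2-bit syndrome of X (instead of 3 bits)."""
    return H @ x % 2

def decode(s, y):
    """Decoder returns the member of the coset with syndrome s closest to Y."""
    coset = [x for x in np.ndindex(2, 2, 2)
             if np.array_equal(H @ np.array(x) % 2, s)]
    return min(coset, key=lambda x: np.sum(np.array(x) != y))

x = np.array([0, 1, 0])   # source word
y = np.array([0, 1, 1])   # side information, Hamming distance <= 1 from x
s = encode(x)             # 2 bits on the wire
print(decode(s, y))       # recovers (0, 1, 0)
```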
DISCUS: a constructive approach to distributed compression
General block diagram:
• Encoder: find the quantization index of source X using the source codebook; compute and transmit the syndrome I of the quantized codeword.
• Decoder: find the codeword closest to correlated source Y in the indicated coset; optimally estimate the source to form X̂.
• Intricate interplay between source coding, channel coding, and estimation theory: can leverage the latest advances in all three areas.
• 7-15 dB gains in reconstruction SNR over otherwise-optimal strategies that ignore the correlation, for typical correlated sources.
• Applications to digital upgrade of analog radio/television.
Continuous case: practical quantizer design issues
• Consider the following coset example: an 8-level scalar quantizer for source X with side information Y, cells 0-7 labeled A B C D A B C D.
• X and Y differ by at most 1 cell.
• Send only the index of the "coset": A, B, C, or D. We have compressed from 3 bits to 2 bits.
• The decoder decides which member of the coset is the correct answer.
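A hedged sketch of this 8-level coset quantizer: cells carry labels A,B,C,D,A,B,C,D, the encoder sends only the 2-bit label, and the decoder picks the labeled cell nearest the side information's cell. Cell width and values are illustrative.

```python
LABELS = "ABCD"

def quantize(x, delta=1.0):
    """Map x to one of 8 uniform cells of width delta over [0, 8*delta)."""
    return max(0, min(7, int(x // delta)))

def encode(x, delta=1.0):
    cell = quantize(x, delta)
    return LABELS[cell % 4]       # 2-bit coset label instead of 3-bit index

def decode(label, y, delta=1.0):
    """Pick the cell with this label closest to Y's cell (off by <= 1 cell)."""
    y_cell = quantize(y, delta)
    candidates = [c for c in range(8) if LABELS[c % 4] == label]
    cell = min(candidates, key=lambda c: abs(c - y_cell))
    return (cell + 0.5) * delta   # midpoint reconstruction

x, y = 3.2, 3.9                   # source and side information
print(decode(encode(x), y))       # 3.5: cell 3 recovered from only 2 bits
```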
Optimizing quantizer and rate
• Important note: the decoder can't differentiate between x and x + d* (the coset spacing ABCD).
• Therefore: must combine the statistics of the members of each bin.
• Use PDF periodization: repeat the PDF with period d*.
• Design using the periodized density f'_X(x).
Caveats: choice of d*
• If d* is too small: high coset (decoding) error.
• If d* is too large: high quantization error.
(Kusuma & Ramchandran '01)
Dynamic bit allocation
• Consider an iterative method: assign one bit at a time.
• Each bit can either improve quantization or improve code performance.
• Iteratively assign using rules of thumb.
• Multiple levels of protection across the bit planes, from most significant index to least significant: not transmitted, protection needed (send syndrome), full index sent, not transmitted.
For example, an extra bit can either:
• increase quantization resolution (needing more protection too!), or
• increase code performance.
New results: distributed lossy compression (Pradhan & Ramchandran '01)
• Suppose X, Y are correlated as X = Y + N.
• Wyner-Ziv ('76): no theoretical performance loss due to distributed processing (no X-Y communication) if X and Y are jointly Gaussian.
• New results (Pradhan, Chou & Ramchandran '01):
• No performance loss due to distributed processing for arbitrary X, Y if N is Gaussian.
• Fundamental duality between distributed coding and data hiding (encoder/decoder functions can be swapped!).
Distributed sensor fusion under bandwidth constraints
• Model: sensor i observes Yi = X + Ni of the scene X and transmits at rate Ri; the fusion center forms the estimate X̂.
• Suboptimal to form E(X|Yi) at each sensor, as in the single-sensor case.
• Optimal distributed strategy for the Gaussian case: compress the Yi's without estimating X individually.
• Exploit the correlation structure to reduce the transmitted bit rate.
• DISCUS enables multisensor fusion under bandwidth constraints.
Enabling DISCUS for sensor networks
• Use clustering to enable network deployment (hierarchies).
• Learn the correlation structure (training-based or dynamically) and optimize the quantizer and code.
• Good news: motes need not be aware of the clustering.
• Elect a "cluster leader"; the role can be swapped periodically.
• Localization increases robustness to a changing correlation structure (everything is relative to the leader).
Aggregation example: four nodes with 3-bit readings (011, 010, 110, 000), gateway at node 1.
• Gateway node 1 first decodes node 2 from its 2-bit syndrome.
• It then recursively decodes nodes 3 and 4.
• If each link is ~1 m, the network does 15 bit-meters of work without DISCUS; with DISCUS, only 10 bit-meters.
Load-balancing the computation:
• Node 2 decodes nodes 3 and 4.
• Node 2 sends the deltas Δ1, Δ2 w.r.t. nodes 3 and 4, as well as its own syndrome, to the gateway.
• Same bit-meter totals as before, but the computation is now load-balanced across the network.
Where should aggregation be done?
• Node 2 collects all the data: nodes 1, 3, and 4 send it 2-bit syndromes.
• Total work done by the network drops to 6 bit-meters!
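A back-of-envelope sketch of the bit-meter numbers quoted above, assuming 1 m links, 3-bit raw readings, and 2-bit syndromes; the hop counts are read off the slides' figures and are illustrative.

```python
def work(bits_per_node, hops_per_node):
    """Total network work in bit-meters (1 m per hop)."""
    return sum(b * h for b, h in zip(bits_per_node, hops_per_node))

# Gateway at node 1; node 2 is 1 hop away, nodes 3 and 4 are 2 hops away.
print(work([3, 3, 3], [1, 2, 2]))  # 15 bit-meters: raw 3-bit readings
print(work([2, 2, 2], [1, 2, 2]))  # 10 bit-meters: DISCUS syndromes
print(work([2, 2, 2], [1, 1, 1]))  # 6 bit-meters: aggregate at node 2
```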
Network deployment: aggregation (PicoRadio project, BWRC)
• Sensor nodes from the region of interest (ROI) reply to a query from the controller.
• The ROI sends out a single aggregate packet via a border node.
Integration into routing protocols
• Traditional approach: find the single best route and always use it.
• Probabilistic path selection is superior.
• Local-rule example: a 10 nJ path used with p1 = 0.75 and a 30 nJ path used with p2 = 0.25 give an expected cost of (0.75 * 10) + (0.25 * 30) = 15 nJ.
• Can incorporate the data correlation structure into the path weights.
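A hedged sketch of the local rule above: instead of always taking the single best route, sample among routes with fixed probabilities. The two-path costs and probabilities are the slides' example numbers; the data structure is illustrative.

```python
import random

paths = [
    {"prob": 0.75, "cost_nJ": 10},  # cheap path, used most of the time
    {"prob": 0.25, "cost_nJ": 30},  # expensive backup path
]

expected = sum(p["prob"] * p["cost_nJ"] for p in paths)
print(f"expected cost = {expected} nJ")  # (0.75*10) + (0.25*30) = 15 nJ

def choose_path(paths):
    """Sample one path according to its selection probability."""
    return random.choices(paths, weights=[p["prob"] for p in paths])[0]

print(choose_path(paths))
```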
Network coding: the case for "smart motes"
• The "store and forward" philosophy of current packet routers can be inefficient.
• "Smart motes" can significantly decrease system energy requirements: instead of forwarding information A and information B separately, a relay broadcasts the single coded packet A+B, and each receiver recovers the packet it is missing.
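A minimal sketch of the coding gain pictured on this slide: a relay that forwards the XOR A+B lets each receiver recover the packet it lacks, saving one transmission relative to store-and-forward. The byte values are arbitrary.

```python
A = 0b10110010  # packet from source A (one byte here)
B = 0b01101100  # packet from source B

coded = A ^ B   # the "smart mote" sends A+B instead of A then B

# Receiver 1 already holds A, receiver 2 already holds B:
assert coded ^ A == B  # receiver 1 recovers B
assert coded ^ B == A  # receiver 2 recovers A
print("one coded transmission serves both receivers")
```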
Distributed media streaming from multiple servers: a new paradigm
• A scalable media source feeds multiple servers, and each client is served by several of them.
• Advantages:
• Robust to link/server failure (useful on the battlefield!)
• Load balancing
Robust transmission: the multiple descriptions (MD) problem
• The MD encoder splits source X into Description 1 and Description 2.
• Side decoder 1 (Description 1 only) reconstructs X1; side decoder 2 reconstructs X2; the central decoder (both descriptions) reconstructs X0.
• Distortion: X0 < X1, X2, with X1 = X2 in the balanced case.
• Multiple levels of quality delivered to the destination (N+1 levels for the N-channel case).
Emerging multimedia compression standards
• Multi-resolution (MR) source coding, e.g., JPEG-2000 (wavelets), MPEG-4.
• Bit stream arranged in importance layers (progressive).
A review of erasure codes
• Erasure codes (n, k, d): recovery from reception of partial data; a subset of the channel packets suffices to recover the source packets.
• n = block length, k = log(# of codewords); can correct (d - 1) erasures.
• (n, k) Maximum Distance Separable (MDS) codes: d = n - k + 1.
• MDS => any k channel symbols => the k source symbols.
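A hedged sketch of the MDS property (any k of n symbols recover the data) using a tiny systematic Reed-Solomon-style code over the prime field GF(257). Illustrative only; practical systems use optimized codes over GF(2^8).

```python
P = 257  # prime modulus: every byte value fits in GF(257)

def lagrange_eval(points, x):
    """Evaluate, at x mod P, the unique degree < len(points) polynomial
    through the given (xi, yi) pairs."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

k, n = 3, 5              # (n, k) MDS code: d = n - k + 1 = 3, corrects 2 erasures
data = [10, 20, 30]      # k source symbols (values < P)

# Systematic encoding: channel symbol at x = 1..n is the value at x of the
# degree < k polynomial through (1, data[0]), ..., (k, data[k-1]).
src_points = list(enumerate(data, start=1))
codeword = [(x, lagrange_eval(src_points, x)) for x in range(1, n + 1)]

received = [codeword[0], codeword[3], codeword[4]]  # any k symbols survive
recovered = [lagrange_eval(received, x) for x in range(1, k + 1)]
assert recovered == data
print("recovered", recovered, "from 3 of 5 symbols")
```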
Robust source coding: MD-FEC
• MD-FEC (Multiple Descriptions through Forward Error Correction codes) makes the packet stream insensitive to the 'position' of a loss.
• The MD-FEC rate markers R1, ..., RN can be optimized to dynamically adapt to both the instantaneous channel conditions and the source content. (Puri & Ramchandran '00)
Outline of solution (single-receiver case)
• Use a progressive bit stream to ensure graceful degradation.
• Find the loss rate and total bandwidth from each server to the client, and calculate the "net" loss rate and bandwidth to the client (see the sketch below).
• Apply the MD-FEC framework now that the problem is reduced to a point-to-point problem.
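A hedged sketch of the second step: collapsing several server-to-client channels into one "net" channel. Averaging the loss rates by bandwidth share is one reasonable model, assumed here for illustration; the server figures are made up.

```python
servers = [
    {"bw_kbps": 200, "loss": 0.02},
    {"bw_kbps": 300, "loss": 0.10},
    {"bw_kbps": 500, "loss": 0.05},
]

total_bw = sum(s["bw_kbps"] for s in servers)
net_loss = sum(s["bw_kbps"] * s["loss"] for s in servers) / total_bw

print(f"net channel: {total_bw} kbps at loss rate {net_loss:.3f}")
# MD-FEC allocation is then run on this single equivalent channel.
```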
Distributed streaming paradigm: end-to-end system architecture
• A camera produces a raw video stream; the MR source encoder converts it into a progressively coded video stream.
• MD-FEC transcoders 1 through m each produce an MD video stream, adapting to their own channel state via feedback from the network.
• The m MD video streams traverse the network to the receiver.
"Robustified" distributed compression
• Sources X1, X2, X3 are separately encoded (F1, F2, F3) into packets of R bits/sample; decoders reconstruct from whichever subset of packets the network delivers.
• Consider the symmetric case: H(Xi) = H1, H(Xi,Xj) = H2, H(Xi,Xj,Xk) = H3.
• R = H3/3: fully distributed, maximally compressed, not robust to link loss.
• R = H2/2: fully distributed, minimally redundant, robust to any one link loss.
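A tiny numeric check of the two symmetric operating points above; the joint entropies are hypothetical values, not from the slides.

```python
H1, H2, H3 = 1.0, 1.6, 1.9  # H(Xi), H(Xi,Xj), H(Xi,Xj,Xk) in bits/sample

R_fragile = H3 / 3  # maximally compressed, breaks if any packet is lost
R_robust = H2 / 2   # any two of the three packets still carry H2 bits

print(f"R = {R_fragile:.3f} bits/sample (fragile), "
      f"R = {R_robust:.3f} bits/sample (one-loss robust)")
```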
Future challenges
• Integrate the "distributed learning" aspect into the framework.
• Extend to arbitrary correlation structures.
• Incorporate accurate statistical sensor models: wavelet mixture models for audio-visual data.
• Retain the end goal while optimizing system components: e.g., estimation, detection, tracking, routing, transmission; impose bandwidth/energy/computational constraints.
• Progress on network information theory & constructive algorithms.
• Extend theory/algorithms for incorporating robustness/reliability.
• Target specific application scenarios of interest.
1-D vehicle tracking
• For each vehicle there are two parameters: t0, the time the vehicle passes through point p = 0, and v, the speed of the vehicle (assumed constant).
• Node i at position p_i sees the vehicle at time t_i = t0 + (1/v) p_i.
• Combining all nodes gives Ax = b, with x = (A^T A)^{-1} A^T b.
• The matrix inversion is only a 2x2!
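A minimal sketch of this least-squares fit on synthetic data: each node i at position p_i reports t_i = t0 + p_i / v, and we solve for x = (t0, 1/v) with a single 2x2 linear solve. Positions and speed are made up.

```python
import numpy as np

p = np.array([0.0, 10.0, 25.0, 40.0])  # node positions (m), illustrative
t0_true, v_true = 3.0, 12.5
t = t0_true + p / v_true + 0.01 * np.random.randn(p.size)  # noisy detections

A = np.column_stack([np.ones_like(p), p])  # rows [1, p_i]
x = np.linalg.solve(A.T @ A, A.T @ t)      # x = (A^T A)^{-1} A^T b

t0_hat, inv_v_hat = x
print(f"t0 ~ {t0_hat:.2f} s, v ~ {1.0 / inv_v_hat:.2f} m/s")
```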
Update node positions
• Once we calculate v and t0, go back and make a new guess at each p_i: from t_i = (1/v) p_i_new + t0 we get p_i_new = (t_i - t0) v.
• Update according to some non-catastrophic weighted rule.
• Iterate: make an initial guess for the p_i's -> detect a vehicle (fix the p_i's) -> update positions (fix t0, v) -> better results as time progresses (see the sketch below).
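A hedged sketch of this alternating refinement. The damping factor `alpha` stands in for the slides' unspecified "non-catastrophic weighted rule"; the rest follows the fix-positions / fix-(t0, v) alternation, on synthetic data.

```python
import numpy as np

def refine_positions(p, t, alpha=0.2, iters=25):
    p = p.astype(float).copy()
    for _ in range(iters):
        A = np.column_stack([np.ones_like(p), p])
        t0, inv_v = np.linalg.solve(A.T @ A, A.T @ t)  # fix p: fit t0 and 1/v
        p_new = (t - t0) / inv_v                       # fix t0, v: p_i_new = (t_i - t0) v
        p = (1 - alpha) * p + alpha * p_new            # damped, non-catastrophic step
    return p

p_true = np.array([0.0, 10.0, 25.0, 40.0])
t = 3.0 + p_true / 12.5                                # one vehicle's detection times
p_guess = p_true + np.array([1.0, -2.0, 1.5, -1.0])    # imperfect initial positions
print(refine_positions(p_guess, t))                    # positions consistent with t
```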
Dynamic data fusion
• Use a node-pixel analogy to exploit algorithms from computer vision: each sensor reading is akin to a pixel intensity at some (x, y) location.
• By interpolating the node readings onto regular grid points, standard differentiation techniques can determine the direction of flow.
• This can be done in a distributed fashion.
[Figure, left: a chemical plume tracked through the network]
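A hedged sketch of the node-pixel analogy: scatter sensor readings onto a regular grid, then use standard image-style differentiation (np.gradient) to estimate the direction of spatial change. The node layout and concentration field below are synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 10, size=(50, 2))  # (x, y) sensor positions
readings = np.exp(-((nodes[:, 0] - 4) ** 2 + (nodes[:, 1] - 6) ** 2) / 8)

# Interpolate the scattered readings onto a regular grid: the "image".
gx, gy = np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 40))
field = griddata(nodes, readings, (gx, gy), method="linear")

# Image-style derivatives give the local direction of change of the plume.
dfdy, dfdx = np.gradient(field)           # gradients along rows, then columns
i, j = 20, 20                             # a grid point inside the network
print("gradient at grid center:", dfdx[i, j], dfdy[i, j])
```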