Scalable Distributed Networked Sensing Richard Baraniuk Rice University
The Digital Universe • Size: 281 billion gigabytes generated in 2007; digital bits > stars in the universe; growing by a factor of 10 every 5 years; > Avogadro’s number (6.02x10^23) in 15 years • Growth fueled by multimedia data: audio, images, video, surveillance cameras, sensor nets, … • In 2007, digital data generated > total storage; by 2011, ½ of the digital universe will have no home [Source: IDC Whitepaper “The Diverse and Exploding Digital Universe,” March 2008]
Networked Sensing Goals • sense • communicate • fuse • infer (detect, recognize, etc.) • predict • actuate/navigate [diagram: UAVs, UGVs, network infrastructure, human intelligence]
Networked Sensing Challenges • growing volumes of sensor data • increasingly diverse data • diverse and changing operating conditions • increasing mobility [diagram: UAVs, UGVs, network infrastructure, human intelligence]
Research Challenges • Sheer amount of data that must be acquired and communicated: J sensors, N samples/pixels per sensor • Amount of data grows as O(JN) • leads to communication and computation collapse • Must fuse diverse data types
Our Approach • Re-think the sensing and data processing pipeline • New data representation: random encoding • preserves info in a wide range of data types • acts as source/channel fountain code • supports efficient processing and inference algorithms • supports efficient fusion from multiple sensors • supports a range of actuation/navigation strategies • scalable in resolution N and number of sensors J
Today’s Data Pipeline [diagram: analog world → ADC → digital processing → info (compression, detection, classification, estimation, tracking, …)] • based on Shannon-Nyquist theory (sample 2x faster than the signal BW) • wide-band signals require high-rate sampling • ADC performance FOM doubles every 6-8 years
[Diagram, continued: analog world → ADC → digital processing → info; the signal is the IDFT of its Fourier coefficients, and a sampling operator maps the signal to digital measurements] • Sampling rate determined by the bandwidth of the signal • But in many applications, the Fourier coefficient vector is sparse
Sparsity [figure: image pixels → large wavelet coefficients (blue = 0); wideband signal samples (time) → large Gabor (TF) coefficients (frequency vs. time)]
Sparsity • Communications: large spectral bandwidth but small information rate (spread spectrum) • Sensor arrays: large number of sensors but small number of emitters • Wide-field imaging: large surveillance area but small number of targets • Key (recent) mathematical fact: Sparse signals support dimensionality reduction (sub-Nyquist sampling), as sketched below
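A minimal numpy sketch of the kind of sparsity this slide appeals to. All sizes, and the constant in the printed M = O(K log N) budget, are illustrative assumptions, not values from the deck: a length-N signal with only K active tones out of N possible Fourier coefficients.

```python
import numpy as np

# Illustrative sizes (not from the deck): N possible tones, K of them active.
rng = np.random.default_rng(0)
N, K = 1024, 5

alpha = np.zeros(N, dtype=complex)              # Fourier coefficient vector
support = rng.choice(N, size=K, replace=False)  # K active tones
alpha[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

x = np.fft.ifft(alpha)                          # time-domain signal = IDFT(alpha)

# The signal occupies the full band, but only K of its N Fourier
# coefficients are nonzero: information rate K << Nyquist rate N.
print("nonzero Fourier coefficients:",
      np.count_nonzero(np.abs(np.fft.fft(x)) > 1e-8))
print("illustrative measurement budget M ~ 4 K log N =", int(4 * K * np.log(N)))
```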
[Diagram: analog world → CS-ADC → digital processing → info] • Dimensionality reduction (compressive sensing, CS) • Can preserve all the information in a sparse signal in a reduced set of digital measurements • Can recover the signal from those measurements
• Can preserve all the information in a sparse signal • Natural to design “random sampling” systems (a random sampling operator)
• N = Nyquist bandwidth of the signal • K = information rate (ex: number of active tones) • Sampling rate: M = O(K log N)
• Reduces demands on: hardware, processing algorithms, and the communication network • Sampling rate: M = O(K log N); a small end-to-end sketch follows
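A hedged end-to-end sketch of the compressive pipeline above. Assumptions not in the slides: the signal is sparse in the canonical basis (for a Fourier/wavelet sparsity basis Ψ, pass ΦΨ to the solver instead of Φ), the "random sampling" operator Φ is i.i.d. Gaussian, and recovery uses orthogonal matching pursuit as one standard sparse solver; the deck does not commit to a particular recovery algorithm.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily recover a K-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(K):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit of y on the columns selected so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
N, K = 512, 8
M = int(4 * K * np.log(N))               # M = O(K log N) compressive measurements

x = np.zeros(N)                          # K-sparse signal (canonical basis)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random "sampling operator"
y = Phi @ x                                      # digital measurements

x_hat = omp(Phi, y, K)
print("M =", M, " relative recovery error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```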
Single-Pixel Camera [diagram: scene → random pattern on DMD array → single photon detector → image reconstruction or processing] w/ Kevin Kelly
Example Image Acquisition [figure: target, 65536 pixels; reconstructions from 1300 measurements (2%) and 11000 measurements (16%)]
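A sketch of the single-pixel camera measurement model from the previous slide, under assumed parameters (scene size, number of patterns, and the 0/1 pattern statistics are illustrative, not the actual Rice hardware): each measurement is the detector output for one pseudorandom DMD pattern.

```python
import numpy as np

# Assumed, illustrative parameters: a 64x64 stand-in "scene" and M pseudorandom
# 0/1 micromirror patterns; the real hardware and pattern design may differ.
rng = np.random.default_rng(2)
side = 64
N = side * side                   # number of scene "pixels"
M = 1300                          # number of DMD patterns = number of measurements

scene = rng.random((side, side))  # stand-in for the optical scene
x = scene.ravel()

measurements = np.empty(M)
patterns = np.empty((M, N))
for i in range(M):
    patterns[i] = rng.integers(0, 2, size=N)    # random 0/1 mirror pattern
    measurements[i] = patterns[i] @ x           # light summed on the single detector

print("acquired", M, "measurements of an N =", N, "pixel scene")
```

Image reconstruction or other processing would then run on `measurements`, using `patterns` as the sampling operator Φ, for example with a sparse solver like the OMP sketch above.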
Scalable Decision Making • Many applications involve signal inference: detection < classification < estimation < reconstruction • Good news: random encoding (RE) supports efficient learning, inference, and processing directly on compressive measurements • Random projections ~ sufficient statistics for signals with concise geometrical structure
Matched Filter • Detection/classification with L unknown articulation parameters • Ex: position and pose of a vehicle in an image • Ex: time delay of a radar signal return • Matched filter: joint parameter estimation and detection/classification • compute sufficient statistic for each potential target and articulation • compare “best” statistics to detect/classify
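A minimal sketch of the matched-filter recipe in the bullets above, for a single articulation parameter (the time delay of a radar-like pulse, an assumed toy example): compute the sufficient statistic (inner product with the shifted template) for every candidate delay and pick the best one.

```python
import numpy as np

# Assumed toy setup: a known pulse, one articulation parameter (its time delay),
# and additive white Gaussian noise on the received signal.
rng = np.random.default_rng(3)
N = 256
template = np.sin(2 * np.pi * 0.05 * np.arange(32)) * np.hanning(32)  # known pulse

true_delay = 100
signal = np.zeros(N)
signal[true_delay:true_delay + len(template)] = template
signal += 0.3 * rng.standard_normal(N)                   # AWGN

# Sufficient statistic for each candidate delay: inner product with the
# shifted template; the matched filter reports the delay that maximizes it.
stats = np.array([signal[d:d + len(template)] @ template
                  for d in range(N - len(template))])
print("estimated delay:", int(np.argmax(stats)), "| true delay:", true_delay)
```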
Matched Filter Geometry • Detection/classification with L unknown articulation parameters • Images are points in R^N • Classify by finding the closest target template to the data for each class (additive white Gaussian noise): distance or inner product • Target templates (points) come from a generative model or training data
Matched Filter Geometry • Detection/classification with L unknown articulation parameters • Images are points in R^N • Classify by finding the closest target template to the data • As the template articulation parameter changes, the templates map out an L-dim nonlinear manifold • Matched filter classification = closest manifold search over the articulation parameter space
CS for Manifolds • Theorem: random measurements stably embed a manifold whp [B, Wakin, FOCM ’08]; related work: [Indyk and Naor; Agarwal et al.; Dasgupta and Freund] • Stable embedding • Proved via concentration inequality arguments (Johnson-Lindenstrauss / CS relation)
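An empirical illustration (not a proof) of the stable-embedding claim, under assumed toy settings: the manifold is the set of circular shifts of a Gaussian pulse in R^N, and the check is that pairwise distances are roughly preserved after a random projection to R^M.

```python
import numpy as np

# Assumed toy manifold: all circular shifts of a Gaussian pulse (a 1-D manifold
# in R^N); dimensions N and M are illustrative, not the theorem's constants.
rng = np.random.default_rng(4)
N, M = 1024, 60

pulse = np.exp(-0.5 * ((np.arange(N) - N // 2) / 20.0) ** 2)
manifold = np.stack([np.roll(pulse, s) for s in range(0, N, 16)])   # sampled manifold

Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random measurement operator
projected = manifold @ Phi.T                       # points after projection to R^M

def pairwise_dists(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

D, D_proj = pairwise_dists(manifold), pairwise_dists(projected)
mask = ~np.eye(len(manifold), dtype=bool)
ratios = D_proj[mask] / D[mask]
# A stable embedding keeps all these ratios close to 1.
print("projected/original distance ratios: %.2f to %.2f" % (ratios.min(), ratios.max()))
```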
CS for Manifolds • Theorem: random measurements stably embed a manifold whp • Enables parameter estimation and MF detection/classification directly on compressive measurements • L is very small in many applications (# articulations)
Smashed Filter • The geometry of the matched filtering process is amenable to dimensionality reduction (manifold) • Can implement the matched filter directly on compressive measurements, as sketched below
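A sketch of the smashed-filter idea stated above, with assumed toy data (shifted pulses as the template manifold, one articulation parameter): the nearest-template search runs entirely on M compressive measurements, comparing y = Φx against Φ-projected templates.

```python
import numpy as np

# Assumed toy problem: templates are circular shifts of a pulse (articulation =
# shift); the search uses only M compressive measurements of the observation.
rng = np.random.default_rng(5)
N, M = 1024, 80

pulse = np.exp(-0.5 * ((np.arange(N) - N // 2) / 20.0) ** 2)
shifts = np.arange(0, N, 8)                            # articulation parameter grid
templates = np.stack([np.roll(pulse, s) for s in shifts])

true_shift = 312
x = np.roll(pulse, true_shift) + 0.1 * rng.standard_normal(N)   # noisy observation

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                                            # only these M numbers are kept

proj_templates = templates @ Phi.T                     # project the template manifold
errors = np.linalg.norm(proj_templates - y, axis=1)    # nearest projected template
print("estimated shift:", int(shifts[np.argmin(errors)]), "| true shift:", true_shift)
```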
Smashed Filter • Random shift and rotation (L=3 dim. manifold) • Noise added to measurements • Goal: identify the most likely position for each image class; identify the most likely class using a nearest-neighbor test [plots: classification rate (%) and avg. shift estimate error vs. number of measurements M, for increasing noise]
Multisensor Inference • Example: Network of J cameras observing an articulating object • Each camera’s images lie on an L-dim manifold in R^N • How to efficiently fuse imagery from J cameras to solve an inference problem while minimizing network communication?
Multisensor Fusion • Fusion: stack corresponding image vectors taken at the same time • Fused images still lie on an L-dim manifold in R^(JN): the “joint manifold”
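A sketch of the stacking step, with a purely hypothetical renderer standing in for the J cameras: each articulation produces J images, and the fused observation is their concatenation, a point in R^(JN) on the joint manifold.

```python
import numpy as np

# Hypothetical renderer: camera j sees a small bright square whose location is a
# made-up function of the single articulation parameter `position` (L = 1 here).
J, side = 3, 32
N = side * side

def render_view(j, position):
    img = np.zeros((side, side))
    r = (position + 5 * j) % (side - 4)
    c = (2 * position + 3 * j) % (side - 4)
    img[r:r + 4, c:c + 4] = 1.0
    return img.ravel()

# Each articulation gives one stacked vector in R^(J*N); sweeping the parameter
# traces out the L-dim joint manifold.
positions = range(0, 20)
joint_points = np.stack([np.concatenate([render_view(j, p) for j in range(J)])
                         for p in positions])
print("joint-manifold samples:", joint_points.shape)   # (20, J*N) = (20, 3072)
```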
Multisensor Fusion via JM+RE • Can take random RE measurements of the stacked images and process or make inferences on them directly [plot: compared w/ unfused RE and w/ unfused and no RE]
Multisensor Fusion via JM+RE • Can compute RE measurements in-network • ex: as we transmit to collection/processing point
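A sketch of why the RE measurements can be computed in-network: because Φ acts blockwise on the stacked vector, y = Φ₁x₁ + … + Φ_J x_J, so each camera projects only its own image and partial sums are accumulated hop by hop toward the collection point. Sizes below are illustrative.

```python
import numpy as np

# Illustrative sizes only. Because Phi acts blockwise on the stacked vector,
# y = Phi_1 x_1 + ... + Phi_J x_J, the projection can be accumulated in-network.
rng = np.random.default_rng(7)
J, N, M = 3, 1024, 200

images = [rng.random(N) for _ in range(J)]                      # one image per camera
Phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]

# Centralized computation: project the stacked J*N-dimensional vector.
y_central = np.hstack(Phis) @ np.concatenate(images)

# Distributed computation: each node adds its own projection to the running sum
# as the packet travels toward the collection/processing point.
y_running = np.zeros(M)
for Phi_j, x_j in zip(Phis, images):
    y_running = y_running + Phi_j @ x_j                         # one network hop

print("max |centralized - in-network|:", np.max(np.abs(y_central - y_running)))
```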
Simulation Results • J=3 CS cameras, each N=320x240 resolution • M=200 random measurements per camera • Two classes: truck w/ cargo, truck w/ no cargo • Goal: classify a test image [images: class 1, class 2]
Simulation Results • J=3 CS cameras, each N=320x240 resolution • M=200 random measurements per camera • Two classes: truck w/ cargo, truck w/ no cargo • Smashed filtering: independent, majority vote, joint manifold [results plot]
Scalable Communication • RE is democratic: each measurement carries the same amount of information; robust to measurement loss and quantization; simple encoding • Ex: wireless streaming application with data loss • conventional: complicated (unequal) error protection of compressed data (e.g., DCT/wavelet low-frequency coefficients) • RE: merely stream additional measurements and reconstruct using those that arrive safely (fountain-like), as sketched below • Joint manifold fusion supports in-network computation
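A sketch of the democratic, fountain-like behavior claimed above, under assumed parameters (drop rate, sizes) and with a minimal OMP solver as the stand-in reconstruction algorithm: the receiver keeps whichever measurements arrive and reconstructs from the corresponding rows of Φ, with no unequal error protection.

```python
import numpy as np

def omp(A, y, K):
    # minimal OMP solver, as in the earlier recovery sketch
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Assumed scenario: the sensor streams M measurements, the network drops ~30% of
# them at random, and the receiver reconstructs from whichever rows of Phi arrived.
rng = np.random.default_rng(8)
N, K, M = 512, 6, 220

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

arrived = rng.random(M) > 0.3                  # every measurement is equally useful
x_hat = omp(Phi[arrived], y[arrived], K)
print("measurements that arrived:", int(arrived.sum()),
      "| relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```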
Scalable Navigation • Manifold-based pattern recognition supports adaptive navigation (to improve recognition performance) • Both manifolds and navigation strategies preserved by random encoding
Summary • Scalable distributed sensing requires a re-think of the entire sensing and data processing pipeline • New data representation: random encoding • preserves info in a wide range of data types • acts as a source/channel fountain code • supports efficient processing and inference algorithms • supports efficient fusion from multiple sensors • supports a range of actuation/navigation strategies • scalable in resolution N and number of sensors J dsp.rice.edu/cs
Manifold Learning via Joint Manifolds • Goal: Learn embedding of 2D translating ellipse (with noise) N=45x45=2025 pixels, J=20 views at different angles
Manifold Learning via Joint Manifolds • Goal: Learn embedding of 2D translating ellipse (with noise) N=45x45=2025 pixels, J=20 views • Embeddings learned separately
Manifold Learning via Joint Manifolds • Goal: Learn embedding of 2D translating ellipse (with noise) N=45x45=2025 pixels, J=20 views • Embeddings learned separately • Embedding learned jointly
Manifold Learning via JM+RE • Goal: Learn embedding via random compressive measurements N=45x45=2025 pixels, J=20 views, M=100 measurements per view • Embeddings learned separately • Embedding learned jointly
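A sketch of the structure of this JM+RE experiment, with assumptions beyond the slides: a hypothetical Gaussian-blob renderer replaces the real multi-view ellipse data, and classical MDS stands in for whatever manifold-learning algorithm was actually used. The point is only that the 2-D embedding is learned from M random measurements of the stacked views.

```python
import numpy as np

# Assumptions beyond the slides: a made-up blob renderer replaces the real
# ellipse views, and classical MDS stands in for the manifold learner.
rng = np.random.default_rng(9)
J, side, M = 20, 45, 100
N = side * side

def view(j, tx, ty):
    """Hypothetical view j of a blob translated to (tx, ty)."""
    yy, xx = np.mgrid[0:side, 0:side]
    cx, cy = (tx + 2 * j) % side, (ty + 3 * j) % side
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0).ravel()

# 2-D articulation (translation) sampled on a grid -> points on a 2-D manifold.
params = [(tx, ty) for tx in range(5, 40, 5) for ty in range(5, 40, 5)]
Phi = rng.standard_normal((M, J * N)) / np.sqrt(M)      # RE of the stacked views
Y = np.stack([Phi @ np.concatenate([view(j, tx, ty) for j in range(J)])
              for tx, ty in params])

# Classical MDS on the compressive measurements: double-center the squared
# distances and keep the top two eigenvectors as the learned 2-D embedding.
D2 = ((Y[:, None] - Y[None, :]) ** 2).sum(axis=-1)
n = len(Y)
H = np.eye(n) - np.ones((n, n)) / n
vals, vecs = np.linalg.eigh(-0.5 * H @ D2 @ H)
embedding = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
print("learned embedding:", embedding.shape)            # (49, 2)
```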