This presentation motivates sensor fusion as a way to improve the accuracy of “virtual” sensor readings, using approximate inference via Tree-Reweighted Loopy Belief Propagation. It describes a distributed message-passing algorithm with constant-size messages that takes poor sensor readings and a graphical model as input and produces good virtual readings, evaluates how different scheduling schemes trade reading quality against message count using a functioning simulator, and outlines plans to deploy the approach on motes.
StatSense: In-Network Probabilistic Inference over Sensor Networks
Presented by: Jeremy Schiff
Motivation and Problem Formulation
• Sensor readings are inaccurate
• Sensor fusion can improve “virtual readings”
• Exact inference has too many messages
  • Exponential in the size of the graphical model
• Input: poor sensor readings + a graphical model (a sketch of one possible encoding follows below)
• Output: good “virtual readings”
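To make the input/output concrete, here is a minimal sketch (not from the slides) of how noisy readings and a graphical model might be encoded: each sensor's true value is discretized into bins, its noisy reading defines a node potential, and an edge potential encodes the assumption that neighboring sensors observe similar values. The bin count, value range, and noise parameters are hypothetical choices for illustration.

```python
import numpy as np

NUM_BINS = 16                                      # discretize each sensor's true value
bin_centers = np.linspace(0.0, 40.0, NUM_BINS)     # e.g. a temperature range in Celsius

def node_potential(reading, noise_sigma=2.0):
    """Likelihood of each discretized true value given one noisy reading."""
    phi = np.exp(-0.5 * ((bin_centers - reading) / noise_sigma) ** 2)
    return phi / phi.sum()

def edge_potential(smooth_sigma=3.0):
    """Pairwise potential: neighboring sensors are assumed to have similar values."""
    diff = bin_centers[:, None] - bin_centers[None, :]
    return np.exp(-0.5 * (diff / smooth_sigma) ** 2)
```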
Key Ideas of the Solution
• Perform approximate inference
  • Tree-Reweighted Loopy Belief Propagation
• Distributed message-passing algorithm
  • Constant-size messages (see the message-update sketch below)
• Improve results over no inference
• Need to deal with node churn and asymmetric links
• Which scheduling scheme works best?
  • Quality of reading vs. number of messages
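As a rough illustration of the message-passing step, the sketch below implements one tree-reweighted sum-product message update over the discretized potentials from the previous sketch. The container layout (node_pot, edge_pot, incoming) and the symmetric edge-appearance probabilities rho are assumptions for this example; setting every rho to 1 recovers ordinary loopy belief propagation. Each message is a single normalized vector of NUM_BINS entries, which is what keeps the per-message size constant regardless of network size.

```python
def trw_message(t, s, node_pot, edge_pot, incoming, rho):
    """Compute the message M_{t->s}: a vector of NUM_BINS entries (constant size).

    node_pot[t]      : node potential of t (length NUM_BINS)
    edge_pot[(t, s)] : pairwise potential, indexed [x_t, x_s]
    incoming[u]      : current message M_{u->t} from each neighbor u of t
    rho[(u, t)]      : edge appearance probability (assumed symmetric);
                       rho == 1 everywhere gives ordinary loopy BP.
    """
    combined = node_pot[t].copy()
    for u, msg in incoming.items():
        if u == s:
            # divide out the reverse message with exponent 1 - rho
            combined *= msg ** (rho[(u, t)] - 1.0)
        else:
            combined *= msg ** rho[(u, t)]
    psi = edge_pot[(t, s)] ** (1.0 / rho[(t, s)])   # reweighted pairwise term
    out = psi.T @ combined                          # marginalize over x_t
    return out / out.sum()                          # normalize for numerical stability
```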
Current Status and Future Plans
• Functioning simulator
  • Performs Belief Propagation
  • Simulates dead nodes
  • Simulates asymmetric links
  • Simulates different scheduling schemes (a simplified scheduling sketch follows below)
  • Allows empirical exploration of performance
• Approximation has limited theoretical guarantees
• Plan to run this on motes
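The slides do not spell out the scheduling schemes, so the sketch below is a hypothetical illustration of how a simulator might compare them: a synchronous schedule where every node transmits each round versus a randomized schedule where nodes transmit with some probability, with lossy links as a crude stand-in for asymmetric links. It only counts messages; the belief updates from the earlier sketches would happen at the marked line.

```python
import random

def run_schedule(graph, rounds, send_prob=1.0, drop_prob=0.0, seed=0):
    """Count messages sent under a given scheduling scheme.

    graph     : dict mapping each node to a list of its neighbors
    send_prob : 1.0 models a synchronous "everyone sends" schedule;
                lower values model a randomized schedule
    drop_prob : probability a transmission is lost (stand-in for
                asymmetric or unreliable links)
    """
    rng = random.Random(seed)
    messages_sent = 0
    for _ in range(rounds):
        for node, neighbors in graph.items():
            if rng.random() > send_prob:
                continue                      # node stays silent this round
            for nbr in neighbors:
                messages_sent += 1
                if rng.random() < drop_prob:
                    continue                  # message lost in transit
                # ...deliver the BP message and update nbr's belief here...
    return messages_sent

# Example: compare message cost of the two schedules on a small 4-node graph.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(run_schedule(graph, rounds=10))                  # synchronous schedule
print(run_schedule(graph, rounds=10, send_prob=0.5))   # randomized schedule
```

Plotting reading quality against the returned message count for each schedule would give the quality-versus-messages comparison the slides describe.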