SensEye: A Multi-Tier Camera Sensor Network
by Purushottam Kulkarni, Deepak Ganesan, Prashant Shenoy, and Qifeng Lu
Presenters: Yen-Chia Chen and Ivan Pechenezhskiy
EE225B (March 17, 2011)
Cameras and Sensor Platforms
[Images of camera sensors and sensor platforms]
Kulkarni et al., Proc. of ACM NOSSDAV, pp. 141–146, 2005.
Previous Work
• Power Management
  • Wake-on-wireless & Turducken (always-on)
• Multimedia Sensor Network
  • Panoptes (a video-based single-tier sensor network)
• Sensor Placement
  • Solvable optimization problem
• Video Surveillance
  • Techniques for target detection, classification, and tracking
  • Systems with a central control unit
Motivation
• Applications
  • Environmental monitoring
  • Ad-hoc surveillance
• Constraints
  • No human intervention
  • Battery-powered deployment
Multi-Tier Sensor Network
• Single-tier network vs. multi-tier network: the multi-tier design
  • reduces power consumption
  • achieves similar performance
• Benefits:
  • Low cost
  • High coverage
  • High reliability
  • High functionality
SensEye: Multi-Tier Camera Network
• Achieves low latencies without sacrificing energy efficiency
• Tasks: object detection, recognition, and tracking
• Exploits redundancies in camera coverage (e.g., for object localization)
General Design Principles
• Map each task to the least powerful tier with sufficient resources
• Exploit wakeup-on-demand
• Exploit redundancy in coverage
System Design—Object Detection
• Performed at the most energy-efficient tier (Tier 1)
• Detection via frame differencing
• Randomized duty-cycling algorithm (sketched below)
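A minimal sketch of how randomized duty-cycling could look on a tier-1 node, assuming a hypothetical `sample_fn` callback that wakes the camera and runs one frame-differencing pass; the 5 s period matches the Mote sampling period used later in the evaluation, but the random-offset scheme is an illustrative simplification rather than the paper's exact algorithm.

```python
import random
import time

def duty_cycle_loop(sample_fn, period_s=5.0, n_cycles=None):
    """Randomized duty-cycling sketch for a tier-1 node: in each period the
    node sleeps until a randomly chosen offset, triggers one camera sample,
    then sleeps out the rest of the period. Independent random offsets across
    nodes let cameras with overlapping coverage fill in each other's off time."""
    cycle = 0
    while n_cycles is None or cycle < n_cycles:
        offset = random.uniform(0.0, period_s)
        time.sleep(offset)               # node asleep until its random slot
        sample_fn()                      # wake the camera, run one detection pass
        time.sleep(period_s - offset)    # sleep for the remainder of the period
        cycle += 1
```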
System Design—Object Localization
• Calculation of the vector along which the centroid of an object lies
System Design—Object Localization
• Transformation to the global coordinate frame (involves two rotations and one translation)
• Triangulation of the object position from two such vectors (sketched below)
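A sketch of the two localization steps above, under the assumption that each detecting node reports its camera position and a viewing direction already rotated into the global frame; the object is then estimated as the midpoint of the closest points between two such rays. Function and parameter names are hypothetical.

```python
import numpy as np

def ray_to_global(cam_pos, direction_cam, R):
    """Express a camera-frame viewing direction in the global frame.
    R is the node's 3x3 rotation (composed from the two mounting rotations);
    cam_pos is the camera's global position, i.e. the translation."""
    return cam_pos, R @ direction_cam

def triangulate(p1, d1, p2, d2):
    """Estimate the object position as the midpoint of the closest points
    between the rays p_i + t * d_i (all quantities in the global frame)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                # near-parallel rays: cannot triangulate
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0
```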
System Design—Inter-Tier Wakeup
• Localization by tier 1 is used to decide which tier 2 nodes to wake up (sketched below)
• A wakeup packet is sent to the tier 2 node, similar to wake-on-wireless
• To reduce wakeup latency, tier 2 runs at a bare minimum power level when suspended
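A sketch of one way the wakeup decision could be made, assuming each tier-2 camera is described by its position, heading, and horizontal field of view; the node is woken only if the localized object falls inside that field of view. The 2-D test and all parameter names are assumptions, not the paper's exact heuristic.

```python
import math

def should_wake(cam_pos, cam_heading_deg, fov_deg, obj_pos):
    """Return True if the localized object lies within the tier-2 camera's
    horizontal field of view, i.e. the suspended node is worth waking up."""
    dx, dy = obj_pos[0] - cam_pos[0], obj_pos[1] - cam_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))              # direction to the object
    off_axis = abs((bearing - cam_heading_deg + 180.0) % 360.0 - 180.0)
    return off_axis <= fov_deg / 2.0
```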
System Design—Recognition and Tracking
• Recognition algorithm executed at tier 2
• Any object-recognition algorithm is assumed to be usable in SensEye
• Tracking involves detection, localization, and inter-tier wakeup
Hardware Architecture
[Images of camera sensors and sensor platforms]
Hardware Architecture
• Tier 1:
  • low-power camera sensors (Cyclops or CMUcam)
  • low-power sensor platform (Mote)
• Tier 2:
  • webcams (Logitech)
  • sensor platform (Intel Stargate) with a low-power wakeup circuit (Mote)
• Tier 3:
  • high-performance pan-tilt-zoom (PTZ) camera (Sony) and mini-ITX embedded PC
Software Architecture (Implemented)
• CMUcam Frame Differentiator
• Mote-Level Detector
• Wakeup Mote
• High-Resolution Object Detection and Recognition
• PTZ Controller
CMUcam Frame Differentiator
• CMUcam image capture is triggered by the Mote-Level Detector
• Detection by differencing against a reference background frame (non-zero cells correspond to an object)
• Two differencing modes: the initial image (88x143 or 176x255) is reduced to an 8x8 or 16x16 grid (sketched below)
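A sketch of the grid-based differencing described above: the frame and the reference background are reduced to the mode's grid of cell averages and compared cell by cell. The resolution-to-grid mapping follows the slide; the threshold is an illustrative assumption.

```python
import numpy as np

# Differencing modes from the slide: low-res frames reduce to an 8x8 grid,
# high-res frames to a 16x16 grid.
MODES = {"low": ((88, 143), 8), "high": ((176, 255), 16)}

def grid_difference(frame, background, mode="low", threshold=20.0):
    """Reduce both images to the mode's grid of cell means and flag the cells
    whose difference exceeds the threshold (non-zero cells = object)."""
    (h, w), g = MODES[mode]
    assert frame.shape == background.shape == (h, w)
    ch, cw = h // g, w // g

    def cells(img):
        # Crop to a multiple of the cell size, then average each cell.
        return img[:ch * g, :cw * g].reshape(g, ch, g, cw).mean(axis=(1, 3))

    return np.abs(cells(frame) - cells(background)) > threshold
```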
Mote-Level Detector
• Sends initialization commands
• Sends the sampling signal to the CMUcam
• Gets the frame difference from the CMUcam
• Decides whether an event occurred
• Broadcasts a trigger to the higher tier if an event occurred
• Sleeps if no event is detected
• Duty-cycles the CMUcam (one detection pass is sketched below)
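A control-flow sketch of one detector pass, which could serve as the `sample_fn` passed to the duty-cycling loop sketched earlier; the `cmucam` and `radio` objects and the trigger-message format are hypothetical stand-ins, not the actual Mote interfaces.

```python
def check_once(cmucam, radio, node_id):
    """One pass of the Mote-level detector: trigger a CMUcam sample, read back
    the changed grid cells, and broadcast a trigger to the higher tier if an
    event occurred; otherwise go back to sleep until the next duty cycle."""
    cmucam.power_on()                            # camera is on only while sampling
    changed_cells = cmucam.sample_difference()   # e.g. list of (row, col) cells that changed
    cmucam.power_off()
    if changed_cells:                            # an event occurred
        radio.broadcast({"type": "trigger",
                         "node": node_id,
                         "cells": changed_cells})
```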
Wakeup Mote
• Receives triggers from the lower-tier Motes
• Computes the coordinates of the detected object
• Decides whether to wake up the Stargate
High-Resolution Object Detection and Recognition by the Stargate
• Frame differencing
• Image smoothing
• Averaging the red, green, and blue components of the object
• Matching against a library of objects (sketched below)
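A minimal sketch of these four steps, assuming a simple color-based matcher: the smoothed difference image selects the object's pixels, their mean color is computed, and the closest entry in a library of per-object mean (r, g, b) values wins. The box-blur smoothing, threshold, and library format are assumptions, not the paper's exact implementation.

```python
import numpy as np

def box_blur(img2d):
    """3x3 box smoothing with edge padding, used to suppress pixel noise."""
    p = np.pad(img2d, 1, mode="edge")
    h, w = img2d.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def recognize(frame, background, library, threshold=30.0):
    """Difference against the background, smooth, average the object's RGB
    values, and return the name of the closest library object (or None)."""
    diff = np.abs(frame.astype(float) - background.astype(float)).mean(axis=-1)
    mask = box_blur(diff) > threshold            # smoothed difference -> object pixels
    if not mask.any():
        return None
    mean_rgb = frame.astype(float)[mask].mean(axis=0)
    return min(library,
               key=lambda name: np.linalg.norm(mean_rgb - np.asarray(library[name], float)))
```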
Experimental Evaluation
• Component benchmarks
  • Latency and energy consumption
  • Localization accuracy
• SensEye vs. single-tier network
  • Coverage
  • Energy usage
  • Sensing reliability
• Sensitivity to system parameters
Latency and Energy Consumption
• Tier 1: Cyclops, CMUcam
• Tier 2: webcam
[Benchmark charts; highlighted values: 4 sec, 4.7 J]
Experimental Evaluation: Sensor Placement and Coverage
[Deployment diagram: 3 m x 1.65 m area along a wall]
• Object appearance time: 7 sec
• Interval between appearances: 30 sec
• Only one object at any time
• 50 object appearances
• Tier 1 Mote sampling period: 5 sec
Network Energy Usage
• SensEye: ~470 J
• Single-tier: ~2900 J
Sensing Reliability
• The single-tier system detected 45 of the 50 objects
• SensEye detected 42 (46 with the use of the PTZ camera)
Conclusion
• A well-designed multi-tier camera sensor network can offer significant benefits over a single-tier camera network
• General principles for multi-tier sensor network design have been proposed
• It has been experimentally demonstrated that a multi-tier network can achieve about an order of magnitude reduction in energy usage without sacrificing reliability
Power Management
• Wake-on-wireless
  • Separation of the control channel and the data channel
  • An incoming radio signal wakes up powered-off devices
• Turducken
  • Multi-tier structure that uses a lower tier to wake up a higher tier
Multimedia Sensor Network
• Panoptes
  • Video-based sensor network
  • Single-tier, similar to tier 2 in SensEye
  • Incorporates compression, buffering, and filtering (can be used by tier 2)