Scalable Data Aggregation for Dynamic Events in Sensor Networks • Kai-Wei Fan (http://www.cse.ohio-state.edu/~fank) • Authors: Kai-Wei Fan, Sha Liu, and Prasun Sinha • Dept. of Computer Science and Engineering, The Ohio State University
Wireless Sensors • Genesis of Wireless Sensors • Miniaturization of sensing devices and actuators • Miniaturization of computing platforms • Miniaturization of wireless components • Applications • Data Collection Networks • Environment Monitoring, Habitat Monitoring • Event-Triggered Networks (focus of this work) • Military Applications, National Asset Protection • Challenges • Battery power • Limited bandwidth [Image: Berkeley MicaDot mote]
Data Aggregation • Motivation • Communication cost is higher than computation cost • In-network processing reduces number/size of packets • Challenges • Rare & dynamic events • Protocol must use low energy for long network lifetime • Related Work • Static Structures • Dynamic Structures • Structure-Free
Data Aggregation Approaches: Static Structure • Routing on a pre-computed structure • Suitable for unchanging traffic patterns • Inappropriate for dynamic events • Long link stretch – avg / worst: O(log n) / O(n) [Alon et al., SIAM '95] • [LEACH, TWC ’02], [PEGASIS, TPDS ’02], [GIST, DCOSS ’06], SMT, MST, …
Data Aggregation Approaches: Dynamic Structure • Create a structure dynamically • Optimization for a subset of nodes • High control overhead for dynamic events • [Directed Diffusion, Mobicom ‘00], [GIT, ICDCS ’02], [DCTC, Infocom ‘04]
Data Aggregation Approaches: Structure-Free • Improve aggregation without any structure • Suitable for dynamic event scenarios • No guarantee of aggregation for all packets • [DAA, Infocom ’06]
Our Proposed Approach: Tree on Directed Acyclic Graph • Combine benefits of structured and structure-free approaches • Properties • Structure-free data aggregation • Packet forwarding on an implicit structure • Guaranteed early aggregation irrespective of network size • Advantages • Low overhead of structure construction & maintenance • Suitable for dynamic event scenarios • Scalable in large-scale sensor networks
ToD - Tree on DAG • One-Dimensional Illustration • Definitions • Cell: cell size is the maximum diameter of an event • F-cluster: First-level Cluster, composed of multiple cells • S-cluster: Second-level Cluster, composed of multiple cells, interleaved with the F-clusters [Figure: a one-row instance of the network, divided into cells, F-clusters, and S-clusters]
ToD - Tree on DAG [Figure: the sink, F-clusters with their F-cluster-heads, and S-clusters with their S-cluster-heads]
Dynamic Forwarding • Rule 0: forward packets to the F-cluster-head using the structure-free data aggregation protocol [DAA, Infocom ’06] • Rule 1: if the event spans two cells, forward to the sink • Rule 2: if the event spans one cell, forward to the S-cluster-head (a sketch of Rules 1 and 2 follows)
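Below is a minimal sketch of Rules 1 and 2 for this one-dimensional illustration, assuming the F-cluster-head knows which of its cells reported data (Rule 0 has already delivered the packets to it); the function and parameter names are illustrative, not from the paper.

```python
# Minimal sketch of Rules 1 and 2 at the F-cluster-head in the 1-D example.
# Assumptions: 'reporting_cells' is the set of this F-cluster's cells that sent
# data (Rule 0 has already delivered and aggregated their packets here), and
# 's_cluster_head_of' maps a cell to its S-cluster-head; names are illustrative.
def next_hop_1d(reporting_cells, s_cluster_head_of, sink):
    if len(reporting_cells) >= 2:
        # Rule 1: the event spans both cells, so it lies entirely within this
        # F-cluster and is already fully aggregated -> send straight to the sink.
        return sink
    # Rule 2: the event touches only one cell here, so it may extend into the
    # neighboring F-cluster -> send to that cell's S-cluster-head, where the
    # two partial aggregates can meet.
    (cell,) = reporting_cells
    return s_cluster_head_of(cell)
```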
Two-Dimensional ToD Construction [Figure: the network is divided into cells, grouped into 2Δ × 2Δ F-clusters and interleaved 2Δ × 2Δ S-clusters; Δ is the maximum diameter of an event] A sketch of this mapping follows.
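To make the construction concrete, here is a rough sketch of how a node's coordinates could map to its cell, F-cluster, and S-cluster. It assumes square cells of side Δ, F-clusters of 2×2 cells aligned to the origin, and S-clusters of 2×2 cells offset by Δ in both axes (the interleaving shown in the figure); the indexing scheme is an illustrative assumption, not code from the paper.

```python
# Rough sketch of the 2-D ToD grid (assumptions: square cells of side delta,
# F-clusters = 2x2 cells aligned to the origin, S-clusters = 2x2 cells shifted
# by delta in both axes, matching the interleaving in the figure).

def cell_index(x, y, delta):
    """Cell containing point (x, y)."""
    return (int(x // delta), int(y // delta))

def f_cluster_index(x, y, delta):
    """F-cluster (2*delta x 2*delta, aligned to the origin)."""
    return (int(x // (2 * delta)), int(y // (2 * delta)))

def s_cluster_index(x, y, delta):
    """S-cluster (2*delta x 2*delta, offset by delta in both axes)."""
    return (int((x + delta) // (2 * delta)), int((y + delta) // (2 * delta)))

if __name__ == "__main__":
    delta = 100.0  # hypothetical maximum event diameter, in meters
    print(cell_index(130, 250, delta))       # (1, 2)
    print(f_cluster_index(130, 250, delta))  # (0, 1)
    print(s_cluster_index(130, 250, delta))  # (1, 1)
```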
Cluster-head Selection • Assumptions • Each node knows all nodes and their locations in its F-cluster • Time synchronization – low precision suffices • Approach • Sort the list of nodes in the F-cluster by node id: N • Hash the current time to a node in the F-cluster: F-cluster-head = N[k], where k = H(current time) • F-cluster-heads play the role of S-cluster-heads • Benefits • No cluster-head election/update overhead • Local synchronization – sync only within an F-cluster (see the sketch below)
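A minimal sketch of this selection rule, assuming every node in the F-cluster holds the same sorted member list and coarse time synchronization; the hash function and the epoch length are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the hash-based cluster-head selection described above.
# Assumptions: all nodes in the F-cluster share the same member list and
# coarse time synchronization; the hash function and epoch length are
# illustrative choices, not taken from the paper.
import hashlib

EPOCH_SECONDS = 60  # hypothetical head-rotation period

def cluster_head(member_ids, now_seconds):
    """Return the current F-cluster-head: N[k] with k = H(current epoch)."""
    members = sorted(member_ids)                # N, sorted by node id
    epoch = int(now_seconds // EPOCH_SECONDS)   # low-precision sync is enough
    digest = hashlib.sha1(str(epoch).encode()).digest()
    k = int.from_bytes(digest[:4], "big") % len(members)
    return members[k]

# Every member evaluates the same function locally, so no election messages
# are exchanged; the head rotates each epoch, spreading the energy load.
print(cluster_head([3, 7, 12, 25], now_seconds=1234.5))
```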
Sharing Cluster-Heads • F-cluster-head also takes the role of S-cluster-head • Benefits • Avoids maintenance of S-cluster-heads • Nodes only need to know the F-cluster-head of their own F-cluster • Illustration: assume the sink is at the bottom-left corner [Figure: Dynamic Forwarding – Aggregating Cluster: an S-cluster and its S-cluster-head, an F-cluster-head that also serves as S-cluster-head, and the F-cluster that acts as the aggregating cluster for the S-cluster]
Dynamic Forwarding Rules • Nodes send data to their F-cluster-head • The F-cluster-head forwards data to one or two S-cluster-heads • depending on which cells sent data to the F-cluster-head • only packets from one or two cells need to be considered • Guarantees aggregation in a constant number of steps • independent of network size (see the sketch below)
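A rough sketch of this forwarding step, assuming each cell belongs to exactly one S-cluster and the F-cluster-head simply groups the reporting cells by S-cluster and sends one aggregate to each corresponding S-cluster-head (one or two in the cases above); the lookup helpers are hypothetical.

```python
# Rough sketch of the F-cluster-head's forwarding step. Assumptions: each
# packet records its source cell, every cell belongs to exactly one S-cluster,
# and 's_cluster_of' / 's_cluster_head_of' are hypothetical lookup helpers
# (e.g., built from the geometry sketch earlier), not APIs from the paper.
from collections import defaultdict

def aggregate(pkts):
    """Placeholder aggregation: merge the payloads into one report."""
    return {"cells": {p["cell"] for p in pkts},
            "payload": [p["payload"] for p in pkts]}

def forward_aggregates(packets, s_cluster_of, s_cluster_head_of):
    """Group packets by the S-cluster of their source cell and emit one
    aggregate per S-cluster-head (one or two in the cases above)."""
    by_s_cluster = defaultdict(list)
    for pkt in packets:
        by_s_cluster[s_cluster_of(pkt["cell"])].append(pkt)
    # next hop (S-cluster-head) -> aggregated packet to send
    return {s_cluster_head_of(sc): aggregate(pkts)
            for sc, pkts in by_s_cluster.items()}
```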
Dynamic Forwarding Example: One-Cell Scenario [Figure: the S-cluster and its aggregating cluster]
Dynamic Forwarding Example: Two-Cell Scenario [Figure: S-cluster S1 with its aggregating cluster, and S-cluster S2 with its aggregating cluster]
Experimental Results • Evaluated Protocols • ToD • Data Aware Anycast (DAA) (includes RW) • Shortest Path Tree (SPT) • SPT with Delay (SPT-D) • Testbed Configuration • 105 Mica2-based motes • 15 × 7 grid network • TX range: 2 grid neighbors (max 12 neighbors) • Evaluated Metric • Normalized Number of Transmissions (see the sketch below) • Parameters • Maximum delay (for ToD, DAA, SPT-D) • Event size
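For reference, one plausible reading of the normalized-transmissions metric: total transmissions divided by the number of source reports delivered to the sink. This is an assumption for illustration; the exact normalization used in the evaluation may differ, and the numbers below are made up.

```python
# Hedged sketch of the evaluation metric. Assumption: "normalized number of
# transmissions" is read here as total transmissions divided by the number of
# source reports delivered; the paper's exact normalization may differ, and
# the numbers below are hypothetical.
def normalized_transmissions(total_tx, delivered_reports):
    return total_tx / delivered_reports if delivered_reports else float("inf")

print(normalized_transmissions(5400, 780))  # about 6.9 transmissions per report
```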
Experimental Results – Delay • All nodes are sources • Data rate: 0.1 pkt/s • Data payload: 20 bytes • 2 F-clusters in ToD • Key observations • ToD performs better than DAA • SPT-D is sensitive to the maximum delay
Experimental Results – Event Size • 12 ~ 78 sources • Data rate: 0.1 pkt/s • Data payload: 20 bytes • SPT-D delay: 6 s • Key observations • ToD performs best • High variation for SPT-D due to the long-stretch problem
Simulation Results • Evaluated Protocols • ToD • Data Aware Anycast (DAA) • Shortest Path Tree (SPT) • Optimal Aggregation Tree (OPT) • Evaluated Metric • Normalized Number of Transmissions • Parameters • Event Size • Network Size • Cell Size
Simulation Results – Event Size • 2000m × 1200m (35 × 58 grid network) • TX range: 50m (8 neighbors) • Event moves at 10 m/s • Data rate: 0.2 pkt/s • Data payload: 50 bytes • Key Observations • ToD performs close to OPT
Simulation Results – Network Size • Vary the distance from the event to the sink: 400 ~ 1600m • Key Observations • SPT & DAA performance degrades with distance • ToD & OPT remain steady [Figure: 2000m × 1200m network topology]
Simulation Results – Cell Size • Event size: 200m, 400m, 600m in diameter • Vary cell size from 50m to 800m • Key Observations • ToD performs best on average when the cell size is smaller than the event size • Larger cell size: bad for traffic from sources to cluster-heads • Smaller cell size: bad for traffic from cluster-heads to the sink
Conclusion • Structure-Free Aggregation • Dynamic Forwarding on ToD for Scalability • Efficient Aggregation without overhead of structure computation and maintenance • Future Work • Dynamic Forwarding for irregular network topology • Early aggregation irrespective of event size