Decomposing Data-Centric Storage Query Hot-Spots in Sensor Networks Mohamed Aly In collaboration with Panos K. Chrysanthis and Kirk Pruhs Advanced Data Management Technologies Lab Dept. of Computer Science University of Pittsburgh
Disaster Management Sensor Networks • Sensors are deployed to monitor the disaster area. • First responders moving in the area issue ad-hoc queries to nearby sensors. • The sensor network is responsible for answering these queries. • First responders use the query results to improve decision making while managing the disaster.
Data Storage Options in Sensor Networks • Base Station Storage: • Events are sent to base stations where queries are issued and evaluated. • Best suited for continuous queries. • In-Network Storage (INS): • Events are stored in the sensor nodes. • Best suited for ad-hoc queries. • All previous INS schemes were Data-Centric Storage (DCS) schemes.
Data-Centric Storage (DCS) • Improves the Quality of Data (QoD) of ad-hoc queries. • Defines an event owner based on the event value. • Examples: • Distributed Hash Tables (DHT) [Shenker et al., HotNets’02] • Geographic Hash Tables (GHT) [Ratnasamy et al., WSNA’02] • Distributed Index for Multi-dimensional data (DIM) [Li et al., SenSys’03] • Routing substrate: the Greedy Perimeter Stateless Routing algorithm (GPSR) [Karp & Kung, Mobicom’00] • Among these schemes, DIM has been shown to exhibit the best performance.
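A minimal sketch of the DCS idea in Python: an event's value is hashed to a geographic location, and the node closest to that location owns the event (GHT-style). The hash function, field dimensions, and all names here are illustrative assumptions, not the actual GHT/DIM code.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Node:
    x: float
    y: float

def value_to_location(event_value: str, width: float, height: float):
    """Hash an event value to a point inside the deployment area."""
    digest = hashlib.sha1(event_value.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32 * width
    y = int.from_bytes(digest[4:8], "big") / 2**32 * height
    return x, y

def owner_node(event_value, nodes, width=100.0, height=100.0):
    """The node geographically closest to the hashed location owns the event."""
    hx, hy = value_to_location(event_value, width, height)
    return min(nodes, key=lambda n: (n.x - hx) ** 2 + (n.y - hy) ** 2)

# Example: the same value always hashes to the same owner,
# which is what makes ad-hoc lookups cheap.
nodes = [Node(10, 20), Node(70, 80), Node(40, 55)]
print(owner_node("temperature=97", nodes))
```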
Problems of Current DCS Schemes • Storage Hot-Spots: • A large percentage of events is mapped to a few sensor nodes. • Our Solutions • The Zone Sharing (ZS) algorithm on top of DIM [DMSN’05] • The K-D Tree based DCS scheme (KDDCS) [submitted] • Query Hot-Spots: • A large percentage of queries targets events stored in a few sensor nodes. • Our Solutions [MOBIQUITOUS’06] • The Zone Partitioning (ZP) algorithm • The Zone Partial Replication (ZPR) algorithm
Query Hot-Spots in DIM • Definition: A high percentage of queries accessing a “hot zone” stored by a small number of nodes. • Existence of query hot-spots leads to: • Increased node deaths • Network partitioning • Reduced network lifetime • Decreased Quality of Data (QoD)
Query Hot-Spots Decomposition Algorithms • Two algorithms, chosen by whether accesses are distributed uniformly or skewed among the hot-zone events: • The Zone Partitioning (ZP) algorithm (uniform access distribution) • The Zone Partial Replication (ZPR) algorithm (skewed access distribution) • Basic Idea (sketched in code below): • Each sensor keeps track of the Average Querying Frequency (AQF) of its stored events • Periodically compares its AQF to its neighbors’ AQFs • When a large difference is detected, the node (donor) selects the best neighbor (receiver) that can receive part of its responsibility range • The donor determines the receiver locally, using the Partitioning Criterion (PC)
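A hedged sketch of this donor-side detection step, assuming each node can read its neighbors' AQFs (e.g., piggybacked on routine traffic). The tie-breaking rule and all names are illustrative, not the paper's exact procedure; the full receiver check is the PC, shown later.

```python
from dataclasses import dataclass

@dataclass
class NeighborInfo:
    aqf: float   # average querying frequency (AQF) of the neighbor
    load: float  # neighbor's current storage load

def select_receiver(donor_aqf, neighbors, q1=2.0):
    """Donor-side step shared by ZP and ZPR: if this node's AQF is much
    larger than a neighbor's (factor >= Q1), that neighbor is a receiver
    candidate; illustratively, pick the least-loaded candidate."""
    candidates = [n for n in neighbors
                  if n.aqf > 0 and donor_aqf / n.aqf >= q1]
    if not candidates:
        return None  # no query hot-spot detected at this node
    return min(candidates, key=lambda n: n.load)
```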
PC: Storage Safety Requirement • The sum of the receiver’s pre-partitioning load and the load of the traded zone must not exceed the receiver’s storage capacity: • T + l_receiver ≤ S, where T is the traded zone’s load, l_receiver the receiver’s current load, and S the storage capacity
PC: Energy Safety Requirement (1) • The energy consumed by the donor in the partitioning process should be much less than the total energy of the donor • T / e_donor ≤ E1, with E1 ≤ 0.5
PC: Energy Safety Requirement (2) • The energy consumed by the receiver in the partitioning process should be much less than the total energy of the receiver • (T * r_e) / e_receiver ≤ E2, with E2 ≤ 0.5
PC: Access Frequency Requirement • The average access frequency of the donor must be much larger than that of the receiver • AQF(donor) / AQF(receiver) ≥ Q1 • Q1 should be greater than 2 to avoid cyclic migrations (with a smaller threshold, donor and receiver could repeatedly swap roles and trade the zone back and forth)
ZPR Initiation Requirements • If all previous requirements are satisfied, ZP is initiated. • If a small hot sub-range exists within the hot range, ZPR is initiated instead of ZP: • AQF(hot sub-range) / AQF(total range) ≥ Q2, where Q2 should be close to 1, e.g., 0.9 • size(hot sub-range) / size(total range) ≤ Q3, where Q3 should be close to 0, e.g., 0.2
Partitioning Criterion (PC) • (1) T + l_receiver ≤ S • (2) T / e_donor ≤ E1 • (3) (T * r_e) / e_receiver ≤ E2 • (4) AQF(donor) / AQF(receiver) ≥ Q1 • (5) AQF(hot sub-range) / AQF(total range) ≥ Q2 • (6) size(hot sub-range) / size(total range) ≤ Q3 • If requirements (1)–(4) are satisfied, ZP is initiated; if requirements (1)–(6) are satisfied, ZPR is initiated instead (see the sketch below).
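A minimal sketch of how a donor could evaluate the PC locally. The slides give only the inequalities, so the function shape, parameter names, and default threshold values are assumptions.

```python
def partitioning_decision(T, l_recv, S, e_donor, e_recv, r_e,
                          aqf_donor, aqf_recv,
                          aqf_sub, aqf_total, size_sub, size_total,
                          E1=0.5, E2=0.5, Q1=2.0, Q2=0.9, Q3=0.2):
    """Return 'ZP', 'ZPR', or None, per the six PC requirements.
    Thresholds default to the values suggested on the slides; they
    are tunable assumptions, not fixed constants of the scheme."""
    storage_safe = T + l_recv <= S                    # requirement (1)
    donor_energy_safe = T / e_donor <= E1             # requirement (2)
    recv_energy_safe = (T * r_e) / e_recv <= E2       # requirement (3)
    access_skewed = aqf_donor / aqf_recv >= Q1        # requirement (4)
    if not (storage_safe and donor_energy_safe
            and recv_energy_safe and access_skewed):
        return None                                   # no partitioning
    hot_sub_range = (aqf_sub / aqf_total >= Q2        # requirement (5)
                     and size_sub / size_total <= Q3)  # requirement (6)
    return "ZPR" if hot_sub_range else "ZP"
```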
More about the Algorithms • Mechanism to lower messaging overhead • GPSR Modifications • Traded Zone List (TZL) • Coalescing Process • Insertion process in ZPR • Bound on the replication hops of ZPR
Roadmap • Background • Problem Statement: Query Hot-spots • Algorithms: ZP, ZPR • Experimental Results • Conclusions
Simulation Description • Compare: DIM vs. ZP/ZPR. • Simulator similar to the one used for DIM [Li et al., SenSys’03]. • Two phases: insertion and query (an illustrative skeleton follows). • Insertion phase (to reach a steady state of network storage): • Each sensor initiates 5 events • Events are forwarded to their owners • Query phase: • Each sensor generates 20 single-event queries (the worst case scenario)
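An illustrative skeleton of this two-phase setup, assuming a `network` object exposing `nodes`, `route_to_owner`, `event_values`, and `query`; all of these names are hypothetical, standing in for the simulator's internals.

```python
import random

def run_experiment(network, events_per_node=5, queries_per_node=20):
    # Insertion phase: each sensor initiates events, which are
    # forwarded to their owners to reach a steady storage state.
    for node in network.nodes:
        for _ in range(events_per_node):
            network.route_to_owner(node.sense())
    # Query phase: single-event queries, the worst case for hot-spots.
    answered = 0
    for node in network.nodes:
        for _ in range(queries_per_node):
            target = random.choice(network.event_values)
            answered += network.query(node, target)
    return answered
```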
Experimental Results: Quality of Data (QoD) 5% hot-spot
Experimental Results: Balancing Energy Consumption 200 nodes, 0.33% hot-spot
Experimental Results: ZP/ZPR Strengths • Increasing the QoD by partitioning the hot range among a large number of sensors, thus balancing the query load among sensors and keeping them alive longer to answer more queries. • Increasing energy savings by balancing energy consumption among sensors. • Increasing the network lifetime by reducing node deaths.
Acknowledgment • This work is part of the “Secure CITI: A Secure Critical Information Technology Infrastructure for Disaster Management (S-CITI)” project funded through the ITR Medium Award ANI-0325353 from the National Science Foundation (NSF). • For more information, please visit: http://www.cs.pitt.edu/s-citi/
Conclusions and Extensions • Query hot-spots: an important problem in current DCS schemes. • Contributions: • A query hot-spot decomposition scheme for DCS sensor networks, ZP/ZPR, working on top of the DIM DCS scheme. • Experimental validation of ZP/ZPR’s practicality. • Work under submission: • KDDCS: a unified DCS scheme balancing both storage and query loads.
Thank You Questions ? Advanced Data Management Technologies Lab http://db.cs.pitt.edu
Experimental Results: Load Balancing 0.05% hot-spot