
Dynamic Node Activation in Networks of Rechargeable Sensors



Presentation Transcript


  1. Dynamic Node Activation in Networks of Rechargeable Sensors
  Koushik Kar, Ananth Krishnamurthy and Neeraj Jaggi
  2006 – IEEE/ACM Transactions on Networking

  2. Motivation
  • Rechargeable sensors spend much of their time recharging, during which they are not available for sensing.
  • Recharging usually takes much longer than discharging.
  • Solution? Redundant deployment, which works like this:
    • while some sensors sleep and recharge…
    • …others are fully charged and ready for sensing.
  • Reasoning: if a large number of sensors are deployed, it is more likely that one will be available for sensing when needed.
  • Questions to answer:
    • Where is the line of diminishing returns from adding more sensors?
    • When should charged, waiting sensors be switched on for sensing?

  3. Given:
  • a stationary network of rechargeable sensors
  • fixed locations in a fixed coverage area
  • sensors are offline while recharging

  4. Find:
  • a sensor activation policy
    • when each sensor should become active
    • applied globally, configured locally
  • maximize the "utility" derived from the network
    • a measure of the network's benefit, in terms of utility per unit area per unit time from n active sensors over area A
    • example: probability of detection, U(n) = 1 − (1 − pd)^n
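A quick way to see the shape of this utility curve is to evaluate it for a few sensor counts. The sketch below is only illustrative: the pd value and the list of n values are assumptions, not figures from the paper.

```python
# Detection-probability utility from slide 4: U(n) = 1 - (1 - pd)^n.
# pd and the n values below are illustrative assumptions.

def detection_utility(n: int, pd: float) -> float:
    """Probability that at least one of n active sensors detects an event."""
    return 1.0 - (1.0 - pd) ** n

if __name__ == "__main__":
    pd = 0.1  # per-sensor detection probability (assumed)
    for n in (1, 4, 8, 16, 32):
        print(f"n={n:2d}  U(n)={detection_utility(n, pd):.3f}")
```

The output rises quickly and then flattens, which is exactly the diminishing-returns question raised on the Motivation slide.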

  5. Contributions:
  • Demonstrate that the utility obtained by switching sensors on and off at the right times can be modeled as a queuing network.
  • Show that this model can be solved by a steady-state Markov chain analysis if attention is restricted to the class of threshold activation policies.
  • Show that the time-average utility of an optimal threshold activation policy is at least ¾ of the upper bound.
  • Show that this result holds both for completely redundant coverage and for partially overlapping sensor coverage.
  • Show that correlation of charge/discharge cycles significantly degrades network utility.

  6. Activation as a queuing network…
  • A sensor may be in one of three states:
    • passive = switched off, charging
    • ready = charged and waiting
    • active = sensing and transmitting

  7. Activation as a queuing network…
  • Activation from the passive state:
    • once charged, passive sensors in a region queue up (as ready sensors) to become active
    • active sensors deplete their energy and need replacement from the queue
  • Charging behavior affects the model:
    • correlated = charge/discharge cycles synchronized
    • independent = charge/discharge cycles randomized
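The three states and the ready queue can be pictured as a small state machine. The sketch below is a minimal illustration of slides 6–7; the class and function names (Sensor, on_fully_charged, on_energy_depleted) are hypothetical, not identifiers from the paper.

```python
# Minimal sketch of the passive -> ready -> active cycle from slides 6-7.
# Names and structure are illustrative; the paper models this as a queuing network.
from collections import deque
from enum import Enum, auto

class State(Enum):
    PASSIVE = auto()  # switched off, recharging
    READY = auto()    # fully charged, waiting in the queue
    ACTIVE = auto()   # sensing and transmitting, draining energy

class Sensor:
    def __init__(self, sensor_id: int) -> None:
        self.id = sensor_id
        self.state = State.PASSIVE

ready_queue: deque = deque()  # charged sensors waiting to be activated

def on_fully_charged(s: Sensor) -> None:
    """Passive -> ready: a recharged sensor joins the queue of candidates."""
    s.state = State.READY
    ready_queue.append(s)

def on_energy_depleted(s: Sensor) -> None:
    """Active -> passive: a drained sensor starts recharging and, if possible,
    is replaced by the next ready sensor from the queue."""
    s.state = State.PASSIVE
    if ready_queue:
        replacement = ready_queue.popleft()
        replacement.state = State.ACTIVE
```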

  8. Optimize the network for "utility"
  • "Utility" is the measure of effectiveness.
  • The utility under policy P, U(P), is the utility per unit area per unit time delivered by the active sensors, averaged over the coverage area A and over time.
  • For the motivating example, use the probability of detection: U(n) = 1 − (1 − pd)^n, where n is the number of active sensors in area A at time t under policy P.
  • Problem restated: find the policy P that maximizes the utility of the network.
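Written out explicitly, with notation that is my assumption rather than the slide's (the original formula did not survive the transcript), the objective would read roughly as follows, where n_P(a, t) is the number of active sensors covering point a at time t under policy P:

```latex
% Hedged reconstruction of the objective; n_P(a,t) and the normalization are assumed.
U(P) \;=\; \lim_{T \to \infty} \frac{1}{T\,|A|}
      \int_{0}^{T}\!\!\int_{A} U\bigl(n_P(a,t)\bigr)\, da\, dt
```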

  9. Simplify the problem…
  • The problem is intractable as stated.
  • Simplifying assumptions (for analysis):
    • sensor coverage is completely redundant
    • the utility curve is concave, flattening toward an asymptotic limit
    • recharge rate << discharge rate (recharging takes much longer than discharging)
    • no energy is lost in the ready (charged, waiting) state
    • activation is based on a threshold policy

  10. Toeholds from the simplification…
  • The search domain is reduced:
    • from all activation policies
    • to only threshold activation policies: while the number of active sensors is below the threshold m, activate a ready sensor s (see the sketch after this slide)
  • The utility expression is simplified:
    • was an integral over area and time
    • is now an integral over time only
  • Analytical tools can be used:
    • the problem becomes a steady-state Markov decision problem
    • upper and lower bounds on utility can be calculated
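A minimal sketch of the threshold rule, assuming (my reading of the slide) that the policy simply tops the active set up to m whenever ready sensors are available; the function name and signature are illustrative:

```python
# Threshold activation rule from slide 10: keep up to m sensors active.
# Function and parameter names are illustrative, not taken from the paper.

def sensors_to_activate(active_count: int, ready_count: int, m: int) -> int:
    """How many ready sensors to switch on right now under threshold m."""
    if active_count >= m:
        return 0
    # Activate only as many ready sensors as needed to reach the threshold.
    return min(m - active_count, ready_count)

# Example: 3 sensors active, 5 ready, threshold m = 6 -> activate 3 more.
assert sensors_to_activate(3, 5, 6) == 3
```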

  11. Theoretical bounds on utility
  • Maximum utility achievable by any policy, expressed in terms of:
    • N, the number of sensors
    • μ1, the recharge time
    • μ2, the discharge time
  • Minimum utility: ½ of the maximum
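The slide's exact formula did not survive the transcript, but under the slide-9 assumptions (concave U, recharge time μ1, discharge time μ2) the usual argument for such an upper bound is sketched below; treat the final expression as my reconstruction, not necessarily the paper's exact bound.

```latex
% Each sensor can be active for at most a fraction mu2/(mu1+mu2) of the time
% (active for mu2, then recharging for mu1), so the long-run average number of
% active sensors is at most N*mu2/(mu1+mu2). Concavity of U and Jensen's
% inequality then bound the time-average utility:
\bar{U} \;=\; \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} U\bigl(n(t)\bigr)\,dt
\;\le\; U\!\left(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} n(t)\,dt\right)
\;\le\; U\!\left(\frac{N\,\mu_2}{\mu_1+\mu_2}\right)
```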

  12. Bounds on threshold policies
  • Lower bound for correlated charge cycles:
    • U_{T,C} ≥ ¾ of the maximum bound
    • thus the best threshold policy is at least ¾ of the optimum
  • Lower bound for independent charge cycles:
    • U_{T,I} ≥ U_{T,C} ≥ ¾ of the optimal policy
    • independent charge cycles will be as good as or better than correlated charge cycles

  13. Finding a threshold
  • Given the utility function (probability of detection): U(n) = 1 − (1 − pd)^n
  • Find utility vs. threshold (a simulation sketch of this sweep follows this slide) for:
    • probability of detection pd ∈ {0.1, 0.9}
    • number of sensors N ∈ {16, 32, 48}
    • charge/discharge ratio ρ ∈ {3, 7, 15}
    • variable discharge rate μ2, constant charge rate μ1
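The paper derives these curves from the steady-state Markov-chain model; the sketch below is not that analysis but a crude time-stepped Monte-Carlo stand-in, useful only to show what "utility vs. threshold" means. All parameter values, the exponential charge/discharge durations, and the unit step size are assumptions.

```python
# Crude Monte-Carlo sketch of "utility vs. threshold" (slide 13).
# This is NOT the paper's Markov-chain analysis; parameters and the
# time-stepped model are illustrative assumptions.
import random

def detection_utility(n: int, pd: float) -> float:
    return 1.0 - (1.0 - pd) ** n

def average_utility(num_sensors: int, m: int, pd: float,
                    charge_time: float, discharge_time: float,
                    steps: int = 50_000, seed: int = 0) -> float:
    """Time-average utility of a threshold-m policy with exponential
    charge/discharge durations, simulated in unit time steps."""
    rng = random.Random(seed)
    state = ["ready"] * num_sensors      # start fully charged
    remaining = [0.0] * num_sensors      # time left in the current phase
    total = 0.0
    for _ in range(steps):
        active = sum(s == "active" for s in state)
        # Threshold rule: top the active set up to m from the ready sensors.
        for i in range(num_sensors):
            if active >= m:
                break
            if state[i] == "ready":
                state[i] = "active"
                remaining[i] = rng.expovariate(1.0 / discharge_time)
                active += 1
        total += detection_utility(active, pd)
        # Advance one time unit: active sensors drain, passive sensors recharge.
        for i in range(num_sensors):
            if state[i] == "ready":
                continue
            remaining[i] -= 1.0
            if remaining[i] <= 0.0:
                if state[i] == "active":
                    state[i] = "passive"
                    remaining[i] = rng.expovariate(1.0 / charge_time)
                else:
                    state[i] = "ready"
    return total / steps

if __name__ == "__main__":
    # Assumed example: N = 16, pd = 0.1, charge/discharge ratio rho = 3.
    for m in (2, 4, 8, 12, 16):
        u = average_utility(16, m, pd=0.1, charge_time=15.0, discharge_time=5.0)
        print(f"threshold m={m:2d}  time-average utility ≈ {u:.3f}")
```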

  14. Activation algorithm
  • Objective: maintain a utility of U(m).
  • At each decision point, for each sensor (sketched after this slide):
    • if local utility < U(m), then activate
    • else remain ready
  • Calculating local utility:
    • know the coverage areas of neighbors (have a map)
    • poll neighbors for their activation state
    • calculate utility per …
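A minimal sketch of that per-sensor decision, assuming the local utility is estimated from the number of active neighbors covering the same area; the neighbor-polling step, the function names, and the example values are all assumptions (the slide's "calculate utility per …" item is truncated in the transcript).

```python
# Sketch of the per-sensor activation decision from slide 14.
# The local-utility estimate (count of active neighbors over this sensor's
# area) and all names/values here are illustrative assumptions.

def detection_utility(n: int, pd: float) -> float:
    return 1.0 - (1.0 - pd) ** n

def should_activate(active_neighbors: int, pd: float, m: int) -> bool:
    """A ready sensor activates if the utility already provided over its
    area falls short of the target U(m); otherwise it stays ready."""
    return detection_utility(active_neighbors, pd) < detection_utility(m, pd)

if __name__ == "__main__":
    # Example decision point: a ready sensor that polls 2 active neighbors,
    # with target threshold m = 4 and per-sensor detection probability 0.3.
    print(should_activate(active_neighbors=2, pd=0.3, m=4))  # True -> activate
```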

  15. Partial coverage overlap
  • Optimal time-average utility, expressed in terms of:
    • N(A) = the number of active sensors in area A

  16. Results
