Melete: Supporting Concurrent Applications in Wireless Sensor Networks


Presentation Transcript


  1. Melete: Supporting Concurrent Applications in Wireless Sensor Networks. ACM SenSys, November 3, 2006. Yang Yu and Loren J. Rittle (Pervasive Platform and Architecture Labs, Application Research Center, Motorola Labs); Vartika Bhandari (University of Illinois at Urbana-Champaign); Jason B. LeBrun (University of California, Davis)

  2. Motivation [figure: three concurrent applications sharing one sensor deployment] • Temperature monitoring app. (static) • Structural health monitoring app. (slow dynamic) • Personnel/asset tracking app. (fast dynamic)

  3. Challenges • Ease of programming for app. domain experts • Little knowledge of NesC and TinyOS • Support of concurrent applications • Reliable code storage and execution • Flexible and dynamic application deployment • Ease of software maintenance and upgrade: • Consistent and reliable code distribution

  4. State of the Art [figure: existing systems classified along two axes, group-based deployment vs. group-differentiated dissemination]

  5. Group-Based Application Deployment [figure: three overlapping groups, Group 1–3] Group: a logical concept, a subset of sensor nodes that hosts an application • Groups may overlap • Group members may be arbitrarily distributed • Groups can be statically formed and dynamically adjusted

  6. Grouping Operations • Initially, all nodes belong to group 0 only • Use group 0 code for grouping operations • To join a group • Eg 1: Group 0 randomly chooses 1 node from every room to join group 1 • Eg 2: Group 0 continuously senses temp; join group 2 if temp > 100 • Eg 3: Group 2 continuously senses temp; leave group 2 if temp < 80 • After joining a group • Download app. code on-demand • Execute the code when triggered (a sketch of this membership logic follows below)
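  Below is a minimal C sketch (ours, not the paper's) of how a node-level runtime might track group membership, assuming a bitmask representation; join_group/leave_group mirror the TinyScript grouping extensions named in the talk, and the thresholds come from Eg 2/3 above.

    /* Hypothetical sketch of node-side group membership as a bitmask. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    static uint32_t group_mask = 1u << 0;   /* every node starts in group 0 */

    static void join_group(uint8_t gid)  { group_mask |=  (1u << gid); }
    static void leave_group(uint8_t gid) { group_mask &= ~(1u << gid); }
    static bool in_group(uint8_t gid)    { return (group_mask >> gid) & 1u; }

    /* Eg 2/3 above: temperature-driven join/leave with hysteresis (100/80). */
    static void on_timer(uint16_t temp) {
        if (in_group(0) && temp > 100) join_group(2);   /* join when hot   */
        if (in_group(2) && temp < 80)  leave_group(2);  /* leave when cool */
    }

    int main(void) {
        on_timer(105);
        printf("in group 2: %d\n", in_group(2));  /* 1: joined */
        on_timer(75);
        printf("in group 2: %d\n", in_group(2));  /* 0: left   */
        return 0;
    }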

  7. Code Example • Tracking a shadow using group 1

  Group 0 code (Timer context):
    a = light();
    if (a < 100)
      join_group(1);
    end if

  Group 1 code (Timer context):
    a = light();
    if (a < 120)
      readings []= a;
      if (sizeof(readings) = 10)
        send(readings);
        clear(readings);
      end if
    else
      send(readings);
      leave_group(group());
    end if

  • We can do more: • data aggregation at the cluster head • forming group 2 around group 1 to get the contour of the shadow

  8. Contributions • Node-level virtual machine • To host and execute concurrent apps. • TinyScript extension for grouping operations • System-level dynamic grouping • Flexible, on-the-fly app. deployment • Selective and reactive code distribution • Group-differentiated code dissemination • Active advertisements + passive dissemination • Lazy forwarding • Progressive flooding • Randomized caching

  9. System Assumptions • We assume • Network is connected • Gateway stores code for all groups • Omni-directional broadcast • We do NOT assume • Unique node ID • Localization & synchronization • Communication reliability • Routing protocols

  10. Software Architecture [architecture diagram: the Melete VM sits on TinyOS and the hardware; new elements are group-differentiated code distribution (engine states M, F, S, R), a separated app. code storage area per group (G0–G9), and a separated app. execution space per group, alongside opcodes, a scheduler, VM contexts (Broadcast, Timer, Reboot, Once), and memory] (a data-layout sketch follows below)
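  As a rough illustration of the "separated code storage, separated execution space" idea in the diagram, here is a hypothetical C data layout; all names and sizes are our assumptions, not the paper's actual structures.

    /* One code capsule and one VM context per group, so concurrent
     * applications cannot clobber each other's code or state. */
    #include <stdint.h>

    #define MAX_GROUPS   10      /* G0..G9, as labeled on the slide   */
    #define CAPSULE_SIZE 128     /* bytes of bytecode per application */

    typedef struct {
        uint8_t  opcodes[CAPSULE_SIZE]; /* per-group application bytecode */
        uint16_t version;               /* for consistent code updates    */
    } capsule_t;

    typedef struct {
        uint16_t pc;                    /* program counter into own capsule */
        int16_t  stack[16];             /* private operand stack            */
        uint8_t  sp;
    } vm_context_t;

    static capsule_t    code_store[MAX_GROUPS]; /* separated code storage    */
    static vm_context_t exec_space[MAX_GROUPS]; /* separated execution space */

    /* Fetch the next opcode for a given group's context. */
    static uint8_t fetch(uint8_t gid) {
        return code_store[gid].opcodes[exec_space[gid].pc++];
    }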

  11. Group-Differentiated Code Dissemination [figure: nodes in distributor states R, M, S, F; plot of forwarding probability Pf vs. # of requests] • Optimization techniques • Lazy forwarding: reduces false-positive forwarding (see the sketch below) • Progressive flooding: limits the flooded area • Caching: reduces forwarding effort
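  The following C sketch illustrates one plausible reading of lazy forwarding: suppress early duplicate requests and relay only when the request persists and no reply has been overheard. The threshold, message fields, and state variables are our assumptions for illustration, not Melete's protocol state machine.

    #include <stdint.h>
    #include <stdbool.h>

    #define DUP_THRESHOLD 2        /* assumed: duplicates to hear before relaying */

    typedef struct {
        uint8_t  group_id;         /* which group's code is requested */
        uint16_t version;          /* desired code version            */
    } code_request_t;

    static uint8_t dup_count;
    static bool    reply_overheard; /* set when the code itself is heard */

    static bool should_forward(const code_request_t *req) {
        (void)req;
        if (reply_overheard)        /* someone nearer already answered    */
            return false;
        if (++dup_count < DUP_THRESHOLD)
            return false;           /* stay lazy: likely a false positive */
        dup_count = 0;              /* threshold reached: relay once      */
        return true;
    }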

  12. Progressive Flooding • Leverages periodic broadcast and lazy forwarding • No central controller; expansion is autonomous • 2- to 3-fold traffic reduction compared to the expanding-ring method • Graceful time-traffic trade-off: C(H) = O(H), T(H) = O(1/H), where H is the propagation step width [plot: # of forwarded packets vs. z, the # of responders] (a simplified sketch follows below)
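  A deliberately simplified, source-driven C sketch of the step-width trade-off only: the flooded radius grows by H hops per round, so per-round traffic grows with H while the number of rounds shrinks as 1/H. Melete's actual scheme expands autonomously through periodic broadcast and lazy forwarding rather than source-restarted floods; the primitives below are stand-ins, not Melete's API.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in primitives so the sketch runs; real radio calls go here. */
    static void broadcast_request(uint8_t g, uint8_t ttl) {
        printf("request: group=%u ttl=%u\n", g, ttl);
    }
    static bool code_received(void) {
        static int rounds = 0;
        return ++rounds == 3;      /* pretend a responder is 3 steps out */
    }
    static void wait_one_round(void) { /* sleep one broadcast period */ }

    static void progressive_flood(uint8_t group_id, uint8_t H, uint8_t max_ttl) {
        for (uint8_t ttl = H; ttl <= max_ttl && ttl >= H; ttl += H) {
            broadcast_request(group_id, ttl); /* cover a ring of radius ttl */
            wait_one_round();                 /* T(H) = O(1/H) rounds total */
            if (code_received())
                return;                       /* responder found: stop      */
        }
    }

    int main(void) { progressive_flood(1, 2, 16); return 0; }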

  13. Randomized Caching • One packet to be cached per node • Static analysis • Given: k groups {G1, G2, …, Gk}, with Gi having mi members and qi requests (all uniformly distributed); a total of Q cacheable packets; C(H) = C(1) • Minimize: total traffic to fulfill all requests • Solution (square-root policy): allocate caches to Gi in proportion to √qi (a runnable check follows below) • On-line policy • Caches the latest forwarded code packet • Mimics the middle point between the linear and uniform policies
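  A small runnable C check of the square-root allocation as reconstructed above: with Q cache slots and per-group request counts qi, allocate ci = Q·√qi / Σj √qj caches to group i. The normalization and the example numbers are our assumptions, used only to make the proportionality concrete.

    #include <math.h>
    #include <stdio.h>

    /* Square-root policy: caches for group i proportional to sqrt(q[i]). */
    static void sqrt_allocation(const double q[], int k, double Q, double c[]) {
        double norm = 0.0;
        for (int i = 0; i < k; i++) norm += sqrt(q[i]);   /* Σj √qj */
        for (int i = 0; i < k; i++) c[i] = Q * sqrt(q[i]) / norm;
    }

    int main(void) {
        const double q[] = {20.0, 40.0, 30.0, 25.0}; /* requests per group */
        double c[4];
        sqrt_allocation(q, 4, 400.0, c);  /* 400 nodes, one slot per node */
        for (int i = 0; i < 4; i++)
            printf("G%d: %.1f cached copies\n", i + 1, c[i]);
        return 0;
    }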

  14. Usefulness of Caching [plots: # of forwarded packets vs. # of packets per app. code, one panel per caching policy; annotation: blocking phenomena] • TOSSIM-packet • 20x20 grid • 400 nodes • 15-feet spacing • 4 groups with 20 members each • 20–40 requests per group in 10 minutes

  15. Scalability to Node Density [plots: # of forwarded packets vs. # of packets per app. code, one panel per node spacing] • TOSSIM-packet • 20x20 grid • 400 nodes • 5-, 10-, 15-, and 20-feet spacing • 4 groups with 20 members each • 20–40 requests per group in 10 minutes

  16. Load Balancing [plots: per-node transmissions and receptions] • TOSSIM-bit • 20x20 grid • 400 nodes • 15-feet spacing • 4 groups with 20 members each • 20–40 requests per group

  17. Comparison to Trickle [figure: scenario regions S2–S4 at increasing distance from the gateway] • TOSSIM-packet • 20x20 grid • 400 nodes • 1 group • Scenarios: ρ (10, 20, 40) random members drawn from • S1: all 400 nodes • BL: all 400 nodes

  18. Empirical Study – Code Updating [photo: testbed with the gateway and member nodes marked] • 16 nodes • 3 members • 3–4 hops • 2 packets per code

  19. Empirical Study – Shadow Tracking [photo: a shadow cast over the testbed, with the gateway marked]

  20. Current Status & Future Directions • Current status • Dynamic memory management • Aggressive caching policy • More efficient code forwarding • Forwarder suppression • Real-world impact • Being adopted by ETRI, South Korea • Future directions • Enhanced group operations • Virtual memory • Validation through real-world applications

  21. Q & A Thanks!!

  22. Related Work • Programming models • Mobile agents (push-based): Agilla, SensorWare • TinyCubus • Hood, Abstract Regions • Info. dissemination and searching • Trickle, PSFQ, Deluge, Impala • N-ring model, ACQUIRE, rumor routing
