Network Power Scheduling for Wireless Sensor Networks Barbara Hohlt Intel Communications Technology Lab Hillsboro, OR August 9, 2005
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Wireless Sensor Networks • Networks of small, low-cost, low-power devices • Sensing/actuation, processing, wireless communication • Dispersed near phenomena of interest • Self-organize, wireless multi-hop networks • Unattended for long periods of time
Berkeley Motes • Mica • Mica2Dot • Mica2
Mote Layout [diagram: multihop layout of numbered motes] Example Applications • Pursuer-Evader • Environmental Monitoring • Home Automation • Indoor Building Monitoring • Security • Inventory Tracking
Power Consumption • Power consumption limits the utility of sensor networks • Nodes must survive on their own energy stores for months or years • 2 AA batteries or 1 lithium coin cell • Replacing batteries is laborious and impossible in some environments • Conserving energy is critical for prolonging the lifetime of these networks
Where the power goes • Main energy draws • Central processing unit • Sensors/actuators • Radio • Radio dominates the cost of power consumption
Radio Power Consumption • Primary cost is idle listening • Time spent listening, waiting to receive packets • Nodes sleep most of the time to conserve energy • Secondary cost is overhearing • Nodes overhear their neighbors' communication • Broadcast medium • Dense networks • Must turn the radio off • Requires a schedule
Flexible Power Scheduling • Reduces radio power consumption • Supports fluctuating demand (multiple queries, aggregates) • Adaptive and decentralized schedules • Improves power savings over approaches used in existing deployments • 4.3X over TinyDB duty cycling • 2–4.6X over GDI low-power listening • High end-to-end packet reception • Reduces contention • Increases end-to-end fairness and yield • Optimized per-hop latency
FPS Two-Level Architecture [diagram: network power schedule layered over a CSMA MAC] • Coarse-grain scheduling • At the network layer • Planned radio on-off times • Fine-grain CSMA MAC underneath • Reduces contention and increases end-to-end fairness • Distributes traffic • Decouples events from correlated traffic • Reserves bandwidth from source to sink • Does not require perfect schedules or precise time synchronization
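To make the two-level split concrete, here is a minimal Python sketch, not the TinyOS/FPS implementation: the network-layer schedule decides per slot whether the radio is on at all, and a CSMA-style random backoff is applied only inside an active transmit slot. The 128 ms slot length comes from a later slide; the function name, action labels, and backoff range are illustrative assumptions.

```python
import random

# Illustrative sketch of the two-level idea (not the TinyOS/FPS code):
# the coarse-grain network-layer schedule gates the radio per slot, and
# fine-grain CSMA (random backoff + carrier sense) happens inside a slot.
def radio_actions_for_slot(slot_state, have_packet, slot_ms=128):
    """Return the sequence of (action, duration_ms) taken in one slot."""
    if slot_state == "R":
        return [("listen", slot_ms)]                  # coarse grain: radio on to receive
    if slot_state == "T" and have_packet:
        backoff = random.randint(0, 30)               # fine grain: CSMA backoff (assumed range)
        return [("backoff", backoff),
                ("carrier sense + send", slot_ms - backoff)]
    return [("sleep", slot_ms)]                       # idle slot: radio stays off

if __name__ == "__main__":
    for state in ("T", "R", "I"):
        print(state, radio_actions_for_slot(state, have_packet=True))
```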
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Scheduling Approaches [table: approaches organized by protocol layer]
PHY Layer: Low-Power Listening [figure: idle listening in low-power mode] • Radio periodically samples the channel for incoming packets • Radio remains in low-power mode during idle listening • Fixed channel sample period per deployment • Supports general communication
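As a rough illustration of the sampling loop described above (assumed behavior, not the Mica radio driver), each wakeup checks the channel briefly and returns to low-power mode if nothing is heard; the sample period and helper names are placeholders.

```python
import random

# Illustrative sketch of low-power listening (not the actual radio stack):
# the radio wakes at a fixed sample period and stays awake only if it
# detects activity on the channel.
SAMPLE_PERIOD_MS = 100          # fixed per deployment (placeholder value)

def channel_busy():
    """Stand-in for a real channel-energy check."""
    return random.random() < 0.05

def one_sample():
    if channel_busy():
        return "stay awake and receive"
    return f"back to low-power mode for {SAMPLE_PERIOD_MS} ms"

if __name__ == "__main__":
    print([one_sample() for _ in range(5)])
```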
MAC Layer: S-MAC Scheduled Listening [figure: frame with a listen period (SYN, RTS, CTS) and a "sleep" period (sleep or send data)] • Virtual clustering: all nodes maintain and synchronize on the schedules of their neighborhoods • Data is transmitted during the "sleep" period; otherwise radios are turned off • Fixed duty cycle per deployment • Supports general communication
Application Layer: TinyDB Duty Cycling [figure: waking period within each epoch] • All nodes sleep and wake at the same time every epoch • All transmissions occur during the waking period • Fixed duty cycle per deployment • Supports a tree topology
Network Layer: Flexible Power Scheduling [figure: per-node schedules over successive cycles] • Each node has its own local schedule • During idle time slots the radio is turned off • Schedules adapt continuously over time • Duty cycles are adaptive • Supports a tree topology
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Assumptions • Sense-to-gateway applications • Multihop network • Majority of traffic is periodic • Nodes are sleeping most of the time • Available bandwidth >> traffic demand • An underlying routing component is present
The Power Schedule [figure: one cycle of slots, e.g. T I I R I T] • Time is divided into cycles • Each cycle is divided into slots • Each node maintains a local power schedule of the operations it performs over a cycle • T – Transmit, R – Receive, I – Idle
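As a concrete illustration of this per-node state, a one-cycle schedule can be held as a list of slot states; this is only a sketch of the idea, not the FPS data structure, and the duty_cycle helper is a hypothetical addition.

```python
# Sketch of one node's local power schedule for a single cycle.
# Slot states: 'T' = transmit, 'R' = receive, 'I' = idle (radio off).
schedule = ["T", "I", "I", "R", "I", "T"]   # the example schedule from this slide

def duty_cycle(schedule):
    """Fraction of slots in which the radio must be on."""
    active = sum(1 for s in schedule if s in ("T", "R"))
    return active / len(schedule)

print(duty_cycle(schedule))   # 0.5 for this toy 6-slot cycle
```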
Scheduling flows • Schedule entire flows (not packets) • Make reservations based on traffic demand • Bandwidth is reserved from source to sink • (and partial flows from source to destination) • Reservations remain in effect indefinitely and can adapt over time
Adaptive Scheduling [figure: local schedule T I I R I T with local state: supply, demand] • Demand represents how many messages a node seeks to forward each cycle • Supply is reserved bandwidth • The network keeps some preallocated bandwidth in reserve • Changes in reservations percolate up the network tree
Supply and Demand [figure: supply and demand over a cycle] • If supply < demand: request a reservation; on CONF, increment supply • If supply >= demand: offer a reservation; on REQ, increment demand For the purposes of this example, one unit of demand counts as one message per cycle.
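A minimal sketch of the supply/demand rules on this slide, keeping the slide's convention that one unit of demand equals one message per cycle; the function names and the state dictionary are illustrative, not the implementation's API.

```python
# Sketch of the per-cycle supply/demand rules (illustrative, not FPS code).
def end_of_cycle_action(supply, demand):
    """What a node tries to do once per cycle, based on local state only."""
    if supply < demand:
        return "send REQ"     # ask the parent for another reserved slot
    return "send ADV"         # offer spare bandwidth to potential children

def on_confirm(state):
    state["supply"] += 1      # parent confirmed our reservation

def on_request(state):
    state["demand"] += 1      # a child reserved one of our slots

if __name__ == "__main__":
    node = {"supply": 1, "demand": 2}
    print(end_of_cycle_action(node["supply"], node["demand"]))  # -> send REQ
    on_confirm(node)                                            # CONF arrives
    print(node)                                                 # supply now 2
```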
Reduced Latency: Sliding Reservation Window [figure: window of size w over slots 0–5 with Transmit and Receive slots] Using only local information, the next Receive slot is always within w of the next Transmit slot, putting an upper bound on the per-hop latency of the network.
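The window constraint can be sketched as follows: when granting a Receive slot, choose an idle slot within the w slots that precede the node's own next Transmit slot, so a forwarded message waits at most w slots at this hop. The helper below is hypothetical, written only to illustrate the bound.

```python
# Sketch of the sliding reservation window (illustrative helper, not FPS code).
def pick_receive_slot(schedule, transmit_slot, w):
    """Return an idle slot index within the w slots before transmit_slot,
    or None if the window is full. schedule is a list of 'T'/'R'/'I'."""
    n = len(schedule)
    for offset in range(1, w + 1):
        candidate = (transmit_slot - offset) % n
        if schedule[candidate] == "I":
            return candidate
    return None

if __name__ == "__main__":
    sched = ["T", "I", "I", "I", "I", "T"]
    print(pick_receive_slot(sched, transmit_slot=5, w=3))   # -> 4 (within the window)
```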
Receiver-Initiated Scheduling: Joining Protocol [figure: receiver broadcasts ADV; joiner listens, sends REQ, and receives CONF] • Periodically, nodes advertise available bandwidth • A node joining the network listens for advertisements and sends a request • Thereafter it can increase/decrease its demand during scheduled time slots
Receiver-Initiated Scheduling: Reservation Protocol [figure: receiver broadcasts ADV; sender sends REQ and receives CONF] • Periodically advertise available bandwidth • Nodes increase/decrease their demand during scheduled time slots • No idle listening
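A hedged sketch of the receiver-initiated exchange on the last two slides, covering both joining and later reservation changes: the receiver advertises spare bandwidth in its broadcast slot, a joiner or sender answers with a request, and the confirmation pins a matching Receive/Transmit slot pair. The message constants, state fields, and six-slot cycle are illustrative assumptions, not the TinyOS message formats.

```python
# Sketch of the ADV/REQ/CONF exchange (illustrative, not the real packet formats).
ADV, REQ, CONF = "ADV", "REQ", "CONF"

def receiver_broadcast(state):
    """In its broadcast slot, a receiver with spare supply advertises a slot."""
    if state["supply"] >= state["demand"]:
        return (ADV, state["free_slot"])
    return None

def sender_on_adv(state, adv_slot):
    """A joiner or sender that still needs bandwidth answers the advertisement."""
    if state["supply"] < state["demand"]:
        return (REQ, adv_slot)
    return None

def receiver_on_req(state, slot):
    state["demand"] += 1                 # child traffic adds to our demand
    state["schedule"][slot] = "R"        # reserve a Receive slot for the child
    return (CONF, slot)

def sender_on_conf(state, slot):
    state["supply"] += 1                 # reservation granted
    state["schedule"][slot] = "T"        # matching Transmit slot toward the parent

if __name__ == "__main__":
    receiver = {"supply": 1, "demand": 0, "free_slot": 3, "schedule": ["I"] * 6}
    joiner   = {"supply": 0, "demand": 1, "schedule": ["I"] * 6}
    adv  = receiver_broadcast(receiver)          # ADV heard during the joiner's listen
    req  = sender_on_adv(joiner, adv[1])         # joiner requests the advertised slot
    conf = receiver_on_req(receiver, req[1])     # receiver reserves a Receive slot
    sender_on_conf(joiner, conf[1])              # joiner marks its Transmit slot
    print(receiver["schedule"], joiner["schedule"])
```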
Properties of Supply/Demand • All network changes are cast as demand • Joining • Failure • Lossy links • Multiple queries • Mobility • 3 classes of nodes • Router and application • Router only • Application only • Load balancing
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Implementation • HW • Mica • Mica2Dot • Mica2 • SW • Slackers • TinyDB/FPS (Twinkle) • GDI/FPS (Twinkle)
Architecture [diagram: Application | Flexible Power Scheduling | Multihop Routing | BufferManagement, RandomMLCG, TimeSync, PowerManagement, SendQueues | Active Messages | MAC/PHY] • Radio power scheduling • Manages send queues • Provides buffer management
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Micro Benchmarks Mica • Power Consumption • Fairness and Yield • Contention
Power Consumption [figure: 3-hop chain, node 3 (source) → 2 → 1 → 0 (gateway)] • 4 TinyOS Mica motes • 3-hop network • Node 3 sends one 36-byte packet per cycle • Measure the current at node 2
Slackers: early experiment on Mica [plot: current (mA) vs. time (seconds), average 1.4 mA] • 5X power savings
Mica Experiments: Scheduled (FPS) vs. Unscheduled (Naïve) [figure: 6-node topology, nodes 1–6] • 10 Mica motes plus base station • 6 motes send 100 messages across 3 hops • One message per cycle (3200 ms) • Begin with an injected start message • Repeat 11 times • Two topologies • Single area: one 8' x 3'4" area • Multiple areas: five areas, motes 9'–22' apart
End-to-End Fairness and Yield [chart: FPS vs. Naïve]
Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation
Application Evaluation • TinyDB/FPS vs. TinyDB/duty cycling • 4.3X power savings • Multiple queries • Partial flows • Query dissemination • Aggregation • GDI/FPS vs. GDI/lpl • 2–4.6X power savings • Up to 23% increase in yield
Evaluation with TinyDB • Two implementations • TinyDB Duty Cycling • TinyDB FPS • Current Consumption Analysis • Berkeley Botanical Gardens Model • Acknowledgment: Sam Madden
TinyDB Redwood Deployment [figure: two trees of numbered nodes connected to a base station (BTS)] • 35 nodes • 2 trees • 1/3 of nodes two hops from the base station • 2/3 one hop
3-Step Methodology • Estimate radio-on time for TinyDB/DC and TinyDB/FPS (with no power management, the radio is on 3600 sec/hour) • For FPS, validate the estimate at one mote with an experiment • Use Mica current measurements to estimate current consumption
TinyDB Duty Cycling [figure: 4-second waking period in every 2.5-minute epoch] 24 samples/hour * 4 sec/sample = 96 sec/hour All nodes wake up together for 4 seconds every 2.5 minutes. During the waking period nodes exchange messages and take sensor readings. Outside the waking period the processor, radio, and sensors are powered down.
Flexible Power Scheduling [figure: tree of nodes 0–3 showing traffic, communication, and broadcast slots] 24 samples/hour * 0.767 sec/cycle = 18.4 sec/hour Radio-on slots per cycle: Node 1: 2 T, 3 A; Node 2: 3 T, 2 R, 3 A; Node 3: 2 T, 3 A 18 slots = 5 (node 1) + 8 (node 2) + 5 (node 3) 18 slots * 128 ms = 2.3 sec/cycle for the 3 nodes, i.e. 0.767 sec/cycle per node
Current Consumption mA-seconds per hour = (on time) * (on draw) + (off time) * (off draw) • TinyDB/Duty Cycling (Mica1): 803 mA-s/hr = 96 s/hr * 8 mA + 3504 s/hr * 0.01 mA • TinyDB/FPS (Mica1): 183 mA-s/hr = 18.4 s/hr * 8 mA + 3582 s/hr * 0.01 mA • 4.39X savings
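The slide's arithmetic can be reproduced directly; the snippet below only restates the figures already given (8 mA radio-on draw, 0.01 mA off draw, Mica1, 3600 s/hour) and is not new measurement data.

```python
# Reproduces the current-consumption estimate from this slide for a Mica1:
# mA-seconds per hour = on_time * on_draw + off_time * off_draw.
ON_DRAW_MA, OFF_DRAW_MA, HOUR_S = 8.0, 0.01, 3600.0

def ma_seconds_per_hour(on_time_s):
    return on_time_s * ON_DRAW_MA + (HOUR_S - on_time_s) * OFF_DRAW_MA

duty_cycling = ma_seconds_per_hour(96.0)    # 24 samples/hr * 4 s/sample
fps          = ma_seconds_per_hour(18.4)    # 24 samples/hr * 0.767 s/cycle
print(round(duty_cycling), round(fps), round(duty_cycling / fps, 2))
# -> 803 183 4.39
```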
Evaluation with GDI • Two implementations • GDI Low-Power Listening • GDI FPS • Experiments • Yield • Power Measurements • Power Consumption • Acknowledgement: Rob Szewczyk
GDI Low-Power Listening (MAC layer) Each node wakes up periodically to sample the channel for traffic and goes right back to sleep if there is nothing to receive.
12 Experiments (Mica2Dot) • 30-node mica2dot in-lab testbed • 3 sets: GDI/lpl100, GDI/lpl485, GDI/Twinkle • 4 sample rates: 30 seconds, 1 minute, 5 minutes, 20 minutes
Measured Power Consumption [plots: sample periods of 5 minutes and 20 minutes]