Programming Vast Networks of Tiny Devices
David Culler
University of California, Berkeley / Intel Research Berkeley
http://webs.cs.berkeley.edu
Programmable network fabric
• Architectural approach
  • new code image pushed through the network as packets
  • assembled and verified in local flash
  • second watch-dog processor reprograms the main controller
• Viral code approach (Phil Levis)
  • each node runs a tiny virtual machine interpreter
  • captures the high-level behavior of the application domain as individual instructions
  • packets are "capsules": sequences of high-level instructions
  • capsules can forward capsules
• Rich challenges
  • security
  • energy trade-offs
  • denial of service (DOS)

Example capsule (see the interpreter sketch below):

    pushc 1   # Light is sensor 1
    sense     # Push light reading
    pushm     # Push message buffer
    clear     # Clear message buffer
    add       # Append value to buffer
    send      # Send message using AHR
    forw      # Forward capsule
    halt
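To make the capsule model concrete, here is a minimal sketch in C of a bytecode dispatch loop; this is illustrative, not Maté's actual implementation. Opcode names mirror the capsule above; the sensor/radio hooks, opcode values, and stack sizes are all assumptions, and bounds checks are omitted for brevity.

    #include <stdint.h>

    /* Illustrative opcodes mirroring the capsule above (values are made up). */
    enum { OP_PUSHC, OP_SENSE, OP_PUSHM, OP_CLEAR, OP_ADD, OP_SEND, OP_FORW, OP_HALT };

    #define STACK_DEPTH 16
    #define CAPSULE_LEN 24      /* capsules fit in one TinyOS AM packet */

    typedef struct {
        uint8_t code[CAPSULE_LEN];   /* capsule instructions */
        int16_t stack[STACK_DEPTH];  /* operand stack */
        uint8_t sp, pc;
    } context_t;

    /* Hypothetical hooks into the sensor/radio layer. */
    extern int16_t read_sensor(int16_t id);
    extern void    msg_clear(void), msg_append(int16_t v), msg_send(void);
    extern void    capsule_forward(const uint8_t *code);

    void run_capsule(context_t *c) {
        for (;;) {
            uint8_t op = c->code[c->pc++];
            switch (op) {
            case OP_PUSHC: c->stack[c->sp++] = c->code[c->pc++]; break; /* push constant */
            case OP_SENSE: c->stack[c->sp - 1] = read_sensor(c->stack[c->sp - 1]); break;
            case OP_PUSHM: /* message buffer is selected implicitly in this sketch */ break;
            case OP_CLEAR: msg_clear(); break;
            case OP_ADD:   msg_append(c->stack[--c->sp]); break;
            case OP_SEND:  msg_send(); break;
            case OP_FORW:  capsule_forward(c->code); break; /* viral propagation */
            case OP_HALT:  return;
            }
        }
    }

The high-level instructions are what keep capsules small: one "send" opcode stands in for an entire native messaging path.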
Maté: Tiny Virtual Machine

[Figure: Maté architecture – code capsules split into Subroutines (0–3) and Events (Clock, Send, Receive), sharing variables via gets/sets; each Maté context carries a PC, an operand stack, and a return stack]

• Communication-centric stack machine
  • 7286 bytes code, 603 bytes RAM
  • dynamically typed
• Four context types: send, receive, clock, subroutine (4 subroutines)
  • each capsule holds 24 instructions
  • fits in a single TinyOS AM packet
  • installation is atomic
• Self-propagating
  • version information
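A sketch of how atomic installation and self-propagation can fit together, assuming a simple monotonic version number per context type; this is my reading of the slide, not Maté's actual code, and the struct layout and radio hook are hypothetical.

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t type;      /* clock, send, receive, or subroutine context */
        uint8_t version;   /* monotonically increasing version number */
        uint8_t code[24];  /* up to 24 instructions, one AM packet */
    } capsule_t;

    static capsule_t installed[4];                    /* one slot per context type */

    extern void forward_capsule(const capsule_t *c);  /* hypothetical radio hook */

    void on_capsule_received(const capsule_t *incoming) {
        capsule_t *slot = &installed[incoming->type];
        if (incoming->version > slot->version) {
            memcpy(slot, incoming, sizeof *slot);  /* whole-capsule swap: atomic install */
            forward_capsule(incoming);             /* re-broadcast: code spreads virally */
        }
    }

Because a newer capsule is re-forwarded on install, a single injection point is enough for new code to sweep across the network.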
Case Study: GDI
• Great Duck Island application
• Simple sense-and-send loop
• Runs every 8 seconds – low duty cycle
• 19 Maté instructions vs. 8K of binary code
• Energy trade-off: if the GDI application runs for less than 6 days, Maté saves energy
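The break-even can be written schematically; the symbols below are illustrative, and the 6-day figure is the measured result from the slide, not derived here. Let $E_{\text{flash}}$ be the one-time energy to distribute and install the 8K binary image, $\Delta E$ the extra energy the interpreter spends per epoch relative to native code, and $T_{\text{epoch}} = 8\,\text{s}$. Interpretation wins while the accumulated overhead stays below the reprogramming cost:

    \frac{t}{T_{\text{epoch}}}\,\Delta E \;<\; E_{\text{flash}}
    \quad\Longleftrightarrow\quad
    t \;<\; \frac{E_{\text{flash}}\, T_{\text{epoch}}}{\Delta E}

For GDI, the measured crossover falls near $t \approx 6$ days: shorter deployments (or frequently changing code) favor Maté capsules, longer static ones favor a native image.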
Higher-level Programming?
• Ideally, we would specify the desired global behavior
  • compilers would translate this into local operations
• High-Performance Fortran (HPF) analog
  • program is a sequence of parallel operations on large matrices
  • each matrix is spread over many processors of a parallel machine
  • compiler translates from the global view to the local view: local operations + message passing
  • highly structured and regular
• We need a much richer suite of operations on unstructured aggregates over irregular, changing networks
Sensor Databases – a start
• Relational databases: rich queries described by declarative queries over tables of data
  • select, join, count, sum, ...
  • user dictates what should be computed
  • query optimizer determines how
  • assumes data presented in complete, tabular form
• First step: database operations over streams of data
  • incremental query processing
• Big step: process the query in the sensor net
  • query processing == content-based routing?
  • energy savings, bandwidth, reliability

[Figure: the application issues queries and triggers to TinyDB, which runs over the sensor network and returns data; example query: SELECT AVG(light) GROUP BY roomNo]
Motivation: Sensor Nets and In-Network Query Processing
• Many sensor network applications are data oriented
• Queries are a natural and efficient data processing mechanism
  • easy (unlike embedded C code)
  • enable optimizations through abstraction
• Aggregates are the common case
  • e.g., which rooms are in use?
• In-network processing is a must
  • sensor networks are power and bandwidth constrained
  • communication dominates power cost
  • not subject to Moore's law!
SQL Primer
• SQL is an established declarative language; we are not wedded to it
• Some extensions are clearly necessary, e.g. for sample rates
• We adopt a basic subset:

    SELECT {agg_n(attr_n), attrs}
    FROM sensors
    WHERE {selPreds}
    GROUP BY {attrs}
    HAVING {havingPreds}
    EPOCH DURATION s

• The 'sensors' relation (table) has
  • one column for each reading type, or attribute
  • one row for each externalized value
  • a row may represent an aggregation of several individual readings

Example:

    SELECT AVG(light)
    FROM sensors
    WHERE sound < 100
    GROUP BY roomNo
    HAVING AVG(light) < 50
TinyDB Demo (Sam Madden)
Joe Hellerstein, Sam Madden, Wei Hong, Michael Franklin
Tiny Aggregation (TAG) Approach
• Push declarative queries into the network
• Impose a hierarchical routing tree onto the network
• Divide time into epochs
• Every epoch, sensors evaluate the query over local sensor data and data from children (see the per-epoch sketch below)
  • aggregate local and child data
  • each node transmits just once per epoch
  • pipelined approach increases throughput
• Depending on the aggregate function, various optimizations can be applied (e.g., hypothesis testing)
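A minimal sketch of one epoch at a single node, written against the three aggregate callbacks introduced on the next slide (f_init, f_merge, f_evaluate); the partial-state type is shown as the AVERAGE state for concreteness, and all names and platform hooks are illustrative, not TinyDB's actual API.

    #include <stdint.h>

    typedef struct { int32_t sum; int32_t count; } partial_t;  /* partial aggregate state */

    /* The TAG aggregate interface: see the following slide. */
    extern partial_t f_init(int16_t reading);
    extern partial_t f_merge(partial_t a, partial_t b);

    /* Hypothetical platform hooks. */
    extern int16_t  read_local_sensor(void);
    extern uint8_t  recv_child_partials(partial_t *buf, uint8_t max);
    extern void     send_to_parent(partial_t p);

    void on_epoch(void) {
        partial_t acc = f_init(read_local_sensor());   /* start from the local reading */

        partial_t child[8];
        uint8_t n = recv_child_partials(child, 8);     /* partials heard from children */
        for (uint8_t i = 0; i < n; i++)
            acc = f_merge(acc, child[i]);              /* order-insensitive merge */

        send_to_parent(acc);   /* exactly one transmission per node per epoch */
    }

The single upstream transmission per epoch is what makes power draw symmetric across the tree, as the Discussion slide notes.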
Aggregation Functions
• Standard SQL supports "the basic 5": MIN, MAX, SUM, AVERAGE, and COUNT
• We support any function conforming to:

    Agg_n = { f_merge, f_init, f_evaluate }
    f_merge { <a1>, <a2> }  ->  <a12>        (merge must be associative, commutative!)
    f_init { a0 }           ->  <a0>
    f_evaluate { <a1> }     ->  aggregate value

• Partial aggregate example: AVERAGE

    AVG_merge { <S1, C1>, <S2, C2> }  ->  <S1 + S2, C1 + C2>
    AVG_init { v }                    ->  <v, 1>
    AVG_evaluate { <S1, C1> }         ->  S1 / C1
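The AVERAGE triple translates directly into code; a short C sketch matching the epoch loop above (the types and function names are mine, not TinyDB's):

    #include <stdint.h>

    typedef struct { int32_t sum; int32_t count; } partial_t;

    /* AVG_init: a single reading becomes the pair <v, 1>. */
    partial_t f_init(int16_t v) { return (partial_t){ v, 1 }; }

    /* AVG_merge: component-wise addition; associative and commutative,
       so children can be merged in any order. */
    partial_t f_merge(partial_t a, partial_t b) {
        return (partial_t){ a.sum + b.sum, a.count + b.count };
    }

    /* AVG_evaluate: only the root ever needs the final value. */
    int32_t f_evaluate(partial_t p) { return p.sum / p.count; }

Carrying <sum, count> instead of a running average is the key design choice: averages of averages are wrong under unequal counts, but sums and counts merge exactly.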
Query Propagation
• TAG is propagation agnostic; any algorithm works that can:
  • deliver the query to all sensors
  • provide all sensors with one or more duplicate-free routes to some root
• Simple flooding approach
  • query introduced at a root; rebroadcast by all sensors until it reaches the leaves
  • sensors pick a parent and level when they hear the query
  • reselect parent after k silent epochs

[Figure: routing tree rooted at node 1 (P:0, L:1); nodes 2 and 3 pick parent 1 at level 2; node 4 picks parent 2 at level 3; node 6 picks parent 3 at level 3; node 5 picks parent 4 at level 4]
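A sketch of the flooding approach in C, mirroring the bullets above; the message layout, the value of k, and the radio hook are assumptions, not TinyDB's actual protocol code.

    #include <stdint.h>
    #include <stdbool.h>

    #define K_SILENT_EPOCHS 3   /* illustrative value of k */

    typedef struct { uint16_t query_id; uint8_t level; } query_msg_t;

    static uint16_t parent;
    static uint8_t  level;
    static uint8_t  silent_epochs;
    static bool     have_parent = false;

    extern void broadcast(const query_msg_t *m);   /* hypothetical radio hook */

    void on_query_heard(const query_msg_t *m, uint16_t sender) {
        if (!have_parent) {
            parent      = sender;        /* first sender heard becomes the parent */
            level       = m->level + 1;  /* one deeper than the parent */
            have_parent = true;
            query_msg_t out = *m;
            out.level = level;
            broadcast(&out);             /* flood onward toward the leaves */
        }
        if (sender == parent) silent_epochs = 0;   /* parent still alive */
    }

    void on_epoch_end(void) {
        if (have_parent && ++silent_epochs >= K_SILENT_EPOCHS)
            have_parent = false;   /* parent presumed lost; reselect on next query heard */
    }

Taking the first sender as parent yields duplicate-free routes cheaply; the k-silent-epoch timeout is what lets the tree heal around failed nodes.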
Illustration: Pipelined Aggregation

    SELECT COUNT(*) FROM sensors

[Figure, six animation frames: five nodes with node 1 as root, tree depth d = 4. In epoch 1 each node reports a partial count of 1. As partials flow up one level per epoch, the root's value grows 1, 3, 4, 5 over epochs 1–4, and from epoch 4 onward the root outputs the complete count of 5 every epoch.]
Discussion
• Result is a stream of values
  • ideal for monitoring scenarios
• One communication per node per epoch
  • symmetric power consumption, even at the root
• New value on every epoch
  • after d-1 epochs, aggregation is complete
• Given a single loss, the network recovers after at most d-1 epochs
• With time synchronization, nodes can sleep between epochs, except during a small communication window
• Note: values from different epochs are combined
  • can be fixed via a small cache of past values at each node (see the sketch below)
  • cache size is at most one reading per child × depth of tree
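One plausible shape for that per-child cache, in C; this is my sketch of the fix described above (epoch-tagged slots, one per child per pipeline stage), not TAG's implementation, and validity bookkeeping is omitted for brevity.

    #include <stdint.h>

    #define MAX_CHILDREN 8
    #define MAX_DEPTH    8   /* one slot per child per pipeline stage */

    typedef struct { int32_t sum; int32_t count; } partial_t;  /* e.g., AVG state */

    /* Indexed by child and by epoch modulo depth, so a child's report for
       epoch e is only ever merged with other values for epoch e. */
    static partial_t cache[MAX_CHILDREN][MAX_DEPTH];

    void on_child_report(uint8_t child, uint16_t epoch, partial_t p) {
        cache[child][epoch % MAX_DEPTH] = p;   /* stash under the report's epoch */
    }

    partial_t merge_epoch(uint16_t epoch, partial_t local) {
        partial_t acc = local;
        for (uint8_t c = 0; c < MAX_CHILDREN; c++) {
            partial_t p = cache[c][epoch % MAX_DEPTH];
            acc.sum += p.sum;          /* f_merge for AVG: component-wise add */
            acc.count += p.count;
        }
        return acc;
    }

With depth-many slots per child, a report delayed by up to d-1 epochs still lands in the right epoch's aggregation instead of being mixed into a newer one.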
Testbench & MatLab Integration
• Positioned mica array for controlled studies
  • in situ programming
  • localization (RF, TOF)
  • distributed algorithms
  • distributed control
  • auto calibration
• Out-of-band "squid" instrumentation network
• Integrated with MatLab
  • packets -> MatLab events
  • data processing
  • filtering & control
Acoustic Time-of-Flight Ranging
• Sounder / tone-detector pair
  • emit sounder pulse and RF message
  • receiver uses the message to arm the tone detector
• Key challenges
  • noisy environment
  • calibration
• On-mote noise filter
• Calibration is fundamental to the "many cheap" regime
  • variations in tone frequency and amplitude, detector sensitivity
  • collect many pairs
  • 4-parameter model for each pair: T(A->B, x) = O_A + O_B + (L_A + L_B)·x
  • O_A, L_A sent in the message; O_B, L_B held locally
• Results: 76% error with no calibration; 10.1% error with joint calibration
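Joint calibration can be read as a least-squares fit over the whole network; the formulation below is a sketch of that reading, not necessarily the exact procedure used. Each measured flight time between sender $i$ and receiver $j$ at known distance $x$ gives one equation in the per-node offsets $O_i$ and slopes $L_i$:

    T_{ij}(x) = O_i + O_j + (L_i + L_j)\,x + \varepsilon_{ij}

Stacking many (pair, distance) measurements yields an overdetermined linear system, solved jointly for all nodes:

    \min_{\{O_i,\,L_i\}} \;\sum_{(i,j,x)} \Bigl( T^{\text{meas}}_{ij}(x) - O_i - O_j - (L_i + L_j)\,x \Bigr)^2

Each node then carries only its own $O$ and $L$, exchanging them in the ranging message so either endpoint can invert the model for distance; fitting jointly rather than per-pair is what drives the error from 76% down to 10.1%.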