Building up to Macroprogramming: An Intermediate Language for Sensor Networks Ryan Newton, Arvind (MIT) and Matt Welsh (Harvard) Presented by: Gordon Wong
Outline • Background • Macroprogramming • Motivation and Goal • Distributed Token Machines (DTM) • Token Machine Language (TML) • Evaluation • Conclusion
Background • Sensor networks • Highly distributed systems • Severe resource constraints • NesC • An extension of the C language • Low level • Concurrency, computation, and communication are mixed together • Difficult for end users to program • Better programming tools are needed • A high-level programming model
Macroprogramming • Allows the application designer to write code in a high-level language • Captures the operation of the sensor network as a whole • Compiled into a form that executes on individual nodes • http://www.eecs.harvard.edu/~mdw/proj/mp/
Motivation • Previous work by Newton and Welsh • Regiment [2004] • "Region Streams: Functional Macroprogramming for Sensor Networks" • Based on functional reactive programming • A functional macroprogramming language for sensor networks • Represents node data as streams • Streams are grouped into regions for in-network aggregation or event detection
Motivation • Problem • The semantic gap between Regiment and NesC is large • This makes compilation difficult • Solution • An intermediate language is needed • [Diagram: Regiment → Intermediate Language → NesC, with the intermediate language bridging the semantic difference]
Goal • An intermediate language for sensor networks • Provides simple and versatile abstractions for communication, data dissemination, and remote execution • Constitutes a framework for network coordination that can be used to implement sophisticated algorithms • Requirements • Abstract away the details of concurrency and communication • Capture enough detail to enable compiler optimizations
Distributed Token Machine (DTM) • Distributed Token Machine • A token-based execution and communication model • Communication happens through tokens • A token is a typed message with a small payload • Each token is associated with a token handler
Distributed Token Machine (DTM) • Scheduler – processes incoming tokens • Token Object – the stored form of a token, including its private memory • Shared Memory – shared among all handlers on a node • Token Message – the form a token takes when traveling over the network
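A minimal sketch, in C, of how these four components might be laid out on a node. All struct names, field names, and sizes here are assumptions made for illustration, not the paper's actual definitions.

#include <stdint.h>

#define MAX_PAYLOAD 16   /* tokens carry only a small payload */
#define MAX_TOKENS  32   /* capacity of the per-node token store (assumed) */
#define QUEUE_LEN   32   /* capacity of the scheduler's message queue (assumed) */

/* Token message: a typed message (token name) plus a small payload.
   This is the form a token takes while traveling over the network. */
typedef struct {
    uint8_t token_name;                /* identifies the handler to run */
    uint8_t priority;
    uint8_t payload_len;
    uint8_t payload[MAX_PAYLOAD];
} token_msg_t;

/* Token object: the stored token, holding per-token private memory. */
typedef struct {
    uint8_t token_name;
    uint8_t present;                   /* zero until first use */
    uint8_t private_mem[MAX_PAYLOAD];  /* visible only to this token's handler */
} token_obj_t;

/* Per-node state: scheduler queue, token store, and shared memory. */
typedef struct {
    token_msg_t queue[QUEUE_LEN];      /* scheduler: pending token messages */
    uint8_t     queue_len;
    token_obj_t store[MAX_TOKENS];     /* token store: one object per token name */
    uint8_t     shared_mem[64];        /* memory shared among all handlers */
} node_state_t;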
DTM in Action • A token message arrives at the scheduler • The token name directs it to the corresponding handler • The handler finds the corresponding token object in the token store; if it is not present, its memory is allocated and initialized to zero • The handler consumes the message payload and executes atomically • It may read/write the token object's private memory or the node's shared memory • It may post new token messages
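The steps above can be pictured as a dispatch loop like the following. This is a simplified sketch reusing the hypothetical types from the previous block, not the actual TinyOS-based implementation; the handler table, direct indexing by token name, and LIFO pop are all simplifying assumptions.

#include <string.h>

/* Hypothetical handler signature: a handler consumes the payload and runs
   atomically, with access to its token object and the node's shared memory. */
typedef void (*token_handler_t)(node_state_t *node, token_obj_t *obj,
                                const uint8_t *payload, uint8_t payload_len);

extern token_handler_t handler_table[MAX_TOKENS];

/* Find the token object for a message; allocate and zero it on first use. */
static token_obj_t *lookup_or_create(node_state_t *node, uint8_t name)
{
    token_obj_t *obj = &node->store[name];    /* simplification: index by name */
    if (!obj->present) {
        memset(obj, 0, sizeof *obj);          /* initialized to zero */
        obj->token_name = name;
        obj->present = 1;
    }
    return obj;
}

/* One scheduler step: take the next pending message and run its handler
   atomically.  Messages posted by the handler simply join the queue. */
static void scheduler_step(node_state_t *node)
{
    if (node->queue_len == 0)
        return;
    token_msg_t msg = node->queue[--node->queue_len];   /* simplistic LIFO pop */
    token_obj_t *obj = lookup_or_create(node, msg.token_name);
    handler_table[msg.token_name](node, obj, msg.payload, msg.payload_len);
}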
Token Handler • Interfaces • schedule(Ti, priority, data...) • timed_schedule(...) • Insert a token message into the node's local scheduler • "Timed" – insert the token message into the node's local scheduler after a precisely specified time period • bcast(Ti, data...) • Broadcast a token message to the neighbors • No ACK • is_scheduled(Ti) • deschedule(Ti) • Query/remove token messages waiting in the scheduler • present(Ti) • evict(Ti) • Interface into the node's token store as a whole • Query/remove token objects in the local node's token store
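A header-style summary of how this interface might look to a handler author. The slides only give the call shapes, so the concrete parameter types, the placement of the delay argument in timed_schedule, and the return types are guesses.

#include <stdint.h>

typedef uint8_t token_t;   /* token name Ti; the concrete representation is assumed */

/* Local scheduling: insert a token message into this node's scheduler,
   either immediately or ("timed") after a given delay. */
void schedule(token_t ti, uint8_t priority, ... /* data */);
void timed_schedule(token_t ti, uint8_t priority, uint32_t delay, ... /* data */);

/* Communication: broadcast a token message to single-hop neighbors, no ACK. */
void bcast(token_t ti, ... /* data */);

/* Query/remove token messages still waiting in the local scheduler. */
uint8_t is_scheduled(token_t ti);
void    deschedule(token_t ti);

/* Query/remove token objects in the local node's token store. */
uint8_t present(token_t ti);
void    evict(token_t ti);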
Token Machine Language (TML) • A realization of the DTM • Fills in a set of basic operators and a concrete syntax for describing handlers • Language used • A subset of C extended with the DTM interface • No data pointers • Only fixed-length loops are allowed • A procedure call is the scheduling of a token
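A small sketch of what these restrictions look like in practice. The token names, handler name, and exact surface syntax are invented for illustration; the paper describes TML only as a subset of C plus the DTM interface.

/* Hypothetical token names; schedule() is the DTM call sketched above. */
enum { TOK_COLLECT, TOK_REPORT };
void schedule(unsigned char ti, unsigned char priority, ...);

int sample_buf[8];          /* node shared memory; TML forbids data pointers */

/* TML-style handler: runs atomically when the TOK_COLLECT token fires. */
void tok_collect_samples(void)
{
    int sum = 0;
    int i;
    for (i = 0; i < 8; i++) {          /* only fixed-length loops are allowed */
        sum = sum + sample_buf[i];
    }
    /* A procedure call is the scheduling of a token: instead of calling
       report(sum) directly, schedule the TOK_REPORT token with sum as payload. */
    schedule(TOK_REPORT, 1, sum);
}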
Goals of TML • Lightweight • Otherwise, compilation from a high-level language to TML would be complex • Efficiently mapped onto TinyOS and similar platforms • These have different, event-driven semantics • Otherwise, the compiled executable would not be practically usable • Versatile • Applicable to a wide range of systems • Because TML aims to be a common abstraction layer • Masks complexity • Because that is the fundamental reason for an intermediate language
TML Sample Code • [Annotated code listing on slide; the annotations highlight the shared memory of the node, the private memory of the token object, and a subroutine call / scheduling of a new token]
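The sample code itself only appears as an image on the slide. Below is a hedged reconstruction illustrating the three annotations; all identifiers are invented, and the use of C's static as a stand-in for the token object's private memory is an approximation, not actual TML syntax.

/* Hypothetical token names; schedule()/timed_schedule() are the DTM calls
   sketched earlier. */
enum { TOK_HEARTBEAT, TOK_BLINK };
void schedule(unsigned char ti, unsigned char priority, ...);
void timed_schedule(unsigned char ti, unsigned char priority, ...);

int node_count;                 /* shared memory of the node: visible to every
                                   handler running on this node */

void tok_heartbeat(void)
{
    static int my_fires;        /* stand-in for the token object's private
                                   memory: persists across firings of this
                                   token but is invisible to other handlers */
    my_fires = my_fires + 1;
    node_count = node_count + 1;

    /* Subroutine call / schedule new token: post TOK_BLINK to the local
       scheduler instead of calling a C function directly. */
    schedule(TOK_BLINK, 1, my_fires);

    /* Re-schedule this token after a delay (argument order is a guess). */
    timed_schedule(TOK_HEARTBEAT, 1, 1000);
}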
Token Machine Language (TML) • Subroutine calls with return values • In the DTM model it would be ideal to exclude them, to keep atomic actions small and fast • Another problem: the DTM model lacks a call stack • Solution • Build returning handler calls on top of core TML using a continuation-passing style (CPS) transformation • Calls become implicitly split-phase
Example • [Code listing on slide; the annotations mark the subroutine call and the subcall's continuation handler]
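Since the example only survives as its annotations, here is a hedged sketch of the idea: the call becomes split-phase, with the caller scheduling the subroutine's token along with a continuation token, and the "return" scheduling that continuation with the result as its payload. All token and handler names are invented.

enum { TOK_CALLER, TOK_READ_TEMP, TOK_CALLER_K, TOK_REPORT };
void schedule(unsigned char ti, unsigned char priority, ...);

void tok_caller(void)
{
    /* Subroutine call: pass the continuation token along with the request. */
    schedule(TOK_READ_TEMP, 1, TOK_CALLER_K);
}

void tok_read_temp(int continuation)
{
    int temp = 42;                       /* placeholder for a real sensor read */
    /* "Return": schedule the caller's continuation with the return value. */
    schedule(continuation, 1, temp);
}

void tok_caller_k(int temp)              /* continuation handler */
{
    /* The rest of the original caller resumes here with the return value. */
    schedule(TOK_REPORT, 1, temp);
}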
Token Machine Language (TML) • The implementation process is bidirectional • TML is compiled down to NesC code • TML is also built up by enriching it with higher-level features • For example, gradients
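The slides do not detail TML's gradient support, so the following is not its actual gradient API; it is only a rough sketch, built from the bcast() call above, of the kind of hop-count gradient such a feature provides. All names are invented.

enum { TOK_START_GRADIENT, TOK_GRADIENT };
void bcast(unsigned char ti, ...);

int hops = 255;                          /* shared memory: best hop count heard */

/* The root node starts the gradient by broadcasting hop count 0. */
void tok_start_gradient(void)
{
    hops = 0;
    bcast(TOK_GRADIENT, hops);
}

/* Every other node keeps the smallest hop count it hears and re-broadcasts
   hop + 1, so increasing hop counts spread outward from the root. */
void tok_gradient(int neighbor_hops)
{
    if (neighbor_hops + 1 < hops) {
        hops = neighbor_hops + 1;
        bcast(TOK_GRADIENT, hops);
    }
}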
Evaluation • Current implementation • A high-level simulator • A compiler targeting the NesC/TinyOS environment • Comparison with native TinyOS code • Code size is competitive • CPU and RAM usage are worse • Overhead from running the scheduler • Unnecessary buffer copying • Due to the lack of pointers
Conclusion • Important TML qualities • The atomic-action model of concurrency • Precludes deadlock • Makes reasoning about timing simple • Communication is bound to persistent storage (tokens) • Gives us a way to refer to past communications through the tokens they leave behind