WSN Software Platforms - Concepts
Vinod Kulathumani
This lecture uses some slides from tutorials prepared by the authors of these platforms
Outline • Software concepts • TinyOS
Traditional Systems • Well-established layers of abstraction • Strict boundaries • Ample resources • Independent applications at endpoints communicate point-to-point through routers • Well attended
[Figure: traditional layered architecture — application, transport, network, data link, physical layers; OS services: threads, address space, files, drivers; routers between endpoints]
Sensor Network Systems • Highly constrained resources • processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect • Applications spread over many small nodes • self-organizing collectives • highly integrated with changing environment and network • diversity in design and usage • Concurrency intensive in bursts • streams of sensor data • Unclear where boundaries belong
Choice of Programming Primitives • Traditional approaches • monolithic event processing • full POSIX thread/socket regime • Alternative • provide a framework for concurrency and modularity • never poll, never block • interleaving flows, events
Split-Phase operation • Consider sampling from the ADC • Option 1: make the ADC read synchronous by blocking • CPU time is wasted • Option 2: achieve the same with threads • Too much memory • Option 3: use split-phase operation (sketched below) • Commands • Events
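A minimal sketch of option 3 in the TinyOS 1.x style used later in these slides (the ADC interface names are illustrative): the request and the completion are two separate, non-blocking halves.

  // Phase 1: request a sample; the command returns immediately,
  // so no CPU is wasted and no thread stack is held.
  call ADC.getData();

  // Phase 2: the driver signals completion later with the result.
  event result_t ADC.dataReady(uint16_t sample) {
    // handle the sample here
    return SUCCESS;
  }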
TinyOS • the most popular operating system for WSNs • developed by UC Berkeley • features a component-based architecture • software is written in modular pieces called components • Each component declares the interfaces that it provides • An interface declares a set of functions called commands • the interface provider implements them • and another set of functions called events • the interface user must be ready to handle them • Easy to link components together by “wiring” their interfaces to form larger components (sketch below) • similar to using Lego blocks
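A hypothetical sketch of an interface and of wiring in a configuration (the names Sample, SenseAppC, SenseM, and TimerC are made up for illustration; only the shape matters):

  interface Sample {
    command result_t start();         // implemented by the provider
    event result_t done(uint16_t v);  // handled by the user
  }

  configuration SenseAppC { }
  implementation {
    components Main, SenseM, TimerC;
    Main.StdControl -> SenseM.StdControl;            // wire Main (user) to SenseM (provider)
    SenseM.Timer -> TimerC.Timer[unique("Timer")];   // TinyOS 1.x-style timer wiring
  }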
TinyOS • Microthreaded OS (lightweight thread support) with efficient network interfaces • Two-level scheduling structure • Long-running tasks that can be interrupted by hardware events • Small, tightly integrated design that allows crossover of software components into hardware
TinyOS • provides a component library that includes network protocols, services, and sensor drivers • An application consists of • (1) components written by the application developer and • (2) the library components used by the components in (1)
Components • A component has: • Frame (internal state) • Tasks (computation) • Interface (events, commands) • Frame: • one per component • statically allocated • fixed size • Commands and events are function calls • Application: linking/gluing components together through their interfaces (events, commands) (sketch below)
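A sketch of how the three parts appear in a module (names are illustrative): the module-level variables form the frame, the task is the deferred computation, and the provides/uses lists are the interface.

  module SenseM {
    provides interface StdControl;   // commands this component implements
    uses interface ADC;              // events this component must handle
  }
  implementation {
    uint16_t lastSample;             // frame: statically allocated, fixed size

    task void process() {            // task: computation outside event context
      lastSample >>= 1;
    }

    command result_t StdControl.init()  { return SUCCESS; }
    command result_t StdControl.start() { return call ADC.getData(); }
    command result_t StdControl.stop()  { return SUCCESS; }

    event result_t ADC.dataReady(uint16_t data) {
      lastSample = data;
      post process();
      return SUCCESS;
    }
  }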
Commands/Events • Commands: • deposit request parameters into the frame • are non-blocking • need to return status => postpone time-consuming work by posting a task • can call lower-level commands • Events: • can call commands, signal events, post tasks • preempt tasks, not vice versa • interrupts trigger the lowest-level events • deposit the information into the frame
TinyOS Commands and Events
Calling a command and implementing it:
  { ... status = call CmdName(args); ... }
  command CmdName(args) { ... return status; }
Handling an event and signaling it:
  event EvtName(args) { ... return status; }
  { ... status = signal EvtName(args); ... }
Application = Graph of Components • Example: ad hoc, multi-hop routing of photo sensor readings • ~3450 B code, 226 B data • Graph of cooperating state machines on a shared stack
[Figure: component graph — application, route map/router, sensor application, Active Messages, serial/radio packet, UART/radio byte, ADC, clock, RFM bit; SW components layered above the HW boundary]
Event-Driven Sensor Access Pattern • clock event handler initiates data collection • sensor signals data-ready event • data event handler calls output command • device sleeps or handles other activity while waiting • conservative send/ack at component boundary
  command result_t StdControl.start() {
    return call Timer.start(TIMER_REPEAT, 200);
  }
  event result_t Timer.fired() {
    return call sensor.getData();
  }
  event result_t sensor.dataReady(uint16_t data) {
    display(data);
    return SUCCESS;
  }
[Figure: SENSE application — Timer and Photo components wired to the sensing module, output on LED]
Introducing preemption • Preemption is needed for concurrency and real-time operation • Important events cannot be missed • Can all code simply be preemptible? • That leads to unmanageable race conditions • TinyOS distinguishes async and sync code
Async and sync • Any TinyOS code can be preempted • Only async code can preempt • Sync code cannot preempt • All code is sync by default • Interrupt handlers are async (so that they can preempt running code) • Async methods may call only async methods • They cannot call sync methods
Why all code cannot be async • The receive event that processes an incoming message is “sync” • Why?
  event RadioReceive.receive(msg) {
    // process the message
    x = msg.a;
    if (x > 5) turn green LED on;
    else turn red LED on;
  }
• If this handler were reachable from async code (i.e., if async code could call sync code), an interrupt could preempt other code using the same state mid-update • But then how does the async receive interrupt signal the RadioReceive event, which is sync? • It posts a task (sketch below) • The task signals the receive event • Tasks execute sequentially • Tasks are “sync”, but posting a task is an “async” operation
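A sketch of that handoff with made-up names (RxInterrupt, Receive, rxBuffer): the async interrupt handler does no processing itself, it only posts a task, and the task, running as sync code, signals the receive event.

  TOS_Msg rxBuffer;                  // frame: stashes the incoming message

  task void deliverMsg() {           // sync context: safe place to run handlers
    signal Receive.receive(&rxBuffer);
  }

  async event void RxInterrupt.fired() {
    // copy the incoming message into rxBuffer, then defer the rest
    post deliverMsg();               // posting a task is legal from async code
  }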
Tasks • provide concurrency internal to a component • longer-running operations • are preempted by async events • able to perform operations beyond event context • may call commands; may signal events • Only one instance of a task can be outstanding at any time • not preempted by other tasks
Posting and declaring a task:
  { ... post TskName(); ... }
  task void TskName() { ... }
Typical Application Use of Tasks • event-driven data acquisition • schedule a task to do the computational portion • keep tasks short as well
  event result_t sensor.dataReady(uint16_t data) {
    putdata(data);
    post processData();
    return SUCCESS;
  }
  task void processData() {
    int16_t i, sum = 0;
    for (i = 0; i < maxdata; i++)
      sum += (rdata[i] >> 7);
    display(sum >> shiftdata);
  }
Why tasks need to be short • Otherwise processing of important events can be delayed • Example: reception of a message • Recall that when a message is received, a task is posted to signal the receive event • If a long task is already running, the message gets processed late • Instead, long tasks should be broken into smaller tasks that repost themselves to continue the work (see the sketch after this list)
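A sketch of the repost pattern, with hypothetical names (buf, total, CHUNK, consume): each execution handles one small slice and reposts itself, so other tasks, such as the radio's, can run in between.

  uint16_t next = 0;                  // frame: progress through the buffer

  task void processChunk() {
    uint16_t i, limit = next + CHUNK;
    if (limit > total) limit = total;
    for (i = next; i < limit; i++)
      consume(buf[i]);                // a small slice of the work
    next = limit;
    if (next < total)
      post processChunk();            // repost to finish the rest later
  }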
Keep tasks short - example • Consider two cases • In case 1, a sensor-processing component posts tasks that run for 5 ms • In case 2, the processing component posts tasks that run for 500 µs and repost themselves to continue the work • If the radio stack posts a task to signal a message-arrival event, what is the maximum latency the radio task can incur in the task queue in case 1 and in case 2?
Task Scheduling • Currently a simple FIFO scheduler • Bounded number of pending tasks (sketch below) • When idle, shuts down the node except for the clock • Uses a non-blocking task queue data structure • Simple event-driven structure + control over the complete application/system graph • instead of complex task priorities
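Because the number of pending tasks is bounded, a post can fail when the queue is full. A minimal sketch of defensive posting, assuming the TinyOS 1.x convention that the post expression reports whether the task was enqueued (workPending is a hypothetical flag rechecked from a later timer event):

  if (!post processData()) {
    workPending = TRUE;   // queue full: remember to retry, e.g. from Timer.fired()
  }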
Handling Concurrency: Async or Sync Code • Async methods call only async methods (interrupts are async) • Potential race conditions: • any update to shared state from async code • any update to shared state from sync code that is also updated from async code • Compiler rule: if a variable x is accessed by async code, then any access of x outside of an atomic statement is a compile-time error • Race-Free Invariant: any update to shared state is either not a potential race condition (sync code only) or occurs within an atomic section
Atomicity Support in nesC • Split-phase operations require care to deal with pending operations • Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (and is preempted by an event that then uses that state) • nesC supports atomic blocks (sketch below) • implemented by turning off interrupts • for efficiency, no calls are allowed in the block • access to a shared variable outside an atomic block is not allowed
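A minimal sketch of an atomic section guarding a flag shared between async (interrupt) and sync (task) code; the names Alarm, busy, and doWork are illustrative, and the post is kept outside the atomic block in line with the “no calls inside the block” rule above.

  bool busy;                           // shared by async and sync code

  async event void Alarm.fired() {
    bool start = FALSE;
    atomic {                           // interrupts disabled for this short section
      if (!busy) { busy = TRUE; start = TRUE; }
    }
    if (start)
      post doWork();                   // keep posts/calls outside the atomic block
  }

  task void doWork() {
    // ... perform the deferred work ...
    atomic busy = FALSE;               // single-statement atomic form
  }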
Benefits of using TinyOS • Separation of concerns • TinyOS provides a proper networking stack for wireless communication • abstracts away the underlying problems and complexity of message transfer from the application developer • E.g., MAC layer
Benefits of TinyOS • Modularity • facilitates reuse and reconfigurability • software is written in small functional modules • several middleware services are available as well-documented components • Over 500 research groups and companies using TinyOS • numerous groups actively contributing code to the public domain
Benefits of TinyOS • Concurrency control • TinyOS provides a scheduler that achieves efficient concurrency control • An interrupt-driven execution model is needed to achieve a quick response time to events and capture the data • For example, a message transmission may take up to 100 ms; without an interrupt-driven approach the node would miss sensing and processing of interesting data during this period • The scheduler takes care of the intricacies of interrupt-driven execution and provides concurrency in a safe manner by scheduling the execution in small threads
TinyOS Limitations • Static allocation allows for compile-time analysis, but can make programming harder • No support for heterogeneity • Support for other platforms (e.g. Stargate) • Support for high-data-rate apps (e.g. acoustic beamforming) • Interoperability with other software frameworks and languages • Limited visibility • Debugging • Intra-node fault tolerance • Robustness solved in the details of the implementation • nesC offers only some types of checking