Lecture 2 Addendum: Software Platforms
Anish Arora
CIS788.11J Introduction to Wireless Sensor Networks
Lecture uses slides from tutorials prepared by authors of these platforms
Outline • Platforms (contd.) • SOS slides from UCLA • Virtual machines (Maté) slides from UCB • Contiki slides from Uppsala
References • SOS MobiSys paper • SOS webpage • Maté: A Virtual Machine for Sensor Networks, ASPLOS • Maté webpage • Contiki EmNets paper • Contiki webpage
SOS: Motivation and Key Feature • Post-deployment software updates are necessary to • customize the system to the environment • upgrade features • remove bugs • re-task system • Remote reprogramming is desirable • Approach: Remotely insert binary modules into running kernel • software reconfiguration without interrupting system operation • no stop and re-boot unlike differential patching • Performance should be superior to virtual machines
Architecture Overview
Static Kernel • Provides hardware abstraction & common services • Maintains data structures to enable module loading • Costly to modify after deployment
Dynamic Modules • Drivers, protocols, and applications • Inexpensive to modify after deployment • Position independent
[Diagram: dynamically loadable modules (light sensor, tree routing, application) above the static kernel: function pointer control blocks, kernel services (dynamic memory, scheduler), device drivers (serial framer, comm. stack, sensor manager, timer), and the hardware abstraction layer (clock, UART, ADC, SPI, I2C)]
SOS Kernel • Hardware Abstraction Layer (HAL) • Clock, UART, ADC, SPI, etc. • Low layer device drivers interface with HAL • Timer, serial framer, communications stack, etc. • Kernel services • Dynamic memory management • Scheduling • Function control blocks
Kernel Services: Memory Management • Fixed-partition dynamic memory allocation • Constant allocation time • Low overhead • Memory management features • Guard bytes for run-time memory overflow checks • Ownership tracking • Garbage collection on completion • pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
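As an illustration of the allocation pattern, the sketch below extends the ker_malloc call shown above with an error check and an explicit release; ker_free and the error-handling details are assumptions, not taken from the slides.

  /* Minimal sketch: allocate a Surge packet owned by SURGE_MOD_PID,
     check the fixed-partition pool, and release the buffer explicitly.
     ker_free and the -ENOMEM handling are assumptions beyond the slide. */
  uint8_t *pkt = (uint8_t *)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
  if (pkt == NULL) {
      return -ENOMEM;            /* allocation failed: pool exhausted */
  }
  /* ... fill in the header and SurgeMsg payload ... */
  ker_free(pkt);                 /* explicit release if ownership is not handed off */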
Kernel Services: Scheduling • SOS implements non-preemptive priority scheduling via priority queues • Event served when there is no higher priority event • Low priority queue for scheduling most events • High priority queue for time critical events, e.g., h/w interrupts & sensitive timers • Prevents execution in interrupt contexts • post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET, hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
Modules • Each module is uniquely identified by its ID or pid • Has private state • Represented by a message handler & has prototype: int8_t handler(void *private_state, Message *msg) • Return value follows errno • SOS_OK for success. -EINVAL, -ENOMEM, etc. for failure
Kernel Services: Module Linking • Orthogonal to module distribution protocol • Kernel stores the new module in a free block of program memory and critical information about the module in the module table • Kernel calls the module's initialization routine • Publish functions for other parts of the system to use:
char tmp_string[] = {'C', 'v', 'v', 0};
ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string, (fn_ptr_t)tr_get_header_size);
• Subscribe to functions supplied by other modules:
char tmp_string[] = {'C', 'v', 'v', 0};
s->get_hdr_size = (func_u8_t*)ker_get_handle(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);
• Set initial timers and schedule events
Module-to-Kernel Communication • Kernel provides system services and access to hardware:
ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
ker_led(LED_YELLOW_OFF);
• Kernel jump table redirects system calls to handlers • lets the kernel be upgraded independently of the modules • Interrupts & messages from the kernel are dispatched through a high priority message buffer • low latency • concurrency-safe operation
[Diagram: Module A reaches the SOS kernel through system calls, system messages, and the system jump table; hardware interrupts arrive via the HW-specific API and are delivered through the high priority message buffer]
Inter-Module Communication
Inter-Module Message Passing • Asynchronous communication • Messages dispatched by a two-level priority scheduler • Suited for services with long latency • Type-safe binding through publish / subscribe interface
Inter-Module Function Calls • Synchronous communication • Kernel stores pointers to functions registered by modules • Blocking calls with low latency • Type-safe runtime function binding
[Diagrams: message passing via post and the message buffer; indirect function calls via the module function pointer table]
Synchronous Communication • A module can register a function for low-latency blocking calls (1) • Modules that need such a function subscribe to it by obtaining a function pointer pointer (i.e., **func) (2) • When the service is needed, the subscriber dereferences the function pointer pointer (3), as sketched below
[Diagram: Modules A and B linked through the module function pointer table, steps 1–3]
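A sketch of the double indirection described above, reusing the get_hdr_size handle obtained on the module-linking slide; treating func_u8_t as a function returning uint8_t with no arguments, and the null check, are assumptions.

  /* Sketch: synchronous call through a subscribed function pointer pointer.
     s->get_hdr_size was obtained via ker_get_handle() on the linking slide. */
  uint8_t hdr_size = 0;
  if (s->get_hdr_size != NULL) {
      hdr_size = (*(s->get_hdr_size))();   /* dereference **func, then call */
  }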
Asynchronous Communication • A module is active while it is handling a message (2)(4) • Message handling runs to completion and can only be interrupted by hardware interrupts • A module can send a message to another module (3) or to the network (5) • Messages can arrive from both the network (1) and the local node (3)
[Diagram: messages from the network and local modules enter the message queue; outbound messages go to the send queue, steps 1–5]
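For the local path (3), the post_long call from the scheduling slide is the sending side; a short sketch of its use inside a module handler follows. The payload preparation is elided, and reading SOS_MSG_DYM_MANAGED as handing the buffer over to the kernel is an assumption.

  /* Sketch: asynchronous local message from the Surge module to tree routing.
     post_long arguments follow the scheduling slide:
     (destination pid, source pid, message type, length, payload, flags).
     Assumes 'packet' was allocated earlier with ker_malloc and filled in. */
  post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
            hdr_size + sizeof(SurgeMsg), (void *)packet, SOS_MSG_DYM_MANAGED);
  /* with SOS_MSG_DYM_MANAGED the kernel is assumed to take over the buffer,
     so the sender does not free it */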
Module Safety • Problem: modules can be remotely added, removed, & modified on deployed nodes • Accessing a module • If the module doesn't exist, the kernel catches messages sent to it & handles the dynamically allocated memory • If the module exists but can't handle the message, the module's default handler gets the message & the kernel handles the dynamically allocated memory • Subscribing to a module's function • Publishing a function includes a type description that is stored in a function control block (FCB) table • Subscription attempts include type checks against the corresponding FCB • Type changes / removal of a published function redirect its subscribers to a system stub handler specific to that type • Updates to functions w/ the same type are assumed to have the same semantics
Module Library • Some applications are created by combining already written and tested modules • SOS kernel facilitates loosely coupled modules • Passing of memory ownership • Efficient function and messaging interfaces
[Diagram: Surge application with debugging, composed of the Surge, Tree Routing, Photo Sensor, and Memory Debug modules]
Module Design • Uses standard C • Programs created by “wiring” modules together

#include <module.h>

typedef struct {
  uint8_t pid;
  uint8_t led_on;
} app_state;

DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg) {
  app_state *s = (app_state*)state;
  switch (msg->type) {
  case MSG_INIT: {
    s->pid = msg->did;
    s->led_on = 0;
    ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
    break;
  }
  case MSG_FINAL: {
    ker_timer_stop(s->pid, 0);
    break;
  }
  case MSG_TIMER_TIMEOUT: {
    if (s->led_on == 1) {
      ker_led(LED_YELLOW_ON);
    } else {
      ker_led(LED_YELLOW_OFF);
    }
    s->led_on++;
    if (s->led_on > 1) s->led_on = 0;
    break;
  }
  default:
    return -EINVAL;
  }
  return SOS_OK;
}
Sensor Manager • Enables sharing of sensor data between multiple modules • Presents a uniform data access API to diverse sensors • Underlying device-specific drivers register with the sensor manager • Device-specific sensor drivers control • calibration • data interpolation • Sensor drivers are loadable, which enables • post-deployment configuration of sensors • hot-swapping of sensors on a running node (a client-side sketch follows below)
[Diagram: client modules use getData (polled or periodic access) and receive dataReady signals from the sensor manager, which sits above device-specific drivers such as a MagSensor on the ADC/I2C]
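The slides do not show the client-side calls, so the sketch below only illustrates the polled-access pattern they describe. The names ker_sensor_get_data, MSG_DATA_READY, and PHOTO_SENSOR_ID are assumptions, not confirmed by the slides.

  /* Sketch (assumed API): polled sensor access inside a module handler. */
  int8_t sense_module(void *state, Message *msg) {
    switch (msg->type) {
    case MSG_TIMER_TIMEOUT:
      ker_sensor_get_data(PHOTO_SENSOR_ID);       /* request a reading */
      return SOS_OK;
    case MSG_DATA_READY: {
      uint16_t reading = *(uint16_t *)msg->data;  /* calibrated value delivered by the driver */
      /* ... use the reading ... */
      return SOS_OK;
    }
    default:
      return -EINVAL;
    }
  }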
Application Level Performance • Comparison of application performance in SOS, TinyOS, and the Maté VM
[Figures: Surge forwarding delay; Surge tree formation latency; Surge packet delivery ratio; memory footprint for the base operating system with the ability to distribute and update node programs; CPU active time for the Surge application]
Reconfiguration Performance • Energy trade-offs • SOS has a slightly higher base operating cost • TinyOS has a significantly higher update cost • SOS is more energy efficient when the system is updated one or more times a week
[Figures: energy cost of a light sensor driver update; module size and energy profile for installing Surge under SOS; energy cost of a Surge application update]
Platform Support
Supported microcontrollers • Atmel ATmega128 • 4 KB RAM • 128 KB flash • Oki ARM • 32 KB RAM • 256 KB flash
Supported radio stacks • Chipcon CC1000 • B-MAC • Chipcon CC2420 • IEEE 802.15.4 MAC (NDA required)
Simulation Support • Source code level network simulation • Pthreads simulate hardware concurrency • UDP simulates a perfect radio channel • Supports user-defined topology & heterogeneous software configuration • Useful for verifying functional correctness • Instruction level simulation with Avrora • Instruction cycle accurate simulation • Simple perfect radio channel • Useful for verifying timing information • See http://compilers.cs.ucla.edu/avrora/ • EmStar integration under development
Maté: A Virtual Machine for Sensor Networks
Why a VM? • Large number (100s to 1000s) of nodes in a coverage area • Some nodes will fail during operation • Function changes during the mission
Related Work • PicoJava assumes Java bytecode execution hardware • K Virtual Machine requires 160 – 512 KB of memory • XML is too complex and needs too much RAM • Scylla: VM for mobile embedded systems
Maté features • Small (16 KB instruction memory, 1 KB RAM) • Concise (limited memory & bandwidth) • Resilient (memory protection) • Efficient (bandwidth) • Tailorable (user-defined instructions)
Maté in a Nutshell • Stack architecture • Three concurrent execution contexts • Execution triggered by predefined events • Tiny code capsules; self-propagate into the network • Built-in communication and sensing instructions
When is Maté Preferable? • For a small number of executions • GDI example: the bytecode version is preferable for a program running less than 5 days • In energy-constrained domains • Use Maté capsules as a general RPC engine
Maté Architecture • Stack-based architecture • Single shared variable • gets/sets • Three events: • Clock timer • Message reception • Message send • Hides asynchrony • Simplifies programming • Less prone to bugs
Instruction Set • One byte per instruction • Three classes: basic, s-type, x-type • basic: arithmetic, halting, LED operation • s-type: messaging system • x-type: pushc, blez • 8 instructions reserved for users to define • Instruction polymorphism • e.g. add(data, message, sensing)
Code Example (1) • Display a counter on the LEDs (see the listing below)
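The code for this slide is not reproduced in the text; below is a sketch of the counter capsule in the Maté instruction set, reconstructed from the instructions named on these slides (gets/sets, pushc, add, LED operations) and the published Maté counter example. Treat the exact listing as an approximation, not the slide's original.

  gets        # push the shared counter variable onto the operand stack
  pushc 1     # push the constant 1
  add         # pop two values, push their sum (counter + 1)
  copy        # duplicate the new counter value
  sets        # pop one copy back into the shared variable
  pushc 7     # push the constant 7 (mask for the 3 LEDs)
  and         # keep the low 3 bits of the counter
  putled      # pop and display the bit pattern on the LEDs
  halt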
Code Capsules • One capsule = 24 instructions • Fits into a single TOS packet • Atomic reception • Each capsule carries type and version information • Type: send, receive, timer, subroutine
Viral Code • Capsule transmission: forw • Forwarding another installed capsule: forwo (used within the clock capsule) • Maté checks the version number on reception of a capsule: if it is newer, it is installed • Versioning: 32-bit counter • Disseminates new code over the network
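The install decision reduces to a monotonic version comparison. A minimal C sketch of that logic follows, using a hypothetical capsule layout; the real Maté capsule format is not shown in these slides.

  /* Sketch of the viral-code install rule, with a hypothetical capsule struct. */
  #include <stdint.h>

  struct capsule {
      uint8_t  type;       /* send, receive, timer, subroutine */
      uint32_t version;    /* 32-bit monotonically increasing counter */
      uint8_t  code[24];   /* up to 24 one-byte instructions (fits one TOS packet) */
  };

  /* Install the received capsule only if it is newer than the installed
     capsule of the same type; the newer capsule then self-forwards. */
  int should_install(const struct capsule *installed, const struct capsule *received)
  {
      return received->type == installed->type &&
             received->version > installed->version;
  }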
Component Breakdown • Maté runs on Mica with 7286 bytes of code and 603 bytes of RAM
Network Infection Rate • 42 node network in 3 by 14 grid • Radio transmission: 3 hop network • Cell size: 15 to 30 motes • Every mote runs its clock capsule every 20 seconds • Self-forwarding clock capsule
Bytecodes vs. Native Code • Maté executes ~10,000 instructions per second • Overhead: every instruction is executed as a separate TOS task
Installation Costs • Bytecodes have computational overhead • But this can be offset, to some extent, by the smaller packets needed to upload bytecode
Customizing Maté • Maté is a general architecture; users can build a customized VM • Users can select bytecodes and execution events • Issues: • Flexibility vs. efficiency: customizing increases efficiency at the cost of flexibility when requirements change • Java's solution: a general computational VM + class libraries • Maté's approach: a more customizable solution; let the user decide
How to … • Select a language -> defines VM bytecodes • Select execution events -> execution context, code image • Select primitives -> beyond language functionality
Constructing a Maté VM • Constructing a VM generates a set of files, which are used to build the TinyOS application and to configure the scripter program
Compiling and Running a Program • Write programs in the scripter • The scripter compiles them to VM-specific binary code • Send the binary code over the network to a VM
Bombilla Architecture • Once context: performs operations that only need a single execution • 16-word heap shared among the contexts; setvar, getvar • Buffer holds up to ten values; bhead, byank, bsorta
Bombilla Instruction Set • basic: arithmetic, halt, sensing • m-class: access message header • v-class: 16 word heap access • j-class: two jump instructions • x-class: pushc
Enhanced Features of Bombilla • Capsule injector: programming environment • Synchronization: 16-word shared heap; locking scheme • Provides a synchronization model: handlers, invocations, resources, scheduling points, sequences • Resource management: prevents deadlock • Random and selective capsule forwarding • Error state
Discussion • Compared to the traditional VM concept, is Maté platform independent? Can it run on heterogeneous hardware? • Security issues: how can we trust a received capsule? Is there a way to prevent a version-number race with an adversary? • In viral programming, is there a way to forward capsules other than flooding? After a certain number of nodes are infected by a new capsule, can we forward based on need? • Bombilla has some sophisticated OS features. What is the size of the program? Does a sensor node need all those features?
Contiki • Dynamic loading of programs (vs. static) • Multi-threaded, concurrency-managed execution (in addition to event-driven) • Available on MSP430, AVR, HC12, Z80, 6502, x86, ... • Simulation environment available for BSD/Linux/Windows
Key ideas • Dynamic loading of programs • Selective reprogramming • Static vs dynamic linking • Concurrency management mechanisms • Events and threads • Trade-offs: preemption, size
Contiki size (bytes)

Module                    Code (AVR)   Code (MSP430)   RAM
Kernel                    1044         810             10 + e + p
Program loader            -            658             8
Multi-threading library   678          582             8 + s
Timer library             90           60              0
Memory manager            226          170             0
Event log replicator      1934         1656            200
µIP TCP/IP stack          5218         4146            18 + b
Loadable programs • One-way dependencies • Core resident in memory • Language run-time, communication • Programs “know” the core • Statically linked against core • Individual programs can be loaded/unloaded
Loadable programs (contd.) • Programs can be loaded from anywhere • Radio (multi-hop, single-hop), EEPROM, etc. • During software development, usually change only one module
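As an illustration of what a loadable Contiki program looks like, here is a minimal process in the style of the public Contiki API; the PROCESS macros and etimer calls shown are from later public Contiki releases and may differ from the version described in these slides.

  /* Minimal sketch of a loadable Contiki program (macro names as in later
     public Contiki releases; details may differ from the version here). */
  #include "contiki.h"
  #include <stdio.h>

  PROCESS(blink_process, "Blink example");
  AUTOSTART_PROCESSES(&blink_process);

  PROCESS_THREAD(blink_process, ev, data)
  {
    static struct etimer timer;        /* static: the process has no private stack */
    PROCESS_BEGIN();
    while (1) {
      etimer_set(&timer, CLOCK_SECOND);
      PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
      printf("tick\n");                /* stand-in for toggling an LED */
    }
    PROCESS_END();
  }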
How well does it work? • Works well • Program typically much smaller than the entire system image (1-10%) • Much quicker to transfer over the radio • Reprogramming takes seconds • Static linking can be a problem • Small differences in the core mean a module cannot be run • We are implementing a dynamic linker
Revisiting Multi-threaded Computation • Threads blocked, waiting for events • Kernel unblocks threads when event occurs • Thread runs until next blocking statement • Each thread requires its own stack • Larger memory usage
[Diagram: several threads, each with its own stack, running on top of the kernel]
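Contiki offers this model as an optional library on top of the event-driven kernel. A sketch using its mt interface follows; the function names come from public Contiki sources, and the exact signatures should be treated as assumptions.

  /* Sketch of the optional multi-threading library (mt.h in public Contiki;
     exact signatures are assumptions). Each thread gets its own stack. */
  #include "contiki.h"
  #include "sys/mt.h"

  static struct mt_thread worker;      /* thread control block incl. private stack */

  static void worker_main(void *data)
  {
    while (1) {
      /* ... long-running computation ... */
      mt_yield();                      /* block, return control to the event kernel */
    }
  }

  void start_worker(void)
  {
    mt_init();                         /* initialize stack-switching support */
    mt_start(&worker, worker_main, NULL);
    mt_exec(&worker);                  /* run the thread until it yields or blocks */
  }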