Contiki: A Lightweight and Flexible Operating System for Tiny Networked Sensors • Presented by: Jeremy Schiff
Objectives • Lightweight • Event-Driven Model • Multi-Threading Support as a Library • Dynamic Loading and Replacement of Individual Services
Contiki Motivations • No memory protection between apps • Kernel very minimal: only CPU multiplexing and program loading • All other abstractions provided by libraries, on top of which custom applications are built
Differentiation • TinyOS: statically links entire applications • MagnetOS: virtual machine, byte code • Mantis: purely multi-threaded • Contiki: dynamic linking of binaries, event/thread hybrid
Why Loadable Applications? • Smaller file to upload • Less energy • Less dissemination time
Why No Threads? • Thread stacks must be allocated at creation time • Stack memory gets over-provisioned, since there is no virtual memory system and no memory protection mechanisms to grow stacks on demand • Over-provisioned stack memory is wasted: it is inaccessible to other threads • Locking needed for mutual exclusion of shared state
Events • Locking rarely needed: only one handler runs at a time, on a single stack • But some things are hard to express as a state machine • Problem: cryptography takes 2 seconds, and everything else is blocked meanwhile • Solution: preemptive threads (see Multi-Threading below)
System Overview • Service: something that implements functionality for more than one application • Apps have direct access to hardware • Single address space • Inter-process communication via event posting, always through the kernel
Core vs. Loaded Program • Partitioned at compile time • Core: a single binary, ideally never modified • Program: easily upgraded
Kernel • Execution via kernel events or polling • No preemption of scheduled events • Synchronous vs. asynchronous events • Synch: like a function call • Asynch: like posting a new event to the queue • Asynch reduces stack space, but complicates debugging (see the sketch below)
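Later Contiki releases expose exactly this distinction in the process API: process_post() queues an event for later dispatch, while process_post_synch() invokes the target's handler immediately, like a function call. A minimal sketch; the producer/consumer names and the sample payload are illustrative, not from the paper:

```c
#include "contiki.h"

PROCESS(consumer_process, "consumer");
AUTOSTART_PROCESSES(&consumer_process);

static process_event_t sample_event;

PROCESS_THREAD(consumer_process, ev, data)
{
  PROCESS_BEGIN();
  sample_event = process_alloc_event();
  while(1) {
    PROCESS_WAIT_EVENT();          /* block until the kernel dispatches an event */
    if(ev == sample_event) {
      /* handle the sample pointed to by 'data' */
    }
  }
  PROCESS_END();
}

/* Two ways to deliver the same event; a real producer would pick one. */
void deliver_sample(void *sample)
{
  /* Asynchronous: enqueued now, dispatched later by the kernel's
     event loop; the caller returns immediately. Saves stack space. */
  process_post(&consumer_process, sample_event, sample);

  /* Synchronous: like a function call; the consumer's handler runs
     to completion before this returns. Easier to debug. */
  process_post_synch(&consumer_process, sample_event, sample);
}
```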
Two-Level Scheduling • Events: can't preempt one another • Interrupts: can preempt events; can use an "underlying real-time executive" that provides real-time guarantees • No software (non-hardware) interrupts: interrupt handlers can't post events; they set a polling flag instead, preventing race conditions on the event queue (see the sketch below)
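A sketch of the polling-flag pattern, using the process_poll() call from later Contiki releases (the ISR name is illustrative): an interrupt may fire in the middle of an event handler, so it only sets the target process's poll flag; the kernel then calls the process with PROCESS_EVENT_POLL, ahead of queued events, once it is safe.

```c
#include "contiki.h"

PROCESS(radio_process, "radio driver");

/* Interrupts can preempt events, so they must not touch the event
   queue directly; setting the poll flag is a single safe operation. */
void radio_rx_interrupt(void)        /* illustrative ISR name */
{
  process_poll(&radio_process);
}

PROCESS_THREAD(radio_process, ev, data)
{
  PROCESS_BEGIN();
  while(1) {
    /* Polled processes run before queued events are dispatched. */
    PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);
    /* drain the receive buffer here, outside interrupt context */
  }
  PROCESS_END();
}
```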
Loading Programs • Relocation information is kept in the binary • Check for space before loading • Call the initialization function • Initialization either replaces a running program or starts a new one (see the sketch below)
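The load sequence as a hypothetical sketch; none of these names are the actual Contiki loader API, they just mirror the steps above:

```c
#include <string.h>

enum { LOAD_OK, LOAD_ERR_NO_SPACE };

/* Illustrative stand-ins for platform facilities: */
extern unsigned char program_memory[];
extern int program_memory_size;
extern void apply_relocations(unsigned char *code, int len);
extern void (*find_init_function(unsigned char *code))(void);

int load_program(const unsigned char *binary, int len)
{
  if(len > program_memory_size) {          /* check for space first */
    return LOAD_ERR_NO_SPACE;
  }
  memcpy(program_memory, binary, len);     /* copy the code into place */
  apply_relocations(program_memory, len);  /* patch addresses using the
                                              relocation info in the binary */
  find_init_function(program_memory)();    /* init replaces a running
                                              program or starts a new one */
  return LOAD_OK;
}
```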
Power Save Mode • No explicit kernel assistance • The event queue size is exposed, so platform code can decide when to sleep (see the sketch below)
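A sketch of what this looks like in practice: later Contiki releases expose the queue size as process_nevents(), and the platform's main loop dispatches events with process_run(). cpu_sleep() is a hypothetical platform routine that halts the CPU until the next interrupt.

```c
#include "contiki.h"

extern void cpu_sleep(void);   /* hypothetical: halt CPU until an interrupt */

void platform_main_loop(void)
{
  while(1) {
    process_run();               /* run poll handlers, dispatch one event */
    if(process_nevents() == 0) { /* queue size is the only help the kernel gives */
      cpu_sleep();               /* nothing pending: sleep until an interrupt */
    }
  }
}
```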
Services • Application • Service Layer • Service Interface • Service Process
Application • Must dynamically link • Interacts via a service stub, which is compiled into the application • The stub carries a version number and caches the service's process ID
Service Layer • Works with the kernel • Provides lookup for a specific service • Returns a service interface, which holds pointers to all functions the service provides plus a version number • The interface description is implemented by the service process, and the version numbers must match (see the sketch below) • Failure support? MANTIS thinks about this better.
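A sketch of the version-checked lookup; the struct layout and function names here are assumptions for illustration, not the actual Contiki service API:

```c
/* Interface returned by the service layer: a version number plus
   function pointers implemented by the service process. */
struct service_interface {
  unsigned char version;
  int (*send)(const void *buf, int len);  /* example service function */
};

extern struct service_interface *service_lookup(const char *name);

#define RADIO_SERVICE_VERSION 1   /* compiled into the application's stub */

int stub_send(const void *buf, int len)
{
  struct service_interface *i = service_lookup("radio");
  if(i == NULL || i->version != RADIO_SERVICE_VERSION) {
    return -1;                  /* service missing or version mismatch */
  }
  return i->send(buf, len);     /* call straight into the service process */
}
```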
Service Replacement • The process ID must be preserved • Kernel supported: the kernel instructs the service to remove itself, which could lead to problems • The kernel provides a mechanism to transfer state, and the service state is also versioned • State must be stored in a shared space during the swap, because the old version's memory is reallocated (see the sketch below) • Is there a better way to do this, as in SOS?
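A hypothetical sketch of the state hand-over; all names and the state layout are illustrative, only the versioned-state-in-shared-space mechanism comes from the slide:

```c
#define MY_STATE_VERSION 2   /* illustrative version tag */

struct service_state {
  unsigned char version;     /* state layout is versioned, like the interface */
  /* service-specific fields would follow */
};

/* Old service, told by the kernel to remove itself: copy live state
   into the kernel-provided shared area, because the old version's own
   memory is reallocated during the swap. */
void old_service_exit(struct service_state *shared)
{
  shared->version = MY_STATE_VERSION;
  /* ... copy live state into *shared ... */
}

/* New service, at initialization: adopt the state only if the
   versions match; otherwise start fresh. */
void new_service_init(const struct service_state *shared)
{
  if(shared->version == MY_STATE_VERSION) {
    /* ... restore state from *shared ... */
  }
}
```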
Libraries • Application options: static link with the core, static link with libraries into a single binary, or call a service • memcpy() vs. atoi(): frequently used functions such as memcpy() live in the core; rarely used ones such as atoi() are linked into the applications that need them
Communication • The architecture makes it easy to replace the communication stack: it is a service like any other • Enables running multiple communication stacks side by side for comparison • Communication services use the service mechanism to call one another • They use synchronous events to communicate with applications • No copying of buffers, because synchronous event handlers can't be preempted (see the sketch below)
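A sketch of the zero-copy hand-off, using the synchronous-post call from later Contiki releases; the buffer and process names are illustrative:

```c
#include "contiki.h"

PROCESS(app_process, "app");

static unsigned char packet_buf[128];  /* single shared buffer, never copied */

/* The communication service hands incoming data to the application
   with a synchronous event: the handler runs to completion before
   this call returns, so packet_buf cannot be overwritten underneath it. */
void comm_deliver(void)
{
  process_post_synch(&app_process, PROCESS_EVENT_MSG, packet_buf);
}

PROCESS_THREAD(app_process, ev, data)
{
  PROCESS_BEGIN();
  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_MSG);
    /* parse the packet in place via 'data'; it points at packet_buf */
  }
  PROCESS_END();
}
```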
Multi-Threading • Optional library, linked in only when needed • Kernel interaction: platform independent • Stack switching / preemption primitives: platform dependent (see the sketch below)
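Later Contiki releases ship this as the mt library (sys/mt.h). A sketch of running the 2-second cryptography example from earlier on its own stack so events keep flowing; the thread body is illustrative:

```c
#include "contiki.h"
#include "sys/mt.h"

static struct mt_thread crypto_thread;

static void do_crypto(void *data)
{
  while(1) {
    /* one chunk of the seconds-long computation */
    mt_yield();   /* hand control back to the event kernel; with the
                     platform's preemption primitives this switch can
                     also happen on a timer interrupt */
  }
}

PROCESS(crypto_process, "crypto");
PROCESS_THREAD(crypto_process, ev, data)
{
  PROCESS_BEGIN();
  mt_init();                                  /* platform-independent setup */
  mt_start(&crypto_thread, do_crypto, NULL);  /* give the thread its own stack */
  while(1) {
    mt_exec(&crypto_thread);  /* resume the thread until it yields; the
                                 stack switch is the platform-dependent part */
    PROCESS_PAUSE();          /* let queued events run in between */
  }
  PROCESS_END();
}
```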
Over-the-Air Programming • 1 node took 30 seconds • 40 nodes took 30 minutes • 1 component for 40 nodes took 2 minutes • Used a naïve protocol
Code Size • Bigger than TinyOS, smaller than Mantis • Larger partly due to poll handlers and the increased flexibility • Less compiler optimization possible
Preemption Demo • Started an 8-second computation at t = 5 s while continually pinging the node • ~0.1 ms latency increase • Poll handlers caused spikes, mainly from the radio packet driver
Portability • Custom code in a port: boot-up code, device drivers, architecture-specific parts of the program loader, and the stack-switching code of the multi-threading library • Kernel and service layer need no changes