Concurrency and Real-Time Programming Support in Java™, Ada and POSIX Tutorial for SIGAda 2001 October 1, 2001 Bloomington, MN Ben Brosgol 79 Tobey Road Belmont, MA 02478 USA +1-617-489-4027 (Voice) +1-617-489-4009 (FAX) brosgol@gnat.com
Topics • Concurrency issues • Basic model / lifetime • Mutual exclusion • Coordination / communication • Asynchrony • Interactions with exception handling • Real-time issues • Memory management / predictability • Scheduling and priorities (priority inversion avoidance) • Time / periodic activities • Java approach • Java language specification • Real-Time Specification for Java (Real-Time for Java Expert Group) • Core Extensions to Java (J-Consortium) • Ada 95 approach • Core language • Systems Programming and Real-Time Annexes • POSIX approach • Pthreads (1003.1c) • Real-time extensions (1003.1b) • For each issue, we present / compare the languages’ approaches
Concurrency Granularity / Terminology • “Platform” • Hardware + OS + language-specific run-time library • “Process” • Unit of concurrent execution on a platform • Communicates with other processes on the same platform or on different platforms • Communication / scheduling managed by the OS (same platform) or CORBA etc (different platforms) • Concurrency on a platform may be true parallelism (multi-processor) or multiplexed (uniprocessor) • Per-process resources include stack, memory, environment, file handles, ... • Switching/communication between processes is expensive • “Thread” (“Task”) • Unit of concurrent execution within a process • Communicates with other threads of same process • Shares per-process resources with other threads in the same process • Per-thread resources include PC, stack • Concurrency may be true parallelism or multiplexed • Communication / scheduling managed by the OS or by language-specific run-time library • Switching / communication between threads is cheap • Our focus: threads in a uniprocessor environment
Summary of Issues • Concurrency • Basic model / generality • Lifetime properties • Creation, initialization, (self) termination, waiting for others to terminate • Mutual exclusion • Mechanism for locking a shared resource, including control over blocking/awakening a task that needs the resource in a particular state • Coordination (synchronization) / communication • Asynchrony • Event / interrupt handling • Asynchronous Transfer of Control • Suspension / resumption / termination (of / by others) • Interactions with exception handling • Libraries and thread safety • Real-Time • Predictability (time, space) • Scheduling policies / priority • Range of priority values • Avoidance of “priority inversion” • Clock and time-related issues and services • Range/granularity, periodicity, timeout • Libraries and real-time programming
Overview of Java Concurrency Support (1) • Java Preliminaries • Smalltalk-based, dynamic, safety-sensitive OO language with built-in support for concurrency, exception handling • Dynamic data model • Aggregate data (arrays, class objects) on heap • Only primitive data and references on stack • Garbage Collection required • Two competing proposals for real-time extensions • Sun-sponsored Real-Time for Java Expert Group • J-Consortium • Basic concurrency model • Unit of concurrency is the thread • A thread is an instance of the class java.lang.Thread or one of its subclasses • run() method = algorithm performed by each instance of the class • Programmer either extends Thread, or implements the Runnable interface • Override/implement run() • All threads are dynamically allocated • If implementing Runnable, construct a Thread object passing a Runnable as parameter
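The next slide shows the extends-Thread form; as a complement, here is a minimal sketch of the Runnable alternative mentioned above. The class name Greeter and the message text are illustrative only.

public class Greeter implements Runnable {
  public void run() {                        // algorithm executed by the new thread
    System.out.println("Hello from a Runnable");
  }
  public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(new Greeter());    // pass the Runnable to a Thread object
    t.start();                               // explicit activation
    t.join();                                // wait for the new thread to terminate
  }
}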
Overview of Java Concurrency Support (2) • Example of simple thread • Lifetime properties • Constructing a thread creates the resources that the thread needs (stack, etc.) • “Activation” is explicit, by invoking start() • Started thread runs “concurrently” with parent • Thread terminates when its run method returns • Parent does not need to wait for children to terminate • Restrictions on “up-level references” from inner classes prevent dangling references to parent stack data

public class Writer extends Thread{
  final int count;
  public Writer(int count){ this.count=count; }
  public void run(){
    for (int i=1; i<=count; i++){
      System.out.println("Hello " + i);
    }
  }
  public static void main( String[] args ) throws InterruptedException{
    Writer w = new Writer(60);
    w.start();   // New thread of control invokes w.run()
    w.join();    // Wait for w to terminate
  }
}
Overview of Java Concurrency Support (3) • Mutual exclusion • Shared data (volatile fields) • synchronized blocks/methods • Thread coordination/communication • Pass data to new thread via constructor • Pulsed event - wait() / notify() • Broadcast event - wait() / notifyAll() • join() suspends caller until the target thread completes • Asynchrony • interrupt() sets a bit that can be polled • Asynchronous termination • stop() is deprecated • destroy() is discouraged • suspend() / resume() have been deprecated • RTJEG, J-C proposals include event / interrupt handling, ATC, asynchronous termination • Interaction with exception handling • No asynchronous exceptions in “baseline Java” • Async exceptions for ATC in RTJEG, J-C • Various thread-related exceptions • Thread propagating an unhandled exception • Terminates, but first calls uncaughtException • Other functionality • Thread group, dæmon threads, thread local data
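The slide above notes that a thread propagating an unhandled exception terminates, but first calls uncaughtException. A hedged sketch of hooking that call in the Java of this era is to override ThreadGroup.uncaughtException; the group and thread names here are illustrative.

public class UncaughtDemo {
  public static void main(String[] args) {
    ThreadGroup group = new ThreadGroup("workers") {
      public void uncaughtException(Thread t, Throwable e) {
        System.err.println(t.getName() + " died with: " + e);   // called before t terminates
      }
    };
    Thread worker = new Thread(group, new Runnable() {
      public void run() { throw new RuntimeException("oops"); } // unhandled exception
    }, "worker-1");
    worker.start();   // worker terminates; the group's uncaughtException runs first
  }
}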
Overview of Ada Concurrency Support (1) • Ada 95 preliminaries • Pascal-based ISO Standard reliable OO language with built-in support for packages (modules), concurrency, exception handling, generic templates, ... • Traditional data model (“static” storage, stack(s), heap) • Aggregate data (arrays, records) go on the stack unless dynamically allocated • Implementation not required to supply Garbage Collection • “Specialized Needs Annexes” support systems programming, real-time, several other domains • Basic concurrency model • Unit of concurrency (thread) is the task • Task specification = interface to other tasks • Often simply just the task name • Task body = implementation (algorithm) • Comprises declarations, statements • Task type serves as a template for task objects performing the same algorithm • Tasks and task types are declarations and may appear in “global” packages or local scopes • Tasks follow normal block structure rules • Each task has own stack • Task body may refer (with care :-) to data in outer scopes, may declare inner tasks • Task objects may be declared or dynamically allocated
Overview of Ada Concurrency Support (2) • Example of declared task object • Lifetime properties • Declared task starts (is activated) implicitly at the begin of parent unit • Allocated task starts at the point of allocation • Task statements execute “concurrently” with statements of parent • Task completes when it reaches its end • “Master” is suspended when it reaches its end, until each of its dependent tasks terminates • Prevents dangling references to local data • No explicit mechanism (such as Java’s join()) to wait for another task to terminate

with Ada.Text_IO;
procedure Example1 is
  Count : Integer := 60;
  task Writer;                 -- Specification
  task body Writer is          -- Body
  begin
    for I in 1..Count loop
      Ada.Text_IO.Put_Line( "Hello" & Integer'Image(I) );
      delay 1.0;               -- Suspend for at least 1.0 second
    end loop;
  end Writer;
begin                          -- Writer activated
  null;                        -- Main procedure suspended until Writer terminates
end Example1;
Overview of Ada Concurrency Support (3) • Example of task type / dynamic allocation • Mutual exclusion • Shared data, pragma Volatile / Atomic • Protected objects / types • Data + “protected” operations that are executed with mutual exclusion • “Passive” task that sequentializes access to a data structure via explicit communication (rendezvous) • Explicit mutex-like mechanism (definable as protected object/type) that is locked and unlocked

with Ada.Text_IO;
procedure Example2 is
  task type Writer(Count : Natural);   -- Specification
  type Writer_Ref is access Writer;
  Ref : Writer_Ref;
  task body Writer is                  -- Body
  begin
    for I in 1..Count loop
      Ada.Text_IO.Put_Line( "Hello" & I'Img );
      delay 1.0;                       -- Suspend for at least 1.0 second
    end loop;
  end Writer;
begin
  Ref := new Writer(60);   -- activates new Writer task object
  -- Main procedure suspended until Writer object terminates
end Example2;
Overview of Ada Concurrency Support (4) • Coordination / communication • Pass data to task via discriminant or rendezvous • Suspension_Object • Binary semaphore with 1-element “queue” • Rendezvous • Explicit inter-task communication • Implicit wait for dependent tasks • Asynchrony • Event handling via dedicated task, interrupt handler • Asynch interactions subject to “abort deferral” • abort statement • Asynchronous transfer of control via timeout or rendezvous request • Hold / Continue procedures (suspend / resume) • Interaction with exception handling • No asynchronous exceptions • Tasking_Error raised at language-defined points • Task propagating an (unhandled) exception terminates silently • Other functionality • Per-task attributes • Restrictions for high-integrity / efficiency-sensitive applications • Ravenscar Profile
Overview of POSIX Concurrency Support (1) • Basic concurrency model • A thread is identified by an instance of (opaque) type pthread_t • Threads may be allocated dynamically or declared locally (on the stack) or statically • Program creates / starts a thread by calling pthread_create, passing the addresses of the thread id, an “attributes” structure, the function that the thread will be executing, and the function’s arguments • Thread function takes and returns void* • Return value passed to “join”ing thread • Example • Notation: POSIX call in upper-case is a macro whose expansion includes querying the error return code

#include <pthread.h>
#include <stdio.h>

void *tfunc(void *arg){   // thread function
  int count = *( (int*)arg );
  int j;
  for (j=1; j <= count; j++){
    printf("Hello %d\n", j);
  }
  return NULL;
}

int main(int argc, char *argv[]){   // main thread
  pthread_t pthread;
  int pthread_arg = 60;
  PTHREAD_CREATE( &pthread, NULL, tfunc, (void*)&pthread_arg );
  PTHREAD_JOIN( pthread, NULL );
}
Overview of POSIX Concurrency Support (2) • Lifetime properties • Thread starts executing its thread function as result of pthread_create, concurrent with creator • Termination • A thread terminates via a return statement or by invoking pthread_exit • Both deliver a result to a “join”ing thread, and both invoke cleanup handlers • A terminated thread may continue to hold system resources until it is recycled • Detachment and recycling • A thread is detachable if • It has been the target of a pthread_join or a pthread_detach (either before or after it has terminated), or • it was created with its detachstate attribute set • A terminated detachable thread is recycled, releasing all system resources not released at termination • No hierarchical relationship among threads • Created thread has a pointer into its creator’s memory → danger of dangling reference • Main thread is special in that when it returns it terminates the process, killing all other threads • To avoid this mass transitive threadicide, main thread can pthread_exit rather than return
Overview of POSIX Concurrency Support (3) • Mutual exclusion • Mutexes (pthread_mutex_t type) with lock / unlock functions • Coordination / communication • Condition variables (pthread_cond_t type) with pulsed and broadcast events • Semaphores • Data passed to thread function at pthread_create, result delivered to “joining” thread at return or pthread_exit • Asynchrony • Thread cancellation with control over immediacy and ability to do cleanup • Interaction with exception handling • Complicated relationship with signals • Consistent error-return conventions • The result of each pthread function is an int error code (0 normal) • If the function needs to return a result, it does so in an address (“&”) parameter • No use of errno in Pthreads functions • Per-thread errno used when a thread calls a function that sets errno • Other • Thread-specific data area • “pthread once” functions
Comparison: Basic Model / Lifetime • Points of difference • Nature of unit of concurrency: class, task, function • Implicit (Ada, POSIX) versus explicit (Java) activation • How parameters are passed / how result communicated • Methodology / reliability • Ada and Java provide type checking, prevent dangling references • Flexibility / generality • All three provide roughly the same expressive power • POSIX allows a new thread to be given its parameters explicitly on thread creation • POSIX allows a thread to return a value to a “join”ing thread • Ada lacks an explicit mechanism for one task to wait for another task to terminate • In particular, waiting for an allocated task to terminate • Efficiency • Ada requires run-time support to manage task dependence hierarchy
Mutual Exclusion in Ada via Shared Data • Example: • One task repeatedly updates an integer value • Another task repeatedly displays it • Advantage • Efficiency • Need pragma Atomic to ensure that • Integer reads/writes are atomic • Optimizer does not cache Global • Drawbacks • Methodologically challenged • Does not scale up (e.g. aggregate data, more than one updating task)

with Ada.Text_IO;
procedure Example3 is
  Global : Integer := 0;
  pragma Atomic( Global );
  task Updater;
  task Reporter;
  task body Updater is
  begin
    loop
      Global := Global+1;   -- Note: the assignment statement is not atomic
      delay 1.0;            -- 1 second
    end loop;
  end Updater;
  task body Reporter is
  begin
    loop
      Ada.Text_IO.Put_Line(Global'Img);
      delay 2.0;            -- 2 seconds
    end loop;
  end Reporter;
begin
  null;
end Example3;
Mutual Exclusion in Java via Shared Data • Java version of previous example • Comments • Same advantages and disadvantages as Ada version • Need volatile to prevent hostile optimizations

public class Example4{
  static volatile int global = 0;
  public static void main(String[] args){
    Updater u = new Updater();
    Reporter r = new Reporter();
    u.start();
    r.start();
  }
}

class Updater extends Thread{
  public void run(){
    while(true){
      Example4.global++;
      ... sleep( 1000 ); ...   // try block omitted
    }
  }
}

class Reporter extends Thread{
  public void run(){
    while(true){
      System.out.println(Example4.global);
      ... sleep( 2000 ); ...   // try block omitted
    }
  }
}
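For reference, a hedged sketch of what the omitted try block around sleep() might look like; interruption here is simply taken as a request to stop updating, which is one plausible policy rather than the slide's own.

public void run() {
  while (true) {
    Example4.global++;
    try {
      sleep(1000);                    // Thread.sleep can throw InterruptedException
    } catch (InterruptedException e) {
      return;                         // stop the loop if this thread is interrupted
    }
  }
}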
Mutual Exclusion in Ada via Protected Object

with Ada.Integer_Text_IO;
procedure Example5 is
  type Position is record
    X, Y : Integer := 0;
  end record;

  protected Global is                 -- Interface
    procedure Update;
    function Value return Position;
  private
    Data : Position;
  end Global;

  protected body Global is            -- Implementation: executed with mutual exclusion
    procedure Update is
    begin
      Data.X := Data.X+1;
      Data.Y := Data.Y+1;
    end Update;
    function Value return Position is
    begin
      return Data;
    end Value;
  end Global;

  task Updater;
  task Reporter;

  task body Updater is
  begin
    loop
      Global.Update;
      delay 1.0;                      -- 1 second
    end loop;
  end Updater;

  task body Reporter is
    P : Position;
  begin
    loop
      P := Global.Value;
      Ada.Integer_Text_IO.Put (P.X);
      Ada.Integer_Text_IO.Put (P.Y);
      delay 2.0;                      -- 2 seconds
    end loop;
  end Reporter;

begin
  null;
end Example5;
Basic Properties of Ada Protected Objects • A protected object is a data object that is shared across multiple tasks but with mutually exclusive access via a (conceptual) “lock” • The rules support “CREW” access (Concurrent Read, Exclusive Write) • Form of a protected object declaration • Encapsulation is enforced • Client code can only access the protected components through protected operations • Protected operations illustrated in Example5 • Procedure may “read” or “write” the components • Function may “read” the components, not “write” them • The protected body provides the implementation of the protected operations • Comments on Example5 • Use of protected object ensures that only one of the two tasks at a time can be executing a protected operation • Scales up if we add more accessing tasks • Allows concurrent execution of reporter tasks

protected Object_Name is
  { protected_operation_specification; }
[ private
  { protected_component_declaration }    -- data may only be in the private part
]
end Object_Name;
Mutual Exclusion in Java via Synchronized Blocks • [Diagram: Updater thread u and Reporter thread r each hold a reference (pu, pr) to the shared Position object global, with fields x and y]

class Position{
  int x=0, y=0;
}

public class Example6{
  public static void main(String[] args){
    Position global = new Position();
    Updater u = new Updater( global );
    Reporter r = new Reporter( global );
    u.start();
    r.start();
  }
}

class Updater extends Thread{
  private final Position pu;
  Updater( Position p ){ pu=p; }
  public void run(){
    while(true){
      synchronized(pu){
        pu.x++;
        pu.y++;
      }
      ... sleep( 1000 ); ...
    }
  }
}

class Reporter extends Thread{
  private final Position pr;
  Reporter( Position p ){ pr=p; }
  public void run(){
    while(true){
      synchronized(pr){
        System.out.println(pr.x);
        System.out.println(pr.y);
      }
      ... sleep( 2000 ); ...
    }
  }
}
Semantics of Synchronized Blocks • Each object has a lock • Suppose thread t executes synchronized(p){...} • In order to enter the {...} block, t must acquire the lock associated with the object referenced by p • If the object is currently unlocked, t acquires the lock and sets the lock count to 1, and then proceeds to execute the block • If t currently holds the lock on the object, t increments its lock count for the object by 1, and proceeds to execute the block • If another thread holds the lock on the object, t is “stalled” • Leaving a synchronized block (either normally or “abruptly”) • t decrements its lock count on the object by 1 • If the lock count is still positive, t proceeds in its execution • If the lock count is zero, the threads “locked out” of the object become eligible to run, and t stays eligible to run • But this is not an official scheduling point • If each thread brackets its accesses inside a synchronized block on the object, mutually exclusive accesses to the object are ensured • No need to specify volatile
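A small sketch, not from the slides, illustrating the lock-count behaviour described above: a thread that already holds an object's lock may re-enter synchronized code on the same object without blocking.

class Reentrant {
  synchronized void outer() {   // lock count becomes 1 on entry
    inner();                    // same thread re-acquires: count 2, back to 1 on return
  }
  synchronized void inner() {
    System.out.println("still holding the lock; count was incremented, not blocked");
  }
}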
Mutual Exclusion in Java via Synchronized Methods • [Diagram: Updater thread u and Reporter thread r each hold a reference (pu, pr) to the shared Position object global, with fields x and y]

class Position{
  private int x=0, y=0;
  public synchronized void incr(){
    x += 1;
    y += 1;
  }
  public synchronized int[] value(){
    return new int[]{x, y};
  }
}

public class Example7{
  public static void main(String[] args){
    Position global = new Position();
    Updater u = new Updater( global );
    Reporter r = new Reporter( global );
    u.start();
    r.start();
  }
}

class Updater extends Thread{
  private final Position pu;
  Updater( Position p ){ pu=p; }
  public void run(){
    while(true){
      pu.incr();
      ... sleep( 1000 ); ...
    }
  }
}

class Reporter extends Thread{
  private final Position pr;
  Reporter( Position p ){ pr=p; }
  public void run(){
    while(true){
      int[] arr = pr.value();
      System.out.println(arr[0]);
      System.out.println(arr[1]);
      ... sleep( 2000 ); ...
    }
  }
}
Comments on Synchronized Blocks / Methods • Effect of synchronized instance method is as though body of method was in a synchronized(this) block • Generally better to use synchronized methods versus synchronized blocks • Centralizes mutual exclusion logic • For efficiency, have a non-synchronized method with synchronized(this) sections of code • Synchronized accesses to static fields • A synchronized block may synchronize on a class object • The “class literal” Foo.class returns a reference to the class object for class Foo • Typical style in a constructor that needs to access static fields • A static method may be declared as synchronized • Constructors are not specified as synchronized • Only one thread can be operating on a given object through a constructor • Invoking obj.wait() releases lock on obj • All other blocking methods (join(), sleep(), blocking I/O) do not release the lock

class MyClass{
  private static int count=0;
  MyClass(){
    synchronized(MyClass.class){ count++; }
    ...
  }
}
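To complement the constructor idiom above, here is a brief sketch (not from the slide) of the other option mentioned: declaring a static method synchronized, which locks the class object rather than an instance. The class and method names are illustrative.

class MyCounter {
  private static int count = 0;
  static synchronized void increment() {   // equivalent to synchronized (MyCounter.class)
    count++;
  }
  static synchronized int value() {
    return count;
  }
}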
Mutual Exclusion in POSIX via Mutex • A mutex is an instance of type pthread_mutex_t • Initialization determines whether a pthread can successfully lock a mutex it has already locked • PTHREAD_MUTEX_INITIALIZER (“fast mutex”) • Attempt to relock will fail • PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP (“recursive mutex”) • Attempt to relock will succeed • Operations on a mutex • pthread_mutex_lock(&mutex) • Blocks caller if mutex locked • Deadlock condition indicated via error code • pthread_mutex_trylock(&mutex) • Does not block caller • pthread_mutex_unlock(&mutex) • Release waiting pthread • pthread_mutex_destroy(&mutex) • Release mutex resources • Can reuse mutex if reinitialize
Monitors • In most cases where mutual exclusion is required there is also a synchronization* constraint • A task performing an operation on the object needs to wait until the object is in a state for which the operation makes sense • Example: bounded buffer with Put and Get • Consumer calling Get must block if buffer is empty • Producer calling Put must block if buffer is full • The monitor is a classical concurrency mechanism that captures mutual exclusion + state synchronization • Encapsulation • State data is hidden, only accessible through operations exported from the monitor • Implementation must guarantee that at most one task is executing an operation on the monitor • Synchronization is via condition variables local to the monitor • Monitor operations invoke wait/signal on the condition variables • A task calling wait is unconditionally blocked (in a queue associated with that condition variable), releasing the monitor • A task calling signal awakens one task waiting for that variable and otherwise has no effect • Proposed/researched by Dijkstra, Brinch-Hansen, Hoare in late 1960s and early 1970s * “Synchronization” in the correct (versus Java) sense
Monitor Example: Bounded Buffer • [Diagram: snapshot of the buffer data structures (Data array of Max_Size elements, Next_In, Next_Out, Count = 4) after inserting 5 elements and removing 1]

monitor Buffer {Pascal-like syntax}
  export Put, Get, Size;
  const Max_Size = 10;
  var Data : array[1..Max_Size] of Whatever;
      Next_In, Next_Out : 1..Max_Size;
      Count : 0..Max_Size;
      NonEmpty, NonFull : condition;

  procedure Put(Item : Whatever);
  begin
    if Count=Max_Size then Wait( NonFull );
    Data[Next_In] := Item;
    Next_In := Next_In mod Max_Size + 1;
    Count := Count + 1;
    Signal( NonEmpty );
  end {Put};

  procedure Get(Item : var Whatever);
  begin
    if Count=0 then Wait( NonEmpty );
    Item := Data[Next_Out];
    Next_Out := Next_Out mod Max_Size + 1;
    Count := Count - 1;
    Signal( NonFull );
  end {Get};

  function Size : Integer;
  begin
    Size := Count;
  end {Size};

begin
  Count := 0; Next_In := 1; Next_Out := 1;
end {Buffer};
Monitor Critique • Semantic issues • If several tasks waiting for a condition variable, which one is unblocked by a signal? • Longest-waiting, highest priority, unspecified, ... • Which task (signaler or unblocked waiter) holds the monitor after a signal • Signaler? • Unblocked waiter? • Then when does signaler regain the monitor • Require signal either to implicitly return or to be the last statement? • Depending on semantics, may need while vs if in the code that checks the wait condition • Advantages • Encapsulation • Efficient implementation • Avoids some race conditions • Disadvantages • Sacrifices potential concurrency • Operations that don’t affect the monitor’s state (e.g. Size) still require mutual exclusion • Condition variables are low-level / error-prone • Programmer must ensure that monitor is in a consistent state when wait/signal are called • Nesting monitor calls can deadlock, even without using condition variables
Monitors and Java • Every object is a monitor in some sense • Each object obj has a mutual exclusion lock, and certain code is executed under control of that lock • Blocks that are synchronized on obj • Instance methods on obj’s class that are declared as synchronized • Static synchronized methods for obj if obj is a class • But encapsulation depends on programmer style • Non-synchronized methods, and accesses to non-private data from client code, are not subject to mutual exclusion • No special facility for condition variables • Any object (generally the one being accessed by synchronized code) can be used as a condition variable via wait() / notify() • But that means that there is only one condition directly associated with the object • To invoke wait() or notify() on an object, the calling thread needs to hold the lock on the object • Otherwise throws a run-time exception • The notifying thread does not release the lock • Waiting threads thus generally need to do their wait in a while statement versus a simple if • No guarantee which waiting thread is awakened by a notify
Bounded Buffer in Java • Notes • Essential for each wait() condition to be in a while loop and not simply an if statement • Important to signal via notifyAll() versus simply notify() • A producer and a consumer thread may be in the object’s wait set at the same time!

public class BoundedBuffer{
  public static final int maxSize=10;
  private final Object[] data = new Object[maxSize];
  private int nextIn=0, nextOut=0;
  private volatile int count=0;

  public synchronized void put(Object item) throws InterruptedException{
    while (count == maxSize) { this.wait(); }
    data[nextIn] = item;
    nextIn = (nextIn + 1) % maxSize;
    count++;
    this.notifyAll();
  }

  public synchronized Object get() throws InterruptedException{
    while (count == 0) { this.wait(); }
    Object result = data[nextOut];
    data[nextOut] = null;
    nextOut = (nextOut + 1) % maxSize;
    count--;
    this.notifyAll();
    return result;
  }

  public int size(){   // not synchronized
    return count;
  }
}
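A short usage sketch, not on the original slide, showing one producer and one consumer thread sharing the buffer; the item values and iteration count are arbitrary.

public class BufferDemo {
  public static void main(String[] args) {
    final BoundedBuffer buf = new BoundedBuffer();
    Thread producer = new Thread() {
      public void run() {
        try {
          for (int i = 0; i < 100; i++) buf.put(new Integer(i));   // blocks when full
        } catch (InterruptedException e) { }
      }
    };
    Thread consumer = new Thread() {
      public void run() {
        try {
          for (int i = 0; i < 100; i++) System.out.println(buf.get());   // blocks when empty
        } catch (InterruptedException e) { }
      }
    };
    producer.start();
    consumer.start();
  }
}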
Monitors and Ada Protected Objects • Encapsulation enforced in both • Data components are inaccessible to clients • Mutual exclusion enforced in both • All accesses are via protected operations, which are executed with mutual exclusion (“CREW”) • Condition variables • A protected entry is a protected operation guarded by a boolean condition (“barrier”) which, if false, blocks the calling task • Barrier condition can safely reference the components of the protected object and also the “Count attribute” • E'Count = number of tasks queued on entry E • Value does not change while a protected operation is in progress (avoids race condition) • Barrier expressions are Ada analog of condition variables, but higher level (wait and signal implicit) • Caller waits if the barrier is False (and releases the lock on the object) • Barrier conditions for non-empty queues are evaluated at the end of protected procedures and protected entries • If any are True, queuing policy establishes which task is made ready • Protected operations (unlike monitor operations) are non-blocking • Allows efficient implementation of “lock”
Bounded Buffer in Ada

package Bounded_Buffer_Pkg is
  Max_Length : constant := 10;
  type W_Array is array(1 .. Max_Length) of Whatever;

  protected Bounded_Buffer is
    entry Put( Item : in Whatever );
    entry Get( Item : out Whatever );
    function Size return Natural;
  private
    Next_In, Next_Out : Positive := 1;
    Count : Natural := 0;
    Data : W_Array;
  end Bounded_Buffer;
end Bounded_Buffer_Pkg;

package body Bounded_Buffer_Pkg is
  protected body Bounded_Buffer is

    entry Put( Item : in Whatever ) when Count < Max_Length is
    begin
      Data(Next_In) := Item;
      Next_In := Next_In mod Max_Length + 1;
      Count := Count+1;
    end Put;   -- barriers (re)evaluated at the end of each entry body

    entry Get( Item : out Whatever ) when Count > 0 is
    begin
      Item := Data(Next_Out);
      Next_Out := Next_Out mod Max_Length + 1;
      Count := Count-1;
    end Get;   -- barriers (re)evaluated at the end of each entry body

    function Size return Natural is
    begin
      return Count;
    end Size;

  end Bounded_Buffer;
end Bounded_Buffer_Pkg;
Monitors and POSIX: Mutex + Condition Variables • POSIX supplies type pthread_cond_t for condition variables • Always used in conjunction with a mutex • Avoids race conditions such as a thread calling wait and missing a signal that is issued before the thread is enqueued • May be used to simulate a monitor, or simply as an inter-thread coordination mechanism • Initialized via PTHREAD_COND_INITIALIZER or via pthread_cond_init function • Operations • Waiting operations • pthread_cond_wait( &cond_vbl, &mutex ) • pthread_cond_timedwait(&cond_vbl, &mutex, &timeout) • Signaling operations • pthread_cond_signal( &cond_vbl ) • Pulsed event • No guarantee which waiter is awakened • pthread_cond_broadcast( &cond_vbl ) • Broadcast event • All waiters awakened • Initialization • pthread_cond_init( &cond_vbl, NULL ) • Resource release • pthread_cond_destroy( &cond_vbl )
Bounded Buffer in POSIX (*)

#include <pthread.h>
#define MAX_LENGTH 10
#define WHATEVER float

typedef struct{
  pthread_mutex_t mutex;
  pthread_cond_t non_full;
  pthread_cond_t non_empty;
  int next_in, next_out, count;
  WHATEVER data[MAX_LENGTH];
} bounded_buffer_t;

void put( WHATEVER item, bounded_buffer_t *b ){
  PTHREAD_MUTEX_LOCK(&(b->mutex));
  while (b->count == MAX_LENGTH){
    PTHREAD_COND_WAIT(&(b->non_full), &(b->mutex));
  }
  ... /* Put data in buffer, update count and next_in */
  PTHREAD_COND_SIGNAL(&(b->non_empty));
  PTHREAD_MUTEX_UNLOCK(&(b->mutex));
}

void get( WHATEVER *item, bounded_buffer_t *b ){
  PTHREAD_MUTEX_LOCK(&(b->mutex));
  while (b->count == 0){
    PTHREAD_COND_WAIT(&(b->non_empty), &(b->mutex));
  }
  ... /* Get data from buffer, update count and next_out */
  PTHREAD_COND_SIGNAL(&(b->non_full));
  PTHREAD_MUTEX_UNLOCK(&(b->mutex));
}

int size( bounded_buffer_t *b ){
  int n;
  PTHREAD_MUTEX_LOCK(&(b->mutex));
  n = b->count;
  PTHREAD_MUTEX_UNLOCK(&(b->mutex));
  return n;
}

/* Initializer function also required */

(*) Based on example in Burns & Wellings, Real-Time Systems and Programming Languages, pp. 253-254
Comparison of Mutual Exclusion Approaches • Points of difference • Expression of mutual exclusion in program • Explicit code markers in POSIX (lock/unlock mutex) • Either explicit code marker (synchronized block) or encapsulated (synchronized method) in Java • Encapsulated (protected object) in Ada • No explicit condition variables in Java (or Ada) • Blocking prohibited in protected operations (Ada) • Locks are implicitly recursive in Java and Ada, programmer decides whether “fast” or recursive in POSIX • Methodology / reliability • All provide necessary mutual exclusion • Ada entry barrier is higher level than condition variable • Absence of condition variable from Java can lead to clumsy or obscure style • Main reliability issue is interaction between mutual exclusion and asynchrony, described below • Flexibility / generality • Ada: protected operations need to be non-blocking • Efficiency • Ada provides potential for concurrent reads • Ada does not require queue management, but barrier (re)evaluation entails overhead
Coordination / Communication Mechanisms • Pulsed Event • Waiter blocks unconditionally • Signaler awakens exactly one waiter (if one or more), otherwise event is discarded • Broadcast Event • Waiter blocks unconditionally • Signaler awakens all waiters (if one or more), otherwise event is discarded • Persistent Event (Binary Semaphore) • Signaler allows one and only one task to proceed past a wait • Some task that already has, or the next task that subsequently will, call wait • Counting semaphore • A generalization of binary semaphore, where the number of occurrences of signal are remembered • Simple 2-task synchronization • Persistent event with a one-element queue • Direct inter-task synchronous communication • Rendezvous, where the task that initiates the communication waits until its partner is ready
Pulsed Event • Java • Any object can serve as a pulsed event via wait() / notify() (a sketch follows this slide) • Calls on these methods must be in code synchronized on the object • wait() releases the lock, notify() doesn’t • wait() can throw InterruptedException • An overloaded version of wait() can time out, but no direct way to know whether the return was normal or via timeout • Ada • Protected object / type can model a pulsed event • Can time out on any entry via select statement • Can’t awaken a blocked task other than via abort • POSIX • Condition variable can serve as pulsed event

protected type Pulsed_Event is
  entry Wait;
  procedure Signal;
private
  Signaled : Boolean := False;
end Pulsed_Event;

protected body Pulsed_Event is
  entry Wait when Signaled is
  begin
    Signaled := False;
  end Wait;
  procedure Signal is
  begin
    Signaled := Wait'Count>0;
  end Signal;
end Pulsed_Event;
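Since the slide gives only the Ada version, here is a hedged Java sketch of the same pulsed-event semantics: each signal releases at most one waiter and is discarded if nobody is waiting. The class and method names are illustrative, not part of any of the three standards.

public class PulsedEvent {
  private int waiters = 0;   // threads currently blocked in await()
  private int pulses  = 0;   // signals not yet consumed by a waiter

  public synchronized void await() throws InterruptedException {
    waiters++;
    try {
      while (pulses == 0) wait();
    } finally {
      waiters--;
    }
    pulses--;                // consume exactly one pulse
  }

  public synchronized void signal() {
    if (waiters > pulses) {  // keep the pulse only if some waiter remains to consume it
      pulses++;
      notify();              // wake one waiter
    }                        // otherwise the event is discarded
  }
}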
Broadcast Event • Java • Any object can serve as a broadcast event via wait() / notifyAll() (a sketch follows this slide) • Calls on these methods must be in code synchronized on the object • Ada • Protected object / type can model a broadcast event • Protected object can model more general forms, such as sending data with the signal, to be retrieved by each awakened task • Locking protocol / barrier evaluation rules prevent race conditions • POSIX • Condition variable can serve as broadcast event

protected type Broadcast_Event is
  entry Wait;
  procedure Signal;
private
  Signaled : Boolean := False;
end Broadcast_Event;

protected body Broadcast_Event is
  entry Wait when Signaled is
  begin
    Signaled := Wait'Count>0;
  end Wait;
  procedure Signal is
  begin
    Signaled := Wait'Count>0;
  end Signal;
end Broadcast_Event;
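Again only the Ada version appears on the slide; a hedged Java sketch of a broadcast event follows. A generation counter lets each signal release exactly the threads that were waiting when it was issued, while later arrivals wait for the next signal.

public class BroadcastEvent {
  private long generation = 0;        // incremented on every signal

  public synchronized void await() throws InterruptedException {
    long arrival = generation;
    while (generation == arrival) {   // wait for a signal issued after our arrival
      wait();
    }
  }

  public synchronized void signal() {
    generation++;                     // later arrivals will wait for the next signal
    notifyAll();                      // wake every current waiter
  }
}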
Semaphores (Persistent Event) • Binary semaphore expressible in Java • J-Consortium spec includes binary and counting semaphores • Binary semaphore expressible in Ada • POSIX • Includes (counting) semaphores, but intended for inter-process rather than inter-thread coordination

public class BinarySemaphore{
  private boolean signaled = false;
  public synchronized void await() throws InterruptedException{
    while (!signaled) { this.wait(); }
    signaled=false;
  }
  public synchronized void signal(){
    signaled=true;
    this.notify();
  }
}

protected type Binary_Semaphore is
  entry Wait;
  procedure Signal;
private
  Signaled : Boolean := False;
end Binary_Semaphore;

protected body Binary_Semaphore is
  entry Wait when Signaled is
  begin
    Signaled := False;
  end Wait;
  procedure Signal is
  begin
    Signaled := True;
  end Signal;
end Binary_Semaphore;
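The slide shows a binary semaphore; a counting semaphore, as mentioned for the J-Consortium and POSIX, can be sketched in Java the same way. This class is illustrative only and not part of any of the three APIs discussed here.

public class CountingSemaphore {
  private int permits;

  public CountingSemaphore(int initial) { permits = initial; }

  public synchronized void acquire() throws InterruptedException {
    while (permits == 0) wait();   // block until a permit is available
    permits--;
  }

  public synchronized void release() {
    permits++;                     // the signal is remembered even if nobody is waiting
    notify();                      // wake at most one waiter
  }
}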
Simple Two-Task Synchronization • Java, POSIX • No built-in support • Ada • Type Suspension_Object in package Ada.Synchronous_Task_Control • Procedure Suspend_Until_True(SO) blocks caller until SO becomes true, and then atomically resets SO to false • Procedure Set_True(SO) sets SO’s state to true • “Bounded error” if a task calls Suspend_Until_True(SO) while another task is waiting for SO

with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;
procedure Proc is
  task Setter;
  task Retriever;
  SO : Suspension_Object;
  Data : array (1..1000) of Float;

  task body Setter is
  begin
    ...   -- Initialize Data
    Set_True(SO);
    ...
  end Setter;

  task body Retriever is
  begin
    Suspend_Until_True(SO);
    ...   -- Use data
  end Retriever;

begin
  null;
end Proc;
Rendezvous: Direct Synchronous Inter-Task Communication (1) • Calling task (caller) • Requests action from another task (the callee), and blocks until callee is ready to perform the action • Called task (callee) • Indicates readiness to accept a request from a caller, and blocks until a request arrives • Rendezvous • Performance of the requested action by callee, on behalf of a caller • Parameters may be passed in either or both directions • Both caller and callee are unblocked after rendezvous completes • Java • No direct support • Can model via wait / notify, but complicated (see the sketch after this slide) • POSIX • Same comments as for Java
T1 (caller): “T2, do action A” • Wait for T2 to start action A • (T2 does action A) • Wait for T2 to complete action A
T2 (callee): “Accept request for action A [from T1]” • Wait for request for action A [from T1] • Do action A • Awaken caller
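To illustrate the point that a rendezvous “can be modelled via wait / notify, but complicated”, here is a hedged sketch of a single-entry rendezvous with one callee (the server task); the String parameter and the println “action” are placeholders. Note how much bookkeeping replaces a single Ada accept statement.

public class Rendezvous {
  private String request;               // "in" parameter passed by the caller
  private boolean requestPending = false;
  private boolean done = false;

  // Caller side: deliver the request, block until the callee has serviced it.
  public synchronized void call(String item) throws InterruptedException {
    while (requestPending || done) wait();   // wait until the entry is free
    request = item;
    requestPending = true;
    notifyAll();                             // wake the callee
    while (!done) wait();                    // wait for the end of the rendezvous
    done = false;                            // leave, freeing the entry for the next caller
    notifyAll();
  }

  // Callee side: block until a request arrives, perform the action, release the caller.
  public synchronized void accept() throws InterruptedException {
    while (!requestPending) wait();          // wait for a caller
    System.out.println(request);             // the action performed during the rendezvous
    requestPending = false;
    done = true;
    notifyAll();                             // release the caller
  }
}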
Direct Synchronous Inter-Task Communication (2) • Ada • “Action” is referred to as a task’s entry • Declared in the task’s specification • Caller makes entry call, similar syntactically to a procedure call • Callee accepts entry call via an accept statement • Caller identifies callee but not vice versa • Many callers may call the same entry, requiring a queue • Often callee is a “server” that sequentializes access to a shared resource • Sometimes protected object is not sufficient, e.g. if action may block • In most cases the server can perform any of several actions, and the syntax needs to reflect this flexibility • Also in most cases the server is written as an infinite loop (not known in advance how many requests will be made) so termination is an issue • Ada provides special syntax for a server to automatically terminate when no further communication with it is possible • Caller and/or callee may time out • Timeout canceled at start of rendezvous
Direct Synchronous Inter-Task Communication (3) • Ada example

task Sequentialized_Output is
  entry Put_Line( Item : String );
  entry Put( Item : String );
end Sequentialized_Output;

task body Sequentialized_Output is
begin
  loop
    select
      accept Put_Line( Item : String ) do
        Ada.Text_IO.Put_Line( Item );
      end Put_Line;
    or
      accept Put( Item : String ) do
        Ada.Text_IO.Put( Item );
      end Put;
    or
      terminate;
    end select;
  end loop;
end Sequentialized_Output;

task Outputter1;
task body Outputter1 is
begin
  ...
  Sequentialized_Output.Put("Hello");
  ...
end Outputter1;

task Outputter2;
task body Outputter2 is
begin
  ...
  Sequentialized_Output.Put("Bonjour");
  ...
end Outputter2;
Comparison of Coordination/Communication Mechanisms • Points of difference • Different choice of “building blocks” • Ada: Suspension_Object, protected object, rendezvous • Java, POSIX: pulsed/broadcast events • Java allows “interruption” of blocked thread by throwing an exception, Ada and POSIX allow only cancellation • Methodology / reliability • Ada’s high-level feature (rendezvous) supports good practice • Potential for undetected bug in Ada if a task calls Suspend_Until_True on a Suspension_Object that already has a waiting task • Flexibility / generality • Major difference among the languages is that Ada is the only one to provide rendezvous as built-in communication mechanism • Efficiency • No major differences in implementation efficiency for mechanisms common to the three approaches • Ada’s Suspension_Object has potential for greater efficiency than semaphores
Asynchrony Mechanisms • Setting/Polling • Setting a datum in a task/thread that is polled by the affected task/thread • Asynchronous Event Handling • Responding to asynchronous events generated internally (by application threads) or externally (by interrupts) • Resumptive: “interrupted” thread continues at the point of interruption, after the handler completes • Combine with polling or ATC to affect the interrupted thread • Asynchronous Termination • Aborting a task/thread • Immediacy: are there regions in which a task / thread defers requests for it to be aborted? • ATC • Causing a task to branch based on an asynchronous occurrence • Immediacy: are there regions in which a task / thread defers requests for it to have an ATC? • Suspend/resume • Causing a thread to suspend its execution, and later causing the thread to be resumed • Immediacy: are there regions in which a task / thread defers requests for it to be suspended?
Setting / Polling • Not exactly asynchronous (since the affected task/thread checks synchronously) • But often useful and arguably better than asynchronous techniques • Ada • No built-in mechanism, but can simulate via protected object or pragma Atomic variable global to setter and poller • Java • t.interrupt() sets interruption status flag in the target thread t • Static Thread method boolean interrupted() returns current thread’s interruption status flag and resets it • Boolean method t.isInterrupted() returns target thread’s interruption status flag • If t.interrupt() is invoked on a blocked thread t, t is awakened and an InterruptedException (a checked exception) is thrown • Each of the methods thr.join(), Thread.sleep(), and obj.wait() has a “throws InterruptedException” clause • POSIX • No built-in mechanism
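A small sketch of the Java mechanics just described: polling the interruption status flag, and catching InterruptedException when interrupt() arrives while the thread is blocked. The class name and sleep durations are illustrative.

public class Poller extends Thread {
  public void run() {
    while (!Thread.interrupted()) {     // poll (and reset) this thread's status flag
      // ... periodic work ...
      try {
        Thread.sleep(500);              // if interrupt() arrives here,
      } catch (InterruptedException e) {
        return;                         // ... the sleep ends early with an exception
      }
    }
    // reached if interrupt() arrived while the thread was running, not blocked
  }

  public static void main(String[] args) throws InterruptedException {
    Poller p = new Poller();
    p.start();
    Thread.sleep(2000);
    p.interrupt();                      // request that p stop, by whichever path applies
    p.join();
  }
}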
Asynchronous Event Handling • Ada • No specific mechanism for asynch event handling • Interrupt handlers can be modeled by specially identified protected procedures, executed (at least conceptually) by the hardware • Other asynch event handlers modeled by tasks • Java (RTSJ) • Classes AsyncEvent (“AE”), AsyncEventHandler (“AEH”) model asynchronous events, and handlers for such events, respectively • Programmer overrides one of the AEH methods to define the handler’s action • Program can register one or more AEHs with any AE (listener model) • An AEH is a schedulable entity, like a thread (but not necessarily a dedicated thread) • When an AE is fired, all registered handlers are scheduled based on their scheduling parameters • Program needs to manage any data queuing • Methods allow dealing with event bursts • Scales up to large number of events, handlers • J-Consortium proposal has analogous mechanism • POSIX • Messy interaction between signals (originally a process-based mechanism) and threads
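A hedged sketch of how the RTSJ classes named above are typically used, assuming the javax.realtime API as proposed by the RTJEG; exact constructor and method signatures should be checked against the specification, and running it requires an RTSJ implementation. The event and handler names here are illustrative.

import javax.realtime.AsyncEvent;
import javax.realtime.AsyncEventHandler;

public class EventDemo {
  public static void main(String[] args) {
    AsyncEvent overflow = new AsyncEvent();             // the event itself

    AsyncEventHandler handler = new AsyncEventHandler() {
      public void handleAsyncEvent() {                  // action run when the event fires
        System.out.println("overflow handled");
      }
    };

    overflow.addHandler(handler);                       // register (listener model)
    overflow.fire();                                    // schedule all registered handlers
  }
}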
Asynchronous Termination (1) • Ada • Abort statement sets the aborted task’s state to abnormal, but this does not necessarily terminate the aborted task immediately • For safety, certain contexts are abort-deferred; e.g. • Accept statements • Protected operations • Real-Time Annex requires implementation to terminate an abnormal task as soon as it is outside an abort-deferred region • Java Language Spec • No notion of abort-deferred region • Invoke t.stop(Throwable exc) or t.stop() • Halt t asynchronously, and throw exc or ThreadDeath object in t • Then effect is as though propagating an unchecked exception • Deprecated (data may be left in an inconsistent state if t stopped while in synchronized code) • Invoke t.destroy() • Halt t, with no cleanup and no release of locks • Not (yet :-) deprecated but can lead to deadlock • Invoke System.exit(int status) • Terminates the JVM • By convention, nonzero status abnormal termination
Asynchronous Termination (2) • Java Language Spec (cont’d.) • Recommended style is to use interrupt() • Main issue is latency • RTSJ • Synchronized code, and methods that do not explicitly have a throws clause for AIE, are abort deferred • To abort a thread, invoke t.interrupt() and have t do its processing in an asynchronously interruptible method

class Boss extends Thread{
  Thread slave;
  Boss(Thread slave){ this.slave=slave; }
  public void run(){
    ...
    if (...){
      slave.interrupt();   // abort slave
    }
    ...
  }
}

class PollingSlave extends Thread{
  public void run(){
    while (!Thread.interrupted()){
      ...   // main processing
    }
    ...     // pre-shutdown actions
  }
}
Asynchronous Termination (3) • J-Consortium • abort() method aborts a thread • Synchronized code is not necessarily abort-deferred • May need to terminate a deadlocked thread that is in synchronized code • Synchronized code in objects that implement the Atomic interface is abort deferred • POSIX • A pthread can set its cancellation state (enabled or disabled) and, if enabled, its cancellation type (asynchronous or deferred) • pthread_setcancelstate(newstate, &oldstate) • PTHREAD_CANCEL_DISABLE • PTHREAD_CANCEL_ENABLE • pthread_setcanceltype(newtype, &oldtype) • PTHREAD_CANCEL_ASYNCHRONOUS • PTHREAD_CANCEL_DEFERRED • Default setting: enabled, deferred cancellation • Deferred cancel at next cancellation point • Minimal set of cancellation points defined by standard, others can be added by implementation • pthread_cancel( pthr ) sends cancellation request • Cleanup handlers give the cancelled thread the opportunity to restore data to a consistent state, unlock mutexes
Asynchronous Transfer of Control (“ATC”) • What is it • A mechanism whereby a triggering thread (possibly an async event handler) can cause a target thread to branch unconditionally, without any explicit action from the target thread • Controversial facility • Triggering thread does not know what state the target thread is in when the ATC is initiated • Target thread must be coded carefully in presence of ATC • Implementation cost / complexity • Interaction with synchronized code • Why provide support? • User community requirement • Useful for certain idioms • Time out of long computation when partial result is acceptable • Abort an iteration of a loop • Terminate a thread • ATC may have shorter latency than polling
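The timeout idiom mentioned above, approximated here in standard Java without ATC by combining a timer with interruption and polling; the RTSJ and Ada provide more direct ATC forms with lower latency. The class name, refine step and time budget are illustrative.

public class TimedSearch extends Thread {
  private volatile double bestSoFar = 0.0;       // partial result, visible to other threads

  public void run() {
    while (!Thread.interrupted()) {              // poll: this check is where ATC would remove latency
      bestSoFar = refine(bestSoFar);             // one more refinement step
    }
  }

  private double refine(double x) { return x + 1.0; }   // stand-in for real work

  public static void main(String[] args) throws InterruptedException {
    TimedSearch s = new TimedSearch();
    s.start();
    Thread.sleep(1000);                          // the time budget
    s.interrupt();                               // ask the computation to stop
    s.join();
    System.out.println("partial result: " + s.bestSoFar);
  }
}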