Gurdip Singh, Masaaki Mizuno — Kansas State University
Automatic Derivation and Verification of Synchronization Aspects in Object Oriented Systems
This research was supported in part by DARPA PCES Order K203/AFRL #F33615-00-C-3044 and NSF CRCD #EIA-9980321
Why Aspect-Oriented Programming?
• "Code tangling":
  * code for a single requirement is spread through many classes (it is cross-cutting)
  * code for different requirements is tangled together
• Programs are hard to maintain:
  * modifying code for a specific requirement is not easy
  * redundancy
[Diagram: AOP weaving — an ordinary program with tangled concerns is separated into functional code, code for aspect 1, and code for aspect 2, which are woven into a better-structured program]
Aspect-Oriented Programming
• Solution:
  * separate software into functional code and aspect code
  * develop functional code independently
  * develop code for each aspect separately
  * weave functional code with aspect code
• Benefits:
  - untangles code, eliminates redundancy
  - code is easy to maintain and modify
  - customized versions of software
Contents
• Why an aspect-oriented methodology?
• Synchronization
  * Is it an aspect?
  * Why should we treat it as an aspect?
• An aspect-oriented methodology
  * Andrews' global invariant approach
  * UML-based methodology
  * Translation for Java shared-memory programs
  * Pattern-based approach
  * Examples
Contents (contd)
• SyncGen demonstration
• More back-end translations: lock-free translations, CAN
• Synchronization in component-oriented embedded software
• Bottom-up methodologies
  * COOL
  * Composition Filters
  * Synchronizers
  * Superimposition
• Wrap-up
Synchronization
• The importance of concurrent programming has increased
• However, many programmers and designers are not adequately trained to write correct and efficient concurrent programs
• Most OS textbooks teach ad-hoc techniques, showing solutions in low-level synchronization primitives for a small number of well-known problems, such as the readers/writers and dining philosophers problems
• Such solutions do not generalize to complex real-life synchronization problems or to other synchronization primitives
Readers/Writers with weak readers preference (semaphores mx and wrt; rc counts active readers):

Reader:
    mx.p( ); rc := rc + 1;
    if rc = 1 then wrt.p( );    -- first reader locks out writers
    mx.v( );
    ... reading ...
    mx.p( ); rc := rc - 1;
    if rc = 0 then wrt.v( );    -- last reader releases writers
    mx.v( );

Writer:
    wrt.p( );
    ... writing ...
    wrt.v( );

[Diagram: writers queued on wrt, readers queued on mx]
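The semaphore pseudocode above can be rendered in runnable Java with java.util.concurrent.Semaphore, whose acquire()/release() play the roles of p()/v(). The class and method names (ReadersWriters, startRead, and so on) are illustrative, not from the slides:

```java
import java.util.concurrent.Semaphore;

// Weak-readers-preference readers/writers, following the slide's semaphore
// pseudocode. acquire()/release() correspond to p()/v(). Names are ours.
class ReadersWriters {
    private final Semaphore mx = new Semaphore(1);   // protects rc
    private final Semaphore wrt = new Semaphore(1);  // held by a writer, or by the reader group
    private int rc = 0;                              // readers currently reading

    void startRead() throws InterruptedException {
        mx.acquire();
        rc = rc + 1;
        if (rc == 1) wrt.acquire();   // first reader locks out writers
        mx.release();
    }

    void endRead() throws InterruptedException {
        mx.acquire();
        rc = rc - 1;
        if (rc == 0) wrt.release();   // last reader lets writers in
        mx.release();
    }

    void startWrite() throws InterruptedException { wrt.acquire(); }

    void endWrite() { wrt.release(); }

    int activeReaders() throws InterruptedException {
        mx.acquire();
        try { return rc; } finally { mx.release(); }
    }
}
```

Writers block on wrt for as long as any reader holds it, so a continuous stream of readers can starve writers — which is exactly the "weak readers preference" the slide names.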
Readers/Writers with writers preference (Courtois et al., CACM 1971):

Reader:
    mx3.p( );
    r.p( );
    mx1.p( ); rc := rc + 1;
    if rc = 1 then w.p( );
    mx1.v( );
    r.v( );
    mx3.v( );
    ... reading ...
    mx1.p( ); rc := rc - 1;
    if rc = 0 then w.v( );
    mx1.v( );

Writer:
    mx2.p( ); nw := nw + 1;
    if nw = 1 then r.p( );
    mx2.v( );
    w.p( );
    ... writing ...
    w.v( );
    mx2.p( ); nw := nw - 1;
    if nw = 0 then r.v( );
    mx2.v( );
The goals of our approach
• We need a more formal and structured approach to developing concurrent programs
• The methodology must be easy to use
• The resulting code must be efficient
• Our methodology is
  - based on the global invariant approach,
  - in the aspect-oriented programming paradigm,
  - in the context of scenario-based development methodologies (such as the Rational Unified Process (RUP))
Outline (part 1)
• Andrews' Global Invariant (GI) approach
• Aspect-oriented programming
• Rational Unified Process (RUP)
• Our concurrent program development methodology
• Translations to fine-grained synchronization code
  - Java synchronized blocks
  - Thread-pool model
• Pattern-based approach
  - Basic synchronization patterns and solution invariants
  - Composition of patterns
  - Applications of patterns
• Evaluation and conclusion
Greg Andrew’s Global Invariant (GI) approach • Specify a safety property of synchronization using a global invariant (GI) e.g., readers/writers problem Let nr be the number of readers currently reading Let nw be the number of writers currently writing Then, the GI may be
Mechanically derive a coarse-grained solution from the invariant. The coarse-grained solution links the GI to the program using high-level synchronization constructs:
  <await B → S>  : wait until B holds, then execute S atomically
  <S>            : execute S atomically (shorthand for <await TRUE → S>)
e.g., nr must be incremented when a thread starts reading and decremented when a thread completes reading.
Mechanically translate the coarse-grained solution to fine-grained synchronization code in various notations:
• semaphores,
• monitors,
• active monitors (message passing),
• Java synchronized blocks,
• thread-pool model, etc.
Each translation is guaranteed to preserve the GI; therefore, the resulting code satisfies the safety property.
e.g., Monitor: monitor = class + mutex + condition variables
[Diagram: a monitor with a lock, shared data x, y, z, methods f( ) and g( ), and condition variables c1 and c2 used via c1.wait( ), c1.signal( ), c2.wait( )]
e.g., Monitor translation:
  <await Bi → Si>  becomes  while not Bi do ci.wait( ); Si;
  <Sj>             becomes  Sj;
Add ck.signal( ) or ck.signalAll( ) after Si (or Sj) if its execution can potentially change the value of Bk to TRUE.
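Java's synchronized methods with wait()/notifyAll() give a close, single-condition rendering of this monitor translation. The sketch below applies it to the readers/writers guards ⟨await nw = 0 → nr := nr + 1⟩ and ⟨await nr = 0 ∧ nw = 0 → nw := nw + 1⟩; because a Java object has only one condition queue, notifyAll() stands in for the per-condition signal. Class and method names are ours:

```java
// Monitor-style readers/writers derived from the GI
// (nr == 0 || nw == 0) && nw <= 1. Each guarded statement becomes
// "while (!B) wait(); S;", and statements that may enable a guard notify.
class RWMonitor {
    private int nr = 0, nw = 0;

    synchronized void startRead() throws InterruptedException {
        while (nw > 0) wait();      // <await nw == 0 -> nr++>
        nr++;
    }
    synchronized void endRead() {
        nr--;
        if (nr == 0) notifyAll();   // may make a writer's guard true
    }
    synchronized void startWrite() throws InterruptedException {
        while (nr > 0 || nw > 0) wait();  // <await nr == 0 && nw == 0 -> nw++>
        nw++;
    }
    synchronized void endWrite() {
        nw--;
        notifyAll();                // may wake readers or a writer
    }
    synchronized int readers() { return nr; }
}
```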
Aspect-Oriented Programming
• Properties to be implemented are classified into
  - Components, which can be clearly encapsulated in a generalized procedure (e.g., object, method, procedure)
  - Aspects, which cannot be clearly encapsulated in a generalized procedure (e.g., synchronization, distribution)
This approach reduces complexity and code tangling by separating cross-cutting concerns.
• We use
  - a scenario-based development methodology (i.e., RUP) for component code development
  - the GI approach for synchronization aspect code development
Rational Unified Process
[Diagram: use-case model (What) — actors and use-cases → analysis/design models (How) — classes and use-case realizations → implementation — component code]
Rational Unified Process (RUP)
[Diagram: use-case realizations (scenarios) drawn across the classes/objects they touch]
Automatic Call Distribution System
[Diagram: external lines and operator phones; a call may need to wait for a free operator]
Threads
• A scenario is translated into a sequence of method calls (a sequential program)
  - one program for the operator scenario
  - another for the external call scenario
• Instantiate one thread for each operator phone and each external line
An advantage of concurrent programs: each activity can be written as a sequential program.
Synchronization regions and clusters
• Synchronization region in a use-case realization: a region where
  - a thread waits for some event to occur or some state to hold, or
  - a thread may trigger an event or change a state for which a thread in some synchronization region is waiting
• Cluster: a partition of the synchronization regions based on their reference relations
Rational Unified Process with synchronization aspect code development
[Diagram: scenarios annotated with wait/wakeup points — identifying synchronization regions and clusters]
Our development methodology (RUP extended with synchronization aspect code development):
• From actors and use-cases, derive scenarios and classes, and from them the component code
• In the scenarios, identify synchronization regions in which synchronization is required
• Specify a global invariant (pattern) for each cluster
• Derive a coarse-grained solution, then fine-grained synchronization aspect code
• Weave the component code with the aspect code to obtain the complete code
Scenarios

External call scenario:
  When an external call arrives:
    record the arrival time
    wait until an operator becomes ready
    {connected to an operator} record information for the log
  When the external call terminates:
    {external call has terminated} wait until the operator hangs up

Operator scenario:
  When an operator becomes free:
    wait until an external call arrives
    {connected to an external call} record the connection time and operator's ID
  When the operator hangs up:
    {operator has hung up} wait until the external call terminates
    {external call has terminated} record the call termination time and log the record
Scenarios (with calls to the generated synchronization functions):

Operator scenario:
  When an operator becomes free:
    call the generated function
    {connected to an external call} record the connection time and operator's ID
  When the operator hangs up:
    call the generated function
    {external call has terminated} record the call termination time and log the record

External call scenario:
  When an external call arrives:
    record the arrival time
    call the generated function
    {connected to an operator} record information for the log
  When the external call terminates:
    call the generated function
Code Weaving Process
[Diagram: in a scenario's component code, the synchronization region R is bracketed by calls into the aspect-code instance M:
    a.fun( );
    y := y + 3;
    M.enter( );              // enter region R
    buf[head] := x;
    head := (head + 1) % N;
    M.exit( );               // exit region R
    x := x + 2;
    m.g( ); ]
Anonymous and Specific Synchronization
• Anonymous synchronization: among anonymous threads (i.e., any thread can synchronize with any other thread in the cluster)
  - a single instance of the fine-grained solution (synchronization code) is created, and all threads execute in that instance
• Specific synchronization: among a set of specific threads (called a group)
  - multiple instances of the fine-grained solution are created, one for each group; all threads in the same group use the instance assigned to it
Translation to Java Synchronized Blocks
• Review of Java synchronization primitives:
  - every object has a lock and one condition variable
  a. synchronized (obj) {
       // critical section; may call obj.wait( ), obj.notify( ), obj.notifyAll( )
     }
  b. type method(...) { synchronized (this) { body of method } }
     is equivalent to
     synchronized type method(...) { body of method }
• With one condition variable per object, all threads in the same cluster sleep in the same condition variable
Specific Notification (Cargill, 1996)
[Diagram: a standard Java monitor has one condition queue shared by wait( )/notify( )/notifyAll( ); with specific notification, each logical condition gets its own object (c1, c2), so code can call c1.wait( ), c1.notify( ), c2.wait( ) selectively]
Our Translation to Java Synchronized Blocks
• For each cluster, define one Java class
• For <await Bi → Si>:

    private Object oi = new Object( );

    public void methodi( ) {
        synchronized (oi) {
            while (!checkBi( ))
                try { oi.wait( ); } catch (InterruptedException e) { }
        }
    }

    private synchronized boolean checkBi( ) {
        if (Bi) { Si; return true; }
        else return false;
    }
• For <Sj>:

    public void methodj( ) {
        synchronized (this) { Sj; }
    }

• If execution of Si (or Sj) can potentially change some Bk to true, add
    synchronized (ok) { ok.notify( ); }   or
    synchronized (ok) { ok.notifyAll( ); }
• Correctness:
  - Bi, Si, and Sj are all executed in synchronized (this) blocks
  - the nesting level of synchronized blocks is at most two; the block on ok is always outside and the block on this inside
  - oi.wait( ) is executed within a sole synchronized block on oi
  - checking Bi and executing oi.wait( ) are protected inside the synchronized block on oi, so no notification is missed
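As a concrete, hand-instantiated example of this translation, suppose the cluster has one guarded statement ⟨await c < n → c := c + 1⟩ (enter a bounded region) and one unguarded statement ⟨c := c − 1⟩ (leave it, possibly enabling the guard). The generated class would look roughly like this; BoundedRegion and its member names are our own, not SyncGen output:

```java
// Hand-instantiated form of the slide's translation scheme for a bounded
// region: enter() is <await c < n -> c++>, exit() is <c--> followed by the
// notification on the guard's gate object.
class BoundedRegion {
    private final int n;
    private int c = 0;                       // threads currently in R
    private final Object o1 = new Object();  // gate object for the await

    BoundedRegion(int n) { this.n = n; }

    public void enter() throws InterruptedException {
        synchronized (o1) {                  // outer lock: the gate
            while (!tryEnter())
                o1.wait();
        }
    }

    private synchronized boolean tryEnter() { // checkB1: test guard, run S1
        if (c < n) { c++; return true; }
        return false;
    }

    public void exit() {                      // <S2>: c-- may enable B1
        synchronized (this) { c--; }
        synchronized (o1) { o1.notifyAll(); } // wake threads parked on the gate
    }

    public synchronized int occupancy() { return c; }
}
```

Note the lock order matches the slide's correctness argument: enter() holds o1 and then briefly this inside tryEnter(), never the reverse.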
Translation to Thread-Pool Model
• The thread-pool model
  - is widely used in web servers and embedded systems
  - the system maintains a pool (or pools) of job threads; when a request arrives, it is passed to a free job thread in the pool
• When an execution needs to block, the synchronization code releases and returns the executing thread to the thread pool, rather than blocking the thread
  - this is not a context switch, but a return from a function and a new call
• It is easy to remove blocked jobs from the system, since no threads are associated with blocked jobs
Job objects and thread pools
• A job object inherits GenericJob and implements execute( )
• ThreadPool maintains jobQueue (a queue of GenericJob) and provides two operations:
    void addJob(GenericJob job) { <jobQueue.enqueue(job)> }
    GenericJob getJob( ) {
        <await (not jobQueue.isEmpty( )) → return jobQueue.dequeue( )>
    }
• Job threads execute the following loop:
    while (true) { (threadPool.getJob( )).execute( ); }
[Diagram: jobQueue, with blocked jobs held at the <await>]
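The GenericJob/ThreadPool machinery above can be sketched in a few lines of Java; the blocking ⟨await⟩ in getJob() is rendered with wait()/notifyAll(). This is an illustrative sketch following the slide's interface, not the authors' actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal thread-pool core: jobs are queued, and job threads pull from
// getJob(), which blocks while the queue is empty.
interface GenericJob { void execute(); }

class ThreadPool {
    private final Queue<GenericJob> jobQueue = new ArrayDeque<>();

    public synchronized void addJob(GenericJob job) {
        jobQueue.add(job);          // <jobQueue.enqueue(job)>
        notifyAll();                // a waiting job thread may now proceed
    }

    public synchronized GenericJob getJob() throws InterruptedException {
        while (jobQueue.isEmpty())  // <await (not jobQueue.isEmpty())>
            wait();
        return jobQueue.remove();   // -> return jobQueue.dequeue()
    }

    public synchronized int pending() { return jobQueue.size(); }
}
```

A job thread then runs the slide's loop: `while (true) { threadPool.getJob().execute(); }`.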
Translation Algorithm
• Declare one monitor for each cluster
• For each <await Bi → Si>:
  - declare queuei (a queue of GenericJob)
  - declare gMethodi:
      boolean gMethodi(GenericJob job) {
          if (not Bi) { queuei.enqueue(job); return false; }
          Si; return true;
      }
• For each <Sj>:
  - declare ngMethodj:
      void ngMethodj( ) { Sj; }
• Add signalk( ) after Si wherever necessary:
      void signalk( ) {
          while ((not queuek.isEmpty( )) ∧ Bk) {
              Sk; threadPool.addJob(queuek.dequeue( ));
          }
      }
Job objects
• Inherit (or implement) GenericJob and override execute( )
• Divide the scenario into phases at each appearance of an <await> statement
• The body of execute( ) is:
      switch (phase) {
      case 0:
          // sequential code derived for phase 0 of the scenario
          if (not synchObject.gMethod1(this)) { phase = 1; return; }
      case 1:
          ...
      }
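The phase-splitting idea can be sketched as follows: execute() resumes at the recorded phase and returns early (the job is parked) when a guarded call would block. Here the blocking gMethod1(this) call is simulated by a plain boolean flag, so everything in this block is illustrative:

```java
// Phase-split job sketch: the switch falls through phases; a closed guard
// makes execute() record the next phase and return false (job parked).
// guardOpen stands in for a real gMethod1(this) call on the cluster monitor.
class PhasedJob {
    int phase = 0;
    boolean guardOpen = false;              // simulated guard B1
    StringBuilder log = new StringBuilder();

    // returns true when the job has run to completion
    boolean execute() {
        switch (phase) {
        case 0:
            log.append("phase0;");          // code for phase 0
            if (!guardOpen) { phase = 1; return false; }  // would block: park
            // fall through when the guard was already open
        case 1:
            log.append("phase1;");          // code for phase 1
        }
        return true;
    }
}
```

When the guard later becomes true, the monitor would re-submit the job to the pool and a job thread would call execute() again, which resumes directly at case 1.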
Performance Evaluation (Jacobi iteration, 1000×1000)
Linux running on Xeon processors
[Chart omitted]
Synchronization patterns
• One possible drawback of the GI approach is the difficulty of identifying an appropriate global invariant that correctly and accurately captures the safety property of the synchronization requirement
• To cope with this problem, we have developed a set of useful synchronization patterns and their solution invariants
• Bound(R, n): at most n threads can be in region R at a time
[Diagram: region R with entry (InR) and exit (OutR) points]
• K_MuTex(R1, R2, …, Rn, k): at most k threads in total can be in the n regions
• Exclusion(R1, R2, …, Rn): threads can be in at most one region out of the n regions at a time
  e.g., for n = 3, the mutually excluded pairs are Comb(3,2) = {(1,2), (1,3), (2,3)}
• Resource((RP, NP), (RC, NC), n): initially, there are n resource items in the resource pool
  - when a thread executes RP, it produces NP resource items
  - when a thread executes RC, it consumes NC items; if there are fewer than NC items in the pool, a thread trying to enter RC waits until there are at least NC items in the pool
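The counting behavior of the Resource pattern can be sketched as a small Java monitor, where produce() corresponds to executing RP and consume() to entering RC. The class and method names are illustrative, and notifyAll() stands in for the pattern's generated fine-grained code:

```java
// Sketch of Resource((RP, NP), (RC, NC), n): the pool starts with n items,
// producers add NP items, and consumers wait until at least NC items exist
// before removing them.
class ResourcePool {
    private int items;                       // items currently in the pool

    ResourcePool(int n) { items = n; }

    synchronized void produce(int np) {      // executing RP yields NP items
        items += np;
        notifyAll();                         // may enable waiting consumers
    }

    synchronized void consume(int nc) throws InterruptedException {
        while (items < nc)                   // <await items >= NC -> items -= NC>
            wait();
        items -= nc;
    }

    synchronized int available() { return items; }
}
```

Composing two such pools back-to-back (items and empty slots) gives exactly the bounded-buffer invariant shown on the composition slide.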
• Barrier((R1, N1), (R2, N2), …, (Rn, Nn)): all Ni threads in Ri (1 ≤ i ≤ n) meet, form a group, and leave their respective regions together
• AsymBarrier: asymmetric version of Barrier, where entries of threads into one set of regions trigger departures of threads from another set of regions
  e.g., AsymBarrier with (R1, 2) and (R2, 3)
Composition of patterns
1. Composition of invariants
• Readers/Writers problem:
    Bound(RW, 1) ∧ Exclusion(RW, RR)
• Producers/Consumers with an n-bounded buffer:
    Resource((RP, 1), (RC, 1), 0) ∧ Resource((RC, 1), (RP, 1), n) ∧ Bound(RP, 1) ∧ Bound(RC, 1)
• Search, insert, and delete: three kinds of threads share access to a singly linked list: searchers, inserters, and deleters
  - searchers can execute concurrently with other searchers
  - inserters must be mutually exclusive with other inserters
  - one inserter can proceed in parallel with multiple searchers
  - at most one deleter can access the list, and it must be mutually exclusive with searchers and inserters
  Invariant: Bound(RI, 1) ∧ Bound(RD, 1) ∧ Exclusion(RD, RS) ∧ Exclusion(RD, RI)
2. Composition of sub-clusters
• Readers/Writers problem
[Diagram: two alternative sub-cluster decompositions, (1) and (2), each combining Bound on RW with Exclusion between RW and RR]
Information Exchanging Barrier((R1, N1), (R2, N2), …, (RN, NN))
[Diagram: composed from Bound(Ri, Ni) for each region, a Barrier((R1, N1), …, (RN, NN)) before the write phase, and a second Barrier((R1, N1), …, (RN, NN)) before the read phase]