
E81 CSE 532S: Advanced Multi-Paradigm Software Development






Presentation Transcript


  1. E81 CSE 532S: Advanced Multi-Paradigm Software Development Half-Sync/Half-Async (HSHA) and Leader/Followers (LF) Patterns Venkita Subramonian, Chris Gill, Nick Haddad, and Steve Donahue Department of Computer Science and Engineering Washington University, St. Louis cdgill@cse.wustl.edu

  2. HSHA and LF Patterns • Both are (architectural) concurrency patterns • Both decouple asynchronous and synchronous processing • Synchronous layer may reduce programming complexity and overhead (e.g., fewer context switches, cache misses) • Asynchronous layer improves responsiveness to events • Key differences • HSHA dedicates one thread to handle input and other threads to do the work • LF lets threads take turns rotating between those roles

  3. HSHA and LF Context • A concurrent system with asynchronous events • Performs both synchronous and asynchronous services • Synchronous: copy data from one container to another • Asynchronous: socket or input stream reads and writes • Both kinds of services must interact • Threads share event sources • A thread may handle events for other threads • A thread may also handle its own events • Efficient processing of events is important

  4. HSHA and LF Design Problem • Asynchronous processing to make a system responsive • E.g., dedicated input thread (or reactive socket handling) • Services may map directly to asynchronous mechanisms • E.g., hardware interrupts, signals, asynchronous I/O, etc. • Synchronous processing may be simpler, easier to design, implement, and (especially) debug • How to bring these paradigms together? • How to achieve high performance multi-threading? • Service requests arrive from many sources • Concurrency overhead must be minimized • Context switches, locking and unlocking, copying, etc. • Threads must avoid race conditions for event sources • And for messages, buffers, and other event processing artifacts

  5. HSHA Solution • Decompose the architecture into two service layers • Asynchronous layer (event notification; short-duration operations) • Synchronous layer (computation, filtering, classification; long-duration operations) • Add a queueing layer between them to facilitate communication: the asynchronous layer puts events onto the queue, and the synchronous layer gets them from it (a minimal sketch of the queueing layer follows)
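The queueing layer between the two halves is essentially a monitor object. Below is a minimal sketch of such a queue in C++; the MessageQueue and Event names are illustrative placeholders, not the course's ACE-based code. The asynchronous layer calls put() to enqueue an event, and synchronous worker threads block in get() until work is available.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Illustrative event type handed from the asynchronous to the synchronous layer.
struct Event { std::string payload; };

// Monitor-style queueing layer between the asynchronous and synchronous layers.
class MessageQueue {
public:
    void put(Event e) {                       // called by the asynchronous layer
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(e));
        cv_.notify_one();                     // wake one synchronous worker
    }
    Event get() {                             // called by synchronous workers
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Event e = std::move(q_.front());
        q_.pop();
        return e;                             // long-duration work happens outside the lock
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Event> q_;
};

Because workers only block on the queue, the synchronous layer stays simple sequential code, while the event demultiplexing complexity stays on the asynchronous side.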

  6. LF Solution • Threads in a pool take turns accessing event sources • Waiting threads (followers) queue up for work • The thread whose turn it is acts as the “leader” • May dispatch events to other appropriate (passive) objects • May hand off events to other threads (e.g., active objects), i.e., “sorting the mail” until it finds its own work to do • The leader thread eventually takes an event and processes it • At which point it leaves the event source and … • … when it’s done processing becomes a follower again … • … but meanwhile, another thread becomes leader • Protocol for choosing and activating a new leader • Can be sophisticated (queue with a condition variable) or simple (a mutex) • Should be appropriately efficient (a minimal sketch of the simple variant follows)
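Below is a minimal sketch of the simple, mutex-based promotion protocol, assuming an illustrative EventSource type and a placeholder process() function. Whichever thread holds the leader mutex is the leader and is the only thread demultiplexing events; releasing the mutex promotes a waiting follower while the old leader processes its event.

#include <mutex>
#include <thread>
#include <vector>

// Placeholder event source; a real one would wrap select()/poll() or a reactor.
struct EventSource {
    int wait_for_event() { return 0; }        // blocks until an event arrives
};

void process(int event) { /* synchronous, possibly long-running work */ }

void lf_worker(EventSource& source, std::mutex& leader_lock) {
    for (;;) {
        int event;
        {
            // Followers block here; the thread that acquires the lock is the leader.
            std::lock_guard<std::mutex> leader(leader_lock);
            event = source.wait_for_event();  // only the leader touches the event source
        }   // releasing the lock promotes a follower to be the new leader ...
        process(event);                       // ... while this thread processes concurrently
    }
}

int main() {
    EventSource source;
    std::mutex leader_lock;
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(lf_worker, std::ref(source), std::ref(leader_lock));
    for (auto& t : pool) t.join();            // the sketch loops forever; join never returns
}

Note that no event is copied between threads: the thread that detects an event is the thread that processes it, which is the main efficiency argument for LF over the Half-Sync/Half-Reactive variant on the next slides.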

  7. Example: “Half-Sync/Half-Reactive” • HSHA Variant • Notice the asynchronous, queueing, and synchronous layers (as in HSHA) • Generalizes the dedicated input thread to the multi-connection case • Easy to multi-thread the synchronous layer, as sketched below
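A minimal sketch of this variant, reusing the MessageQueue and Event types from the sketch under slide 5. It assumes the pollfd set has already been populated (with events = POLLIN) for the open connections; socket setup and error handling are omitted. One reactive I/O thread multiplexes all connections and enqueues data, and any number of worker threads drain the queue.

#include <poll.h>
#include <unistd.h>
#include <string>
#include <vector>

// Reactive (asynchronous) layer: one dedicated thread multiplexes all connections.
void reactive_io_thread(std::vector<pollfd> fds, MessageQueue& queue) {
    for (;;) {
        ::poll(fds.data(), fds.size(), -1);              // wait on all connections at once
        for (pollfd& p : fds) {
            if (p.revents & POLLIN) {
                char buf[512];
                ssize_t n = ::read(p.fd, buf, sizeof buf);
                if (n > 0)
                    queue.put(Event{std::string(buf, static_cast<size_t>(n))});  // hand off to workers
            }
        }
    }
}

// Synchronous layer: worker threads process requests sequentially.
void worker_thread(MessageQueue& queue) {
    for (;;) {
        Event e = queue.get();                           // blocks until the I/O thread enqueues work
        // ... synchronous application processing of e.payload ...
    }
}

Compare this with the LF sketch above: here every request crosses a thread boundary through the queue, which is exactly the data-passing and context-switch overhead the next slide enumerates.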

  8. Limitations of the HS/HR Approach • Data passing overhead • I/O thread to Worker thread • Dynamic memory allocation • Synchronization operations • Blocking factors • Context switches • May see unnecessary latency • Best case and common case may differ if work is heterogeneous • May suffer more cache misses • Due to multiple threads touching the same data

  9. Example Revised: Applying LF Pattern • Allocate a pool of application threads, as with HSHR • However, don’t separate the synchronous/reactive layers • Allow threads from the pool down into the reactor • Threads from the pool take turns being the leader • The leader gets an event and looks at its destination info (ACT, handle, etc.) • If the event is for the leader, it just handles it: the leader leaves the reactive layer, the threads “elect” a new leader (the new leader enters the reactor; the other threads block on CV(s)), and the old leader processes the event • Otherwise, the leader queues the event and notifies the queue CV(s); the owning thread(s) wake up, access the queue, get their events, and handle them (see the sketch below)
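A minimal sketch of that dispatch decision, under some loudly labeled assumptions: this is a separate illustrative Event type whose owner field stands in for the ACT/handle destination info, each worker thread has a pre-registered Mailbox in a shared map, and promote_new_leader() and handle() are placeholders for the promotion protocol and the application handler.

#include <condition_variable>
#include <deque>
#include <map>
#include <mutex>
#include <thread>

// Illustrative event type; the owner id plays the role of the ACT/handle.
struct Event { std::thread::id owner; int data; };

// Per-thread mailbox: a queue of handed-off events plus a CV to wake the owner.
struct Mailbox {
    std::mutex m;
    std::condition_variable cv;
    std::deque<Event> events;
};

void leader_dispatch(Event e,
                     std::map<std::thread::id, Mailbox>& mailboxes,  // pre-populated, one per worker
                     void (*promote_new_leader)(),
                     void (*handle)(const Event&)) {
    if (e.owner == std::this_thread::get_id()) {
        promote_new_leader();        // leave the reactive layer; a follower becomes leader
        handle(e);                   // process our own event synchronously
    } else {
        Mailbox& mb = mailboxes.at(e.owner);
        {
            std::lock_guard<std::mutex> lk(mb.m);
            mb.events.push_back(e);  // queue the event for its owning thread
        }
        mb.cv.notify_one();          // wake the owner; this thread stays leader, "sorting the mail"
    }
}

Once woken, the owning thread drains its mailbox and processes its events outside the reactive layer, exactly as the bullet above describes.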
