Scalable Synchronous Queues By William N. Scherer III, Doug Lea, and Michael L. Scott Presented by Ran Isenberg
Introduction • Queues are a way for different threads to communicate and exchange information. • In a thread-safe, concurrent, asynchronous queue, consumers typically wait for producers to make data available. • In a synchronous queue, producers similarly wait for consumers to take the data, so the two threads "pair up" for the hand-off.
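To make the hand-off concrete, here is a minimal usage example of java.util.concurrent.SynchronousQueue, the library class these algorithms later became part of: put blocks until a consumer arrives to take, and take blocks until a producer arrives with an item.

```java
import java.util.concurrent.SynchronousQueue;

public class HandOffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>(true); // true = fair mode

        Thread consumer = new Thread(() -> {
            try {
                // take() blocks until a producer arrives with an item
                System.out.println("got: " + q.take());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // put() blocks until a consumer is ready to take -- the "pair up"
        q.put("hello");
        consumer.join();
    }
}
```

The boolean constructor argument selects fair mode (queue-based); false selects the unfair, stack-based mode discussed later.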
Hanson’s Pros & Cons Pros: • High design-level tractability. • Uses semaphores to target wakeups at only the single producer or consumer thread that an operation has unblocked. Cons: • May fall victim to priority inversion. • Poor performance: employs three separate blocking semaphores (locks). Possible solution? Use non-blocking synchronization!
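To see where the cost of three blocking semaphores comes from, here is a minimal sketch in the spirit of Hanson's design; the class and semaphore names are illustrative assumptions, not the paper's exact code.

```java
import java.util.concurrent.Semaphore;

// A sketch in the spirit of Hanson's design (illustrative, not the paper's
// exact code): one item slot guarded by three blocking semaphores.
public class HansonStyleQueue<E> {
    private E item;
    private final Semaphore send = new Semaphore(1); // admits one producer at a time
    private final Semaphore recv = new Semaphore(0); // signals "item is ready"
    private final Semaphore ack  = new Semaphore(0); // signals "item was taken"

    public void put(E x) {
        send.acquireUninterruptibly(); // wait for exclusive use of the slot
        item = x;
        recv.release();                // wake exactly one consumer
        ack.acquireUninterruptibly();  // wait until that consumer has read the item
        send.release();                // admit the next producer
    }

    public E take() {
        recv.acquireUninterruptibly(); // wait for a producer's item
        E x = item;
        ack.release();                 // tell the producer the hand-off is complete
        return x;
    }
}
```

A single transfer through this sketch performs six semaphore operations (four in put, two in take), which is the cost figure the Java 5 comparison below refers to.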
Java 5.0 synchronous queue The Java SE 5.0 synchronous queue uses a pair of queues (in fair mode; stacks in unfair mode) to separately hold waiting producers and consumers.
Java 5 Version – Pros & Cons Pros • Uses a pair of queues to separately hold waiting producers and consumers. • Allows producers to publish data items as they arrive instead of having to first awaken after blocking on a semaphore; consumers need not wait. • One transfer (put & take) requires only three synchronization operations, compared to the six incurred by Hanson’s. Cons: • Relies on one lock to protect access to both queues, which creates a narrow bottleneck at runtime.
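For contrast with Hanson's version, here is a simplified sketch of the Java 5 structure in fair mode, assuming illustrative names and simple condition-variable waiting (the real library code differs in detail): two FIFO queues of waiters, both guarded by a single lock, which is exactly the bottleneck noted above.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of a Java 5-style synchronous queue (fair mode):
// two FIFO queues of waiting threads, both guarded by one global lock.
// Items are assumed non-null (null is used as a hand-off sentinel here).
public class LockBasedSyncQueue<E> {
    private static final class Waiter<E> { E item; boolean filled; }

    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition changed = lock.newCondition();
    private final Queue<Waiter<E>> waitingProducers = new ArrayDeque<>();
    private final Queue<Waiter<E>> waitingConsumers = new ArrayDeque<>();

    public void put(E x) throws InterruptedException {
        lock.lock();
        try {
            Waiter<E> c = waitingConsumers.poll();
            if (c != null) {              // a consumer is already waiting: hand off
                c.item = x;
                c.filled = true;
                changed.signalAll();
                return;
            }
            Waiter<E> me = new Waiter<>();
            me.item = x;
            waitingProducers.add(me);     // otherwise wait in line for a consumer
            while (me.item != null) changed.await();
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            Waiter<E> p = waitingProducers.poll();
            if (p != null) {              // a producer is already waiting: take its item
                E x = p.item;
                p.item = null;            // tells the producer it has been matched
                changed.signalAll();
                return x;
            }
            Waiter<E> me = new Waiter<>();
            waitingConsumers.add(me);     // otherwise wait in line for a producer
            while (!me.filled) changed.await();
            return me.item;
        } finally {
            lock.unlock();
        }
    }
}
```

Every put and take, whether it hands off immediately or waits, must pass through the one ReentrantLock, so all threads serialize on it.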
Non-Blocking Synchronization • Non-blocking concurrent objects avoid mutual exclusion and locks. • Instead, they use atomic CAS (compare-and-swap) operations to make sure the object’s invariants still hold afterwards. Totalized operations: • Operations that don’t block and instead return a failure code when their preconditions aren’t met. • Example: dequeuing from an empty queue will not cause the thread to block, but will return a failure code instead.
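A small example of a totalized, CAS-based operation, using a Treiber-style stack rather than a queue to keep it short: pop never blocks on an empty structure, it simply reports failure by returning null.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative example of a totalized, non-blocking operation: a Treiber-style
// stack whose pop never blocks -- on an empty stack it returns null (failure).
public class TotalizedStack<E> {
    private static final class Node<E> {
        final E value; final Node<E> next;
        Node(E value, Node<E> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    public void push(E x) {
        Node<E> head;
        do {
            head = top.get();
        } while (!top.compareAndSet(head, new Node<>(x, head))); // retry on contention
    }

    public E pop() {
        Node<E> head;
        do {
            head = top.get();
            if (head == null) return null;   // totalized: report failure, don't block
        } while (!top.compareAndSet(head, head.next));
        return head.value;
    }
}
```

When pop fails, the caller is left to retry in its own loop, with no ordering among the retrying threads; that is the gap the next slides address.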
Non-Blocking Synchronization (Cont.) With totalized operations, what isn’t promised to any individual thread? There is no guarantee that its retries will ever succeed: a thread that keeps finding the precondition false simply retries, and nothing orders it ahead of newer arrivals, so it can be bypassed indefinitely. So, how can we fix this?
Dual Data Structures • We could register a request (reservation) for a hand-off partner. • The reservation is made in a non-blocking manner. • The waiter detects that its reservation has been fulfilled by spinning until a pointer in the reservation changes. • The structure may contain both data and reservations.
Dual Data Structures (Cont.) Each formerly totalized operation is now split into two partial methods: reserve and follow-up. Main advantage over totalized operations: the order of reservations is preserved.
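A schematic of the reserve / follow-up split, assuming an illustrative Reservation type (not taken from the paper): the reservation publishes an empty slot, the waiter's follow-up spins until the slot changes from null to non-null, and a fulfilling thread completes the hand-off with a CAS.

```java
import java.util.concurrent.atomic.AtomicReference;

// Schematic of a reservation and its two-phase use (names are illustrative).
public class Reservation<E> {
    private final AtomicReference<E> slot = new AtomicReference<>();

    // Phase 1: register the request (non-blocking). In the real structures
    // this node would also be linked into the queue or stack.
    public static <E> Reservation<E> reserve() {
        return new Reservation<>();
    }

    // Phase 2: follow up by spinning until the pointer changes
    // from null to non-null.
    public E followUp() {
        E x;
        while ((x = slot.get()) == null) {
            Thread.onSpinWait();            // busy-wait, as in the paper's algorithms
        }
        return x;
    }

    // Called by the fulfilling thread (e.g. a producer) to complete the hand-off.
    public boolean fulfill(E x) {
        return slot.compareAndSet(null, x);
    }
}
```

In the synchronous dual queue and dual stack, this slot lives inside the linked-list node itself, so reservations keep their place in line.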
Main Properties The algorithms I will present are: • Contention-free (each waiter spins on its own node). • Lock-free. • Scalable. • Available in fair & unfair implementations. • Have well-defined linearization points. • Waiting is done by spinning (busy waiting).
Synchronous Dual Queue • Fair implementation. • A linked list with head & tail pointers. • Waiting is accomplished by spinning until a pointer changes from null to non-null. • If not empty, it always has a dummy node at the front, and all remaining nodes are of the same type: either reservations or data. • Dequeue & enqueue are symmetrical except for the direction of the data flow. • Let’s take a look at the enqueue operation.
Synchronous Dual Queue (Cont.) • First case: the queue is empty or contains only data. Add a new data node at the end of the queue and spin. • Second case (the interesting one): the queue contains only reservations. Fulfill the reservation at the front instead of appending a node.
Synchronous Dual Queue (Cont.) Second case code:
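A hedged sketch of this second case, with illustrative field and method names rather than the paper's exact code: the producer CASes its datum into the item slot of the reservation at the front, then helps swing head past the fulfilled node; if the CAS fails, another producer won the race and the enqueue retries.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch of the enqueue's "second case" (queue holds only reservations).
// Field and method names are illustrative, not the paper's exact code.
public class DualQueueSketch<E> {
    static final class Node {
        final boolean isData;                          // data node vs. reservation node
        final AtomicReference<Object> item;            // reservation: null until fulfilled
        final AtomicReference<Node> next = new AtomicReference<>();
        Node(Object item, boolean isData) {
            this.isData = isData;
            this.item = new AtomicReference<>(item);
        }
    }

    private final AtomicReference<Node> head;
    private final AtomicReference<Node> tail;

    public DualQueueSketch() {
        Node dummy = new Node(null, true);             // dummy node, as on the slide
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    // Second case of enqueue: the node after the dummy is a reservation,
    // so instead of appending data we fulfill the oldest reservation.
    boolean fulfillReservation(E e) {
        Node h = head.get();
        Node first = h.next.get();
        if (first == null || first.isData) return false;        // not this case
        boolean fulfilled = first.item.compareAndSet(null, e);  // hand off the datum
        head.compareAndSet(h, first);                  // help swing head past the node
        return fulfilled;                              // false: another producer won, retry
    }
}
```

The spinning consumer whose reservation was just fulfilled sees its item pointer change from null to non-null and returns with the datum.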
Synchronous Dual Stack • A singly linked list with only a head (top) pointer. • Not fair. • Doesn’t have a dummy node at the top. • May contain either data or reservation nodes, but also, temporarily, a single node of the opposite type at the head. • Push & pop are symmetrical except for the direction of the data flow. • Let’s take a look at the push operation.
Synchronous Dual Stack (Cont.) • First case: Stack is empty or contains only data. Add a new data node at the head of the stack and spin. • Second case (interesting): Stack contains reservation at the head. Add a new data (“fulfilling”) node at the head, find a reservation to fulfill and remove both nodes. • Third case: A fulfilling node at the head. Help the fulfillment process.
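A hedged sketch of the second and third cases of push, again with illustrative names rather than the paper's exact code: a fulfilling node is pushed on top of the reservation, the datum is handed over with a CAS, and both nodes are popped together; any thread that finds a fulfilling node on top can run the same helping step.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch of the dual stack's push when a reservation is on top
// (second case) and of helping a fulfiller (third case). Names are illustrative.
public class DualStackSketch<E> {
    static final int DATA = 0, REQUEST = 1, FULFILLING = 2;

    static final class Node {
        final int mode;
        final AtomicReference<Object> item;            // REQUEST: null until fulfilled
        final Node below;                              // next node down the stack
        Node(int mode, Object item, Node below) {
            this.mode = mode;
            this.item = new AtomicReference<>(item);
            this.below = below;
        }
    }

    private final AtomicReference<Node> top = new AtomicReference<>();

    // Second case: a reservation sits at the top of the stack.
    boolean pushFulfilling(E e) {
        Node t = top.get();
        if (t == null || t.mode != REQUEST) return false;    // not this case
        Node f = new Node(FULFILLING, e, t);                  // temporary node of opposite type
        if (!top.compareAndSet(t, f)) return false;           // top changed; caller retries
        helpFulfill(f);
        return true;
    }

    // Third case: any thread that sees a fulfilling node on top helps it finish,
    // which keeps the structure lock-free.
    void helpFulfill(Node f) {
        Node reservation = f.below;
        reservation.item.compareAndSet(null, f.item.get());   // hand the datum to the waiter
        top.compareAndSet(f, reservation.below);              // pop both nodes together
    }
}
```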
Conclusion • I presented two non-blocking synchronous queues that use dual data structures: a fair version (the dual queue) and an unfair version (the dual stack). • The algorithms are all lock-free. • There is little performance cost for fairness. • High scalability can be achieved by using synchronization methods that don’t use locks. • The algorithms are part of Java 6’s java.util.concurrent package (SynchronousQueue).