Liveness and Performance Issues • Deadlock • Starvation • Livelock • Lock contention • Lock-free data structures
Deadlock • Dining Philosophers • Wait-for graph • If the wait-for graph contains a cycle, there is a deadlock • Potential deadlock (a static property) vs. actual deadlock (a dynamic property)
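A minimal illustration (not from the slides; all names are invented) of how a wait-for cycle arises with intrinsic locks: two threads acquire the same two locks in opposite orders, and each ends up waiting for the other.

    public class DeadlockDemo {
        private final Object left = new Object();
        private final Object right = new Object();

        // Thread 1 runs this: acquires left, then right
        public void leftRight() {
            synchronized (left) {
                synchronized (right) { /* work */ }
            }
        }

        // Thread 2 runs this: acquires right, then left.
        // If each thread holds its first lock, the wait-for graph has a cycle.
        public void rightLeft() {
            synchronized (right) {
                synchronized (left) { /* work */ }
            }
        }
    }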
Avoiding Deadlocks by Choosing Lock Ordering • “A program will be free of lock-ordering deadlocks if all threads acquire the locks they need in a fixed global order” • Needs a precise definition of “fixed global order” • Consider dining philosophers • 0 before 1 • 1 before 2 • 2 before 3 • 3 before 4 • 4 before 0 • Seems to be a “fixed global order” but allows deadlock • In discrete math this would not be considered an “order” because it is cyclic
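A hedged sketch (class and field names invented) of a genuinely acyclic ordering for the philosophers: number the forks and always acquire the lower-numbered fork first, so no waiting cycle can form.

    public class Philosopher implements Runnable {
        private final Object[] forks;   // shared array of fork locks
        private final int seat;         // this philosopher's position at the table

        public Philosopher(Object[] forks, int seat) {
            this.forks = forks;
            this.seat = seat;
        }

        @Override
        public void run() {
            int leftFork = seat;
            int rightFork = (seat + 1) % forks.length;
            // Acquire the lower-numbered fork first: a true (acyclic) global order
            int first = Math.min(leftFork, rightFork);
            int second = Math.max(leftFork, rightFork);
            synchronized (forks[first]) {
                synchronized (forks[second]) {
                    // eat
                }
            }
        }
    }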
Programming Strategies to Avoid Deadlock • For intrinsic locks, deadlocks are associated with nested synchronized blocks • Avoid them when possible • E.g. what we did to fix Vector.add for the exam • Difficult to analyze when calling alien methods • Is the alien method synchronized or not? Hard to tell; it is not part of the formal interface • Use open calls – calls made while holding no locks – when calling an alien method (see the sketch below)
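A hedged sketch of an open call (the Dispatcher and Listener names are invented): instead of invoking the alien method while holding the lock, copy what is needed under the lock and make the call after releasing it.

    public class Dispatcher {
        private final Object lock = new Object();
        private Listener listener;   // alien object; may synchronize internally
        private String state;

        // Risky: holds our lock while calling into alien code
        public void updateAndNotifyNested(String newState) {
            synchronized (lock) {
                state = newState;
                listener.stateChanged(newState);   // nested locking possible
            }
        }

        // Open call: mutate our own state under the lock, call out without it
        public void updateAndNotifyOpen(String newState) {
            Listener l;
            synchronized (lock) {
                state = newState;
                l = listener;
            }
            if (l != null) {
                l.stateChanged(newState);   // no locks held here
            }
        }

        public interface Listener {
            void stateChanged(String newState);
        }
    }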
Programming Strategies (2) • Induced lock ordering • Apologies to Nick T. • System.identityHashCode(o) – based on the address of an object
    int h1 = System.identityHashCode(o1);
    int h2 = System.identityHashCode(o2);
    if (h1 < h2) {
        synchronized (o1) { synchronized (o2) { … } }
    } else if (h2 < h1) {
        synchronized (o2) { synchronized (o1) { … } }
    } else {
        // bad luck! the identity hashes collide, so fall back to a tie-breaking lock
        synchronized (tieLock) { synchronized (o1) { synchronized (o2) { … } } }
    }
• Unappealing or unworkable for large object collections
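A fuller, hedged sketch of the same idea (the Account class and transfer method are illustrative, not from the slides): order the two locks by identity hash code, and serialize the rare hash-collision case on a shared tie-breaking lock.

    public class InducedOrderTransfer {
        private static final Object tieLock = new Object();

        public static class Account {
            long balance;
        }

        public static void transfer(Account from, Account to, long amount) {
            int fromHash = System.identityHashCode(from);
            int toHash = System.identityHashCode(to);

            if (fromHash < toHash) {
                synchronized (from) {
                    synchronized (to) { doTransfer(from, to, amount); }
                }
            } else if (fromHash > toHash) {
                synchronized (to) {
                    synchronized (from) { doTransfer(from, to, amount); }
                }
            } else {
                // Identity hashes collided: all such transfers go through tieLock,
                // so no two threads can nest these locks in opposite orders
                synchronized (tieLock) {
                    synchronized (from) {
                        synchronized (to) { doTransfer(from, to, amount); }
                    }
                }
            }
        }

        private static void doTransfer(Account from, Account to, long amount) {
            from.balance -= amount;
            to.balance += amount;
        }
    }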
Strategies (3): java.util.concurrent.locks.Lock – method summary
• void lock() – Acquires the lock.
• void lockInterruptibly() – Acquires the lock unless the current thread is interrupted.
• boolean tryLock() – Acquires the lock only if it is free at the time of invocation.
• boolean tryLock(long time, TimeUnit unit) – Acquires the lock if it is free within the given waiting time and the current thread has not been interrupted.
• void unlock() – Releases the lock.
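The standard usage idiom for these methods, sketched here with an invented counter class: unlock() goes in a finally block so the lock is released even if the guarded code throws.

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockedCounter {
        private final Lock lock = new ReentrantLock();
        private long count;

        public void increment() {
            lock.lock();
            try {
                count++;          // critical section
            } finally {
                lock.unlock();    // always released, even on exception
            }
        }
    }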
On using timeouts • The Lock classes allow lock acquisition to be done interruptibly and with a timeout • wait() and its java.util.concurrent.locks analog, Condition.await(), also allow timeouts • Using Lock objects instead of intrinsic synchronization can help avoid system-wide catatonia; however: • Choosing appropriate timeout values can be difficult • The system can become timeout-driven (i.e. crawl)
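A hedged sketch (class, fields, and timeout values all invented) of using a timed tryLock to sidestep lock-ordering deadlock: if the second lock cannot be acquired in time, release the first, back off briefly, and retry until an overall deadline passes.

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class TimedLockPair {
        private final Lock first = new ReentrantLock();
        private final Lock second = new ReentrantLock();

        public boolean doBoth(long timeoutMillis) throws InterruptedException {
            long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
            while (System.nanoTime() < deadline) {
                if (first.tryLock(10, TimeUnit.MILLISECONDS)) {
                    try {
                        if (second.tryLock(10, TimeUnit.MILLISECONDS)) {
                            try {
                                // both locks held: do the combined work here
                                return true;
                            } finally {
                                second.unlock();
                            }
                        }
                    } finally {
                        first.unlock();
                    }
                }
                // Could not get both locks: back off a random amount and retry
                Thread.sleep(ThreadLocalRandom.current().nextInt(1, 5));
            }
            return false;   // give up once the overall deadline passes
        }
    }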
Locks are not the only problem • More generally, the problem is access to resources • Locks • Resource pools – connections, threads • Computational results from a Future • Process priority, if the scheduler operates under a “run the highest-priority process” rule • Different potential for deadlock on uniprocessors and multiprocessors
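A hedged illustration (not from the slides) of a resource deadlock that involves no locks at all: in a single-threaded executor, a task that blocks on the Future of a task it just submitted can never complete, because the only worker thread is busy waiting.

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ThreadStarvationDeadlock {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<String> outer = pool.submit(() -> {
                Future<String> inner = pool.submit(() -> "inner result");
                // The single worker thread blocks here, so "inner" can never run
                return inner.get();
            });
            System.out.println(outer.get());   // never returns
        }
    }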
Starvation • Even if no deadlock occurs, a thread’s work may not actually get done • The CPU or another resource is continually granted to some other thread • Use of priorities is often the source of the problem • In Java, priorities are only platform-specific hints • In other systems they may actually mean something specific • Either is a danger
The “stable priority inversion” problem • Thread A, at high priority, is waiting for a result or resource from thread C, at low priority • Thread B, at intermediate priority, is CPU-bound • Thread C never runs, hence thread A never runs • A Mars-lander mission was nearly lost to this problem • Fix (priority inheritance): when a high-priority thread waits for a low-priority thread, boost the priority of the low-priority thread
Livelock • Several threads spend all their time reacting to one another and trying to synchronize instead of getting any work done • “After you.” “No, after you.” • Easy solution: introduce some randomness into the retry (see the sketch below)
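A hedged sketch of the randomness fix (all names invented): workers that retry a failed step immediately can keep colliding in lockstep forever; sleeping for a small random interval before retrying breaks the symmetry.

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.function.BooleanSupplier;

    public class RetryWithBackoff {
        // Retries an action that can fail transiently (e.g. losing a tryLock race).
        // The random pause keeps competing threads from retrying in lockstep,
        // which is what turns repeated collisions into livelock.
        public static boolean retry(BooleanSupplier attempt, int maxTries)
                throws InterruptedException {
            for (int i = 0; i < maxTries; i++) {
                if (attempt.getAsBoolean()) {
                    return true;
                }
                Thread.sleep(ThreadLocalRandom.current().nextLong(1, 20));
            }
            return false;
        }
    }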
Performance Issues • Uncontended locking is typically not a big performance penalty • Contended locking • Hurts scalability: the ability to add more resources to improve performance or solve bigger problems • Hurts performance: an increasing amount of time is spent executing the locking code itself • Reducing contention • Reduce the duration for which locks are held (get in, get out) • Reduce the frequency at which locks are requested (lock splitting and striping – see the sketch below) • Replace exclusive locks with coordination mechanisms that allow greater concurrency (reader-writer locks are a simple example) • But first make it right, and only then make it fast.
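A hedged sketch of lock striping (all names and the stripe count are invented): split one counter into several independently locked stripes so that concurrent updates usually hit different locks, while an exact total takes every stripe lock.

    public class StripedCounter {
        private static final int STRIPES = 16;
        private final Object[] locks = new Object[STRIPES];
        private final long[] counts = new long[STRIPES];

        public StripedCounter() {
            for (int i = 0; i < STRIPES; i++) {
                locks[i] = new Object();
            }
        }

        private int stripeFor(Object key) {
            return Math.floorMod(key.hashCode(), STRIPES);
        }

        // Updates for keys that map to different stripes never contend
        public void increment(Object key) {
            int s = stripeFor(key);
            synchronized (locks[s]) {
                counts[s]++;
            }
        }

        // A global view must take every stripe lock (briefly, one at a time)
        public long total() {
            long sum = 0;
            for (int i = 0; i < STRIPES; i++) {
                synchronized (locks[i]) {
                    sum += counts[i];
                }
            }
            return sum;
        }
    }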
Or don’t use locking at all! • The last ten years have seen research into so-called “lock-free” data structures • These use the low-level atomic operations typically used to implement locks to manipulate the data structure directly, without ever holding a lock.
Processor Atomic Operations • All of the steps in each operation below are performed atomically by the processor. • Compare-and-swap definition:
    boolean CAS(word *address, word old, word new) {
        if (*address == old) {
            *address = new;
            return true;
        } else {
            return false;
        }
    }
• Fetch-and-add definition:
    void FAA(word *address, word n) {
        *address += n;
    }
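In Java these primitives surface through the java.util.concurrent.atomic classes; a brief sketch (the wrapper class and its method names are invented, while compareAndSet and getAndAdd are the real AtomicInteger operations):

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicDemo {
        private final AtomicInteger value = new AtomicInteger(0);

        // CAS: succeeds only if the current value still equals the expected one
        public boolean casFromZeroTo(int newValue) {
            return value.compareAndSet(0, newValue);
        }

        // FAA: atomically adds a delta and returns the previous value
        public int fetchAndAdd(int delta) {
            return value.getAndAdd(delta);
        }
    }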
Implementing an intrinsic lock acquire with CAS
    while (true) {
        // Spin until we hold the object's low-level spin lock (slock)
        while (!CAS(&o.slock, false, true)) {}
        if (o.lockOwner == currentThreadId) {
            o.lockCount++;                     // reentrant acquisition
            break;
        } else if (o.lockOwner == null) {
            o.lockCount = 1;                   // monitor was free: take ownership
            o.lockOwner = currentThreadId;
            break;
        } else {
            // Another thread owns the monitor: block until it is released.
            // enqueueAndSleep is assumed to release o.slock atomically with
            // putting the thread to sleep; otherwise no other thread could
            // ever update the monitor state.
            Scheduler.enqueueAndSleep(o);
        }
    }
    o.slock = false;   // release the spin lock; the monitor itself stays held
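To connect back to the “lock-free data structures” bullet, here is a hedged sketch (not from the original slides) of the classic Treiber stack: push and pop retry a CAS on the head reference instead of ever taking a lock.

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreeStack<T> {
        private static final class Node<T> {
            final T item;
            Node<T> next;
            Node(T item) { this.item = item; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>();

        public void push(T item) {
            Node<T> newHead = new Node<>(item);
            Node<T> oldHead;
            do {
                oldHead = head.get();
                newHead.next = oldHead;
            } while (!head.compareAndSet(oldHead, newHead));   // retry if another thread won the race
        }

        public T pop() {
            Node<T> oldHead;
            Node<T> newHead;
            do {
                oldHead = head.get();
                if (oldHead == null) {
                    return null;           // stack is empty
                }
                newHead = oldHead.next;
            } while (!head.compareAndSet(oldHead, newHead));
            return oldHead.item;
        }
    }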