
Learning Objectives

Understanding of race conditions and the need for concurrency control. "Good" solution requirements. CC primitives: semaphores, monitors, and locks. Deadlocks and deadlock resolutions.

Presentation Transcript


  1. Learning Objectives • Understanding of race conditions and the need for concurrency control. • Good solution requirements. • CC primitives: semaphores, monitors and locks. • Deadlocks and deadlock resolutions. Concurrency control

  2. Concurrency Control • Concurrency control is necessary to achieve data consistency in systems that share data; in particular, it is essential in distributed systems. • ? Give a couple of examples where lack of concurrency control may lead to system inconsistency (or race conditions).
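As one such example, the classic lost-update race can be sketched as follows. This sketch simulates, in a fixed sequential order, the interleaving of two processes that each execute "balance = balance + 100" as a non-atomic read-modify-write; the variable names are illustrative, not from the text.

```python
# Simulate a lost-update race: both processes read the shared balance
# before either one writes its result back.

balance = 0

read_by_p1 = balance          # P1 reads 0
read_by_p2 = balance          # P2 also reads 0, before P1 writes

balance = read_by_p1 + 100    # P1 writes 100
balance = read_by_p2 + 100    # P2 overwrites with 100: P1's update is lost

print(balance)  # 100, not the expected 200
```

With real threads the interleaving is nondeterministic, which is exactly why such bugs are hard to reproduce; here it is forced so the inconsistency is visible.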

  3. Mutual Exclusion and Critical Section • Mutual exclusion assures that at most one process accesses a shared resource at any given moment. • The part of the code that accesses shared data is called the critical section (region). • To achieve mutual exclusion, no two processes may be in their critical sections simultaneously.

  4. “Good” Solution Requirements • Req. 1: Mutually exclusive access to the critical section. • Req. 2: No blocking or interference of other processes (those not interested in entering the critical section). • Req. 3: No starvation.

  5. Semaphores • Semaphores are shared variables that control access to the critical section. • There are two ATOMIC operations defined for semaphores: Down (Lower) and Up (Raise). • ? Study the algorithm for semaphores in Box 5.1 p. 107 and trace it for 2 processes, first with the initial value of the semaphore equal to 1 and then equal to 0. Study Box 5.2 p. 109 first.
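A minimal sketch of the primitive in use (not the textbook's Box 5.1 algorithm): Down and Up correspond to acquire() and release() on Python's threading.Semaphore. With an initial value of 1, at most one thread is in the critical section at a time, so no increments are lost.

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # initial value 1: one process may enter

def worker():
    global counter
    for _ in range(10000):
        sem.acquire()          # Down: blocks while the semaphore is 0
        counter += 1           # critical section on the shared variable
        sem.release()          # Up: lets one waiting process proceed

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no update was lost
```

Tracing the same code with the semaphore initialized to 0 shows the other use of semaphores: both workers would block on the first Down until some other process performs an Up.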

  6. Semaphores and Deadlocks • Correctness of the system relies on the programmer and his/her correct use of the Down and Up operations. • ? Write code for two processes that share semaphores (e.g., S1 and S2) whose execution leads to a deadlock.
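One possible answer, sketched so that it terminates: two processes acquire S1 and S2 in opposite orders. A Barrier forces the bad interleaving every run, and acquire(timeout=...) stands in for a Down that would otherwise block forever; the timeout and names are demonstration devices, not part of the semaphore primitive.

```python
import threading

s1, s2 = threading.Semaphore(1), threading.Semaphore(1)
barrier = threading.Barrier(2)
deadlocked = []

def p1():
    s1.acquire()                     # Down(S1): P1 now holds S1
    barrier.wait()                   # ensure P2 already holds S2
    if not s2.acquire(timeout=0.5):  # Down(S2) would block forever here
        deadlocked.append("P1")
    barrier.wait()                   # neither releases until both gave up
    s1.release()

def p2():
    s2.acquire()                     # Down(S2): P2 now holds S2
    barrier.wait()                   # ensure P1 already holds S1
    if not s1.acquire(timeout=0.5):  # Down(S1) would block forever here
        deadlocked.append("P2")
    barrier.wait()
    s2.release()

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(deadlocked))  # ['P1', 'P2']: each process waits on the other
```

All four deadlock conditions from slide 16 hold here: the semaphores are mutually exclusive and non-preemptible, each process holds one while waiting for the other, and the waits form a cycle.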

  7. Semaphore Evaluation • ? Does the Semaphore primitive fulfill the three requirements for a “good” solution to the critical section problem? • “Good” Solution Requirements • ? Solve problems 5.3 and 5.4 p. 125.

  8. Monitors • High-level synchronization (concurrency control) primitives. • A monitor is a construct that allows for grouping of data structures in a separate module or package. • Monitors assure that no more than one process is active within a monitor at any given time.

  9. Monitor Implementation • Monitors are often implemented using “binary” semaphores. • However, the semaphores are used by the compiler, not the programmer, thus reducing the chance of semaphore misuse. • Monitors use condition variables with two operations: wait (suspends a process) and signal (resumes a process).
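A sketch of this layering, assuming a hand-rolled monitor rather than compiler support: an entry lock guarantees at most one thread is active inside, and a condition variable supplies wait/signal. The BoundedCounter class and its limit are illustrative.

```python
import threading

class BoundedCounter:
    """Monitor-style object: all methods run under one entry lock."""

    def __init__(self, limit):
        self._lock = threading.Lock()            # monitor entry lock
        self._nonfull = threading.Condition(self._lock)
        self._count = 0
        self._limit = limit

    def increment(self):
        with self._lock:                         # enter the monitor
            while self._count >= self._limit:    # guard re-checked on wakeup
                self._nonfull.wait()             # suspend, releasing the lock
            self._count += 1

    def decrement(self):
        with self._lock:
            self._count -= 1
            self._nonfull.notify()               # signal one waiting thread

counter = BoundedCounter(limit=1)
counter.increment()                              # counter is now full
t = threading.Thread(target=counter.increment)   # this call must wait
t.start()
counter.decrement()                              # signal: the waiter proceeds
t.join()
print(counter._count)  # 1
```

The `while` (rather than `if`) around wait() is the standard defensive pattern: the guard is re-tested after every wakeup, so a signal is a hint, not a guarantee.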

  10. Monitor Evaluation • Study box 5.4 on page 111 for Java Monitors. • ? Do the Monitors fulfill the three requirements for a “good” solution to the critical section problem? • “Good” Solution Requirements

  11. Locks • Locks are another primitive used to achieve mutual exclusion. • A lock controls access to the critical section by having two states: locked and unlocked. • If a process wants to access the critical section, it checks the value of the lock; access is granted only if the lock is in the unlocked state. (See box 5.5 p. 112 for problems with implementing locks.)

  12. Other Approaches to Locks • Taking turns • busy-waiting (spinning) • ? Is this a good solution? • Hardware support • Indivisible (atomic) hardware instructions: • Test-and-Set-Lock (TSL) (or check-and-set) • SWAP (register, memory) • ? Are TSL and SWAP a “good” solution?
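A sketch of a spinlock built on an atomic test-and-set. Real TSL is a hardware instruction; here, as a stand-in assumption, a non-blocking threading.Lock.acquire(blocking=False) plays the role of the atomic "test and set the flag" step, and the while loop is the busy-waiting the slide asks about.

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # stand-in for the hardware flag

    def lock(self):
        # Atomically "test and set"; on failure, spin (busy-wait).
        while not self._flag.acquire(blocking=False):
            pass                        # burns CPU until the flag clears

    def unlock(self):
        self._flag.release()

total = 0
spin = SpinLock()

def worker():
    global total
    for _ in range(1000):
        spin.lock()
        total += 1                      # critical section
        spin.unlock()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(total)  # 4000
```

The spinning itself answers the "is this a good solution?" question: mutual exclusion holds, but waiters consume CPU the entire time, which is acceptable only for very short critical sections.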

  13. Software Lock Control • Centralized lock manager • Client (C) / Server (S) approach • messages: request (C → S), queued (S → C), granted (S → C), release (C → S) • Distributed lock manager • peer-to-peer approach (R - requester, P - other participant) • messages: request (R → Ps), queued (Ps → R), granted (Ps → R)
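The centralized variant can be sketched with the four message types above, using in-process queues to stand in for the network; the client names, message tuples, and termination condition are assumptions made for the demonstration, not part of any standard protocol.

```python
import queue
import threading

to_server = queue.Queue()
to_client = {cid: queue.Queue() for cid in ("C1", "C2")}

def server():
    holder, waiting, released = None, [], 0
    while released < 2:                         # stop after both clients finish
        kind, cid = to_server.get()
        if kind == "request":
            if holder is None:
                holder = cid
                to_client[cid].put("granted")   # lock was free: grant at once
            else:
                waiting.append(cid)
                to_client[cid].put("queued")    # lock busy: tell client to wait
        elif kind == "release":
            released += 1
            holder = waiting.pop(0) if waiting else None
            if holder:
                to_client[holder].put("granted")

log = []

def client(cid):
    to_server.put(("request", cid))             # request (C -> S)
    while to_client[cid].get() != "granted":    # "queued" means keep waiting
        pass
    log.append(cid)                             # critical section
    to_server.put(("release", cid))             # release (C -> S)

s = threading.Thread(target=server)
cs = [threading.Thread(target=client, args=(cid,)) for cid in ("C1", "C2")]
s.start()
for c in cs: c.start()
for c in cs: c.join()
s.join()
print(sorted(log))  # ['C1', 'C2']: both entered, one at a time
```

The server serializes all grants, which is what makes the scheme simple, and also what makes the server a single point of failure, motivating the distributed variant.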

  14. Token Passing • A token is a special type of message. • The token-passing solution relies on a single token traveling through all the processes, which form a logical ring. • Only the process that possesses the token can access the critical section.
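A sketch of the ring, with queues standing in for the links between neighbors; the three-process size and the "each process enters once, then the ring stops" rule are assumptions to keep the demonstration finite.

```python
import queue
import threading

N = 3
links = [queue.Queue() for _ in range(N)]   # links[i] delivers to process i
entered = []                                # order of critical-section entries

def process(i):
    token = links[i].get()                  # block until the token arrives
    entered.append(i)                       # critical section: we hold the token
    if len(entered) < N:
        links[(i + 1) % N].put(token)       # pass the token around the ring

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads: t.start()
links[0].put("TOKEN")                       # inject the single token at P0
for t in threads: t.join()
print(entered)  # [0, 1, 2]: strict ring order, never two holders at once
```

Mutual exclusion follows from there being exactly one token: possession, not any shared variable, is what admits a process to its critical section.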

  15. Evaluation of Software Lock Control • ? Are the proposed methods: • centralized lock manager • distributed lock manager • token passing “good” solutions to the critical section problem? • ? Solve problems 5.7 and 5.8 p. 126.

  16. Deadlocks • Four necessary conditions for deadlocks: • mutual exclusion of resources • non-preemption • hold and wait • circular wait • Back to Semaphores and Deadlocks • To Deadlock Prevention

  17. Resource Allocation Graphs • Resource Allocation Graphs (RAGs) are used to model possible deadlocks in distributed systems. • ? Study figure 5.5 p. 120 and answer the following question: Does a cycle in a RAG imply a deadlock?
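The mechanical part of that question, finding a cycle, can be sketched as a depth-first search over the graph (this is a generic DFS, not the textbook's figure). An edge P → R is a request and R → P an allocation; recall that a cycle guarantees deadlock only when every resource has a single instance.

```python
def has_cycle(graph):
    """DFS cycle detection; graph maps each node to its successors."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)                    # node is on the current path
        for succ in graph.get(node, []):
            if succ in visiting:              # back edge: a cycle exists
                return True
            if succ not in done and dfs(succ):
                return True
        visiting.discard(node)
        done.add(node)                        # fully explored, cycle-free
        return False

    return any(dfs(n) for n in list(graph) if n not in done)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: circular wait.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))                          # True
print(has_cycle({"P1": ["R1"], "R1": []}))     # False: request, no cycle
```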

  18. Distributed Deadlocks • Distributed deadlocks often involve a shared buffer space used for IPC. • Such deadlocks are commonly referred to as communication deadlocks. • Types of communication deadlocks: • direct store-and-forward (two locations) • indirect store-and-forward (more than two locations)

  19. Deadlock Resolution • Deadlock Prevention • Deadlock Avoidance • Deadlock Ignorance • Deadlock Detection

  20. Deadlock Prevention • Deadlock prevention relies on designing out one of the four necessary conditions. • Sample methods: • acquire all resources at once, or release those possessed after a timeout (? Which condition is eliminated?) • acquire resources in a fixed order (? Which condition is eliminated?) • use seniority rules (time stamps) (? Which condition is eliminated?)
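The fixed-order method can be sketched as follows (the lock names and the transfer scenario are illustrative). Compare with the earlier two-semaphore deadlock: here every process that needs both locks takes lock_a before lock_b, so circular wait can never form.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
results = []

def transfer(name):
    # Global acquisition order: always lock_a, then lock_b.
    with lock_a:
        with lock_b:
            results.append(name)   # critical section using both resources

t1 = threading.Thread(target=transfer, args=("P1",))
t2 = threading.Thread(target=transfer, args=("P2",))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['P1', 'P2']: both complete, no deadlock possible
```

Whichever process takes lock_a first will also get lock_b, finish, and release both; the other simply waits at lock_a, so a cycle of waits cannot arise.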

  21. Deadlock Avoidance • Deadlock avoidance relies on resource allocation that moves the system from one “safe” state to another “safe” state. • The system is always in a safe state, thus it never deadlocks. • Due to unrealistic assumptions and its high complexity, this approach is not very practical.

  22. Deadlock Ignorance • Ignore deadlocks altogether. • ? What happens if a deadlock occurs in a system that ignores it? • ? Why is this approach “O.K.” in general-purpose distributed systems but not acceptable in real-time systems?

  23. Deadlock Detection • Relies on detecting deadlocks AFTER they occur. • Once a deadlock is detected, the necessary steps need to be taken in order to eliminate it. • ? What would be the simplest approach to eliminating a deadlock?

  24. Distributed Deadlock Detection • A process that is blocked waiting for a resource sends a probe (a special message) to the process that is blocking it. • A probe consists of the following fields: • originator, sender, receiver • If the receiving process is itself blocked, it forwards the probe to the process that is blocking it, but modifies the sender and receiver fields in the probe. • If the receiving process is not blocked, it discards the probe. • If originator = receiver, then there is a deadlock.
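A sequential sketch of this probe (edge-chasing) scheme, with the message-passing collapsed into a loop: blocked_by maps each process to the one it is waiting on (None if it can run), and the probe carries the (originator, sender, receiver) fields from the slide. The data model is an assumption made to keep the sketch single-process.

```python
def detect_deadlock(originator, blocked_by):
    """Return True iff the probe started by `originator` returns to it."""
    receiver = blocked_by.get(originator)
    if receiver is None:
        return False                      # originator is not blocked at all
    sender = originator
    while True:
        probe = (originator, sender, receiver)   # the message being "sent"
        if receiver == originator:
            return True                   # probe came back: deadlock
        nxt = blocked_by.get(receiver)
        if nxt is None:
            return False                  # receiver not blocked: discard probe
        sender, receiver = receiver, nxt  # forward with updated fields

# P1 waits on P2, P2 on P3, P3 on P1: a cycle of waits, hence a deadlock.
print(detect_deadlock("P1", {"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
print(detect_deadlock("P1", {"P1": "P2", "P2": None}))              # False
```

In a real distributed system each iteration of the loop is a message hop to another site; only the originator field is preserved end to end, which is what lets the originator recognize its own probe.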
