
Chapter 6 - Concurrent Processes



Presentation Transcript


  1. Chapter 6 - Concurrent Processes CIS106 Microcomputer Operating Systems Gina Rue CIS Faculty Ivy Tech State College Northwest Region 01

  2. Introduction - Concurrent Processes • Multiprocessing systems have more than one CPU • the synchronization problems that occur in single-processor systems apply to multiple processes in general • a single processor with 2 or more processes • more than one processor with multiple processes See Illustration p.125

  3. What Is Parallel Processing? • Parallel processing, also called multiprocessing, occurs when two or more processors operate in unison • Two or more CPUs execute instructions simultaneously • Each CPU can have a process in the RUNNING state at the same time • The Process Manager has to coordinate the activity of each processor, as well as synchronize the interaction among the CPUs

  4. What Is Parallel Processing? • Synchronization is the key to the system’s success, because many things can go wrong in a multiprocessing system • The system won’t work unless every processor communicates & cooperates with every other processor • Since the mid-1980s, declining CPU hardware costs have increased the use of multiprocessors in business environments

  5. What Is Parallel Processing? • Two major forces behind multiprocessing development • enhance throughput • increase computing power • Two primary benefits • increased reliability • faster processing

  6. Typical Multiprocessing Configurations Much depends on how multiple processors are configured. Three typical configurations are: • master/slave • loosely coupled • symmetric

  7. Master/Slave Configuration • A single processor (master) with additional (slave) processors • the master processor manages the entire system • well suited for front-end (interactive) & back-end (batch) users • advantage: simplicity • disadvantages: • reliability is no higher than with a single processor • can lead to poor use of resources • increases the number of slave-to-master interrupts See Fig. 6.1 p.128

  8. Loosely Coupled Configuration • Several complete computer systems, each with its own memory, I/O devices, CPU, & OS • each processor controls its own resources • each processor can communicate & cooperate with the others • a new job may be assigned to the processor with the lightest load or the best combination of output devices • if one processor fails, the others can continue to work independently, though it can be difficult to detect which processor failed See Fig. 6.2 p.128

  9. Symmetric Configuration • Processor scheduling is decentralized • Best implemented if the processors are all of the same type • four advantages over loosely coupled: • more reliable • uses resources effectively • balances load well • degrades gracefully in the event of system failure See Fig. 6.3 p.129

  10. Process Synchronization Software • Success hinges on the capability of the OS to make a resource unavailable to other processes while it is being used by one of them • A resource in use must be locked away from other processes until it is released; the critical region lets a process finish its work before the resource is freed • A mistake could leave a job waiting indefinitely

  11. Test-And-Set • a single indivisible machine instruction known as “TS” • introduced by IBM for the System 360/370 computers • in one machine cycle it tests whether the key is available and, if it is, sets it to unavailable • the key is a single bit in a storage location that can contain a zero (free) or a one (busy)

  12. Test-And-Set • Simple to implement & works well for a small number of processes • two drawbacks: • starvation can occur when many processes are waiting to enter the critical region • busy waiting: the waiting processes remain in unproductive, resource-consuming wait loops
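The two test-and-set slides above can be sketched in Python. This is a hedged analogue, not a real TS instruction: Python has no atomic test-and-set, so the sketch borrows the atomicity of `Lock.acquire(blocking=False)`, which returns True if the bit was free (and sets it busy) or False if it was already busy — the same contract as TS. The `SpinLock` name is ours, and the `while` loop deliberately exhibits the busy-waiting drawback the slide describes.

```python
import threading

class SpinLock:
    """Spin lock built on a test-and-set analogue (a sketch, not an
    OS primitive): acquire(blocking=False) atomically tests the bit
    and sets it busy if it was free."""
    def __init__(self):
        self._flag = threading.Lock()   # 0 = free, 1 = busy

    def acquire(self):
        # Busy waiting: the waiting process loops unproductively,
        # consuming CPU cycles until the key becomes free.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()            # set the key back to free

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()      # enter critical region
        counter += 1        # must not interleave with other threads
        lock.release()      # leave critical region

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000 -- no increments lost
```

Without the lock, the four threads could interleave the read-modify-write of `counter` and lose updates; the spin lock serializes the critical region at the cost of wasted cycles while spinning.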

  13. WAIT and SIGNAL • Modification of “TS” designed to remove busy waiting • Two operations are mutually exclusive: • WAIT activated when the process encounters a busy condition code • SIGNAL activated when the process exits the critical region, condition code is set to “free”
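WAIT and SIGNAL can be sketched with a condition variable: a process that finds the condition code busy goes to sleep in WAIT instead of spinning, and SIGNAL sets the code back to "free" and wakes one sleeper. This is an illustrative sketch under our own names (`Resource`, `wait_for`, `signal`), not a real OS primitive.

```python
import threading

class Resource:
    """WAIT/SIGNAL sketch: sleeping replaces busy waiting."""
    def __init__(self):
        self._busy = False                  # the condition code
        self._cond = threading.Condition()

    def wait_for(self):          # WAIT
        with self._cond:
            while self._busy:
                self._cond.wait()   # blocked and descheduled, not spinning
            self._busy = True       # claim the resource

    def signal(self):            # SIGNAL
        with self._cond:
            self._busy = False      # condition code back to "free"
            self._cond.notify()     # wake one waiting process

res = Resource()
order = []

def worker(name):
    res.wait_for()
    order.append(name)   # critical region: one worker at a time
    res.signal()

threads = [threading.Thread(target=worker, args=(n,)) for n in "abc"]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(order))  # ['a', 'b', 'c'] -- all three got through
```

The key difference from test-and-set is that a blocked thread consumes no CPU while it waits; the scheduler simply does not run it until `notify` is called.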

  14. Semaphores • A nonnegative integer variable that’s used as a flag • signals if and when a resource is free & can be used by a process • Dijkstra (1965) introduced two operations to overcome process synchronization problems • P - proberen (to test) • V - verhogen (to increment) See Table 6.1 p.133

  15. Semaphores • P & V are executed by the OS in response to calls issued by any one process naming a semaphore as a parameter • The traditional name for the semaphore is mutex - Mutual Exclusion • necessary to prevent two operations from attempting to execute at the same time

  16. Semaphores • In sequential computations, mutex is achieved automatically because each operation is handled in order, one at a time • In parallel computations, the order of execution can change, so mutex must be explicitly stated & maintained
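The three semaphore slides map directly onto Python's `threading.Semaphore`, whose `acquire` and `release` correspond to Dijkstra's P (test) and V (increment). A minimal mutex sketch, assuming a shared list as the protected resource:

```python
import threading

# P(mutex) ~ acquire(): wait until the semaphore is positive, decrement.
# V(mutex) ~ release(): increment, waking one waiter if any.
mutex = threading.Semaphore(1)   # initial value 1 = resource free

shared = []

def append_pair(x):
    mutex.acquire()     # P(mutex): enter critical region
    shared.append(x)    # these two appends must not interleave
    shared.append(x)    # with another thread's pair
    mutex.release()     # V(mutex): leave critical region

threads = [threading.Thread(target=append_pair, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()

# Each pair stays adjacent because the mutex serialized the appends.
print(all(shared[i] == shared[i + 1] for i in range(0, 10, 2)))  # True
```

In a sequential program the two appends could never interleave, which is the point of slide 16: in parallel execution that ordering guarantee disappears, so mutual exclusion has to be stated explicitly.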

  17. Process Cooperation Several processes work together to complete a common task; two examples: • producers and consumers • readers and writers Each requires both mutual exclusion and synchronization; both are implemented using semaphores

  18. Producers and Consumers Classic problem: • One process produces data that another process consumes later • It can also be expanded to several pairs of producers and consumers • Can be extended to buffers that hold records or data where process-to-process communication is required See Fig. 6.5 p.135
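The classic producers-and-consumers problem above is traditionally solved with three semaphores: `empty` counts free buffer slots, `full` counts filled slots, and `mutex` guards the buffer itself. A sketch with one producer and one consumer (the buffer capacity of 3 is an arbitrary choice for illustration):

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Semaphore(1)         # guards the buffer itself

consumed = []

def producer(items):
    for item in items:
        empty.acquire()    # P(empty): wait for a free slot
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()     # V(full): announce a filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()     # P(full): wait for data to arrive
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()    # V(empty): announce a free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- FIFO order kept
```

The producer blocks when the buffer is full and the consumer blocks when it is empty, which is exactly the synchronization the slide's process-to-process communication requires.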

  19. Readers and Writers Classic problem: • Two processes need to access a shared resource such as a file or database • A combination priority policy is used to prevent starvation • Readers must call two procedures: • one checks whether the resource can be immediately granted for reading • one checks to see if there are any writers waiting
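A sketch of the readers-and-writers structure: many readers may share the resource, but a writer needs it exclusively. For brevity this implements the simple readers-preference policy, not the combination priority policy the slide mentions (that needs extra bookkeeping for waiting writers); the class and method names are ours.

```python
import threading

class RWLock:
    """Readers-writers sketch (readers-preference policy)."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Semaphore(1)  # guards _readers
        self._wrt = threading.Semaphore(1)    # held while anyone writes

    def start_read(self):
        self._mutex.acquire()
        self._readers += 1
        if self._readers == 1:    # first reader locks out writers
            self._wrt.acquire()
        self._mutex.release()

    def end_read(self):
        self._mutex.acquire()
        self._readers -= 1
        if self._readers == 0:    # last reader lets writers back in
            self._wrt.release()
        self._mutex.release()

    def start_write(self):
        self._wrt.acquire()       # exclusive access

    def end_write(self):
        self._wrt.release()

lock = RWLock()
lock.start_read(); lock.start_read()  # two readers share the resource
print(lock._readers)                  # 2
lock.end_read(); lock.end_read()
lock.start_write()                    # now a writer holds it alone
lock.end_write()
```

Under this policy a steady stream of readers can starve a writer, which is precisely why the slide's combination priority policy checks whether any writers are waiting before granting a new read.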

  20. Concurrent Programming Multiprocessing can also refer to one job using several processors to execute sets of instructions in parallel • requires a programming language that can express parallelism • requires a system that can support this type of processing

  21. Applications of Concurrent Programming Monoprogramming languages • instructions are executed one at a time • sufficient for most computational purposes • easy to implement • fast enough for most users See Table 6.2 p.138

  22. Applications of Concurrent Programming By using a language that allows concurrent processing, arithmetic expressions can be processed differently • COBEGIN • COEND • indicate to a compiler which instructions can be processed concurrently

  23. Applications of Concurrent Programming Performing operations at the same time increases computation speed, but also increases the complexity of the programming language & hardware • explicit parallelism • the programmer states which instructions can be executed in parallel • implicit parallelism • automatic detection by the compiler of instructions that can be performed in parallel See Table 6.3 p.138
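The COBEGIN/COEND idea from slide 22 can be sketched with a thread pool: the programmer explicitly marks independent subexpressions to run concurrently, then joins before combining the results. This is an analogue of the construct, not the construct itself; the expression and values are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# The three subexpressions of (a + b) * (c - d) / (e + 1) have no
# data dependencies, so the programmer (explicit parallelism) can
# mark them to run concurrently between COBEGIN and COEND.
a, b, c, d, e = 4, 6, 10, 3, 1

with ThreadPoolExecutor() as pool:        # COBEGIN
    t1 = pool.submit(lambda: a + b)
    t2 = pool.submit(lambda: c - d)
    t3 = pool.submit(lambda: e + 1)
# COEND: leaving the with-block waits for all three to finish

# The final combination depends on all three results, so it must
# come after the join.
result = t1.result() * t2.result() / t3.result()
print(result)  # 35.0
```

An implicitly parallelizing compiler would discover the same independence from the expression's data-dependency graph without any annotation from the programmer.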

  24. Ada - Augusta Ada Byron • In the early 1970s the U.S. Department of Defense designed the original language for embedded computer systems • Ada - a high-level programming language made available to the public in 1980 • modules support “information hiding” • implements concurrent programming • its design made it easy to verify the correctness of a program

  25. Ada - Modular Programming An Ada program would contain one or more program units that could be compiled separately and were composed of: • a specification part, which has all the information that must be visible to other units (the argument list) • a body part, made up of implementation details that don’t need to be visible to other units

  26. Ada - Modular Programming Program units can fall into any one of three types: • subprograms, which are executable algorithms • packages, which are collections of entities (procedures or functions) • tasks, which are concurrent computations • the heart of the language’s parallel processing ability • the key is synchronization of the tasks

  27. Ada - The wave of the future? Landmark language: • researchers find it helpful because of its parallel processing power • its modular design appeals to application programmers and systems analysts • its tasking capabilities appeal to designers of database systems & other applications that require parallel processing • some universities offer Ada courses to students majoring in computer systems

  28. Summary • Multiprocessing systems have two or more CPUs that must be synchronized by the Process Manager • Each processor must communicate with the other • Multiprocessor configurations • master/slave • loosely coupled • symmetric

  29. Summary • Multiprocessing also occurs in single processor systems between interacting processes that obtain control of the CPU at different times • Success depends on the ability of the system to synchronize the processors or processes & the system’s other resources

  30. Summary • Mutual exclusion helps keep processes that have allocated resources from becoming deadlocked • test-and-set • WAIT and SIGNAL • semaphores (P - proberen, V - verhogen, and mutex)

  31. Summary • Hardware & Software are used to synchronize processes • Synchronization problems • missed waiting customers • synchronization of producers & consumers • mutual exclusion of readers & writers
