Operating Systems Principles: Process Management and Coordination. Lecture 2: Processes and Their Interaction. Lecturer: 虞台文
Content • The Process Notion • Defining and Instantiating Processes • Precedence Relations • Implicit Process Creation • Dynamic Creation With fork And join • Explicit Process Declarations • Basic Process Interactions • Competition: The Critical Problem • Cooperation • Semaphores • Semaphore Operations and Data • Mutual Exclusion • Producer/Consumer Situations • Event Synchronization
Operating Systems Principles: Process Management and Coordination. Lecture 2: Processes and Their Interaction. The Process Notion
What is a process? • A process is a program in execution. • Also called a task. • It includes • the program itself (i.e., code or text) • data • a thread of execution (possibly several threads) • resources (such as files) • execution info (process-relation information kept by the OS) • Multiple processes may exist in a system simultaneously.
Virtualization • Conceptually, • each process has its own CPU and main memory; • processes run concurrently. • Many computers are equipped with a single CPU. • To achieve concurrency, the following are needed: • CPU sharing to virtualize the CPU • Virtual memory to virtualize the memory • Usually done by the OS kernel • Each process may be viewed in isolation. • The kernel provides a few simple primitives for process interaction.
Physical/Logical Concurrency • An OS must handle a high degree of parallelism. • Physical concurrency: multiple CPUs or processors are required • Logical concurrency: a single CPU is time-shared
Interaction among Processes • The OS and user applications are viewed as a collection of processes, all running concurrently. • These processes • operate largely independently of one another; • cooperate by sharing memory or by sending messages and synchronization signals to each other; and • compete for resources.
Why Use Process Structure? • Hardware-independent solutions • Processes cooperate and compete correctly, regardless of the number of CPUs • Structuring mechanism • Tasks are isolated with well-defined interfaces
Operating Systems Principles: Process Management and Coordination. Lecture 2: Processes and Their Interaction. Defining and Instantiating Processes
A Process Flow Graph (figure: a user session at a workstation)
Serial and Parallel Processes • S/P notation: S(p1, ..., pn) denotes serial execution of processes p1 through pn; P(p1, ..., pn) denotes parallel execution of p1 through pn. (figure: serial and parallel process flow graphs)
Properly Nested Process Flow Graphs • A process flow graph is properly nested if it can be described by the functions S and P, using function composition only. (figure: a properly nested graph)
Properly Nested Process Flow Graphs (figure: a properly nested graph contrasted with an improperly nested graph)
Example: Evaluation of Arithmetic Expressions (figures: expression tree and the corresponding process flow graph)
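To make the idea concrete: independent subtrees of an expression can be evaluated by separate concurrent processes, and the parent operator must wait for both children before it can execute. Below is a minimal thread-based sketch for the illustrative expression (a + b) * (c + d); the expression and all names are assumptions for illustration, not taken from the slides.

    #include <pthread.h>
    #include <stdio.h>

    /* Leaves and result slot of one internal node of the expression tree. */
    typedef struct { double left, right, result; } Node;

    void *add(void *arg) {               /* one process per internal tree node */
        Node *n = (Node *)arg;
        n->result = n->left + n->right;
        return NULL;
    }

    int main(void) {
        Node n1 = { 1.0, 2.0, 0.0 };     /* a + b */
        Node n2 = { 3.0, 4.0, 0.0 };     /* c + d */
        pthread_t t1, t2;

        /* The two additions are independent subtrees, so they may run in parallel. */
        pthread_create(&t1, NULL, add, &n1);
        pthread_create(&t2, NULL, add, &n2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* The root multiplication is a merge point: it waits for both children. */
        printf("(a+b)*(c+d) = %f\n", n1.result * n2.result);
        return 0;
    }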
Implicit Process Creation • Processes are created dynamically using language constructs • no process declaration. • cobegin/coend • syntax: cobegin C1 // C2 // … // Cn coend • meaning: • all Ci may proceed concurrently • when all terminate, the statement following cobegin/coend continues.
Implicit Process Creation • Serial execution: C1; C2; C3; C4; • Concurrent execution: cobegin C1 // C2 // C3 // C4 coend
Example: Use of cobegin/coend (the user session at a workstation shown earlier)
Initialize;
cobegin
  Time_Date //
  Mail //
  Edit;
    cobegin
      Compile; Load; Execute //
      Edit;
        cobegin Print // Web coend
    coend
coend;
Terminate
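As a rough illustration of how cobegin/coend maps onto a real threading API: spawn one thread per branch, then join them all before executing the statement following coend. This is a minimal sketch using POSIX threads; the branch bodies C1..C3 are placeholders, not the session tasks from the slide.

    #include <pthread.h>
    #include <stdio.h>

    /* Placeholder branch bodies; on the slide these would be Time_Date, Mail, Edit, ... */
    void *C1(void *arg) { printf("C1 running\n"); return NULL; }
    void *C2(void *arg) { printf("C2 running\n"); return NULL; }
    void *C3(void *arg) { printf("C3 running\n"); return NULL; }

    int main(void) {
        /* cobegin C1 // C2 // C3 coend */
        pthread_t t[3];
        void *(*branch[3])(void *) = { C1, C2, C3 };

        for (int i = 0; i < 3; i++)            /* start all branches concurrently */
            pthread_create(&t[i], NULL, branch[i], NULL);
        for (int i = 0; i < 3; i++)            /* coend: wait until all terminate */
            pthread_join(t[i], NULL);

        printf("statement following coend\n"); /* continues only after all branches finish */
        return 0;
    }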
Data Parallelism • The same code is applied to different data • The forall statement • syntax: forall (parameters) statements • meaning: • the parameters specify a set of data items • the statements are executed for each item concurrently
Example: Matrix Multiplication • All inner products are computed in parallel; each inner product is computed sequentially.
forall ( i:1..n, j:1..m ) {
    A[i][j] = 0;
    for ( k=1; k<=r; ++k )
        A[i][j] = A[i][j] + B[i][k]*C[k][j];
}
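A close real-world analogue of forall is an OpenMP parallel loop. The sketch below mirrors the slide's loop structure; the fixed dimensions and the initialization of B and C are assumptions added so the example runs standalone (compile with -fopenmp).

    #include <stdio.h>

    #define N 4
    #define M 4
    #define R 4

    int main(void) {
        double A[N][M], B[N][R] = {{1}}, C[R][M] = {{1}};

        /* forall (i:1..n, j:1..m): every (i, j) inner product may run concurrently */
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < M; j++) {
                A[i][j] = 0.0;
                for (int k = 0; k < R; k++)   /* each inner product is sequential */
                    A[i][j] += B[i][k] * C[k][j];
            }

        printf("A[0][0] = %f\n", A[0][0]);
        return 0;
    }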
Explicit Process Creation • cobegin/coend (implicit process creation): limited to properly nested graphs • forall (implicit process creation): limited to data parallelism • fork/join (explicit process creation): can express arbitrary functional parallelism
The fork/join/quit primitives • Syntax: fork x • Meaning: • create a new process that begins executing at label x • Syntax: join t, y • Meaning: t = t - 1; if (t == 0) goto y; • The operation must be indivisible. (Why?) • Syntax: quit • Meaning: • process termination
Example • Synchronization is needed where branches of the process flow graph merge. • Use down-counters t1 = 2 and t2 = 3 for synchronization. (figure: process flow graph with the merge points labeled t1 and t2)
Example (the starting point of process pi has label Li):
t1 = 2; t2 = 3;
L1: p1; fork L2; fork L5; fork L7; quit;
L2: p2; fork L3; fork L4; quit;
L3: p3; join t2,L8; quit;
L4: p4; join t1,L6; quit;
L5: p5; join t1,L6; quit;
L6: p6; join t2,L8; quit;
L7: p7; join t2,L8; quit;
L8: p8; quit;
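For intuition, the same merge logic can be sketched with threads and an atomic down-counter: each join t,y becomes an atomic decrement, and only the arrival that drives the counter to zero continues at y, which is why join must be indivisible. The sketch below shows just the t1 merge (p4 and p5 joining into p6); the process bodies are placeholder print statements.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_int t1 = 2;                 /* down-counter: two branches merge into p6 */

    void p4(void) { printf("p4\n"); }
    void p5(void) { printf("p5\n"); }
    void p6(void) { printf("p6\n"); }

    /* join t1, L6: decrement atomically; only the last arriver proceeds to p6 */
    void join_t1_then_p6(void) {
        if (atomic_fetch_sub(&t1, 1) == 1)   /* fetch_sub returns the previous value */
            p6();
        /* otherwise: quit (the thread simply terminates) */
    }

    void *branch4(void *arg) { p4(); join_t1_then_p6(); return NULL; }
    void *branch5(void *arg) { p5(); join_t1_then_p6(); return NULL; }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, branch4, NULL);   /* fork L4 */
        pthread_create(&b, NULL, branch5, NULL);   /* fork L5 */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }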
The Unix fork • Replicates the calling process. • Parent and child are identical, except for the value of procid. • Use procid to diverge parent and child:
procid = fork();
if (procid == 0)
    do_child_processing
else
    do_parent_processing
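A minimal runnable version of this pattern with the real POSIX fork(2): it returns 0 in the child and the child's PID in the parent; the error branch and the wait() call are additions not shown on the slide.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t procid = fork();          /* replicate the calling process */

        if (procid < 0) {               /* fork failed: no child was created */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (procid == 0) {       /* child: fork returned 0 */
            printf("child:  pid=%d\n", getpid());
        } else {                        /* parent: fork returned the child's pid */
            printf("parent: pid=%d, child=%d\n", getpid(), procid);
            wait(NULL);                 /* wait for the child to terminate */
        }
        return 0;
    }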
Explicit Process Declarations • Designate a piece of code as a unit of execution • Facilitates program structuring • Instantiate: • statically (like cobegin), or • dynamically (like fork)
Explicit Process Declarations • Syntax:
process p {
    declarations_for_p;
    executable_code_for_p;
};
process type p {
    declarations_for_p;
    executable_code_for_p;
};
Example: Explicit Process Declarations
process p {
    process p1 {              /* similar to cobegin/coend: p1 starts automatically with p */
        declarations_for_p1;
        executable_code_for_p1;
    }
    process type p2 {         /* similar to fork: instances are created dynamically with new */
        declarations_for_p2;
        executable_code_for_p2;
    }
    other_declarations_for_p;
    ...
    q = new p2;
    ...
}
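Read in thread terms, a declared process behaves like a thread started automatically with its parent, while a process type is a start routine that can be instantiated on demand. A hedged sketch of that reading (all function names here are illustrative, not from the slides):

    #include <pthread.h>
    #include <stdio.h>

    /* "process p1": started automatically together with its parent (static instantiation) */
    void *p1(void *arg) { printf("p1 running\n"); return NULL; }

    /* "process type p2": a template instantiated dynamically, like q = new p2 */
    void *p2_type(void *arg) { printf("p2 instance running\n"); return NULL; }

    int main(void) {                    /* plays the role of process p */
        pthread_t t1, q;

        pthread_create(&t1, NULL, p1, NULL);      /* p1 starts when p starts */

        /* ... later, p decides it needs another worker: q = new p2 */
        pthread_create(&q, NULL, p2_type, NULL);

        pthread_join(t1, NULL);
        pthread_join(q, NULL);
        return 0;
    }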
Operating Systems Principles: Process Management and Coordination. Lecture 2: Processes and Their Interaction. Basic Process Interactions
Competition and Cooperation • Competition • Processes compete for resources • Each process could exist without the other • Cooperation • Each process is aware of the other • Process synchronization • Exchange information with one another • Shared memory • Message passing
Resource Competition (figure: processes competing for a shared resource, e.g., common data)
Resource Competition • When several processes may asynchronously access a common data area, it is necessary to protect the data from simultaneous change by two or more processes. • Otherwise, the updated area may be left in an inconsistent state.
Example
x = 0;
cobegin
  p1: ... x = x + 1; ...
//
  p2: ... x = x + 1; ...
coend
What should the value of x be after both processes execute?
Example (no interference): p1 and p2 each execute R = x; R = R + 1; x = R with private registers R1 and R2. If p1 runs to completion before p2, the trace is R1 = 0, R1 = 1, x = 1, then R2 = 1, R2 = 2, x = 2. Final result: x = 2.
Example (interleaved): if both processes read x before either writes it back, the trace is R1 = 0, R1 = 1, R2 = 0, x = 1, R2 = 1, x = 1. Final result: x = 1; one update is lost.
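The lost-update effect is easy to reproduce with real threads. The sketch below, which is not from the slides, has two threads increment a shared counter with an unprotected read-modify-write, so the final value is typically well below the expected total (compile without optimization to observe it reliably).

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    long x = 0;                          /* shared, unprotected counter */

    void *increment(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            x = x + 1;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, increment, NULL);
        pthread_create(&p2, NULL, increment, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);

        /* Expected 2*ITERATIONS, but lost updates usually make it smaller. */
        printf("x = %ld (expected %d)\n", x, 2 * ITERATIONS);
        return 0;
    }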
The Critical Section (CS) • Any section of code involved in reading and writing a shared data area is called a critical section. • Mutual exclusion: at most one process at a time is allowed to execute within its critical section.
The Critical Problem • Guarantee mutual exclusion: at any time, at most one process may be executing within its CSi.
cobegin
  p1: while (1) { CS1; program1; }
//
  p2: while (1) { CS2; program2; }
//
  ...
//
  pn: while (1) { CSn; programn; }
coend
The Critical Problem • Guarantee mutual exclusion: at any time, at most one process may be executing within its CSi. • In addition, we need to prevent mutual blocking: • A process outside its CS must not prevent other processes from entering their CSs (no "dog in the manger"). • A process must not be able to repeatedly reenter its CS and starve other processes (fairness). • Processes must not block each other forever (no deadlock). • Processes must not repeatedly yield to each other (no "after you"-"after you" livelock).
Software Solutions • Solve the problem without taking advantage of special machine instructions or other hardware support.
Algorithm 1
int turn = 1;
cobegin
  p1: while (1) {
        while (turn == 2) ;  /* wait */
        CS1;
        turn = 2;
        program1;
      }
//
  p2: while (1) {
        while (turn == 1) ;  /* wait */
        CS2;
        turn = 1;
        program2;
      }
coend
Requirements check: • Mutual exclusion • No mutual blocking • No dog in the manger • Fairness • No deadlock • No livelock
What happens if p1 fails, or simply stops entering its critical section? Then p2 eventually blocks forever in its while loop even though p1 is outside its CS, violating the "no dog in the manger" requirement.
Algorithm 2
int c1 = 0, c2 = 0;
cobegin
  p1: while (1) {
        c1 = 1;
        while (c2) ;  /* wait */
        CS1;
        c1 = 0;
        program1;
      }
//
  p2: while (1) {
        c2 = 1;
        while (c1) ;  /* wait */
        CS2;
        c2 = 0;
        program2;
      }
coend
Requirements check: • Mutual exclusion • No mutual blocking • No dog in the manger • Fairness • No deadlock • No livelock
What happens if c1 == 1 and c2 == 1 at the same time? Both processes spin in their while loops forever: deadlock.
Algorithm 3
int c1 = 0, c2 = 0;
cobegin
  p1: while (1) {
        c1 = 1;
        if (c2) c1 = 0;
        else {
          CS1;
          c1 = 0;
          program1;
        }
      }
//
  p2: while (1) {
        c2 = 1;
        if (c1) c2 = 0;
        else {
          CS2;
          c2 = 0;
          program2;
        }
      }
coend
Requirements check: • Mutual exclusion • No mutual blocking • No dog in the manger • Fairness • No deadlock • No livelock
When timing is critical, the fairness and livelock requirements may be violated: • Fairness: one process may repeatedly find the other's flag clear and keep entering its CS, while the other always withdraws. • Livelock: both processes may keep setting and then resetting their flags in lock-step, so neither ever enters its CS.
Algorithm 4 (Peterson)
int c1 = 0, c2 = 0, willWait;
cobegin
  p1: while (1) {
        c1 = 1;
        willWait = 1;
        while (c2 && (willWait == 1)) ;  /* wait */
        CS1;
        c1 = 0;
        program1;
      }
//
  p2: while (1) {
        c2 = 1;
        willWait = 2;
        while (c1 && (willWait == 2)) ;  /* wait */
        CS2;
        c2 = 0;
        program2;
      }
coend
Requirements check (all satisfied): • Mutual exclusion • No mutual blocking • No dog in the manger • Fairness • No deadlock • No livelock
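For reference, a runnable sketch of this algorithm with POSIX threads. One caveat not on the slide: on modern hardware, plain int flags are not enough, because the compiler and CPU may reorder accesses, so the sketch uses C11 sequentially consistent atomics for c1, c2, and willWait; the loop count and shared counter are illustrative.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Shared flags; atomics stand in for the slide's plain ints. */
    atomic_int c1 = 0, c2 = 0, willWait = 0;
    long counter = 0;                    /* protected by the critical sections */

    void *p1(void *arg) {
        for (int i = 0; i < 100000; i++) {
            atomic_store(&c1, 1);
            atomic_store(&willWait, 1);
            while (atomic_load(&c2) && atomic_load(&willWait) == 1)
                ;                        /* busy-wait */
            counter++;                   /* CS1 */
            atomic_store(&c1, 0);
            /* program1 */
        }
        return NULL;
    }

    void *p2(void *arg) {
        for (int i = 0; i < 100000; i++) {
            atomic_store(&c2, 1);
            atomic_store(&willWait, 2);
            while (atomic_load(&c1) && atomic_load(&willWait) == 2)
                ;                        /* busy-wait */
            counter++;                   /* CS2 */
            atomic_store(&c2, 0);
            /* program2 */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, p1, NULL);
        pthread_create(&b, NULL, p2, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }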