Process concept • Process concept. • Run time environment. • Process implementation. • Thread concept and implementation. • Process Synchronization
Concept of Multiprogramming • When a single program runs on the CPU, CPU utilization degrades. • Example: When the running program initiates an I/O operation, the CPU remains idle until the I/O operation completes. Multiprogramming provides a solution to this problem.
Multiprogramming Continued.. Definition: A mode of operation that provides for the interleaved execution of two or more programs by a single processor.
Multiprogramming Continued.. Improving CPU utilization: By allowing several programs to reside in main memory at the "same time", the CPU can be shared: when one program initiates an I/O operation, another program can be assigned to the CPU, which improves CPU utilization.
Multiprogramming Continued.. Implementation: The concept of a process needs to be introduced in order to understand multiprogramming and the concurrent execution of sequential processes in a computer system. What is a process?
Process • Definition: • A program in execution • An asynchronous activity • The “locus of control” of a procedure in execution • It is manifested by the existence of a process control block (PCB) in the operating system.
Process States The state of a process describes the activity the process is performing at a given moment in time. • New: a newly created process, not yet in the ready queue. • Ready: it can use the CPU as soon as the CPU is available. • Running: it is currently executing on the CPU. • Waiting: waiting for some event, e.g., I/O completion. • Abend: stopped executing due to an error. • End: finished executing properly. • Suspended: stopped temporarily while the OS uses the CPU.
States of processes (exercise: label each arrow with the event that causes the transition) [State-transition diagram with the states NEW, READY, RUNNING, WAITING, SUSPENDED, ABEND, and END.]
Causes of state change When a process executes, it changes state; interrupts cause the process to change state.
Current state    New state          Cause (interrupt)
Running          End                EOP (end of program)
Running          Abend              Trap (abnormal end)
Running          Waiting for I/O    System call: Start I/O (SIO)
Depiction of state change [Diagram: RUNNING → END on EOP; RUNNING → ABEND on a trap; RUNNING → WAITING FOR I/O on SIO.]
Process Continued… • The activity of a process is controlled by a data structure called the Process Control Block (PCB). • A PCB is created every time a program is loaded for execution. • So, a process is defined by a PCB-program pair.
Structure of PCB • PCB contains information about processes, for instance: • the current state of a process • Unique identification of process • Process priority • Contents of some CPU registers • Instruction Pointer (IP), also known as PC • Base and limit registers • Time limits and I/O status information
Structure of PCB Contd… [Diagram: PCB fields include a pointer to the next PCB, the process name or ID, base and limit registers for the code, a Thread Control Block (IP/PC, stack pointer, registers, state), the mode, and interrupt flags; the base/limit and stack-pointer fields point into the process run-time environment (code, stack).]
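As an illustrative sketch only (the field names and layout below are invented for this example and are not taken from any particular OS), the PCB and TCB fields listed above could be collected into C structures:

  #include <stdint.h>

  enum proc_state { NEW, READY, RUNNING, WAITING, SUSPENDED, ABEND, END };

  /* Simplified Thread Control Block: the per-thread execution context. */
  struct tcb {
      uintptr_t pc;            /* instruction pointer (program counter) */
      uintptr_t sp;            /* stack pointer */
      uintptr_t regs[8];       /* contents of some CPU registers */
      enum proc_state state;   /* state of this thread */
  };

  /* Simplified Process Control Block. */
  struct pcb {
      struct pcb *next;        /* pointer to the next PCB in a queue */
      int         pid;         /* unique process identification */
      int         priority;    /* process priority */
      uintptr_t   code_base;   /* base register for the code segment */
      uintptr_t   code_limit;  /* limit register for the code segment */
      struct tcb  thread;      /* TCB of the process's (main) thread */
      int         mode;        /* user/kernel mode */
      unsigned    int_flags;   /* interrupt flags */
  };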
Process Continued… A thread is known as “the unit of execution” of a process and it is represented by the Thread Control Block (TCB). The TCB consists of four fields: PC, stack pointer, the contents of some registers, and the state.
Process Continued… We can now observe how each stage of a process takes place with the aid of state diagrams. Process creation: • An OS can create one or more processes via a create-process system call. • During the course of execution a user process may create new processes or threads as well. In this case, the creating process (thread) is called the parent and the created (new) process (thread) is called the child.
Process Creation [Diagram: New → (create) → Ready.] The process is created and inserted at the back of the ready queue; it moves toward the head of the queue according to a scheduling policy.
Process Creation Contd… [Diagram: on a create request, the OS builds a PCB and loads the program, setting up its code and stack.]
Process Creation (closer view) Cont… [Diagram: on create, the OS builds the PCB and loads the program file (header, code, data, symbol table) from disk into the process working space in memory: code, data, bss, heap, and stack.]
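As a minimal sketch of a create-process system call on a POSIX system (an assumption; the slides themselves are OS-neutral), a parent process creates a child with fork() and waits for it to end:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      pid_t pid = fork();            /* create-process system call */
      if (pid < 0) {                 /* creation failed */
          perror("fork");
          exit(EXIT_FAILURE);
      } else if (pid == 0) {         /* child: a new process with its own PCB */
          printf("child  pid=%d\n", getpid());
          _exit(EXIT_SUCCESS);       /* child finishes (EOP) */
      } else {                       /* parent: waits for the child to end */
          waitpid(pid, NULL, 0);
          printf("parent pid=%d reaped child %d\n", getpid(), pid);
      }
      return 0;
  }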
Process working space (run-time environment)
• stack: dynamic link, return address, local variables, function parameters, …
• heap: dynamically allocated variables (e.g., memory obtained with malloc(n);)
• bss: uninitialized global and static variables (e.g., int z;)
• data: constants / initialized data (e.g., int y = 7;)
• code: program text, write protected (e.g., the statement x = 4;)
Process working space (run-time environment) • Note: bss means "block started by symbol" and has that name for historical reasons. • Example: for a call to void sub(float total, int part) { int List[5]; float sum; … }, the stack frame holds the parameters (total, part), the return address, the dynamic link, and the local variables (List[5] and sum), on top of the heap, bss, data, and code segments shown on the previous slide.
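To make the segment layout concrete, here is a small illustrative C program (the variable and function names are chosen for the example) annotated with the segment each object typically occupies; exact placement may vary by compiler and platform:

  #include <stdio.h>
  #include <stdlib.h>

  int z;                    /* bss: uninitialized global variable */
  int y = 7;                /* data: initialized global variable */

  void sub(float total, int part) {     /* parameters live in the stack frame (or registers) */
      int List[5];                      /* stack: local array */
      float sum = total + (float)part;  /* stack: local variable */
      int *p = malloc(5 * sizeof *p);   /* heap: dynamically allocated memory */
      List[0] = (int)sum;
      printf("z=%d y=%d sum=%.1f List[0]=%d\n", z, y, sum, List[0]);
      free(p);
  }

  int main(void) {                      /* main() and sub() live in the code segment */
      sub(3.5f, 2);
      return 0;
  }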
Thread working space (run-time environment) Multithreading: Each thread is a unit of execution. A multithreaded process needs a stack per thread, all threads share the same address space, and each thread has its own TCB. [Diagram: stack-T1, stack-T2, and stack-main above the shared heap, bss, data, and code segments.]
Thread working space (run-time environment) Multithreading: All threads share the CPU's PC and SP registers. [Diagram: the PCB holds the process ID, open files, and other resources, plus one TCB per thread (TCB-T1, TCB-T2, TCB-main), each with its own SP, PC, registers, and state; the per-thread stacks sit above the shared heap, bss, data, and code. In this picture thread T2 is using the CPU, so the CPU's SP and PC hold T2's context.]
Thread working space (run-time environment) [Same diagram after a switch: thread T1 is now using the CPU, so the CPU's SP and PC hold T1's context.]
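As a hedged sketch using POSIX threads (one possible realization of the thread concept, assumed here rather than prescribed by the slides), each created thread gets its own stack while sharing the process's globals and heap:

  #include <pthread.h>
  #include <stdio.h>

  int shared = 0;                  /* lives in the shared data/bss segment */

  void *worker(void *arg) {
      int id = *(int *)arg;        /* 'id' lives on this thread's own stack */
      printf("thread %d sees shared=%d\n", id, shared);
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      int id1 = 1, id2 = 2;
      shared = 42;
      pthread_create(&t1, NULL, worker, &id1);   /* each call adds a per-thread context */
      pthread_create(&t2, NULL, worker, &id2);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return 0;
  }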
Ready to Running [Diagram: READY → RUNNING via the dispatcher; RUNNING → READY on a timer interrupt.] When a process reaches the head of the ready queue and the CPU is available, the process is dispatched, which means that the CPU is assigned to the process. This causes a transition from the ready state to the running state. When the time slice of the running process expires, it goes back to the ready state.
Process Continued… As the OS switches the CPU among processes, it uses the PCB to store the CPU information, or context, which represents the state of the process. The next slides show the OS performing a context switch between processes P1 (from Running to Ready) and P2 (from Ready to Running). When a context switch occurs, the state of the running process must be saved in its PCB and the state of the new process loaded into the CPU.
Case of Timer interrupt [Diagram: P1 goes RUNNING → READY on the timer interrupt; P2 goes READY → RUNNING on dispatch.] After a timer interrupt, the OS moves P1 back to the ready state and the CPU is assigned to P2.
Context switching [Timeline for P1, the OS, and P2: P1 executing, P2 ready → timer interrupt → the OS saves P1's state into PCB1 and reloads P2's state from PCB2 → dispatch → P2 executing, P1 ready → timer interrupt → the OS saves P2's state into PCB2 and reloads P1's state from PCB1 → dispatch → P1 executing, P2 ready.]
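The save/reload step can be sketched as a toy simulation in C; the structures and the context_switch() function below are invented for illustration and are not real kernel code:

  #include <stdio.h>

  /* Toy CPU context and PCB used only for this illustration. */
  struct context { unsigned long pc, sp, regs[4]; };
  struct sim_pcb { int pid; struct context ctx; };

  static struct context cpu;                 /* the single simulated CPU */

  /* Save the running process's context into its PCB,
   * then reload the next process's context into the CPU. */
  void context_switch(struct sim_pcb *old, struct sim_pcb *next) {
      old->ctx = cpu;                        /* save state into old PCB */
      cpu = next->ctx;                       /* reload state from next PCB */
      printf("switched from P%d to P%d (pc=%lu)\n", old->pid, next->pid, cpu.pc);
  }

  int main(void) {
      struct sim_pcb p1 = {1, {100, 5000, {0}}}, p2 = {2, {200, 6000, {0}}};
      cpu = p1.ctx;                          /* P1 is running */
      context_switch(&p1, &p2);              /* timer interrupt: P1 -> ready, P2 -> running */
      context_switch(&p2, &p1);              /* and back again on the next timer interrupt */
      return 0;
  }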
Case of I/O interrupt [Diagram: RUNNING → WAITING on a Start I/O system call; WAITING → READY on the I/O completion interrupt.]
Context switching [Timeline for P3, the OS, and P2: P3 executing, P2 ready → P3 issues Start I/O (SIO) → the OS saves P3's state into PCB3 and puts P3 into the I/O queue (P3 now waiting) → the OS reloads P2's state from PCB2 and dispatches it (P2 executing, P3 waiting in the I/O queue) → I/O interrupt → the OS suspends P2, handles the I/O, puts P3 back in the ready state, and resumes P2 (P2 executing, P3 ready).]
Handling I/O: Here the process waiting in the I/O queue is moved back to the ready state after the I/O request is completed.
Concurrency • Definition: Two or more processes execute concurrently when they execute different activities on different devices at the same time.
Concurrency Contd.. [Diagram: Processes 1, 2, and 3 spread across the running state, the ready state, and waits on device 1 and device 2, all at the same time.]
Concurrency Contd.. • In a multiprogramming system CPU time is multiplexed among a set of processes. • Users like to share their programs and data, and the OS must preserve the integrity of that information. • Processes can operate on shared data, for example through threads, and concurrent access to shared data may result in data inconsistency.
Concurrency Contd.. Example: Consider a shared variable X = 4 and two programs running concurrently. Let us assume that P1 is running and a timer interrupt occurs after it executes "Load X".
P1:              P2:
{                {
  Load X           Load X
  X ← X + 10       X ← X + 2
  Store X          Store X
}                }
(the timer interrupt occurs in P1 right after "Load X")
Concurrency Contd.. Once P1 is interrupted, the OS gathers its context and saves it in the PCB associated with P1, say PCB1; the value "4" that P1 had loaded into a CPU register is saved in PCB1. Since the CPU is now free, the OS assigns it to P2, which starts executing, enters its critical section, and loads the value "4" from the shared variable X. In its critical section P2 executes "X = X + 2" and runs to completion, leaving X = 6. After P2 finishes, P1 resumes execution: the value "4" is restored into the CPU register, P1 computes X = 4 + 10 = 14, and stores it. The final result is "14", which is wrong, because P2's update is lost.
Concurrency Contd.. • Now consider the case where P1 executes to completion first. The value of X becomes X = 4 + 10 = 14. Process P2 then executes and the value of X is changed to X = 14 + 2 = 16 (the correct answer).
Concurrency Contd.. • Here there are two different values for the same variable X depending on the order of execution of the instructions. • This is called a Race Condition. • It occurs when processes access shared variables without using an appropriate synchronization mechanism.
Race Condition • Definition: A race condition is an undesirable situation that occurs when two or more operations manipulate shared data concurrently and the outcome depends on the particular order in which the operations occur. To avoid a race condition, it is necessary to ensure that only one process at a time has exclusive access to the shared data.
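The lost-update scenario above can be reproduced with threads. The sketch below (POSIX threads, assumed for illustration) performs many unsynchronized read-modify-write updates of a shared variable X; the final value is usually smaller than expected because updates are lost, although the exact outcome depends on the scheduler:

  #include <pthread.h>
  #include <stdio.h>

  long X = 0;                              /* shared variable, no synchronization */

  void *adder(void *arg) {
      (void)arg;
      for (int i = 0; i < 1000000; i++) {
          X = X + 1;                       /* load X, add, store X: not atomic */
      }
      return NULL;
  }

  int main(void) {
      pthread_t p1, p2;
      pthread_create(&p1, NULL, adder, NULL);
      pthread_create(&p2, NULL, adder, NULL);
      pthread_join(p1, NULL);
      pthread_join(p2, NULL);
      /* Expected 2000000, but the updates interleave and some are lost. */
      printf("X = %ld\n", X);
      return 0;
  }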
Race Condition Contd.. Preventing other processes from accessing a shared variable while one process is accessing it is called mutual exclusion. To guarantee mutual exclusion we need some kind of synchronization mechanism. In most synchronization schemes a physical entity is used to represent the resource; this entity is often called a lock byte or a semaphore.
Process Synchronization • Concept of Critical Section: A critical section is the segment of code in which a shared variable is used. When several processes share a variable and one process is executing in its critical section, no other process is allowed to enter its own critical section.
Process Synchronization contd.. • Each process must request permission to enter the critical section (CS). • A solution to the CS problem must satisfy the following requirements: 1. Mutual exclusion 2. Progress
Process Synchronization contd.. • Mutual exclusion: When a process is executing in its critical section, no other process can execute in its critical section. • Progress: If no process is executing in its critical section and some processes wish to enter, one of them must be allowed to enter, and this decision cannot be postponed indefinitely.
Process Synchronization contd.. • Test and Set: Before entering the critical section a process executes a Lock(x) operation, and before leaving the CS it executes an Unlock(x) operation.
P1:            P2:
  ...            ...
  Lock(x)        Lock(x)
  {              {
    CS             CS
  }              }
  Unlock(x)      Unlock(x)
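In practice the Lock(x)/Unlock(x) pair corresponds to the lock and unlock operations of a mutual-exclusion primitive. As an illustrative sketch (using a POSIX mutex, which the slides do not mandate), the earlier racy update becomes:

  #include <pthread.h>
  #include <stdio.h>

  long X = 0;
  pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;   /* plays the role of Lock(x)/Unlock(x) */

  void *adder(void *arg) {
      (void)arg;
      for (int i = 0; i < 1000000; i++) {
          pthread_mutex_lock(&x_lock);     /* Lock(x): enter the critical section */
          X = X + 1;                       /* critical section: the shared update */
          pthread_mutex_unlock(&x_lock);   /* Unlock(x): leave the critical section */
      }
      return NULL;
  }

  int main(void) {
      pthread_t p1, p2;
      pthread_create(&p1, NULL, adder, NULL);
      pthread_create(&p2, NULL, adder, NULL);
      pthread_join(p1, NULL);
      pthread_join(p2, NULL);
      printf("X = %ld\n", X);              /* now reliably 2000000 */
      return 0;
  }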
Process Synchronization contd.. • If a system implements Test and Set as a hardware instruction, we can implement mutual exclusion with the help of a Boolean variable TS, initialized to "0", and two operations:
Lock:                                   Unlock:
  Label: if TS = 1 then goto Label        TS ← 0
         else TS ← 1
(the test of TS and the setting of TS are performed as one indivisible instruction, implemented in hardware)
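A hedged sketch of this busy-waiting lock in C, using the C11 atomic_flag operations to stand in for the hardware Test and Set instruction (a toolchain assumption; the slide itself is architecture-neutral):

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  atomic_flag TS = ATOMIC_FLAG_INIT;   /* the lock flag, initially 0 (unlocked) */
  long X = 0;

  void Lock(void) {
      /* atomic test-and-set: returns the old value and sets the flag to 1 */
      while (atomic_flag_test_and_set(&TS)) {
          /* busy waiting: spin until the flag was 0 */
      }
  }

  void Unlock(void) {
      atomic_flag_clear(&TS);          /* TS <- 0 */
  }

  void *adder(void *arg) {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          Lock();
          X = X + 1;                   /* critical section */
          Unlock();
      }
      return NULL;
  }

  int main(void) {
      pthread_t p1, p2;
      pthread_create(&p1, NULL, adder, NULL);
      pthread_create(&p2, NULL, adder, NULL);
      pthread_join(p1, NULL);
      pthread_join(p2, NULL);
      printf("X = %ld\n", X);          /* 200000 with the lock in place */
      return 0;
  }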
Process Synchronization contd.. • The main disadvantage here is that while one process is in the critical section, all other processes use the CPU only to repeatedly execute Test and Set. • This is called busy waiting. • To overcome this problem, the concept of semaphores was proposed by Dijkstra in: E. W. Dijkstra, "Cooperating Sequential Processes", in Programming Languages (F. Genuys, ed.), Academic Press, New York, 1968.
Concept of Semaphores • Semaphores: A semaphore S combines an integer variable and a pointer (to a queue of waiting processes) that can be accessed only through two standard "atomic" operations: • P(S) • V(S)
Concept of Semaphores • When a process executes a P(S) operation and finds that the semaphore value is not positive, the process blocks itself and the OS places it in the semaphore's waiting queue. • The process is restarted when some other process executes the V(S) operation, which changes the blocked process's state from waiting to ready.
Semaphores contd.. • The operations were originally named: P means Wait, V means Signal. [Diagram: the semaphore S holds a value and a queue of PCBs of waiting processes.]
Semaphores contd.. • The semaphore operations can be defined as follows:
P(S): inhibit interrupts;
      S.value := S.value - 1;
      if S.value < 0 then
        { add this process to S.queue }
      end;
      enable interrupts;
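As a usage-level counterpart (POSIX semaphores, where sem_wait plays the role of P and sem_post the role of V; this is an application sketch, not the kernel-level definition given above), mutual exclusion around a shared update looks like this:

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  sem_t S;          /* semaphore guarding the shared variable */
  long X = 0;

  void *adder(void *arg) {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          sem_wait(&S);       /* P(S): blocks if the semaphore value is not positive */
          X = X + 1;          /* critical section */
          sem_post(&S);       /* V(S): wakes a waiting process, if any */
      }
      return NULL;
  }

  int main(void) {
      pthread_t p1, p2;
      sem_init(&S, 0, 1);     /* initial value 1: a binary semaphore used as a mutex */
      pthread_create(&p1, NULL, adder, NULL);
      pthread_create(&p2, NULL, adder, NULL);
      pthread_join(p1, NULL);
      pthread_join(p2, NULL);
      printf("X = %ld\n", X);
      sem_destroy(&S);
      return 0;
  }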