Operating Systems Prof. Navneet Goyal Department of Computer Science & Information Systems BITS, Pilani
Topics for Today • Process Scheduling • Criteria & Objectives • Types of Scheduling • Long term • Medium term • Short term
CPU & I/O Bursts • Process execution consists of a sequence of alternating CPU and I/O bursts • Processes alternate between these two states • The distribution of CPU-burst lengths is typically characterized by many short bursts and a few long ones
CPU Scheduler • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them • CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state 2. Switches from running to ready state 3. Switches from waiting to ready 4. Terminates • Scheduling under 1 and 4 is non-preemptive • All other scheduling is preemptive
Preemptive & Non-preemptive Scheduling • So what is preemptive & non-preemptive scheduling? • Non-preemptive • Once the CPU is allocated to a process, the process holds it until it • Terminates (exits) • Blocks itself to wait for I/O • Requests some OS service • Used by Windows 3.x
Preemptive & Non-preemptive Scheduling • Preemptive • The currently running process may be interrupted and moved to the ready state by the OS • Windows 95 introduced preemptive scheduling • Used in all subsequent versions of Windows • Mac OS X uses it as well
Dispatcher • Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program (IP) • Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Criteria & Objectives • CPU utilization – keep the CPU as busy as possible (around 40% on a lightly loaded system, up to 90% on a heavily loaded one) • Throughput – # of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process (total time spent on the system) • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)
Optimization Criteria • Max CPU utilization • Max throughput • Min turnaround time • Min waiting time • Min response time
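The criteria above can be made concrete with a small calculation. The sketch below (with hypothetical arrival and service times, all names my own) computes turnaround and waiting time for a first-come-first-served order:

```python
# Hypothetical example: turnaround and waiting time for a FCFS schedule.
# processes: list of (arrival_time, service_time), sorted by arrival.
def fcfs_metrics(processes):
    """Return (turnaround, waiting) lists for FCFS order."""
    clock = 0
    turnaround, waiting = [], []
    for arrival, service in processes:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        finish = start + service
        turnaround.append(finish - arrival)  # total time spent on the system
        waiting.append(start - arrival)      # time spent in the ready queue
        clock = finish
    return turnaround, waiting

tat, wait = fcfs_metrics([(0, 3), (2, 6), (4, 4)])
# Process 1 runs 0-3, process 2 runs 3-9, process 3 runs 9-13
```

Minimizing the sum (or maximum) of these lists is exactly the optimization problem the different policies below trade off in different ways.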
Types of Scheduling • Long-term scheduling • the decision to add to the pool of processes to be executed • Medium-term scheduling • the decision to add to the number of processes that are partially or fully in memory • Short-term scheduling • the decision as to which available process will be executed • I/O scheduling • the decision as to which process's pending request shall be handled by an available I/O device
Long-Term Scheduling • Determines which programs are admitted to the system for processing • Controls the degree of multiprogramming • More processes, smaller percentage of time each process is executed
Medium-Term Scheduling • Part of the swapping function • Based on the need to manage the degree of multiprogramming
Short-Term Scheduling • Known as the dispatcher • Executes most frequently • Invoked when an event occurs • Clock interrupts • I/O interrupts • Operating system calls
First-Come-First-Served (FCFS) • Each process joins the tail of the Ready queue • When the current process ceases to execute, the oldest process in the Ready queue is selected • [Gantt chart: FCFS schedule of processes 1–5 over time 0–20]
First-Come-First-Served (FCFS) • Short jobs suffer a penalty • Favors CPU-bound processes over I/O-bound processes • I/O-bound processes have to wait until a CPU-bound process completes • May result in inefficient use of both the processor & the I/O devices • Reasons?
First-Come-First-Served (FCFS) • Not an attractive alternative on its own for single-processor systems; must be combined with a priority scheme
Round Robin • Reduces the penalty that short jobs suffer with FCFS • Uses clock-based preemption • A ready job is selected from the queue on a FCFS basis • Also called 'Time Slicing'
Round-Robin (q = 1) • [Gantt chart: RR schedule with quantum 1 of processes 1–5 over time 0–20]
Round-Robin (q=4) • DO IT YOURSELF!!
Round-Robin • Main design issue – length of the time quantum • If q is very short: • (+) short processes will move through very quickly • (−) more interrupt-handling & context-switching overhead • q should be slightly greater than the time required for a typical interaction (see figure) • Degenerates to FCFS in the limiting case (q larger than the longest burst)
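The effect of the quantum can be seen directly in a small simulation. This is a minimal sketch (hypothetical burst times, all processes arriving at t = 0, context-switch cost ignored) of the RR dispatch loop:

```python
from collections import deque

def round_robin(bursts, q):
    """Simulate RR with quantum q; all processes arrive at t = 0.
    bursts: list of service times. Returns the finish time of each process."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])   # run for one quantum, or until completion
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # timed out: rejoin the tail of the queue
        else:
            finish[i] = t
    return finish
```

Comparing `round_robin([3, 6, 4], 1)` with `round_robin([3, 6, 4], 4)` shows the trade-off: a small q lets the short job finish relatively early but costs many more dispatches, while a large q behaves much like FCFS.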
Round-Robin • Drawback: unfair relative treatment of CPU-bound & I/O-bound processes • Reason? • Virtual Round Robin (VRR) avoids this unfairness • So what is VRR?
Virtual Round-Robin • When a running process is timed out, it joins the ready queue (FCFS queue) • When a process blocks, it joins an I/O queue • This is normal! • New queue: an auxiliary FCFS queue • Processes are moved to this queue after being released from an I/O block • When a dispatching decision is made, processes in the auxiliary queue are given preference over those in the main ready queue (see figure)
Virtual Round-Robin • Process selected from the auxiliary queue runs for q-q1 time • q1 is the time it ran the last time when selected from the ready queue • WHY q – q1??
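The reason for q − q1 is fairness: an I/O-bound process that blocked after using only q1 of its quantum gets back only the unused remainder, so over one pass through the queues it receives exactly one full quantum in total, no more than a CPU-bound process. A one-line sketch of the rule:

```python
def vrr_quantum(q, q1):
    """Time slice for a process dispatched from the auxiliary queue.
    q1 is how long it ran before blocking for I/O, so it receives only
    the unused remainder of its quantum: in total it gets exactly q per
    pass, the same as a process that never blocks."""
    return q - q1

# A process that ran 1 unit of a 4-unit quantum before blocking
# gets the remaining 3 units when dispatched from the auxiliary queue.
assert vrr_quantum(4, 1) == 3
```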
Shortest Process Next (SPN or SJF) • Non-preemptive policy • The process with the shortest expected processing time is selected next • A short process jumps ahead of longer processes • [Gantt chart: SPN schedule of processes 1–5 over time 0–20]
Shortest Process Next • Predictability of longer processes is reduced • If the estimated time for a process is not correct, the operating system may abort it • Possibility of starvation for longer processes • Although SPN reduces the bias in favor of longer processes, it is still undesirable for time-sharing environments because it lacks preemption
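The selection rule itself is simple. A minimal non-preemptive SPN sketch (hypothetical process data; estimated service times taken as exact):

```python
def spn_schedule(processes):
    """Non-preemptive Shortest Process Next.
    processes: list of (arrival_time, expected_service_time).
    Returns the indices of the processes in dispatch order."""
    pending = list(enumerate(processes))
    order, t = [], 0
    while pending:
        ready = [p for p in pending if p[1][0] <= t]
        if not ready:                    # CPU idle: jump to the next arrival
            t = min(p[1][0] for p in pending)
            continue
        # among ready processes, pick the shortest expected service time
        i, (arr, svc) = min(ready, key=lambda p: p[1][1])
        order.append(i)
        t += svc                         # non-preemptive: runs to completion
        pending.remove((i, (arr, svc)))
    return order
```

Note the starvation hazard is visible in the code: a long job in `ready` keeps losing the `min` to every newly arrived short job.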
Shortest Remaining Time (SRT) • Preemptive version of the SPN policy • Must estimate processing time • [Gantt chart: SRT schedule of processes 1–5 over time 0–20]
Analysis of Scheduling Algorithms • Like SPN, SRT reduces the predictability of longer processes • If the estimated time for a process is not correct, the operating system may abort it • Possibility of starvation for longer processes
Normalized Turnaround Time • TAT (Tr) does not tell us anything about the breakup between the service time (Ts) and the waiting time • Normalized TAT (NTAT) = Tr / Ts • Indicates the relative delay experienced by a process • Minimum possible value for NTAT is 1.0 • Increasing values correspond to a decreasing level of service
Normalized Turnaround Time • A short process (Ts = 1) that waits 1 time unit: TAT = 1 + 1 = 2, NTAT = 2/1 = 2.0 (poor service) • A long process (Ts = 9) that waits 1 time unit: TAT = 1 + 9 = 10, NTAT = 10/9 ≈ 1.1 (good service) • By looking at TAT alone, we get no idea about the quality of service!
Some Final Remarks • FCFS and RR (including VRR) do not require information about service time!! • SJF & SRT both need to know, or at least estimate, the processing time of each process • Estimates are typically obtained by exponential smoothing of past CPU-burst lengths
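The exponential smoothing (exponential average) referred to here is the standard burst-length predictor: S(n+1) = α·T(n) + (1 − α)·S(n), where T(n) is the measured length of the n-th burst, S(n) the previous estimate, and 0 ≤ α ≤ 1 weights recent history against the past. A minimal sketch (initial estimate and α are arbitrary choices):

```python
def smooth(bursts, alpha=0.5, s0=10.0):
    """Exponential average of CPU-burst lengths:
    S(n+1) = alpha * T(n) + (1 - alpha) * S(n)."""
    s = s0
    estimates = []
    for t in bursts:
        s = alpha * t + (1 - alpha) * s   # blend new observation with history
        estimates.append(s)
    return estimates
```

With α = 1 the estimate is just the last burst; with α = 0 measured history is ignored entirely. α = 1/2 (as above) gives equal weight to the most recent burst and the accumulated past.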
Highest Response Ratio Next (HRRN) • NTAT is a better metric than TAT • For each individual process we would like to reduce this ratio • Minimize its average value over all processes • Consider R = (w + s) / s, where • R = response ratio • w = time spent waiting for the processor • s = expected service time • Min(R) = 1.0 (when w = 0)
Highest Response Ratio Next (HRRN) • When current process completes or is blocked, choose the ready process with max. value of R • HRRN is attractive as it incorporates age of a process (in terms of w) • Shorter jobs are favored (small s) • Aging without service (increase in w) will increase R • Like SJF & SRT, HRRN also needs to estimate expected service time • DO the running example with HRRN
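The selection rule can be sketched in a few lines (hypothetical process data; estimated service times taken as exact):

```python
def hrrn_pick(candidates, now):
    """Choose the ready process with the highest response ratio
    R = (w + s) / s.  candidates: list of (arrival_time, expected_service)."""
    def ratio(p):
        arrival, s = p
        w = now - arrival          # time spent waiting so far (aging term)
        return (w + s) / s
    return max(candidates, key=ratio)
```

The two tendencies named above are both visible: a freshly arrived short job starts at R = 1, so a long job that has been waiting (large w) can still beat it, while later on the short job's small s makes its ratio grow fastest, preventing it from languishing.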
HRRN Example • [Gantt chart: HRRN schedule of processes 1–5 over time 0–20]
Feedback Scheduling • If service time is not known, we cannot use any of the following: • SJF (SPN) • SRT • HRRN • Focus instead on the time already spent on the processor • Another way of penalizing jobs that have been running longer and giving preference to shorter jobs
Feedback Scheduling • Quantum based preemption • Dynamic priority mechanism • With each preemption, a process is demoted to the next lower-priority queue • A short process will complete quickly without slipping very far down the hierarchy of ready queues • A longer process will gradually drift down
Feedback Scheduling • Newer, shorter processes are favored over older, longer processes • In each ready queue except the lowest-priority queue, FCFS is used • The lowest-priority queue is treated in RR fashion • The scheme is called the multilevel feedback scheme
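The demotion mechanics can be sketched as follows. This is a simplified model (hypothetical burst times, all processes arriving at t = 0, a fixed quantum at every level, I/O ignored), not a full implementation:

```python
from collections import deque

def feedback(bursts, q=1, levels=3):
    """Multilevel feedback: each preemption demotes the process one
    queue; the lowest queue is served round-robin (processes re-enter
    it rather than being demoted further). Returns finish times."""
    queues = [deque() for _ in range(levels)]
    for i in range(len(bursts)):
        queues[0].append(i)            # every process enters the top queue
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while any(queues):
        lvl = next(l for l, qu in enumerate(queues) if qu)  # highest non-empty
        i = queues[lvl].popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = t
        else:
            queues[min(lvl + 1, levels - 1)].append(i)      # demote, or stay at bottom
    return finish
```

A short process finishes in the upper queues; a long one drifts to the bottom queue and thereafter shares it round-robin, exactly the behavior described in the bullets above.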
Feedback Scheduling: Example (q = 1) • [Gantt chart: feedback schedule with quantum 1 of processes 1–5 over time 0–20]
Feedback Scheduling: Example (q = 2^i) • [Gantt chart: feedback schedule where a process dispatched from queue level i receives a quantum of 2^i, processes 1–5 over time 0–20]
Comparison • [Table comparing TAT and NTAT across the algorithms discussed]
Fair Share Scheduling • All scheduling algorithms discussed so far treat the collection of ready processes as a homogeneous collection • Generally a user application or job consists of many processes • This structure is not recognized by traditional schedulers • The user per se is not interested in how an individual process performs • but rather in how the set of processes constituting a single application performs
Fair Share Scheduling • Introduces two levels of abstraction, viz. users & groups • Users of the same department form one group • Attempts to give each group/user similar service • If a large number of users from one dept. log on to the system, we make sure that only the response times of members of that dept. suffer, rather than those of other depts.
Fair Share Scheduling • FSS is all about making scheduling decisions based on process sets rather than on basis of individual processes • Each group/user is assigned a share of the processor • Considers the execution history of a related group of processes, along with the individual execution history of each process in making scheduling decisions • Scheduling is based on priority
Fair Share Scheduling • Each group is allocated a weight Wk • 0 ≤ Wk ≤ 1, with the weights summing to 1 over all groups • Each process is assigned a base priority • The priority of a process drops as the process and its group use the processor • The greater the weight assigned to a group, the less its utilization affects its priority • Priorities are revised at fixed intervals • CPU utilization is measured using clock ticks
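In the classic fair-share formulation (the traditional FSS described in the operating-systems literature; the numeric constants are that formulation's conventions, and the example values below are hypothetical), usage counts are halved at each recalculation interval and the priority is recomputed from the base priority, the process's own decayed usage, and its group's decayed usage divided by the group weight:

```python
def fss_priority(base, cpu, gcpu, weight):
    """Classic fair-share priority (lower value = higher priority):
        P = Base + CPU/2 + GCPU/(4 * W)
    cpu and gcpu are the decayed usage counts of the process and
    its group; a larger group weight W softens the group penalty."""
    return base + cpu / 2 + gcpu / (4 * weight)

def decay(count):
    """Usage counts are halved at each recalculation interval."""
    return count / 2

# Hypothetical numbers: after one interval of heavy use (60 ticks),
# a process with base 60 in a group of weight 0.5 is penalized
# relative to an idle process with the same base priority.
busy = fss_priority(60, decay(60), decay(60), 0.5)   # 60 + 15 + 15
idle = fss_priority(60, 0, 0, 0.5)                   # 60
assert busy > idle
```

Because the penalty term GCPU/(4·W) is shared by every process in the group, one group monopolizing the CPU raises the priority values (i.e., lowers the effective priority) of all its members, which is exactly the behavior the problem below asks you to trace.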
Problem • 4 processes in 3 groups • The processor is interrupted 60 times per second • During each interrupt, the processor-usage field of the currently running process is incremented, as is the corresponding group processor-usage field • Once per second, the priorities are recalculated • Using the FSS algorithm, draw the Gantt chart for the first 5 seconds
Coming Up … • Process Coordination • Synchronization • Deadlocks