
Operating Systems



  1. Operating Systems Prof. Navneet Goyal Department of Computer Science & Information Systems BITS, Pilani

  2. Topics for Today • Process Scheduling • Criteria & Objectives • Types of Scheduling • Long term • Medium term • Short term • A few simple scheduling algorithms

  3. The Players • Queues • Processes • Scheduler • Scheduling Algorithms

  4. Queues • Queue at the cash counter of a supermarket • Queue at the ticket counter of a movie theatre • Queue at the railway reservation counter • Queue at toll gates • Ready queue in memory – STARVATION!

  5. Processes • I/O bound • CPU bound • Processes alternate between CPU & I/O • Process execution begins with a CPU burst • A process terminates with a final CPU burst (system call to terminate execution) • Characteristics of CPU burst times

  6. CPU Bursts (figure taken from the textbook)

  7. Scheduler • Long term • Controls the degree of multiprogramming • Minutes may elapse between the creation of one new process and the next • Should select a good mix of I/O-bound and CPU-bound processes to maximize utilization • Medium term • Swapping • Short term (CPU scheduler) • Fast • Typically executes every 100 ms • Typically takes 10 ms to decide which process to schedule

  8. CPU Scheduler • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them • CPU scheduling decisions may take place when a process: 1. Switches from running to blocked state 2. Switches from running to ready state (interrupt) 3. Switches from blocked to ready (completion of I/O) 4. Terminates • Scheduling under 1 and 4 is non-preemptive, whereas scheduling under 2 & 3 is preemptive

  9. Preemptive & Non-preemptive Scheduling • So what is preemptive & non-preemptive scheduling? • Non-preemptive • Once the CPU is allocated to a process, the process holds it until it • Terminates (exits) • Blocks itself to wait for I/O • Used by Windows 3.x

  10. Preemptive & Non-preemptive Scheduling • Preemptive • The currently running process may be interrupted and moved to the ready state by the OS • Windows 95 introduced preemptive scheduling • Used in all subsequent versions of Windows • Mac OS X uses it

  11. Criteria & Objectives • CPU utilization – keep the CPU as busy as possible (40% – lightly loaded, 90% – heavily loaded) • Throughput – # of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process (total time spent on the system) • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time from when a request was submitted until the first response is produced, not the time it takes to output the response; in an interactive system, TAT may not be the best criterion

  12. Optimization Criteria • Max CPU utilization • Max throughput • Min turnaround time • Min waiting time • Min response time
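
To make these criteria concrete, here is a minimal sketch of how turnaround, waiting and response times can be computed once a schedule has finished. The tuple layout and the process times are assumptions chosen for illustration only, not the lecture's example.

```python
# Minimal sketch: computing scheduling criteria for one finished schedule.
# Each process is described by made-up (arrival, burst, completion, first_dispatch) times.

def metrics(processes):
    n = len(processes)
    turnaround = [c - a for a, b, c, f in processes]    # total time spent on the system
    waiting = [c - a - b for a, b, c, f in processes]   # time spent in the ready queue
    response = [f - a for a, b, c, f in processes]      # submission until first response
    makespan = max(c for a, b, c, f in processes)
    return {"throughput": n / makespan,
            "avg turnaround": sum(turnaround) / n,
            "avg waiting": sum(waiting) / n,
            "avg response": sum(response) / n}

print(metrics([(0, 3, 3, 0), (2, 6, 9, 3), (4, 4, 13, 9)]))
```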

  13. Types of Scheduling • CPU scheduling • I/O scheduling – the decision as to which process’s pending request shall be handled by an available I/O device

  14. Short-Term Scheduling • Known as the dispatcher • Executes most frequently • Invoked when an event occurs • Clock interrupts • I/O interrupts • Operating system calls

  15. Process Scheduling Example

  16. First-Come-First-Served (FCFS) • Each process joins the Ready queue • When the current process ceases to execute, the oldest process in the Ready queue is selected • (Gantt chart: processes 1–5 over the interval 0–20)

  17. First-Come-First-Served (FCFS) • Short jobs suffer a penalty (waiting time) • Favors CPU-bound processes over I/O-bound processes • I/O-bound processes have to wait until the CPU-bound process completes • May result in inefficient use of both the processor & I/O devices • Reason? The convoy effect!

  18. First-Come-First-Served (FCFS) • Not an attractive alternative on its own for single-processor systems • Must be combined with a priority scheme
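
A short simulation makes the convoy effect easy to see. The sketch below is only an illustration: it assumes jobs are given as (name, arrival, burst) tuples and ignores I/O entirely; the job list is made up, not the lecture's example.

```python
# FCFS sketch: run jobs strictly in arrival order and report when each runs.
def fcfs(jobs):
    """jobs: list of (name, arrival, burst); returns (name, start, finish) tuples."""
    schedule, clock = [], 0
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        start = max(clock, arrival)      # CPU may sit idle until the job arrives
        clock = start + burst            # non-preemptive: run to completion
        schedule.append((name, start, clock))
    return schedule

jobs = [("P1", 0, 6), ("P2", 1, 2), ("P3", 2, 1)]
for name, start, finish in fcfs(jobs):
    print(name, "runs", start, "to", finish)
# The short jobs P2 and P3 wait behind the long P1: the convoy effect.
```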

  19. Round Robin • Reduces the penalty that short jobs suffer with FCFS • Uses clock-based preemption • A ready job is selected from the queue on a FCFS basis • Also called ‘Time Slicing’

  20. Round-Robin (q=1) • (Gantt chart: processes 1–5 over the interval 0–20)

  21. Round-Robin (q=4) • DO IT YOURSELF!!

  22. Round-Robin • Main design issue – length of the time quantum • If q is very short: short processes move through very quickly (a plus), but there is more interrupt handling & context switching (a minus) • q should be greater than the time required for a typical interaction (fig) • Degenerates to FCFS in the limiting case (q very large)

  23. Time Quantum for Round Robin • Must be substantially larger than the time required to handle the clock interrupt and dispatching • Should be larger than the typical interaction time (but not much more, to avoid penalizing I/O-bound processes)
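
The effect of the quantum can be checked with a small simulation. This sketch assumes all jobs arrive at time 0 and ignores interrupt-handling and context-switch overhead; the burst lengths are illustrative. Note how a very large q reproduces the FCFS completion order.

```python
# Round-Robin sketch with a configurable time quantum q.
from collections import deque

def round_robin(jobs, q):
    """jobs: list of (name, burst); returns {name: completion_time}."""
    queue, clock, finish = deque(jobs), 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # timed out: back of the ready queue
        else:
            finish[name] = clock                    # completed within this slice
    return finish

print(round_robin([("P1", 6), ("P2", 2), ("P3", 4)], q=1))
print(round_robin([("P1", 6), ("P2", 2), ("P3", 4)], q=100))  # degenerates to FCFS
```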

  24. Round-Robin • Drawback: favors CPU-bound processes • An I/O-bound process uses the CPU for less than the time quantum and is then blocked waiting for I/O • A CPU-bound process runs for its full time slice and is put back into the ready queue (thus getting in front of the blocked processes) • Example (q=4) A: 1-I/O-1-I/O-1-I/O-1 B: 4-I/O-4-I/O-4 C: 4-I/O-4-I/O-4 D: 4-I/O-4-I/O-4-I/O-4 • Virtual Round Robin (VRR) avoids this unfairness

  25. Virtual Round-Robin (VRR) • What is VRR? • When a running process is timed out, it joins the ready queue (FCFS queue) • When a process is blocked, it joins an I/O queue • This is normal! • New queue: an auxiliary FCFS queue • Processes are moved to this queue after being released from an I/O block • When a dispatching decision is made, processes in the auxiliary queue are given preference over those in the main ready queue • Figure

  26. Queuing for Virtual Round Robin

  27. Virtual Round-Robin • A process selected from the auxiliary queue runs for q − q1 time, where q1 is the time it has already spent running since it was last selected from the main ready queue • That is, a process dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time already used • This is done to avoid monopolization of the CPU by I/O-bound processes
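
A rough sketch of the VRR dispatching rule follows. I/O handling is abstracted away and the queue contents are made up; only the two points above are modelled: the auxiliary queue is preferred, and a process taken from it gets only its leftover quantum.

```python
# VRR dispatch sketch: prefer the auxiliary queue, grant only the unused quantum.
from collections import deque

ready_queue = deque()   # timed-out processes: receive a full quantum q next time
aux_queue = deque()     # processes back from I/O, stored with their leftover quantum

def dispatch(q):
    if aux_queue:
        name, q_left = aux_queue.popleft()
        return name, q_left                 # run for q - q1, the unused part of the slice
    if ready_queue:
        return ready_queue.popleft(), q
    return None, 0

ready_queue.append("cpu-bound")
aux_queue.append(("io-bound", 3))           # had run q1 = q - 3 units before blocking
print(dispatch(q=4))                        # ('io-bound', 3): auxiliary queue goes first
```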

  28. Shortest Process Next (SPN or SJF) • A kind of priority scheduling • Gives priority to the process with the smallest next CPU burst • Decision mode: non-preemptive • I/O-bound processes will tend to be picked first • In case of a tie between 2 processes, break the tie using FCFS • Ideally it should be called the “shortest-next-CPU-burst” algorithm • Because scheduling depends on the length of the next CPU burst rather than on the process’s total length

  29. Shortest Process Next (SPN or SJF) • (Gantt chart: processes 1–5 over the interval 0–20)

  30. Shortest Process Next • Predictability of longer processes is reduced • Possibility of starvation for longer processes (bank example) • Reduces the bias in favor of longer processes that FCFS has • Reduces the average waiting time • If the estimated time for a process is not correct, the operating system may abort it • Provably optimal in terms of average waiting time
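
A non-preemptive SPN selection loop might look like the sketch below. It assumes the next CPU burst of every job is known exactly (in practice it must be estimated, as discussed later); the job list is illustrative.

```python
# SPN/SJF sketch: at each decision point run the ready job with the shortest burst.
def spn(jobs):
    """jobs: list of (name, arrival, burst); returns (name, completion_time) in run order."""
    pending, clock, done = list(jobs), 0, []
    while pending:
        ready = [j for j in pending if j[1] <= clock] or [min(pending, key=lambda j: j[1])]
        pick = min(ready, key=lambda j: (j[2], j[1]))   # shortest burst, FCFS tie-break
        clock = max(clock, pick[1]) + pick[2]           # non-preemptive: run to completion
        done.append((pick[0], clock))
        pending.remove(pick)
    return done

print(spn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 1)]))
```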

  31. Shortest Remaining Time (Preemptive SJF) • (Gantt chart: processes 1–5 over the interval 0–20)
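
The preemptive variant re-evaluates the choice whenever the shortest remaining time changes, e.g. on a new arrival. The sketch below steps one time unit at a time for clarity and again assumes exact burst lengths; the jobs are illustrative.

```python
# SRT sketch (preemptive SJF), advanced one time unit at a time.
def srt(jobs):
    """jobs: list of (name, arrival, burst); returns {name: completion_time}."""
    remaining = {name: burst for name, arrival, burst in jobs}
    arrival = {name: a for name, a, b in jobs}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                                   # CPU idle until the next arrival
            clock += 1
            continue
        run = min(ready, key=lambda n: remaining[n])    # shortest remaining time wins
        remaining[run] -= 1
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    return finish

print(srt([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
```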

  32. SJF • How to estimate the length of the next CPU burst? • Approximate SJF! • Predict the value of the next CPU burst • Philosophy: next CPU burst will be similar in length to the previous ones • Exponential Average of the measured lengths of previous CPU bursts

  33. Examples of Exponential Averaging • τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the measured length of the nth CPU burst and τ(n) is its predicted length • α = 0: τ(n+1) = τ(n), recent history does not count • α = 1: τ(n+1) = t(n), only the actual last CPU burst counts • If we expand the formula, we get: τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0) • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
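
In code the prediction is a one-line update applied after every measured burst. The sketch below uses α = 0.5 and an initial guess τ(0) = 10; both values and the list of measured bursts are illustrative.

```python
# Exponential-average prediction: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n).
def predict_bursts(measured, alpha=0.5, tau0=10.0):
    tau = tau0                        # initial guess used for the very first prediction
    predictions = [tau]
    for t in measured:                # t is the measured length of the latest CPU burst
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)       # prediction for the next burst
    return predictions

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
```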

  34. Normalized Turnaround Time • TAT (Tr) does not tell us anything about the breakup between the service time (Ts) and the waiting time • Normalized TAT (NTAT) = Tr / Ts • Indicates the relative delay experienced by a process • Minimum possible value for NTAT is 1.0 • An increasing value corresponds to a decreasing level of service

  35. Normalized Turnaround Time • Process with 1 unit waiting and 1 unit service: TAT = 1 (wt) + 1 (st) = 2, NTAT = TAT/st = 2/1 = 2 • Process with 1 unit waiting and 9 units service: TAT = 1 (wt) + 9 (st) = 10, NTAT = 10/9 ≈ 1.1 (good service) • By looking at TAT alone, we get no idea about the quality of service!

  36. Highest Response Ratio Next (HRRN) • NTAT is a better metric than TAT • For each individual process we would like to reduce this ratio • Minimize its average value for all processes • Consider Response Ratio (RR) RR = (w+s)/s, where w = time spent waiting for processor s = expected service time • Min (RR) = 1.0 (when w=0)

  37. Highest Response Ratio Next (HRRN) • When the current process completes or is blocked, choose the ready process with the maximum value of RR • HRRN is attractive as it incorporates the age of a process (in terms of w) • Shorter jobs are favored (small s) • Aging without service (increase in w) also increases RR • Like SJF & SRT, HRRN needs to estimate the expected service time • Do the running example with HRRN
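
The selection rule itself is a one-liner, sketched below with made-up arrival times and service estimates (in practice s would come from the same kind of burst estimation as SJF).

```python
# HRRN sketch: pick the ready process with the highest (w + s) / s.
def hrrn_pick(ready, clock):
    """ready: list of (name, arrival, est_service); returns the name to dispatch."""
    def response_ratio(p):
        name, arrival, s = p
        w = clock - arrival            # time spent waiting in the ready queue so far
        return (w + s) / s
    return max(ready, key=response_ratio)[0]

print(hrrn_pick([("P1", 0, 8), ("P2", 3, 4), ("P3", 5, 2)], clock=10))
```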

  38. HRRN Example • (Gantt chart: processes 1–5 over the interval 0–20)

  39. Priority Scheduling • SJF is a special case of a general priority scheduling algorithm • A priority is associated with each process • The CPU is allocated to the process with the highest priority • Ties are broken using FCFS • SJF: the shorter the next CPU burst, the higher the priority

  40. Priority Scheduling • Preemptive • Non-preemptive

  41. Priority Scheduling • Starvation is a major problem • A steady stream of higher-priority processes can prevent a lower-priority process from ever getting the CPU • It may get the CPU at an odd hour when the system is lightly loaded • Or the system crashes and loses all unfinished low-priority processes • When the IBM 7094 was shut down in 1973 at MIT, a low-priority process was found that had been submitted in 1967!!!

  42. Priority Scheduling • Solution – aging • Gradually increase the priority of a process that has been waiting in the system for a long time • After every t time units, increase the priority by 1
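
A sketch of such an aging rule is shown below. The interval T, the size of the bump, and the PCB fields are assumptions made for illustration; here larger numbers mean higher priority.

```python
# Aging sketch: periodically bump the priority of processes stuck in the ready queue.
T = 5   # aging interval in time units (illustrative)

def age(ready_queue, now):
    for pcb in ready_queue:
        if now - pcb["last_boost"] >= T:
            pcb["priority"] += 1       # gradual promotion of long waiters
            pcb["last_boost"] = now

def pick(ready_queue):
    return max(ready_queue, key=lambda p: p["priority"])

low = {"name": "low", "priority": 1, "last_boost": 0}
for now in range(0, 50, T):            # high-priority arrivals keep a fixed priority of 9
    age([low], now)
print(low["priority"])                 # after enough intervals, "low" overtakes them
```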

  43. Priority Scheduling • Static Priorities • Priority of a process never changes • Dynamic Priorities • Priority of a process changes with time (aging)

  44. Multi-level queue Scheduling • Single queue – processes with different priorities are kept in one queue • The dispatcher needs to search for the highest-priority process

  45. Multi-level queue Scheduling • Static Priorities

  46. Multi-level Feedback Queue Scheduling • If service time is not known, we cannot use any of the following: • SJF or SPN • SRT • HRRN • Cannot focus on the time remaining to execute • Focus instead on the time spent on the processor • Another way of penalizing processes that have been running longer and giving preference to shorter jobs

  47. Multi-level Feedback Queue Scheduling • Quantum based preemption • Dynamic priority mechanism • With each quantum based preemption, a process is demoted to the next lower-priority queue • A short process will complete quickly without slipping very far down the hierarchy of ready queues • A longer process will gradually drift down

  48. Feedback Scheduling • Newer, shorter processes are favored over older, longer processes • In each ready queue, except the lowest-priority queue, a FIFO discipline is used • The lowest-priority queue is treated in RR fashion • The scheme is called the multi-level feedback scheme

  49. Example of Multilevel Feedback Queue • Three queues: • Q0 – RR with time quantum 8 milliseconds • Q1 – RR with time quantum 16 milliseconds • Q2 – FCFS • Scheduling • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
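
The example can be traced with a small sketch. It assumes all jobs are present at time 0 (so preemption by a newly arriving higher-queue job never triggers) and the burst lengths are made up for illustration.

```python
# Sketch of the three-queue example: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    queues = [deque(jobs), deque(), deque()]     # Q0, Q1, Q2
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        slice_ = remaining if level == 2 else min(quanta[level], remaining)
        clock += slice_
        remaining -= slice_
        if remaining == 0:
            finish[name] = clock
        else:
            queues[level + 1].append((name, remaining))      # demoted on time-out
    return finish

print(mlfq([("short", 5), ("medium", 20), ("long", 40)]))
```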

  50. Multilevel Feedback Queues
