
Chapter 6 CPU Scheduling



  1. Chapter 6: CPU Scheduling

  2. Chapter 6: CPU Scheduling • Basic Concepts • Scheduling Criteria • Scheduling Algorithms • Multiple-Processor Scheduling • Real-Time Scheduling • Algorithm Evaluation

  3. Basic Concepts § 6.1 • The objective of multiprogramming is to maximize CPU utilization. • CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait. (Fig. 6.1) • CPU burst distribution: a large number of short CPU bursts and a small number of long CPU bursts. (Fig. 6.2) • An I/O-bound program typically has many short CPU bursts; a CPU-bound program might have a few long CPU bursts.

  4. Alternating Sequence of CPU and I/O Bursts

  5. Histogram of CPU-burst Times

  6. CPU Scheduler § 6.1.2 • When the CPU becomes idle, the OS must select one of the processes in memory that are ready to execute and allocate the CPU to it. • CPU scheduling decisions may take place when a process: 1. Switches from the running to the waiting state. 2. Switches from the running to the ready state. 3. Switches from the waiting to the ready state. 4. Terminates.

  7. Preemptive Scheduling § 6.1.3 • Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state. • Scheduling that takes place only under circumstances 1 and 4 is nonpreemptive (cooperative); otherwise, it is preemptive. • Certain hardware platforms can use only cooperative scheduling because they are not equipped with the special hardware (e.g., a timer) required for preemptive scheduling.

  8. Preemptive Scheduling • Nonpreemptive: Windows 3.x, earlier versions of the Macintosh OS • Preemptive: Windows 95, Mac OS for the PowerPC • Unfortunately, preemptive scheduling: • incurs a cost associated with coordinating access to shared data; • has an effect on the design of the OS kernel.

  9. Dispatcher § 6.1.4 • The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program • Dispatch latency – the time it takes for the dispatcher to stop one process and start running another; it should be as small as possible.

  10. Scheduling Criteria § 6.2 • CPU utilization – keep the CPU as busy as possible • Throughput – number of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time from when a request was submitted until the first response is produced, not the time to output that response (for time-sharing environments)

  11. Optimization Criteria • Max CPU utilization • Max throughput • Min turnaround time • Min waiting time • Min response time

  12. Scheduling Algorithms § 6.3 • CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. • First-Come, First-Served Scheduling • Shortest-Job-First Scheduling • Priority Scheduling • Round-Robin Scheduling • Multilevel Queue Scheduling • Multilevel Feedback-Queue Scheduling

  13. First-Come, First-Served (FCFS) Scheduling § 6.3.1 [Diagram: new processes enter a queue of PCBs for processes waiting to run; the process at the head of the queue is next in line to access the CPU and keeps control of the CPU until it finishes.]

  14. First-Come, First-Served (FCFS) Scheduling § 6.3.1 • Example: Process / Burst Time: P1 24, P2 3, P3 3 • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: | P1 (0–24) | P2 (24–27) | P3 (27–30) | • Waiting time for P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17

  15. First-Come, First-Served (FCFS) Scheduling • Suppose that the processes arrive in the order P2, P3, P1. • The Gantt chart for the schedule is: | P2 (0–3) | P3 (3–6) | P1 (6–30) | • Waiting time for P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3 • Much better than the previous case.
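The FCFS waiting-time arithmetic on the two slides above can be checked with a short Python sketch (the helper name is mine, not from the slides):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, given bursts in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)      # a process waits while all earlier bursts run
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27 -> average 17
# Order P2, P3, P1 (bursts 3, 3, 24): waits 0, 3, 6  -> average 3
```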

  16. First-Come, First-Served (FCFS) Scheduling • Convoy effect: many short processes wait for one long process to get off the CPU, resulting in lower CPU and device utilization. • The FCFS scheduling algorithm is nonpreemptive.

  17. Shortest-Job-First (SJF) Scheduling § 6.3.2 • Associate with each process the length of its next CPU burst (not the total job length). Use these lengths to schedule the process with the shortest time. • Two schemes: • Nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst. • Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). • SJF is optimal – it gives the minimum average waiting time for a given set of processes.

  18. Shortest-Job-First (SJF) Scheduling • Example: Process / Burst Time: P1 6, P2 8, P3 7, P4 3 • The Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) | • Average waiting time = (3 + 16 + 9 + 0)/4 = 7 milliseconds • Compare to 10.25 milliseconds under FCFS.
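The nonpreemptive case above is just "sort by next burst length"; a minimal sketch, assuming all processes arrive at time 0 (the helper name is illustrative):

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF: run processes in order of next CPU burst length.

    bursts: dict mapping process name -> next CPU burst length.
    Returns a dict of per-process waiting times (all arrive at time 0).
    """
    clock, waits = 0, {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock          # time spent in the ready queue
        clock += burst
    return waits

w = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
# average waiting time: (3 + 16 + 9 + 0)/4 = 7
```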

  19. SJF Difficulty • Knowing the length of the next CPU request is not easy. • SJF is used frequently in long-term scheduling, where the user specifies the length when submitting the job. • SJF cannot be implemented at the level of short-term CPU scheduling – there is no way to know the length of the next CPU burst. • We can only approximate SJF by predicting the next CPU burst to be similar in length to the previous ones.

  20. Predicting the next CPU burst • The next CPU burst can be predicted as an exponential average of the lengths of previous CPU bursts: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth CPU burst, τ(n+1) is the predicted value for the next CPU burst, and α (0 ≤ α ≤ 1) is the relative weight of the most recent information versus past history.

  21. Predicting the next CPU burst • α = 0 (recent history does not count): τ(n+1) = τ(n) • α = 1 (only the actual last CPU burst counts): τ(n+1) = t(n) • α = ½: recent history and past history are equally weighted

  22. Predicting the next CPU burst • If we expand the formula, we get: τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0) • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
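The exponential average is a one-line recurrence; a sketch in Python, where the initial guess τ(0) = 10 and α = ½ are assumed values for illustration:

```python
def predict_bursts(bursts, alpha=0.5, tau0=10):
    """Exponential averaging: tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n).

    bursts: measured CPU burst lengths t(0), t(1), ...
    tau0:   assumed initial guess for the first prediction.
    Returns the sequence of predictions tau(0), tau(1), ...
    """
    tau = tau0
    predictions = [tau]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend newest burst with history
        predictions.append(tau)
    return predictions
```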

  23. Shortest-Remaining-Time-First Scheduling • When a new process arrives at the ready queue while a previous process is executing, the new process may preempt the currently executing process. • Example: Process / Arrival Time / Burst Time: P1 0 8, P2 1 4, P3 2 9, P4 3 5 • The Gantt chart: | P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) | • Average waiting time: ((10 − 1) + (1 − 1) + (17 − 2) + (5 − 3))/4 = 6.5 milliseconds (nonpreemptive SJF: 7.75)
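The preemptive schedule above can be reproduced by simulating one time unit at a time (a sketch with hypothetical helper names; ties in remaining time are broken by arrival order):

```python
def srtf_avg_wait(procs):
    """Shortest-Remaining-Time-First, simulated one time unit at a time.

    procs: list of (name, arrival_time, burst_length).
    Returns the average waiting time.
    """
    remaining = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                       # CPU idle until next arrival
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[cur] -= 1
        clock += 1
        if remaining[cur] == 0:
            finish[cur] = clock
            del remaining[cur]
    waits = [finish[n] - a - b for n, a, b in procs]   # turnaround minus burst
    return sum(waits) / len(waits)
```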

  24. Priority Scheduling § 6.3.3 • A priority number (integer) is associated with each process; the CPU is allocated to the process with the highest priority. • SJF is a priority-scheduling algorithm where the priority is the predicted next CPU burst time. • Priorities can be defined either internally or externally. • Internally (measurable quantities): time limits, memory requirements, number of open files, ratio of average I/O burst to average CPU burst. • Externally (criteria external to the OS): importance of the process, type and amount of funds being paid, the department sponsoring the work. • Problem: starvation – low-priority processes may never execute. • Solution: aging – gradually increasing the priority of processes that wait in the system for a long time.
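The aging idea can be sketched as follows. This is a hypothetical representation (the slides do not prescribe one): each ready process is a `[name, priority]` pair, a smaller number means higher priority, and every process left waiting has its number lowered on each selection.

```python
def pick_and_age(ready, boost=1):
    """Select the highest-priority process and age the waiters.

    ready: list of [name, priority]; smaller number = higher priority
    (an assumed convention).  The selected entry is removed; every
    remaining entry's priority number is reduced by `boost` (aging),
    so long waits eventually outrank any fixed priority.
    """
    ready.sort(key=lambda entry: entry[1])
    chosen = ready.pop(0)[0]
    for entry in ready:                  # aging: waiters gain priority
        entry[1] = max(0, entry[1] - boost)
    return chosen
```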

  25. Round Robin (RR) § 6.3.4 • Designed for time-sharing systems: preemption is added to FCFS scheduling. • Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. • The average waiting time under RR is often long. Example: Process / Burst Time: P1 24, P2 3, P3 3, with a quantum of 4: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) | • The average waiting time is 17/3 = 5.66 milliseconds.
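The round-robin example can be simulated with a FIFO queue (a sketch assuming all processes arrive at time 0; the helper name is mine):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Round-robin scheduling, all processes arriving at time 0.

    bursts: list of (name, burst_length); quantum: the time slice.
    Returns the average waiting time.
    """
    queue = deque(bursts)
    clock, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)
        clock += run
        if rem > run:                        # quantum expired: back of the line
            queue.append((name, rem - run))
        else:
            finish[name] = clock
    total = dict(bursts)
    waits = [finish[n] - total[n] for n in total]   # turnaround minus burst
    return sum(waits) / len(waits)
```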

  26. Round robin scheduling [Diagram: a circular queue of PCBs for processes in the ready state waiting to use the CPU. The process at the head gains control of the CPU and leaves it when (a) the process finishes, (b) the OS removes control because the quantum expires, returning the PCB to the ready queue, or (c) the process requests an event such as an I/O operation, moving its PCB to a list of PCBs waiting for the event's completion; when the event is complete, the PCB returns to the ready queue so it can compete for the CPU again.]

  27. Round Robin (RR) • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. • Performance: • q large → behaves like FIFO • q small → called processor sharing; it appears as if each of the n processes runs at 1/n the speed of the real processor. (q must be large with respect to the context-switch time; otherwise the overhead is too high.)

  28. Small Time Quantum Increases Context Switches

  29. Turnaround time vs. time quantum • The average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases. • In general, the average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum.

  30. Turnaround time vs. time quantum

  31. Multilevel Queue Scheduling § 6.3.5 • The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch). • Each queue has its own scheduling algorithm: foreground – round-robin; background – FCFS. • Processes are assigned to a queue based on some property of the process, such as memory size, process priority, or process type. • Scheduling must also be done between the queues: • Fixed-priority scheduling, i.e., serve all from foreground, then from background. Possibility of starvation. • Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR, 20% to background in FCFS.

  32. Multilevel Queue Scheduling • There must be scheduling between the queues. Example: Fig. 6.6 (each queue has absolute priority over lower-priority queues). • Fixed-priority preemptive scheduling: • No process in a lower queue can run unless the higher queues are empty. • If a higher-level process enters the ready queue while a lower-level process is running, the lower-level process is preempted. • Time slice between the queues: each queue gets a certain portion of the CPU time.

  33. Multilevel Queue Scheduling

  34. Multilevel Feedback Queue § 6.3.6 • A process can move between the various queues. • If a process uses too much CPU time, it is moved to a lower-priority queue; this leaves I/O-bound and interactive processes in the higher-priority queues. • A process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This aging prevents starvation.

  35. Multilevel Feedback Queue • Example: three queues: • Q0 – time quantum 8 milliseconds • Q1 – time quantum 16 milliseconds • Q2 – FCFS • Scheduling: • A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1. • At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
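The three-queue example can be sketched as a simulation. This is a simplified model, not the full scheme: all jobs are assumed to arrive at time 0, and preemption of a lower queue by a new arrival is not modeled.

```python
from collections import deque

def mlfq(jobs):
    """Sketch of the three-queue example: Q0 (q=8), Q1 (q=16), Q2 (FCFS).

    jobs: list of (name, burst_length), all arriving at time 0.
    Returns a dict of completion times.
    """
    quanta = [8, 16, None]                  # None = run to completion (FCFS)
    queues = [deque(jobs), deque(), deque()]
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest nonempty
        name, rem = queues[level].popleft()
        q = quanta[level]
        run = rem if q is None else min(q, rem)
        clock += run
        if rem > run:                       # quantum expired: demote one level
            queues[level + 1].append((name, rem - run))
        else:
            finish[name] = clock
    return finish
```

With jobs `("A", 5)` and `("B", 30)`, B spends 8 ms in Q0, 16 ms in Q1, and finishes its last 6 ms in Q2, as the slide describes.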

  36. Multilevel Feedback Queue

  37. Multilevel feedback queuing system [Diagram: queues no. 1 through n feed the CPU in priority order. A process at the head of a queue gains control of the CPU and leaves it by finishing, by generating an I/O request (moving to a WAIT state and later rejoining a queue), or by quantum expiration (dropping to the next lower queue).]

  38. Multilevel Feedback Queue • A multilevel-feedback-queue scheduler is defined by the following parameters: • number of queues • scheduling algorithm for each queue • method used to determine when to upgrade a process • method used to determine when to demote a process • method used to determine which queue a process will enter when it needs service • Although the multilevel feedback queue is the most general scheme, it is also the most complex because of the need to select values for all these parameters to define the best scheduler.

  39. Multiple-Processor Scheduling § 6.4 • CPU scheduling is more complex when multiple CPUs are available. • We concentrate on systems in which the processors are identical (homogeneous). • If several identical processors are available, load sharing can occur. Two possible schemes: • Provide a separate queue for each processor. • To prevent load imbalance, use a common ready queue: all processes go into one queue and are scheduled onto any available processor.

  40. Multiple-Processor Scheduling • Symmetric multiprocessing (SMP) – each processor makes its own scheduling decisions: each processor examines the common ready queue and selects a process to execute. • Asymmetric multiprocessing – all scheduling decisions, I/O processing, and other system activities are handled by one single processor, the master server; the other processors execute only user code. It is simpler because only one processor accesses the system data structures, alleviating the need for data sharing.

  41. Real-Time Scheduling § 6.5 • Hard real-time systems: • are required to complete a critical task within a guaranteed amount of time; • need resource reservation – the scheduler must know exactly how long it takes to perform each type of OS function; • lack the full functionality of modern computers and operating systems. • Soft real-time computing: • requires that critical processes receive priority over less fortunate ones; • although it may cause unfair allocation of resources and longer delays for some processes, it is at least achievable.

  42. Implementing soft real-time function • The system must have priority scheduling, and real-time processes must have the highest priority (relatively simple to provide). • The dispatch latency must be small (not easy to ensure: the latency can be long since some system calls are complex and some I/O devices are slow).

  43. Real-Time Scheduling • To keep dispatch latency low, we need to allow system calls to be preemptible. Ways to achieve this goal: • Insert preemption points in long-duration system calls, which check whether a high-priority process needs to be run; if it does, a context switch takes place. • Make the entire kernel preemptible. To ensure correct operation, all kernel data structures must be protected through the use of various synchronization mechanisms.

  44. Priority Inversion • A higher-priority process may need to wait when it needs to read or modify kernel data that are currently being accessed by a lower-priority process. • If there is a chain of lower-priority processes holding the needed resources, they all inherit the high priority until they are done with the resources – the priority-inheritance protocol.

  45. Dispatch Latency

  46. Algorithm Evaluation § 6.6 • When selecting a CPU scheduling algorithm for a particular system, different evaluation methods may be used: • Deterministic Modeling • Queueing Models • Simulations • Implementation

  47. Deterministic Modeling § 6.8.1 • Analytic evaluation uses the given algorithm and the system workload to produce a formula or number that evaluates the performance of the algorithm for that workload. • One type of analytic evaluation is deterministic modeling: it takes a particular predetermined workload and defines the performance of each algorithm for that workload. • Example: consider FCFS, SJF, and RR (quantum = 10 milliseconds) for the workload Process / Burst Time: P1 10, P2 29, P3 3, P4 7, P5 12. Which algorithm gives the minimum average waiting time?

  48. Deterministic Modeling • FCFS: | P1 (0–10) | P2 (10–39) | P3 (39–42) | P4 (42–49) | P5 (49–61) | – average waiting time (0 + 10 + 39 + 42 + 49)/5 = 28 ms • SJF: | P3 (0–3) | P4 (3–10) | P1 (10–20) | P5 (20–32) | P2 (32–61) | – average waiting time (10 + 32 + 0 + 3 + 20)/5 = 13 ms • RR: | P1 (0–10) | P2 (10–20) | P3 (20–23) | P4 (23–30) | P5 (30–40) | P2 (40–50) | P5 (50–52) | P2 (52–61) | – average waiting time (0 + 32 + 20 + 23 + 40)/5 = 23 ms
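The three averages above can be reproduced with a small Python sketch (the helper name and the explicit schedule encoding are my own; each schedule is a list of (name, run-length) chunks in execution order):

```python
def avg_wait(schedule, bursts):
    """Average waiting time from an explicit schedule, all arrivals at time 0.

    schedule: list of (name, run_length) chunks in execution order.
    bursts:   dict of name -> total CPU burst.
    Waiting time = completion time - burst (deterministic modeling).
    """
    clock, finish = 0, {}
    for name, run in schedule:
        clock += run
        finish[name] = clock        # last chunk overwrites: completion time
    return sum(finish[n] - b for n, b in bursts.items()) / len(bursts)

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}
fcfs = list(bursts.items())
sjf = sorted(fcfs, key=lambda nb: nb[1])
rr = [("P1", 10), ("P2", 10), ("P3", 3), ("P4", 7), ("P5", 10),
      ("P2", 10), ("P5", 2), ("P2", 9)]          # quantum = 10
```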

  49. Deterministic Modeling • Deterministic modeling is simple and fast. It gives exact numbers, allowing the algorithms to be compared. • However, it requires exact numbers as input, and its answers apply only to those cases. • The main uses are in describing scheduling algorithms and providing examples; in general it is too specific, and requires too much exact knowledge, to be useful. • It can be used when the same program runs over and over again and the program's processing requirements can be measured exactly.

  50. Queueing Models § 6.8.2 • Processes vary, so there is no static set of processes to use for deterministic modeling. • What can be determined is the distribution of CPU and I/O bursts. • These distributions may be measured and then approximated; the result is a mathematical formula describing the probability of a particular CPU burst. • The arrival-time distribution gives the distribution of times at which processes arrive in the system. • From these two distributions, it is possible to compute the average throughput, utilization, waiting time, and so on for most algorithms.
