
Lecture 7: Scheduling

This lecture covers preemptive and non-preemptive scheduling, CPU bursts and how to predict their length, scheduling policy goals and metrics (both system-oriented and user-oriented), and basic scheduling algorithms: FCFS, RR, SJF, SRT, priority scheduling, multilevel queues, multilevel feedback queues, and traditional Unix scheduling.



  1. Lecture 7: Scheduling
  • preemptive/non-preemptive schedulers
  • CPU bursts
  • scheduling policy goals and metrics
  • basic scheduling algorithms:
    • First Come First Served (FCFS)
    • Round Robin (RR)
    • Shortest Job First (SJF)
    • Shortest Remaining Time (SRT)
    • priority scheduling
    • multilevel queue
    • multilevel feedback queue
    • Unix scheduling

  2. Types of CPU Schedulers
  • the CPU scheduler (also called the dispatcher or short-term scheduler) selects a process from the ready queue and lets it run on the CPU
  • types:
    • a non-preemptive scheduler executes only when:
      • a process terminates
      • a process switches from running to blocked
      • simple to implement, but unsuitable for time-sharing systems
    • a preemptive scheduler executes in the cases above and also when:
      • a process is created
      • a blocked process becomes ready
      • a timer interrupt occurs
      • more overhead, but keeps long processes from monopolizing the CPU
      • must not preempt the OS kernel while it is servicing a system call (e.g., reading a file), or the OS may end up in an inconsistent state

  3. CPU Bursts
  • a process alternates between
    • computing – a CPU burst
    • waiting for I/O
  • processes can be loosely classified as
    • CPU-bound – does mostly computation (long CPU bursts) and very little I/O
    • I/O-bound – does mostly I/O and very little computation (short CPU bursts)

  4. Predicting the Length of a CPU Burst
  • some schedulers need to know the size of the next CPU burst
  • it cannot be known deterministically, but it can be estimated on the basis of previous bursts
  • exponential average:
    • τ(n+1) = α·t(n) + (1−α)·τ(n), where
    • τ(n+1) – predicted length of the next burst, t(n) – actual length of the nth burst, α – weight, 0 ≤ α ≤ 1
  • extreme cases:
    • α = 0: τ(n+1) = τ(n) – recent history does not count
    • α = 1: τ(n+1) = t(n) – only the last CPU burst counts
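As a sketch, the exponential average above can be folded over a history of observed bursts (the function name and the initial estimate tau0 are my own choices, not from the slide):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Estimate the next CPU burst by exponential averaging:
    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)."""
    tau = tau0  # initial guess, used before any burst is observed
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau
```

With alpha = 0 the estimate stays at tau0 (history ignored); with alpha = 1 it tracks only the most recent burst, matching the two extreme cases on the slide.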

  5. Scheduling Policy
  system-oriented goals:
  • maximize CPU utilization – keep the CPU as busy as possible; in particular, the CPU should not be idle if there are processes ready to run
  • maximize throughput – the number of processes completed per unit time
  • ensure fairness of CPU allocation
    • avoid starvation – a process that is never scheduled
  • minimize scheduling overhead
    • in time or CPU utilization (e.g., due to context switches or policy computation)
    • in space (e.g., data structures)
  user-oriented goals:
  • minimize turnaround time – the interval from when a process becomes ready until it is done
  • minimize average and worst-case waiting time – the sum of the periods a process spends waiting in the ready queue
  • minimize average and worst-case response time – the time from a process entering the ready queue until it is first scheduled

  6. First Come First Served (FCFS)
  • policy: add processes to the tail of the ready queue, dispatch from the head
  • Example 1 – all processes arrive at time 0, in the order P1, P2, P3:

    Process  Burst Time  Arrival Time
    P1       24          0
    P2       3           0
    P3       3           0

    schedule: P1 (0–24), P2 (24–27), P3 (27–30)
    average waiting time = (0 + 24 + 27) / 3 = 17
  • Example 2 – the same processes arrive in the order P3, P2, P1:
    schedule: P3 (0–3), P2 (3–6), P1 (6–30)
    average waiting time = (0 + 3 + 6) / 3 = 3
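The waiting-time arithmetic in both examples can be checked with a small helper (the function name is mine; all processes are assumed to arrive at time 0 and are served in list order):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS when all arrive at
    time 0: each process waits for the sum of the earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue
        elapsed += burst
    return waits
```

Running it on the burst lists (24, 3, 3) and (3, 3, 24) reproduces the averages 17 and 3 from the slide, showing how sensitive FCFS is to arrival order.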

  7. FCFS Evaluation
  • non-preemptive
  • response time – may be long and have high variance
    • convoy effect – when one long-burst process is followed by many short-burst processes, the short processes have to wait a long time
  • throughput – not emphasized
  • fairness – penalizes short-burst processes
  • starvation – not possible
  • overhead – minimal

  8. Round-Robin (RR)
  • preemptive version of FCFS
  • policy:
    • define a fixed time slice (also called a time quantum) – typically 10–100 ms
    • choose the process at the head of the ready queue
    • run that process for at most one time slice; if it has not completed or blocked, add it to the tail of the ready queue
    • choose the next process from the head of the ready queue, run it for at most one time slice, and so on
  • implemented using
    • a hardware timer that interrupts at periodic intervals
    • a FIFO ready queue (add to tail, take from head)
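A minimal simulation of this policy (the function name and the finish-time bookkeeping are my own; all processes are assumed to arrive at time 0, so waiting time = finish time − burst time):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    Returns the waiting time of each process."""
    ready = deque(enumerate(bursts))        # FIFO of (pid, remaining burst)
    clock = 0
    finish = [0] * len(bursts)
    while ready:
        pid, remaining = ready.popleft()    # take from the head
        run = min(quantum, remaining)       # at most one time slice
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # unfinished: back to the tail
        else:
            finish[pid] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]
```

The `deque` plays the role of the FIFO ready queue, and the timer interrupt is modeled by capping each run at one quantum.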

  9. RR Examples (time slice = 4)
  • Example 1 – all processes arrive at time 0, in the order P1, P2, P3:

    Process  Burst Time  Arrival Time
    P1       24          0
    P2       3           0
    P3       3           0

    schedule: P1 (0–4), P2 (4–7), P3 (7–10), then P1 (10–30) in slices of 4
    average waiting time = ((10 − 4) + 4 + 7) / 3 = 5.66
  • Example 2 – the same processes arrive in the order P3, P2, P1:
    schedule: P3 (0–3), P2 (3–6), then P1 (6–30) in slices of 4
    average waiting time = (0 + 3 + 6) / 3 = 3

  10. RR Evaluation
  • preemptive (at the end of a time slice)
  • response time – good for short processes
    • long processes may have to wait n·q time units for another time slice (n = number of other processes, q = length of the time slice)
  • throughput – depends on the time slice
    • too small – too many context switches
    • too large – approximates FCFS
  • fairness – penalizes I/O-bound processes (they may not use their full time slice)
  • starvation – not possible
  • overhead – somewhat low

  11. Shortest Job First (SJF)
  • policy: choose the process with the smallest CPU burst and run it non-preemptively
  • Example – all processes arrive at time 0:

    Process  Burst Time  Arrival Time
    P1       6           0
    P2       8           0
    P3       7           0
    P4       3           0

    SJF schedule: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)
    average waiting time = (0 + 3 + 9 + 16) / 4 = 7
  • FCFS on the same example:
    schedule: P1 (0–6), P2 (6–14), P3 (14–21), P4 (21–24)
    average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25
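Since all processes arrive at time 0, non-preemptive SJF amounts to serving the processes in order of increasing burst length. A sketch (the function name is my own):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all processes arriving at time 0:
    run in order of increasing burst; each process waits for the
    sum of the shorter bursts that run before it."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits
```

On the bursts (6, 8, 7, 3) this yields per-process waits of 3, 16, 9, and 0, i.e. the average of 7 from the slide.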

  12. SJF Evaluation
  • non-preemptive
  • response time – okay (worse than RR)
    • long processes may have to wait until a large number of short processes finish
  • provably optimal average waiting time – SJF minimizes the average waiting time for a given set of processes (if preemption is not considered)
    • intuition: moving a short process ahead of a long one in the ready queue decreases the average waiting time
  • throughput – better than RR
  • fairness – penalizes long processes
  • starvation – possible for long processes
  • overhead – can be high (requires recording and estimating CPU burst lengths)

  13. Shortest Remaining Time (SRT)
  • preemptive version of SJF
  • policy:
    • choose the process with the smallest next CPU burst and run it preemptively
      • until it terminates or blocks, or
      • until another process enters the ready queue
    • at that point, choose another process to run if one has a smaller expected CPU burst than what is left of the current process's CPU burst
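One way to sketch SRT is a unit-time simulation: at each time unit, run the ready process with the smallest remaining burst (the function name is mine, and for simplicity it uses actual remaining burst lengths rather than the exponential-average estimates a real scheduler would need; arrival and burst times are assumed to be integers):

```python
def srt_waiting_times(arrivals, bursts):
    """Preemptive SRT, simulated one time unit at a time.
    Returns waiting time = finish - arrival - burst per process."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    clock, done = 0, 0
    while done < n:
        # processes that have arrived and still have work left
        ready = [(remaining[i], i) for i in range(n)
                 if arrivals[i] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1          # CPU idle until the next arrival
            continue
        _, i = min(ready)       # smallest remaining burst wins
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]
```

Re-checking the ready set every time unit is how the simulation models preemption on each new arrival; it is O(n) per tick, fine for an example but not for a real scheduler.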

  14. SRT Examples
  • processes:

    Process  Burst Time  Arrival Time
    P1       8           0
    P2       4           1
    P3       9           2
    P4       5           3

  • SJF (non-preemptive):
    schedule: P1 (0–8), P2 (8–12), P4 (12–17), P3 (17–26)
    average waiting time = (0 + (8−1) + (17−2) + (12−3)) / 4 = 7.75
  • SRT:
    schedule: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)
    average waiting time = ((10−1) + (1−1) + (17−2) + (5−3)) / 4 = 6.5

  15. SRT Evaluation
  • preemptive (on arrival of a process into the ready queue)
  • response time – good (still worse than RR)
  • provably optimal average waiting time
  • throughput – high
  • fairness – penalizes long processes
    • note that long processes may eventually become short processes
  • starvation – possible for long processes
  • overhead – can be high (requires recording and estimating CPU burst lengths)

  16. Priority Scheduling
  • policy:
    • associate a priority with each process
      • externally defined – e.g., based on importance: an employee's processes are given preference over a visitor's
      • internally defined – based on memory requirements, file requirements, CPU vs. I/O requirements, etc.
      • SJF is priority scheduling where the priority is inversely proportional to the length of the next CPU burst
    • run the process with the highest priority
  • evaluation:
    • starvation – possible for low-priority processes

  17. Multilevel Queue
  • use several ready queues, and associate a different priority with each queue
  • schedule either:
    • the highest-priority occupied queue
      • problem: processes in low-priority queues may starve
    • or give each queue a certain share of CPU time
  • assign new processes permanently to a particular queue
    • e.g., foreground, background
    • e.g., system, interactive, editing, computing
  • each queue has its own scheduling discipline
  • example: two queues
    • foreground – 80% of CPU time, using RR
    • background – 20% of CPU time, using FCFS

  18. Multilevel Feedback Queue
  • feedback – allow the scheduler to move processes between queues to ensure fairness
    • aging – moving older processes to a higher-priority queue
    • decay – moving older processes to a lower-priority queue
  • example: three queues, feedback with process decay
    • Q0 – RR with a time slice of 8 milliseconds
    • Q1 – RR with a time slice of 16 milliseconds
    • Q2 – FCFS
  • scheduling:
    • a new job enters queue Q0; when it gains the CPU, it receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1
    • in Q1 the job is again served RR and receives 16 additional milliseconds
    • if it still does not complete, it is preempted and moved to queue Q2
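The three-queue example can be sketched as follows (the function name and the sample jobs are my own; all jobs are assumed to arrive at time 0, so with no new arrivals the highest occupied queue always runs first and jobs only move downward, i.e. decay):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-level MLFQ with decay: Q0 (RR, 8 ms), Q1 (RR, 16 ms),
    Q2 (FCFS). All jobs arrive at time 0; returns finish times."""
    queues = [deque(enumerate(bursts)), deque(), deque()]
    clock = 0
    finish = [0] * len(bursts)
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest occupied
        pid, remaining = queues[level].popleft()
        # Q2 is FCFS: run to completion; Q0/Q1 cap the run at their quantum
        run = remaining if level == 2 else min(quanta[level], remaining)
        clock += run
        if remaining > run:
            queues[level + 1].append((pid, remaining - run))  # decay downward
        else:
            finish[pid] = clock
    return finish
```

For bursts of 5, 20, and 30 ms, the 5 ms job finishes entirely in Q0, the 20 ms job finishes in Q1, and the 30 ms job is demoted all the way to Q2, illustrating how short jobs stay at high priority.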

  19. Traditional Unix CPU Scheduling
  • avoid sorting the ready queue by priority – too expensive; instead:
  • multiple queues (32), each with a priority value in 0–127 (low value = high priority):
    • kernel processes (or user processes in kernel mode) get the lower values (0–49) – kernel processes are not preemptible!
    • user processes get the higher values (50–127)
  • choose a process from the occupied queue with the highest priority, and run it preemptively, using a timer (time slice typically around 100 ms)
  • RR within each queue
  • move processes between queues:
    • keep track of clock ticks (60 per second)
    • once per second, add accumulated clock ticks to the priority value (lowering the priority of CPU-hungry processes)
    • also adjust the priority based on whether or not the process has used more than its "fair share" of CPU time (compared to others)
    • users can voluntarily decrease the priority of their processes
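A toy version of the once-per-second recalculation might look like this (the function name, the decay rule, and the constants are illustrative assumptions in the spirit of traditional BSD Unix, not values from the slide; exact formulas varied between Unix versions):

```python
def recompute_priority(base, cpu_ticks, nice=0):
    """Once-per-second priority update: decay accumulated CPU ticks,
    then fold usage and niceness into the priority value
    (larger value = lower priority, clamped to 0-127)."""
    cpu_ticks //= 2                           # decay: forget half of past usage
    priority = base + cpu_ticks // 4 + nice   # heavy CPU users sink in priority
    return min(127, max(0, priority)), cpu_ticks
```

The key idea the sketch captures is the feedback loop: a process that burns CPU accumulates ticks and drifts toward lower priority, while the periodic decay lets it recover once it stops, so long processes cannot permanently starve interactive ones.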
