
CPU Scheduling and Characteristics of Jobs

This chapter covers CPU scheduling, including the characteristics of jobs, scheduling criteria, and various scheduling algorithms. It also explores implementation issues, real-time scheduling, and the benefits of multi-programming.




  1. CS 346 – Chapter 6 • CPU scheduling • Characteristics of jobs • Scheduling criteria / goals • Scheduling algorithms • System load • Implementation issues • Real-time scheduling

  2. Schedule issues • Multi-programming is good! → better CPU utilization • CPU burst concept • Jobs typically alternate between work and wait • Fig. 6.2: Distribution has a long tail on the right. General questions • How or when does a job enter the ready queue? • How much time can a job use the CPU? • Do we prioritize jobs? • Do we pre-empt jobs? • How do we measure overall performance?

  3. Scheduler • Makes short-term decisions • When? Whenever a job changes state (becomes ready, needs to wait, finishes) • Selects a job on the ready queue • Dispatcher can then do the “context switch” to give CPU to new job • Should we preempt? • Non-preemptive = Job continues to execute until it has to wait or finishes • Preemptive = Job may be removed from CPU while doing work! • When you preempt: need to leave CPU “gracefully”. May be in the middle of a system call or modifying shared data. Often we let that operation complete.

  4. Scheduling criteria • CPU utilization = what % of time CPU is executing instructions • Throughput = # or rate of jobs completed in some time period • Turnaround time = (finish time) – (request time) • Waiting time = how long spent in ready state • Confusing name! • Response time = how long after request that a job begins to produce output • Usually, we want to optimize the “average” of each measure. e.g. Reduce average turnaround time.

  5. Some scheduling algorithms • First-come, first-served • Round robin • Like FCFS, but each job has a limited time quantum • Shortest job next • We use a Gantt chart to view and evaluate a schedule • e.g. compute average turnaround time • Often, key question is – in what order do we execute jobs? • Let’s compare FCFS and SJN…

  6. Example 1 • First-come, first-served • Process 1 can execute from t=0 to t=20 • Process 2 can execute from t=20 to t=50 • Process 3 can execute from t=50 to t=90 • Process 4 can execute from t=90 to t=100 • We can enter this info as extra columns in the table. • What is the average turnaround time? • What if we tried Shortest Job Next?
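The FCFS timeline above implies burst times of 20, 30, 40, and 10. Assuming all four processes arrive at t = 0 (an assumption; the slide's table is not shown), a small sketch can compare average turnaround time under FCFS and SJN:

```python
# Average turnaround time for FCFS vs. SJN. Burst times (20, 30, 40, 10)
# are inferred from the slide's FCFS timeline; all jobs assumed to
# arrive at t = 0.

def avg_turnaround(bursts):
    """Run jobs back-to-back in the given order; return mean finish time."""
    t, total = 0, 0
    for b in bursts:
        t += b          # job finishes at cumulative time t
        total += t      # turnaround = finish - arrival (arrival = 0)
    return total / len(bursts)

fcfs = [20, 30, 40, 10]     # arrival order
sjn = sorted(fcfs)          # shortest job next

print(avg_turnaround(fcfs))   # 65.0
print(avg_turnaround(sjn))    # 50.0
```

Simply reordering the same work cuts average turnaround from 65 to 50, which is the point of the FCFS-vs-SJN comparison.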

  7. Example 2 Note that it’s possible to have idle time.

  8. System load • A measure of how “busy” the CPU is • At an instant: how many tasks are currently running or ready. • If load > 1, the system is “overloaded”, and work is backing up. • Typically reported as an average of the last 1, 5, or 15 minutes. • Based on the schedule, can calculate average load as well as maximum (peak) load.

  9. Example 1 • FCFS schedule can also be depicted this way: • What can we say about the load? “Request time” aka “Arrival time”

  10. Example 2 • SJN schedule can be depicted this way: • Load?

  11. Preemptive SJN • If a new job arrives with a shorter execution time (CPU burst length) than currently running process, preempt! • Could also call it “shortest remaining job next” • Let’s redo previous example allowing preemption • Job #1 is unaffected. • Job #2 would have run from 30 to 60, but … • Does preemption reduce average turnaround time? Load?

  12. Estimating time • Some scheduling algorithms like SJN need a job’s expected CPU time • We’re interested in scheduling bursts of CPU time, not literally the entire job. • OS doesn’t really know in advance how much of a “burst” will be needed. Instead, we estimate. • Exponential averaging method. We predict the next CPU burst will take this long: pₙ₊₁ = a·tₙ + (1 − a)·pₙ, where tₙ = actual time of the nth burst • Formula allows us to weight recent vs. long-term history. • What if a = 0 or 1?
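The exponential-averaging update can be sketched in a few lines. The burst values and a = 0.5 below are illustrative, not taken from the book's Figure 6.3:

```python
# Exponential averaging of CPU burst lengths: p(n+1) = a*t(n) + (1-a)*p(n).
# Sample bursts and a = 0.5 are made up for illustration.

def predict_bursts(actuals, p0, a=0.5):
    """Return the sequence of predictions p_1 .. p_n."""
    preds, p = [], p0
    for t in actuals:
        p = a * t + (1 - a) * p   # blend newest burst with running history
        preds.append(p)
    return preds

print(predict_bursts([6, 4, 6, 4], p0=10))   # [8.0, 6.0, 6.0, 5.0]
```

This also answers the slide's closing question: with a = 0 the prediction never moves from p0 (actual bursts are ignored), and with a = 1 the prediction is just the most recent actual burst.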

  13. Estimating time (2) • pₙ₊₁ = a·tₙ + (1 − a)·pₙ • Why is it called “exponential”? Becomes clearer if we substitute all the way back to the first burst. • p₁ = a·t₀ + (1 − a)·p₀ • p₂ = a·t₁ + (1 − a)·p₁ = a·t₁ + (1 − a)[a·t₀ + (1 − a)·p₀] = a·t₁ + (1 − a)·a·t₀ + (1 − a)²·p₀ • A general formula for pₙ₊₁ will eventually contain terms of the form (1 − a) raised to various powers. • In practice, we just look at previous actual vs. previous prediction • Book’s example Figure 6.3: Prediction eventually converges to correct recent behavior.

  14. Priority scheduling • SJN is a special case of a whole class of scheduling algorithms that assign priorities to jobs. • Each job has a priority value: • Convention: low number = “high” priority • SJN: priority = next predicted burst time • Starvation: Some “low priority” jobs may never execute • How could this happen? • Aging: modify SJN so that while a job waits, it gradually “increases” its priority so it won’t starve.

  15. Round robin • Each job takes a short turn at the CPU • Commonly used, easy for OS to handle • Time quantum typically 10-100 ms – it’s constant • Choice of time quantum affects turnaround time (Figure 6.5) • Can re-work an earlier example • Questions to think about: • If there are N jobs, what is the maximum wait time before you can start executing? • What happens if the time quantum is very large? • What happens if the time quantum is very short?
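A minimal round-robin sketch, reusing the hypothetical bursts (20, 30, 40, 10) with all jobs arriving at t = 0 and context-switch cost ignored:

```python
# Round-robin simulation; bursts and arrivals are illustrative.
from collections import deque

def round_robin(bursts, quantum):
    """Return the average turnaround time under round robin."""
    q = deque((i, b) for i, b in enumerate(bursts))
    t, finish = 0, {}
    while q:
        i, rem = q.popleft()
        run = min(quantum, rem)
        t += run
        if rem > run:
            q.append((i, rem - run))   # not done: back of the queue
        else:
            finish[i] = t              # done: record completion time
    return sum(finish.values()) / len(bursts)

print(round_robin([20, 30, 40, 10], quantum=10))    # 67.5
print(round_robin([20, 30, 40, 10], quantum=1000))  # 65.0, same as FCFS
```

This illustrates the two limiting questions on the slide: a quantum larger than any burst degenerates to FCFS, while a very short quantum approximates processor sharing at the cost of many context switches (not charged for here).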

  16. Implementation issues • Multi-level ready queue • Threads • Multi-processor scheduling

  17. Multi-level queue • We can assign jobs to different queues based on their purpose or priority • Foreground / interactive jobs may deserve high priority to please the user • Also: real-time tasks • Background / routine tasks can be given lower priority • Each queue can have its own scheduling regime, e.g. round robin instead of SJN • Interactive jobs may have unpredictable burst times • Key issue: need to schedule among the queues themselves. How?

  18. Scheduling among queues • Classify jobs according to purpose → priority • Priority based queue scheduling • Can’t run any Priority 2 job until all Priority 1 jobs done. • While running Priority 2 job, can preempt if a Priority 1 job arrives. • Starvation • Round robin with different time quantum for each queue • Time share for each queue • Decreasing % of time for lower priorities • Or… Multi-level feedback queue (pp. 275-277) • All jobs enter at Priority 0. Given short time quantum. • If not done, enter queue for Priority 1 jobs. Longer quantum next time.
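The multi-level feedback idea can be sketched briefly: every job starts in the top queue with a short quantum, and a job that exhausts its quantum drops a level and gets a longer one next time. The quanta (2, 4, 8) and bursts below are illustrative assumptions, not from pp. 275-277:

```python
# Sketch of a multi-level feedback queue with illustrative quanta.
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """Return finish times, indexed like bursts; all jobs arrive at t = 0."""
    queues = [deque() for _ in quanta]
    for i, b in enumerate(bursts):
        queues[0].append((i, b))          # everyone enters at priority 0
    t, finish = 0, {}
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)
        i, rem = queues[level].popleft()
        run = min(quanta[level], rem)
        t += run
        if rem > run:                     # used the whole quantum: demote
            queues[min(level + 1, len(quanta) - 1)].append((i, rem - run))
        else:
            finish[i] = t
    return [finish[i] for i in range(len(bursts))]

print(mlfq([1, 6, 20]))   # [1, 9, 27]
```

The short job (burst 1) finishes immediately at the top level, while the long job (burst 20) sinks to the bottom queue, which is the intended behavior: short/interactive work is favored without knowing burst lengths in advance.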

  19. Thread scheduling • The OS schedules “actual” kernel-level threads • The thread library must handle user threads • One-to-one model – easy, each user thread is already a kernel thread. Direct system call • Many-to-many or many-to-one models • Thread library has 1 or a small number of kernel threads available. • Thread library must decide when user thread should run on a true kernel thread. • Programmer can set a priority for thread library to consider. In other words, threads of the same process are competing among themselves.

  20. Multi-processing • More issues to address, more complex overall • Homogeneous system = identical processors, job can run on any of them • Asymmetric approach = allocate 1 processor for the OS, all others for user tasks • This “master server” makes decisions about what jobs run on the other processors • Symmetric approach (SMP) = 1 scheduler for each processor, usually separate ready queue for each (but could have a common queue for all) • Load balancing: periodically see if we should “pull” or “push” jobs

  21. Affinity • When switched out, a job may want to return next time to the same processor as before • Why desirable? • An affinity policy may be “soft” or “hard”. • Soft = OS will try but not guarantee. Why might an OS prefer to migrate a process to a different processor? • Generalized concept: processor set • For each process, maintain a list of processors it may be run on • Memory system can exploit affinity, allocating more memory that is closer to the favorite CPU. (Fig. 6.9)

  22. Multicore processor • Conceptually similar to multiprocessors • Place multiple “processor cores” on same chip • Faster, consume less power • OS treats each core like a unique CPU • However, the cores often share cache memory • Leads to more “cache misses” → Jobs spend more time stalled waiting for instructions or data to arrive • OS can allocate 2 threads to the same core, to increase processor utilization • Fig. 6.11 shows idealized situation. What happens in general?

  23. Real-time Scheduling • Real-time scheduling • Earliest Deadline First • Rate Monotonic • What is this about? • Primary goal is to avoid missing deadlines. Other goals may include having response times that are low and consistent. • We’re assuming jobs are periodic, and the deadline of a job is the end of its period

  24. Real-time systems • Specialized operating system • All jobs potentially have a deadline • Correctness of operation depends on meeting deadlines, in addition to correct algorithm • Often, jobs are periodic; some may be aperiodic/sporadic • Hard real-time = missing a deadline is not acceptable • Soft real-time = deadline miss not end of world, but try to minimize • Number of acceptable deadline misses is a design parameter • We try to measure Quality of Service (QoS) • Examples? • Used in defense, factories, communications, multimedia; embedded in appliances

  25. Features • A real-time system may be used to control a specific device • Opening bomb bay door • When to release chocolate into vat • Host device typically very small and lacks features of a PC, greatly simplifying OS design • Single user or no user • Little or no memory hierarchy • Simple instruction set (or not!) • No disk drive, monitor • Cheap to manufacture, mass produce

  26. Scheduling • Most important issue in real-time systems is CPU scheduling • System needs to know the WCET (worst-case execution time) of jobs • Jobs are given priority based on their timing/deadline needs • Jobs may be pre-empted • Kernel code (implementing system calls) contains many possible preemption points at which a job may be safely suspended • Want to minimize latency • System needs to respond quickly to external event, such as change in temperature • Interrupt must have minimum overhead – how to measure it?

  27. EDF • Given a set of jobs • Need to know period and execution time of each • Each job contributes to the CPU’s utilization: execution time divided by the period • If the total utilization of all jobs > 1, no schedule possible! • At each scheduling checkpoint, choose the job with the earliest deadline. • A scheduling checkpoint occurs at t = 0, when a job begins its period or is finished, or when a new job arrives into the system • If no new jobs enter the system, EDF is non-preemptive • Sometimes the CPU is idle • Need to compute the schedule one checkpoint at a time until you reach the LCM of the job periods (the hyperperiod) • Can predict deadline miss, if any

  28. EDF example • Suppose we have 2 jobs, A and B, with periods 10 and 15, and execution times 5 and 6. • At t = 0, we schedule A because its deadline is earlier (10 < 15). • At t = 5, A is finished. We can now schedule B. • At t = 11, B is finished. A has already started a new period, so we can schedule it immediately. • At t = 16, A is finished. B already started a new period, so schedule it. • At t = 22, B is finished. Schedule A. • At t = 27, A is finished, and the CPU is idle until t = 30.
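The walk-through above can be checked mechanically with a cycle-by-cycle EDF sketch, where a job's deadline is the end of its current period. Breaking ties in favor of the already-running job is my assumption (to avoid needless switches), not part of the slide:

```python
# Cycle-by-cycle EDF over periodic jobs, simulated up to the hyperperiod.
from math import lcm

def edf(tasks):
    """tasks: {name: (period, exec_time)}; returns a per-cycle timeline."""
    horizon = lcm(*(p for p, _ in tasks.values()))
    remaining = {n: 0 for n in tasks}
    timeline, prev = [], None
    for t in range(horizon):
        for n, (p, c) in tasks.items():
            if t % p == 0:
                remaining[n] = c           # new period: refill work
        ready = [n for n in tasks if remaining[n] > 0]
        if not ready:
            timeline.append(None)          # idle cycle
            prev = None
            continue
        # earliest deadline first; tie goes to the currently running job
        prev = min(ready, key=lambda n: ((t // tasks[n][0] + 1) * tasks[n][0],
                                         n != prev))
        remaining[prev] -= 1
        timeline.append(prev)
    return timeline

tl = edf({"A": (10, 5), "B": (15, 6)})
print(tl)   # A 0-5, B 5-11, A 11-16, B 16-22, A 22-27, idle 27-30
```

The printed timeline reproduces the slide's schedule, including the idle stretch from t = 27 to t = 30.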

  29. EDF: be careful • At certain scheduling checkpoints, you need to schedule the job with the earliest deadline. • As long as that job has started its period. • Do each cycle iteratively until the LCM of the job periods. • Checkpoints include • t = 0 • Whenever a job is finished executing • Whenever a job begins its period (This condition is important when we have maximum utilization.) • Example with 2 jobs • Job A has period 6 and execution time 3. • Job B has period 14 and execution time 7. • U = 1. We should be able to schedule these jobs with EDF.

  30. Example: EDF • Wrong way: ignoring beginning of job periods • At t = 0, we see jobs A (period 0-6) and B (period 0-14). Since A has the sooner deadline, schedule A for its 3 cycles. • At t = 3, we see jobs A (period 6-12) and B (period 0-14). Since A hasn’t started its period, our only choice is B, for its 7 cycles. • At t = 10, we have job A (period 6-12) and B (period 14-28). A has the sooner deadline. Schedule A for its 3 cycles. • At t = 13, A is finished but it missed its deadline (t = 12). We don’t want this to happen!

  31. continued • Job A = (per 6, exec 3) Job B = (per 14, exec 7) • Correct EDF schedule that takes into account the start of a job period as another scheduling checkpoint • Notice: • At t = 12 and t = 24, we don’t preempt job B, because B’s deadline is sooner. In the other cases when A’s period begins, A takes the higher priority.

  32. RM • Most often used because it’s easy • Inherently preemptive • Assign each job a fixed priority based on its period • The shorter the period, the more often this job must execute, the more deadlines it has → the higher the priority • Determine in advance the schedule of the highest priority job • Continue for other jobs in descending order of priority • Be sure not to “schedule” a job before its period begins • Less tedious than EDF to compute entire schedule • For highest priority job, you know exactly when it will execute • Other jobs may be preempted by higher priority jobs that were scheduled first

  33. RM (2) • Sometimes not possible to find a schedule • Our ability to schedule is more limited than EDF. • There is a simple mathematical check to see if a RM schedule is possible: • We can schedule if the total utilization U ≤ n(2^(1/n) − 1). Proved by Liu and Layland in 1973. • If n(2^(1/n) − 1) < U ≤ 1, the test is inconclusive. Must compute the schedule to find out. • Ex. If n = 2, we are guaranteed to find a RM schedule if U ≤ 2(√2 − 1) ≈ 0.83, but for 90% it gets risky. • Large experiments using random job parameters show that RM is reliable up to about 88% utilization.
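The Liu-Layland test is a one-liner; a small sketch applying it to the job sets used in these slides:

```python
# Liu-Layland schedulability test for RM: feasibility is guaranteed
# when total utilization U <= n * (2**(1/n) - 1).

def rm_bound(n):
    return n * (2 ** (1 / n) - 1)

def rm_test(tasks):
    """tasks: list of (period, exec_time) pairs."""
    u = sum(c / p for p, c in tasks)
    if u <= rm_bound(len(tasks)):
        return "guaranteed"
    return "inconclusive" if u <= 1 else "infeasible"

print(round(rm_bound(2), 3))           # 0.828
print(rm_test([(10, 5), (15, 6)]))     # U = 0.9 -> inconclusive
print(rm_test([(2, 1), (3, 1)]))       # U = 5/6 -> inconclusive ("risky")
```

Note that "inconclusive" is not "infeasible": both example sets exceed the bound, yet the (2, 1)/(3, 1) set does schedule under RM, as the next slide shows.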

  34. RM Example • Suppose we have 2 jobs, C and D, with periods of 2 and 3, both with execution time 1. • U = 1/2 + 1/3 = 5/6 ≈ 0.83, which exceeds the 2-job bound of about 0.828, so RM is risky. Let’s try it… • Schedule the more frequent job first. • Then schedule job D. • Looks okay!

  35. RM Example 2 • Let’s look at earlier set of tasks, A and B, with periods of 10 and 15, and execution times of 5 and 6. • U = 5/10 + 6/15 = 0.9, also risky. • Schedule task A first. • Schedule task B into available spaces.

  36. Comparison • Consider this set of jobs • What is the total utilization ratio? Are EDF and RM schedules feasible? • Handout

  37. RM vs. EDF • EDF • Job’s priority is dynamic, hard to predict in advance • Too democratic / egalitarian? Maybe we are trying to execute too many jobs. • RM • Fixed priority is often desirable • Higher priority job will have better response times overall, not bothered by a lower priority job that luckily has an upcoming deadline. • RM cannot handle utilization up to 1 unless periods are harmonic, as in 1 : n₁ : n₁n₂ : n₁n₂n₃ : … (Analogy: Telling time is easy until you get to months/years.)

  38. RM example • Let’s return to previous example, this time using RM. • Job A = (per 6, exec 3) Job B = (per 14, exec 7) • Hyperperiod is 42 • First, we must schedule job A, because it has the shorter period. • Next, schedule job B. • Uh-oh! During B’s period 0-14, it is only able to execute for 6 cycles. Deadline miss! This job set cannot be scheduled, but it could be if either job’s execution time were reduced → lower U.
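A fixed-priority simulation over one hyperperiod makes the miss visible. This is a sketch: priority is simply the shorter period, and a miss is recorded when a new period begins (or the horizon ends) with work left over:

```python
# Rate-monotonic simulation over one hyperperiod, flagging deadline misses.
from math import lcm

def rm_misses(tasks):
    """tasks: {name: (period, exec_time)}; returns list of (job, miss_time)."""
    horizon = lcm(*(p for p, _ in tasks.values()))
    remaining = {n: 0 for n in tasks}
    misses = []
    for t in range(horizon):
        for n, (p, c) in tasks.items():
            if t % p == 0:
                if remaining[n] > 0:
                    misses.append((n, t))   # work left over from last period
                remaining[n] = c
        ready = [n for n in tasks if remaining[n] > 0]
        if ready:
            # rate-monotonic: shortest period = highest priority
            run = min(ready, key=lambda n: tasks[n][0])
            remaining[run] -= 1
    misses += [(n, horizon) for n in tasks if remaining[n] > 0]
    return misses

print(rm_misses({"A": (6, 3), "B": (14, 7)}))   # B misses at t = 14
print(rm_misses({"C": (2, 1), "D": (3, 1)}))    # [] -- slide 34's feasible set
```

B gets only the gaps A leaves (t = 3-6 and 9-12 in its first period), 6 cycles where 7 are needed, exactly the miss the slide describes; the earlier C/D set schedules cleanly despite failing the utilization bound.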

  39. RM Utilization bound • Liu and Layland (1973): “Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment” • They first looked at the case of 2 jobs. What is the maximum CPU utilization at which RM will always work? Express U as a function of one job’s execution time, assuming the other job will fully utilize the CPU during its period. • We have 2 jobs • Job j1 has period T1 = 8 Job j2 has period T2 = 12 • Let’s see what execution times C1 and C2 we can have, and what effect this has on the CPU utilization. • During one of j2’s periods, how many times will j1 start? In general: ceil(T2/T1). In our case, ceil(12/8) = 2. • They derive formulas to determine C2 and U, once we decide on a value of C1.

  40. continued • We have job j1 (T1 = 8) and job j2 (T2 = 12) • Suppose C1 = 2. • C2 = T2 – C1 * (number of times j1 starts) = T2 – C1 * ceil (T2 / T1) = 12 – 2 ceil (12 / 8) = 8. • We can compute U = 2/8 + 8/12 = 11/12. • Suppose C1 = 4 • C2 = 4 • U = 4/8 + 4/12 = 5/6 • The CPU utilization is actually lower as we increase the execution time of j1. • … If the last execution of j1 spills over into the next period of j2, the opposite trend occurs.

  41. Formula • Eventually, Liu and Layland derive this general formula for maximum utilization for 2 jobs: U = 1 − x(1 − x)/(W + x), where W = floor(T2/T1) and x = T2/T1 − floor(T2/T1) • We want to minimize U over x, to find the utilization level below which we can always guarantee schedulability. In this case W = 1, so U = 1 − x(1 − x)/(1 + x) • Setting the derivative equal to 0, we get x = √2 − 1, and U(√2 − 1) = 2(√2 − 1) ≈ 0.83 • Result can be generalized to n jobs: U = n(2^(1/n) − 1)
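A quick numeric check of the two-job formula and its n-job generalization (whose limit, a fact not on the slide, is ln 2 ≈ 0.693 as n grows):

```python
# Numeric check of Liu and Layland's two-job bound: with W = 1,
# U(x) = 1 - x*(1 - x)/(1 + x) is minimized at x = sqrt(2) - 1.
from math import sqrt, log

def U(x, W=1):
    return 1 - x * (1 - x) / (W + x)

x_min = sqrt(2) - 1
print(round(U(x_min), 3))                 # 0.828, i.e. 2*(sqrt(2) - 1)

# the n-job bound n*(2**(1/n) - 1) decreases toward ln 2
print(round(1000 * (2 ** (1 / 1000) - 1), 3), round(log(2), 3))
```

So for large task sets, RM can only be guaranteed up to roughly 69% utilization by this test, even though (per slide 33) random experiments show it usually works well above that.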
