
Lecture 4


Presentation Transcript

  1. Lecture 4 Chapter 5 CPU scheduling

  2. Basic Concepts In a single-process system, only one process can run at a time. Maximum CPU utilization is obtained with multiprogramming. When the CPU sits idle, that waiting time is wasted. Scheduling is a fundamental OS function and is central to OS design.

  3. Basic Concepts • CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. • CPU burst: a time interval during which a process uses only the CPU. • I/O burst: a time interval during which a process uses only I/O devices.

  4. Basic Concepts • Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on. • The final CPU burst ends with a system request to terminate execution.

  5. Alternating Sequence of CPU And I/O Bursts

  6. CPU Scheduler When the CPU is idle, the operating system must select one process from the ready queue to be executed. The selection is carried out by the short-term scheduler (CPU scheduler). The ready queue can be implemented in many ways: as a FIFO queue, a priority queue, a tree, or an unordered linked list. The records in the queue are generally the process control blocks (PCBs) of the processes.

  7. CPU Scheduler • The OS makes a CPU-scheduling decision when a process: 1. Switches from the running to the waiting state. 2. Switches from the running to the ready state. 3. Switches from the waiting to the ready state. 4. Terminates.

  8. CPU Scheduler There are two types of CPU-scheduling schemes (determining when the OS makes a scheduling decision and selects a process from the ready queue): Nonpreemptive (cooperative) scheduling. Preemptive scheduling.

  9. CPU Scheduler Nonpreemptive (cooperative) scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating (case 4) or by switching to the waiting state (case 1). This method was used by Windows 3.x. All other scheduling is preemptive (it also acts in cases 2 and 3). All subsequent versions of Windows use preemptive scheduling.

  10. Dispatcher • Dispatcher module (program) gives control of the CPU to the process selected by the short-term scheduler. This function involves the following: • switching context. • switching to user mode. • jumping to the proper location in the user program to restart that program. • Dispatch latency – time it takes for the dispatcher to stop one process and start another running.

  11. Scheduling Criteria • There are many different CPU-scheduling algorithms, and several criteria for comparing them. • CPU utilization – keep the CPU as busy as possible. • Throughput – the number of processes that complete their execution per time unit. • Turnaround time – the amount of time to execute a particular process.

  12. Scheduling Criteria Waiting time – the amount of time a process has been waiting in the ready queue. Response time – the amount of time from when a request was submitted until the first response is produced, not until output is complete (important for time-sharing environments).

  13. Scheduling Algorithm Optimization Criteria Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time

  14. Scheduling Algorithms First-Come, First-Served Scheduling. Shortest-Job-First Scheduling. Priority Scheduling. Round-Robin Scheduling. Multilevel Queue Scheduling. Multilevel Feedback Queue Scheduling.

  15. First-Come, First-Served Scheduling

  16. First-Come, First-Served (FCFS) Scheduling • By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. • With this algorithm, the process that requests the CPU first is allocated the CPU first. • The implementation of the FCFS policy is easily managed with a FIFO queue. • When a process enters the ready queue, its PCB is linked onto the tail of the queue.

  17. First-Come, First-Served (FCFS) Scheduling • When the CPU is free, it is allocated to the process at the head of the queue. • The running process is then removed from the queue. • FCFS is a non-preemptive scheduling algorithm. • It is simple to write and understand. • The disadvantage of this algorithm is that the average waiting time in the ready queue is often long: processes with short CPU bursts may wait behind a process with a long burst.

  18. First-Come, First-Served (FCFS) Scheduling Process Burst Time (ms): P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: | P1 (0–24) | P2 (24–27) | P3 (27–30) |. Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.
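The waiting-time calculation on this slide can be sketched in a few lines of Python (an illustrative sketch, not part of the slides; the function and variable names are my own):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0,
    listed in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])      # P1, P2, P3 in arrival order
print(waits)                                # [0, 24, 27]
print(sum(waits) / len(waits))              # 17.0
```

Calling it with the reversed order `[3, 3, 24]` reproduces the next slide's average of 3.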

  19. FCFS Scheduling (Cont.) Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: | P2 (0–3) | P3 (3–6) | P1 (6–30) |. Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3. Much better than the previous case.

  20. FCFS Scheduling (another example) Example data (Process: Arrival Time, Service Time): P1: 0, 8; P2: 1, 4; P3: 2, 9; P4: 3, 5. FCFS schedule: | P1 (0–8) | P2 (8–12) | P3 (12–21) | P4 (21–26) |. Average wait = ((0-0) + (8-1) + (12-2) + (21-3))/4 = (0 + 7 + 10 + 18)/4 = 35/4 = 8.75.
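With arrival times added, the FCFS computation is still a single pass over the processes in arrival order (again just a sketch; names are illustrative):

```python
def fcfs_with_arrivals(procs):
    """procs: list of (arrival, service) sorted by arrival time.
    Returns each process's time spent waiting in the ready queue."""
    clock, waits = 0, []
    for arrival, service in procs:
        start = max(clock, arrival)   # CPU may sit idle until the job arrives
        waits.append(start - arrival)
        clock = start + service
    return waits

waits = fcfs_with_arrivals([(0, 8), (1, 4), (2, 9), (3, 5)])
print(waits)                     # [0, 7, 10, 18]
print(sum(waits) / len(waits))   # 8.75
```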

  21. Shortest-Job-First (SJF) Scheduling

  22. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst (the time the process will next need the CPU). Use these lengths to schedule the process with the shortest next burst. • When the CPU is free, it is allocated to the process that has the smallest next CPU burst. • If the next CPU bursts of two processes are the same, FCFS is used to break the tie.

  23. Example of SJF Process Burst Time: P1 = 6, P2 = 8, P3 = 7, P4 = 3 (smallest burst time). • SJF scheduling chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |. • Average waiting time = (3 + 16 + 9 + 0) / 4 = 7.
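Since all four processes are ready at time 0, the SJF order is just a sort by burst length. A small sketch (illustrative names, not from the slides) reproduces the figures above:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all processes ready at t = 0.
    Python's sort is stable, so equal bursts keep FCFS order (the tie rule)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:               # shortest burst first
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])     # P1..P4 from the slide
print(waits)                                # [3, 16, 9, 0]
print(sum(waits) / len(waits))              # 7.0
```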

  24. The same example using FCFS • Waiting times: P1 = 0, P2 = 6, P3 = 14, P4 = 21. • Average: (0 + 6 + 14 + 21)/4 = 41/4 = 10.25.

  25. Shortest-Job-First (SJF) Scheduling • The key concept of this algorithm is that the CPU is allocated to the process with the least CPU burst time. • This algorithm is optimal, in that it gives the minimum average waiting time for a given set of processes.

  26. Shortest-Job-First (SJF) Scheduling • Two schemes: • nonpreemptive – once the CPU is given to a process, it cannot be preempted until the process completes its CPU burst. • preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

  27. Example of Non-Preemptive SJF Process: Arrival Time, Burst Time: P1: 0.0, 7; P2: 2.0, 4; P3: 4.0, 1; P4: 5.0, 4. • SJF (non-preemptive): | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |. • Average waiting time = (0 + 6 + 3 + 7)/4 = 4.

  28. Example of Preemptive SJF Process: Arrival Time, Burst Time: P1: 0.0, 7; P2: 2.0, 4; P3: 4.0, 1; P4: 5.0, 4. • SJF (preemptive): | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |. • Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
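The preemptive schedule above can be checked with a unit-by-unit simulation. This is a sketch under the slide's assumptions (integer burst times; ties on remaining time broken by lower process index):

```python
def srtf_waiting_times(procs):
    """procs: list of (arrival, burst). Simulates preemptive SJF (SRTF)
    one time unit at a time; returns each process's waiting time."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    clock = 0
    while any(remaining):
        ready = [i for i in range(n)
                 if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:            # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    # waiting time = turnaround time - burst time
    return [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]

waits = srtf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])
print(waits)                     # [9, 1, 0, 2]
print(sum(waits) / len(waits))   # 3.0
```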

  29. Determining Length of Next CPU Burst • The real difficulty with the SJF algorithm is knowing the length of time for which the CPU will be needed by a process. • We can only estimate the length. • A prediction formula may be used to predict the amount of time for which the CPU may be required by a process.

  30. Shortest-Job-First (SJF) Scheduling Optimal for minimizing waiting time, but impossible to implement exactly; instead, we try to predict the next burst from previous history. • Predicting the time the process will use on its next schedule: t(n+1) = w * t(n) + (1 - w) * T(n). Here: t(n+1) is the predicted time of the next burst; t(n) is the measured time of the current burst; T(n) is the previous prediction (a weighted average of all earlier bursts); w is a weighting factor (0 ≤ w ≤ 1) emphasizing recent or past bursts.
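This prediction formula is an exponential average, easy to fold over a history of measured bursts. A minimal sketch (w = 0.5 and the initial guess of 10 are illustrative choices, not fixed by the slide):

```python
def predict_next_burst(bursts, w=0.5, initial_guess=10.0):
    """Exponential averaging: t(n+1) = w * t(n) + (1 - w) * T(n),
    where t(n) is the measured burst and T(n) the running prediction."""
    prediction = initial_guess
    for actual in bursts:
        prediction = w * actual + (1 - w) * prediction
    return prediction

print(predict_next_burst([6, 4, 6, 4]))   # 10 -> 8 -> 6 -> 6 -> 5.0
```

With w = 0, history dominates and measurements are ignored; with w = 1, only the most recent burst counts.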

  31. Priority Scheduling

  32. Priority Scheduling • A priority (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority. • Equal-priority processes are scheduled in FCFS order. • An SJF algorithm is simply a priority algorithm where the priority p is the inverse of the predicted next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

  33. Example for priority • The average waiting time: (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2.

  34. Priority Scheduling • Priorities can be defined internally or externally. • Internally defined priorities use measurable quantities such as time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst. • External priorities are set by criteria outside the OS, such as the importance of the process.

  35. Priority Scheduling • Two schemes: • nonpreemptive – once the CPU is given to a process, it cannot be preempted until the process completes its CPU burst. A newly arriving higher-priority process is simply put at the head of the ready queue. • preemptive – if a new process arrives with a priority higher than that of the currently executing process, it preempts the current one.

  36. Priority Scheduling • Problem: Starvation – low-priority processes may never execute. • Solution: Aging – a technique of gradually increasing the priority of low-priority processes that have been waiting for a long time. • For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes.
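The aging rule in the slide's example (priorities from 127 = low to 0 = high, improved by one step every 15 minutes) can be sketched directly; the function name is illustrative:

```python
def aged_priority(priority, minutes_waited, interval=15):
    """Aging: a waiting process's priority number drops (i.e., its
    priority rises) by 1 for every `interval` minutes spent waiting.
    0 is the highest priority, 127 the lowest, as in the slide."""
    return max(0, priority - minutes_waited // interval)

print(aged_priority(127, 0))     # 127 - a fresh low-priority process
print(aged_priority(127, 60))    # 123 - one hour of waiting: 4 steps up
```

However long a process has waited, the priority saturates at 0, so an aged process eventually reaches the highest level and must be scheduled.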

  37. Round Robin (RR)

  38. Round Robin (RR) • RR is designed for time-sharing systems. • It is similar to FCFS, but preemption is added to enable the system to switch between processes. • A small unit of time, called a time quantum or time slice, is defined – generally from 10 to 100 ms in length. • The ready queue is implemented as a circular FIFO queue. • The CPU scheduler allocates the CPU to each process for an interval of at most one time quantum. • New processes are added to the tail of the ready queue.

  39. Round Robin (RR) • The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process. • If the process has a CPU burst of less than one time quantum, it releases the CPU voluntarily, and the scheduler proceeds to the next process in the queue. • If the process has a CPU burst longer than one time quantum, the timer goes off and causes an interrupt to the OS. A context switch is executed, and the process is put at the tail of the ready queue.

  40. Example of RR with Time Quantum = 4 Process Burst Time: P1 = 24, P2 = 3, P3 = 3. • The Gantt chart is: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |. • P1 waits for 6 ms, P2 waits for 4 ms, P3 waits for 7 ms. • Thus the average waiting time is 17/3 = 5.66 ms. • Typically, higher average turnaround than SJF, but better response.

  41. Example of RR with Time Quantum = 20 Process Burst Time: P1 = 53, P2 = 17, P3 = 68, P4 = 24. • The Gantt chart is: | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |. • Typically, higher average turnaround than SJF, but better response.

  42. Example of RR with Time Quantum = 4 Example data (Process: Arrival Time, Service Time): P1: 0, 8; P2: 1, 4; P3: 2, 9; P4: 3, 5. Note: this example violates the usual rule for quantum size, since most processes don't finish within one quantum. Round Robin, quantum = 4, no priority-based preemption: | P1 (0–4) | P2 (4–8) | P3 (8–12) | P4 (12–16) | P1 (16–20) | P3 (20–24) | P4 (24–25) | P3 (25–26) |. Average wait = (12 + (4-1) + ((8-2)+(20-12)+(25-24)) + ((12-3)+(24-16)))/4 = (12 + 3 + 15 + 17)/4 = 47/4 = 11.75.
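The trace on this slide can be reproduced with a small queue-based simulation. This is a sketch (names are mine) that applies one common convention, matching the trace above: a process arriving exactly when a slice ends joins the queue ahead of the preempted process.

```python
from collections import deque

def round_robin_waits(procs, quantum):
    """procs: list of (arrival, service), sorted by arrival. Returns each
    process's waiting time (time in the ready queue) under Round Robin."""
    n = len(procs)
    remaining = [s for _, s in procs]
    finish = [0] * n
    queue = deque()
    arrived = 0
    clock = 0

    def admit(upto):
        nonlocal arrived
        while arrived < n and procs[arrived][0] <= upto:
            queue.append(arrived)
            arrived += 1

    admit(0)
    while queue or arrived < n:
        if not queue:                     # CPU idle until the next arrival
            clock = procs[arrived][0]
            admit(clock)
            continue
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        admit(clock)                      # arrivals during this slice join first
        if remaining[i] > 0:
            queue.append(i)               # back to the tail of the queue
        else:
            finish[i] = clock
    return [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]

waits = round_robin_waits([(0, 8), (1, 4), (2, 9), (3, 5)], quantum=4)
print(waits)                     # [12, 3, 15, 17]
print(sum(waits) / len(waits))   # 11.75
```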

  43. Round Robin (RR) • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at a time. • No process waits more than (n-1)q time units until its next time quantum. • For example, with 5 processes and a time quantum of 20 ms, each process gets up to 20 ms every 100 ms. • RR performance depends on the size of the time quantum: if q is very large, RR behaves the same as FCFS; if q is extremely small, the result is called processor sharing.

  44. Multilevel Queue

  45. Multilevel Queue • The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch). • Each queue has its own scheduling algorithm: • foreground – RR • background – FCFS • Scheduling must also be done between the queues: • Fixed-priority scheduling (i.e., serve all processes from the foreground, then from the background) – possibility of starvation. • Time slicing – each queue gets a certain amount of CPU time, which it schedules among its own processes; e.g., 80% to the foreground in RR and 20% to the background in FCFS.

  46. Multilevel Queue Scheduling

  47. Multiple-Processor Scheduling • CPU scheduling is more complex when multiple CPUs are available. • Some approaches to multiprocessor scheduling: • Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing; scheduling decisions are handled by that processor. • Symmetric multiprocessing (SMP): each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own ready queue.

  48. Some issues related to SMP • Processor affinity: • When a process running on one processor is migrated to another processor, the contents of the first processor's cache must be invalidated and the cache of the destination processor must be repopulated, which involves a large overhead. To avoid this overhead, most SMP systems try not to migrate processes between processors and instead keep a process running on the same processor. This is called processor affinity. • Load balancing: attempts to keep the workload evenly distributed across all processors. This is especially needed in systems where each processor has its own queue, as in most contemporary operating systems. Note that load balancing counteracts the benefits of processor affinity, so this is not an easy problem to solve.

  49. Operating System Examples • Example: Solaris Scheduling : • Solaris uses priority-based thread scheduling • Example: Windows Scheduling • Windows schedules threads using a priority-based preemptive scheduling algorithm.
