CE01000-3 Operating Systems Lecture 7 Threads & Introduction to CPU Scheduling
Timetable change for this week only. • Group 1 Tuesday 12-2pm K106 • Group 2 Friday 11am-1pm in K006 • Group 3 Thursday 11am-1pm in K006
Overview of lecture In this lecture we will be looking at: • What is a thread? • Thread types • CPU-I/O burst cycle • CPU scheduling - preemptive & nonpreemptive • The dispatcher • Scheduling criteria • First Come First Served (FCFS) algorithm • Shortest Job First (SJF) algorithm
Threads - analogy • Analogy: • Process is like a manual of procedures (code), sets of files and paper (memory), and other resources. CPU is like a person who carries out (executes) the instructions in the manual of procedures • CPU (person) may be ‘context switched’ from doing one task to doing another
Threads – analogy (Cont.) • A thread consists of a bookmark in the manual of procedures (program counter value), and a pad of paper that is used to hold information that is currently being used (register and stack values) • it is possible for a single process to have a number of bookmarks in the manual, with a pad of paper associated with each bookmark (a number of threads within a process)
Threads - analogy (Cont.) • the person (CPU) could then switch between doing one thing in the manual of procedures (executing one thread) to doing another thing somewhere else (start executing another thread) • This switching between threads is different from context switching between processes - it is quicker to switch between threads in a process
Threads • A thread exists as the current execution state of a process, consisting of: • program counter, processor register values and stack space • it is called a thread because of the analogy between a thread and a sequence of executed instructions (imagine drawing a line through each line of instructions in the manual of procedures (code) as it is executed - you get a thread (line) through the manual (code))
Threads (Cont.) • A thread is often called a lightweight process • there can be multiple threads associated with a single process • each thread in a process shares with other peer threads the following: • code section, data section, operating-system resources • all threads collectively form a task
Threads (Cont.) • A traditional process is equivalent to a task with one thread, i.e. processes used to have only a single thread • Overhead of switching between processes is expensive, especially with more complex operating systems - threads reduce switching overhead and improve the granularity of concurrent operation
Threads (Cont.) • Example in use: • In a multiple-threaded task, while one server thread is blocked and waiting, a second thread in the same task can run. • Cooperation of multiple threads in the same job confers higher throughput and improved performance. • Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism, as the sketch below illustrates.
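To make the idea concrete, here is a minimal sketch in Python (the worker names, timings and the use of time.sleep to stand in for a blocking I/O call are illustrative assumptions, not part of the lecture): both threads share the process's code and data, and while one is blocked the other keeps running.

import threading
import time

shared_counter = 0                      # data section shared by all threads in the process

def io_worker():
    # Stand-in for a blocking system call (e.g. waiting for I/O to complete)
    time.sleep(1.0)
    print("io_worker: I/O finished")

def cpu_worker():
    global shared_counter
    # Runs while the other thread is blocked - same code section, same data section
    for _ in range(5):
        shared_counter += 1
        print("cpu_worker: shared_counter =", shared_counter)
        time.sleep(0.1)

t1 = threading.Thread(target=io_worker)
t2 = threading.Thread(target=cpu_worker)
t1.start()
t2.start()
t1.join()
t2.join()

Switching between t1 and t2 happens within one process, so no full process context switch is needed.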
Thread types • 2 different thread types: • Kernel-supported threads (e.g. Mach and OS/2) - the kernel of the O/S sees threads and manages switching between them • i.e. in terms of the analogy, the boss (OS) tells the person (CPU) which thread in the process to do next.
Thread types (Cont.) • User-level threads - supported above the kernel, via a set of library calls at the user level. The kernel only sees the process as a whole and is completely unaware of any threads • i.e. in terms of the analogy, the manual of procedures (user code) tells the person (CPU) to stop the current thread and start another (using a library call to switch threads)
Introduction to CPU Scheduling • Topics: • CPU-I/O burst cycle • Preemptive and nonpreemptive scheduling • The dispatcher • Scheduling criteria • Scheduling algorithms - some this lecture, the rest next lecture. This lecture: • First Come First Served (FCFS) • Shortest Job First (SJF)
CPU-I/O Burst Cycle • CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait. • The CPU burst is the length of time a process needs to use the CPU before it next makes a system call (normally a request for I/O). • The I/O burst is the length of time the process spends waiting for I/O to complete.
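As a rough illustration (not from the lecture; the file name and the per-line work are made up), a process's execution can be pictured as a loop that alternates CPU bursts with blocking I/O:

def process_main(path):
    with open(path) as f:                         # opening the file may block: I/O burst
        for line in f:                            # reading the next chunk of data: I/O burst
            checksum = sum(ord(c) for c in line)  # computing on the data: CPU burst
            print(checksum)                       # writing output: another (short) I/O burst

if __name__ == "__main__":
    process_main("input.txt")                     # hypothetical input file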
Histogram of CPU-burst Times (figure): typical CPU burst distribution - a large number of short CPU bursts and a small number of long CPU bursts.
CPU Scheduler • Allocates the CPU to one of the processes that are ready to execute (in the ready queue) • CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state (e.g. when making an I/O request) 2. Terminates 3. Switches from waiting to ready (e.g. on I/O completion) 4. Switches from running to ready state (e.g. timer interrupt)
CPU Scheduler (Cont.) • If scheduling occurs only when 1 and 2 happen, it is called nonpreemptive - the process keeps the CPU until it voluntarily releases it (process termination or request for I/O) • If scheduling also occurs when 3 & 4 happen, it is called preemptive - the CPU can be taken away from the process by the OS (e.g. external I/O interrupt or timer interrupt)
Dispatcher • Dispatcher gives control of the CPU to the process selected by the short-term scheduler; this involves: • switching context • switching to user mode • jumping to the proper location in the user program to restart that program (i.e. last action is to set program counter)
Dispatcher (Cont.) • Dispatch latency – the time it takes for the dispatcher to stop one process and start the next one running
Scheduling Criteria • CPU utilisation i.e. CPU usage - to maximise • Throughput = number of processes that complete their execution per time unit - to maximise • Turnaround time = amount of time to execute a particular process - to minimise
Scheduling criteria (Cont.) • Waiting time = amount of time a process has been waiting in the ready queue - to minimise • Response time = amount of time from when a job was submitted until it initiates its first response (output), not the time at which it completes output of its first response - to minimise
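To tie the definitions together, a minimal sketch (the numbers are made up for illustration; it assumes the process does no I/O in the interval, and approximates response time as the time until the process is first dispatched):

def criteria(arrival, first_dispatch, completion, burst):
    turnaround = completion - arrival        # total time to execute the process
    waiting = turnaround - burst             # time spent sitting in the ready queue (no I/O assumed)
    response = first_dispatch - arrival      # time until the process first gets the CPU
    return turnaround, waiting, response

print(criteria(arrival=0, first_dispatch=5, completion=30, burst=20))   # (30, 10, 5)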
First-Come, First-Served (FCFS) Scheduling • Schedule = order of arrival of processes in the ready queue. Example:
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3.
FCFS Scheduling (Cont.) • The Gantt chart for the schedule is then:
| P1 | P2 | P3 |
0    24   27   30
• Waiting time for P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.) • Suppose that the processes arrive in the order P2, P3, P1. • The Gantt chart for the schedule is:
| P2 | P3 | P1 |
0    3    6    30
• Waiting time for P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3
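The two average waiting times above can be checked with a short sketch (illustrative code, not from the lecture; it assumes all three processes arrive at time 0, as in the example):

def fcfs_waiting_times(bursts):
    # Given CPU burst lengths in arrival order, return each process's waiting time.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)                    # a process waits for everything ahead of it
        elapsed += burst
    return waits

for order in ([24, 3, 3], [3, 3, 24]):           # P1,P2,P3 and then P2,P3,P1
    waits = fcfs_waiting_times(order)
    print(waits, sum(waits) / len(waits))        # averages 17.0 and 3.0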
FCFS Scheduling (Cont.) • Waiting time is usually not minimal and there is a large variance in waiting times • Convoy effect – this is where short processes may have a long wait before being scheduled onto the CPU because a long process is ahead of them
Shortest-Job-First (SJF) Scheduling • Each process has a next CPU burst, and this burst has a length (duration). Use these lengths to schedule the process with the shortest next burst. • Two schemes: 1. Non-preemptive – once the CPU is given to the process it cannot be preempted until it completes its CPU burst.
SJF Scheduling (Cont.) 2. Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). • SJF is optimal – it gives the minimum average waiting time for a given set of processes.
Example of Non-Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
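A small sketch of the non-preemptive scheme (illustrative, not the lecturer's code) using the arrival and burst times from the examples above; the preemptive SRTF variant would additionally re-examine the ready queue whenever a new process arrives.

def sjf_nonpreemptive(procs):
    # procs: list of (name, arrival_time, burst_length); returns waiting time per process
    remaining = list(procs)
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                               # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])   # shortest next burst wins
        waits[name] = time - arrival                # time spent in the ready queue
        time += burst
        remaining.remove((name, arrival, burst))
    return waits

procs = [("P1", 0.0, 7), ("P2", 2.0, 4), ("P3", 4.0, 1), ("P4", 5.0, 4)]
waits = sjf_nonpreemptive(procs)
print(waits, sum(waits.values()) / len(waits))      # average 4.0, as in the example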
Determining Length of Next CPU Burst • Can only estimate the length. • Can be done by using the lengths of previous CPU bursts, using exponential averaging (decaying average): τ_{n+1} = α t_n + (1 − α) τ_n, where t_n is the length of the most recent CPU burst, τ_n is the previous estimate (the stored history) and α (0 ≤ α ≤ 1) controls how heavily the most recent burst is weighted.
Examples of Exponential Averaging • α = 0: τ_{n+1} = τ_n • the last CPU burst does not count - only the longer-term history is used • α = 1: τ_{n+1} = t_n • only the actual last CPU burst counts.
Examples of Exponential Averaging (Cont.) • If we expand the formula, we get: τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0 • Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
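A short sketch of the averaging (the observed burst lengths, α = 0.5 and the initial guess τ_0 = 10 are illustrative assumptions, not from the lecture): each new burst t_n is folded into the running estimate with weight α, so older bursts decay geometrically.

def next_burst_estimate(bursts, alpha, tau0):
    # Exponential (decaying) average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    tau = tau0                                   # initial estimate, before any history exists
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

observed = [6, 4, 6, 4, 13, 13, 13]              # made-up sequence of CPU burst lengths
print(next_burst_estimate(observed, alpha=0.5, tau0=10))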
References Operating System Concepts, Chapters 4 & 5.