Explore the concepts of threads vs processes, OS thread implementation, scheduling mechanisms, and examples in C/C++ and Java. Learn about kernel threads, user-level threads, advantages, and disadvantages, as well as hybrid models. Discover load balancing strategies in thread management.
Operating Systems CMPSCI 377 Lecture 5: Threads & Scheduling Emery Berger, University of Massachusetts Amherst
Last Time: Processes • Process = unit of execution • Process control blocks • Process state, scheduling info, etc. • New, Ready, Waiting, Running, Terminated • One at a time (on uniprocessor) • Change by context switch • Multiple processes: • Communicate by message passing or shared memory
This Time: Threads & Scheduling • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does OS schedule threads?
Processes versus Threads • Process = • Control + address space + resources • fork() • Thread = • Control only • PC, stack, registers • pthread_create() • One process may contain many threads
Threads Diagram • Address space in process:shared among threads • Cheaper, faster communication than IPC
Threads Example, C/C++ • POSIX threads standard

    #include <pthread.h>
    #include <stdio.h>

    /* expensiveComputation() assumed defined elsewhere */
    void *run(void *d) {
        long q = (long) d;
        long v = 0;
        for (long i = 0; i < q; i++) {
            v = v + expensiveComputation(i);
        }
        return (void *) v;
    }

    int main(void) {
        pthread_t t1, t2;
        void *r1, *r2;
        pthread_create(&t1, NULL, run, (void *) 100L);  /* 2nd arg: attributes */
        pthread_create(&t2, NULL, run, (void *) 100L);
        pthread_join(t1, &r1);   /* pthread_join, not pthread_wait */
        pthread_join(t2, &r2);
        printf("r1 = %ld, r2 = %ld\n", (long) r1, (long) r2);
        return 0;
    }
Threads Example, Java

    class Worker extends Thread {
        public Worker(int q) { this.q = q; this.v = 0; }
        public void run() {
            for (int i = 0; i < q; i++) {
                v = v + i;
            }
        }
        public int v;
        private int q;
    }

    public class Example {
        public static void main(String[] args) {
            Worker t1 = new Worker(100);
            Worker t2 = new Worker(100);
            try {
                t1.start();
                t2.start();
                t1.join();
                t2.join();
            } catch (InterruptedException e) {}
            System.out.println("r1 = " + t1.v + ", r2 = " + t2.v);
        }
    }
Classifying Threaded Systems • One or many address spaces, one or many threads per address space: • One address space, one thread: MS-DOS • One address space, many threads: embedded systems • Many address spaces, one thread each: UNIX, Ultrix, MacOS (< X), Win95 • Many address spaces, many threads each: Mach, Linux, Solaris, WinNT
This Time: Threads • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does CPU schedule threads?
Kernel Threads • Kernel threads: scheduled by OS • A.k.a. lightweight processes (LWPs) • Switching threads requires context switch • PC, registers, stack pointers • BUT: no address-space change = no TLB flush • Switching faster than for processes • Hide latency (don't block on I/O) • Can be scheduled on multiple processors
User-Level Threads • No OS involvement w/user-level threads • Only knows about process containing threads • Use thread library to manage threads • Creation, synchronization, scheduling • Example: Java green threads • Cannot be scheduled on multiple processors
User-Level Threads: Advantages • No context switch when switching threads • But… • Flexible: • Allow problem-specific thread scheduling policy • Computations first, service I/O second, etc. • Each process can use different scheduling algorithm • No system calls for creation, context switching, synchronization • Can be much faster than kernel threads
User-Level Threads: Disadvantages • Requires cooperative threads • Must yield when done working (no quanta) • Uncooperative thread can take over • OS knows about processes, not threads: • Thread blocks on I/O: whole process stops • More threads ≠ more CPU time • Process gets same time as always • Can’t take advantage of multiple processors
Solaris Threads • Hybrid model: • User-level threads mapped onto LWPs
Threads Roundup • User-level threads • Cheap, simple • Not scheduled, blocks on I/O, single CPU • Requires cooperative threads • Kernel-level threads • Involves OS – time-slicing (quanta) • More expensive context switch, synch • Doesn’t block on I/O, can use multiple CPUs • Hybrid • “Best of both worlds”, but requires load balancing
Load Balancing • Spread user-level threads across LWPs so each processor does the same amount of work • Solaris scheduler: only adjusts load when a thread blocks on I/O
[diagram: user-level threads mapped via per-process thread schedulers onto kernel LWPs and processors]
Load Balancing • Two classic approaches: work sharing & work stealing • Work sharing: give excess work away • Can waste time • Work stealing: get threads from someone else • Optimal approach • Used by Sun, IBM Java runtimes • but what about the OS?
This Time: Threads • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does OS schedule threads?
Scheduling • Overview • Metrics • Long-term vs. short-term • Interactive vs. servers • Example algorithm: FCFS
Scheduling • Multiprocessing: run multiple processes • Improves system utilization & throughput • Overlaps I/O and CPU activities
Scheduling Processes • Long-term scheduling: • How does OS determine degree of multiprogramming? • Number of jobs executing at once • Short-term scheduling: • How does OS select program from ready queue to execute? • Policy goals • Policy options • Implementation considerations
Short-Term Scheduling • Kernel runs scheduler at least: • When process switches from running to waiting • On interrupts • When processes are created or terminated • Non-preemptive system: • Scheduler must wait for these events • Preemptive system: • Scheduler may interrupt running process
Comparing Scheduling Algorithms • Important metrics: • Utilization = % of time that CPU is busy • Throughput = processes completing / time • Response time = time between ready & next I/O • Waiting time = time process spends on ready queue
Scheduling Issues • Ideally: • Maximize CPU utilization, throughput & minimize waiting time, response time • Conflicting goals • Cannot optimize all criteria simultaneously • Must choose according to system type • Interactive systems • Servers
Scheduling: Interactive Systems • Goals for interactive systems: • Minimize average response time • Time between waiting & next I/O • Provide output to user as quickly as possible • Process input as soon as received • Minimize variance of response time • Predictability often important • A somewhat higher average is often better than a low average with high variance
Scheduling: Servers • Goals differ from those of interactive systems • Maximize throughput (jobs done / time) • Minimize OS overhead, context switching • Make efficient use of CPU, I/O devices • Minimize waiting time • Give each process same time on CPU • May increase average response time
Scheduling Algorithms Roundup • FCFS: • First-Come, First-Served • Round-robin: • Use quantum & preemption to alternate jobs • SJF: • Shortest job first • Multilevel Feedback Queues: • Round robin on each priority queue • Lottery Scheduling: • Jobs get tickets • Scheduler randomly picks winner
Scheduling Policies FCFS (a.k.a. FIFO = First-In, First-Out) • Scheduler executes jobs to completion in arrival order • Early version: jobs did not relinquish CPU even for I/O • Assume: • Runs when processes blocked on I/O • Non-preemptive
FCFS Scheduling: Example • Processes arrive 1 time unit apart: what is the average wait time in the three cases shown?
FCFS: Advantages & Disadvantages • Advantage: Simple • Disadvantages: • Average wait time highly variable • Short jobs may wait behind long jobs • May lead to poor overlap of I/O & CPU • CPU-bound processes force I/O-bound processes to wait for CPU • I/O devices remain idle
Summary • Thread = single execution stream within process • User-level, kernel-level, hybrid • No perfect scheduling algorithm • Selection = policy decision • Base on processes being run & goals • Minimize response time • Maximize throughput • etc. • Next time: much more on scheduling