Threads • Context switching between processes is an expensive operation that leads to high overhead • A thread is an alternative model for execution of a program that incurs a smaller overhead while switching between threads of the same application Q: How is this achieved?
Thread switching overhead • A process context switch involves: 1. saving the context of the process in operation 2. saving its CPU state 3. loading the state of the new process 4. loading its CPU state
A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process • Switching between threads of the same process involves much less switching overhead
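Because threads of the same process share its resources, they also share its address space. A minimal sketch in Java (class and variable names are illustrative): two threads update the same static variable directly, something two separate processes could not do without OS-provided sharing facilities.

```java
public class SharedCounter {
    // Lives in the process's address space, so it is visible to every thread
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            synchronized (SharedCounter.class) {  // mutual exclusion for the update
                counter++;
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        System.out.println(counter);   // prints 2
    }
}
```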
Schematic: a process and the threads within it
OOP with C++ & Java – D. Samanta – PHI • Multithreading means multiple flows of control, which offers two advantages: • 1. Better utilization of resources • 2. Several problems are solved better by multiple threads – examples follow
Examples • A program that displays animation, plays music in documents, and downloads files from the net at the same time • Network simulation, e.g., an ad hoc network
Java is a multithreaded language • Because Java threads run in the same memory space, they can easily communicate among themselves: an object in one thread can call a method in another thread without any overhead required from the OS
t1 = new TestThread("thread1", (int) (Math.random() * 2000)); • t2 and t3 are created similarly • t2.start(); • t1.start(); • t3.start();
start(), stop(), suspend(), resume(), sleep(int n), setPriority(int p), yield() – the yield() method causes the runtime to switch the context from the current thread to the next available runnable thread (note that stop(), suspend(), and resume() are deprecated in modern Java) • Speciality: facilities at the language level, under user control
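A runnable version of the snippet above (the class name TestThread and its behavior are illustrative): each thread sleeps for a random interval and then announces completion.

```java
// Illustrative TestThread: sleeps for a given number of milliseconds,
// then prints its name. Matches the slide's constructor shape.
public class TestThread extends Thread {
    private final int delay;

    TestThread(String name, int delay) {
        super(name);        // Thread's constructor stores the thread name
        this.delay = delay;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(delay);   // block this thread for 'delay' ms
            System.out.println(getName() + " done");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new TestThread("thread1", (int) (Math.random() * 2000));
        Thread t2 = new TestThread("thread2", (int) (Math.random() * 2000));
        Thread t3 = new TestThread("thread3", (int) (Math.random() * 2000));
        t2.start();
        t1.start();
        t3.start();
        t1.join();   // wait for all three to terminate
        t2.join();
        t3.join();
    }
}
```

The order in which the three names are printed depends on the random delays and on the scheduler, which is exactly the nondeterminism the slides discuss.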
Threads • Where are threads useful? • If two processes share the same address space and the same resources, they have identical context • switching between these processes involves saving and reloading of their contexts. This overhead is redundant. • In such situations, it is better to use threads.
Different kinds of threads • Kernel-level threads: Threads are created through system calls. The kernel is aware of their existence, and schedules them • User-level threads: Threads are created and maintained by a thread library, whose routines exist as parts of a process. The kernel is oblivious of their existence. • Hybrid threads: A combination of the above.
Q: Why three kinds of threads? A: Consider switching overhead, concurrency/parallelism
Schematic of scheduling with threads: on an event, save the thread state, then perform event processing and scheduling; if the selected thread belongs to the same process, dispatch it directly; otherwise save the process context and load the new process context before dispatching the thread.
Kernel-level and user-level threads • Thread switching overhead • Concurrency and parallelism
Actions of the threads library: N, R, and B indicate the running, ready, and blocked states (see the schematic of Ex. 3.6).
Hybrid thread models • Hybrid threads have elements of both kernel-level and user-level threads • Each user-level thread has a thread control block (TCB) • Each kernel-level thread has a kernel thread control block (KTCB) • Three models of associating user-level and kernel-level threads • One-to-one association • Many-to-one association • Many-to-many association
Hybrid thread models • See Fig. 3.15 for a schematic.
Process Interactions Processes may interact in four different ways: 1. Sharing of data – data consistency should be preserved 2. Synchronization of actions – processes should perform their actions in a desired order 3. Passing of messages – messages should be stored and delivered appropriately
4. Sending of Signals – Processes should be able to send signals to other processes and specify signal handling actions when signals are sent to them The OS performs message passing and provides facilities for the other three modes of interaction
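The message-passing mode can be sketched in Java with a bounded queue standing in for the OS facility, and threads standing in for processes (all names here are illustrative): the queue stores the message until the receiver is ready to take it.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageDemo {
    static String received;   // what the receiving side eventually gets

    public static void main(String[] args) throws InterruptedException {
        // The queue stores messages until they can be delivered
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("seat 14A booked");   // deposit a message
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();

        received = mailbox.take();   // blocks until a message is delivered
        System.out.println(received);
        sender.join();
    }
}
```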
An example of data sharing: airline reservations • Multiple processes – each process servicing one agent terminal
Race conditions • Let Ds be a shared variable • Let Ai and Aj be processes which share this variable and operate concurrently • Ai: Ds = Ds + 10; • Aj: Ds = Ds + 5; • Because of concurrency there is uncertainty in the result – a race condition • Solution: mutual exclusion
Race conditions in data sharing Results of operations performed on shared data may be wrong if race conditions exist. Let us see why this may be so • Let operations Oi and Oj update the value of a shared data item d, and let fi(d) and fj(d) represent the value of d after each operation • Suppose processes Pi and Pj perform operations Oi and Oj • If Oi is performed before Oj, the resulting value of d is fj(fi(d)) • If Oj is performed before Oi, the resulting value of d is fi(fj(d))
Race conditions in data sharing • A race condition is a situation in which the result of performing operations Oi and Oj in two processes is neither fi(fj(d)) nor fj(fi(d)). • Q: Why do race conditions exist?
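Race conditions exist because an update such as d = d + 1 compiles into a separate load, add, and store; the scheduler can interleave these steps across threads. A sketch in Java (names are illustrative; the unsynchronized increment plays the role of Oi and Oj):

```java
public class RaceDemo {
    static int d = 0;   // the shared data item

    public static void main(String[] args) throws InterruptedException {
        // Each thread performs d = d + 1 many times without mutual exclusion
        Runnable op = () -> {
            for (int k = 0; k < 100_000; k++) {
                d = d + 1;   // load, add, store: not atomic
            }
        };
        Thread pi = new Thread(op);
        Thread pj = new Thread(op);
        pi.start();
        pj.start();
        pi.join();
        pj.join();
        // Without a race the result would be 200000; when the two threads'
        // load/store pairs interleave, updates are lost and d comes out smaller.
        System.out.println(d);
    }
}
```

Declaring the loop body `synchronized` on a shared lock (mutual exclusion) restores the guaranteed result of 200000.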
Race condition in the airline reservations system • Race conditions in the airline reservations system may have two consequences: • nextseatno may not be updated properly • Same seat number may be allocated to two passengers
Control synchronization between processes: operation sj should be performed after si (timing diagram)
(Fig. a) Process Pi executes si first, and then Pj executes sj • (Fig. b) Process Pj is not allowed to execute sj till Pi executes si
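The second arrangement, where Pj is blocked until Pi has executed si, can be sketched in Java with a CountDownLatch (the class and operation names here are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class SyncDemo {
    static final StringBuilder order = new StringBuilder(); // records execution order

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch siDone = new CountDownLatch(1);

        // Pj: must not execute sj until Pi has executed si
        Thread pj = new Thread(() -> {
            try {
                siDone.await();          // blocks until the latch is released
                order.append("sj ");     // ... only then is sj performed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        pj.start();

        order.append("si ");             // Pi executes si ...
        siDone.countDown();              // ... and then releases Pj
        pj.join();
        System.out.println(order);       // always "si sj"
    }
}
```

The latch makes the ordering deterministic even though Pi and Pj run concurrently.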
Ex. 4.6 (1st edn) – a design to reduce the time of execution • y = HCF(Amax, x), where A is an array of n elements • Insert y in the array • Arrange array A in ascending order • To determine the number of processes and the computations in each process, the problem is split into the following steps
1. Read the n elements of array A • 2. Find Amax • 3. Read x • 4. Determine y = HCF(Amax, x) • 5. Include y in array A and arrange the array in ascending order • Each step is considered to be a separate process – then determine which of these processes interact • Processes 2, 4, and 5 are interacting processes • Concurrency can be achieved by splitting steps 2 and 5 into two parts
2(a) Copy array A to array B 2(b) Find Amax 5(a) Arrange array B in ascending order 5(b) Include y in array B at the appropriate place
Process graph: P1 reads the elements of A and copies A to B; P2 finds Amax; P3 reads x; P4 computes y = HCF(Amax, x); P5 arranges array B in ascending order; P6 includes y in array B.
Which are the processes that can run concurrently? • P1 and P3 can run concurrently • P1 and P2 cannot run concurrently, as they share array A • P2 must go after P1 finishes • P4 can be initiated only after P1 and P3 terminate • P5 can be initiated after P1 terminates, while P6 can start only after P4 and P5 terminate
Implementation of interacting processes: fork–join and parbegin–parend • fork spawns a new process, and join waits for a previously created process to terminate
Schematic of fork–join: process 0 executes fork A and fork B, creating processes 1 and 2; all three processes execute join J (with a join count of 3), and after the joins complete, a single process continues at statement J+1.
for i = 1 to 100 • read A[i]; • m = 3; • fork lab1; • fork lab2; • goto lab3; • lab1: x = min(A); • goto lab3; • lab2: y = max(A); • lab3: join m; • result = y/x;
for i = 1 to 100 • read A[i]; • parbegin • x = min(A); • y = max(A); • parend • result = y/x;
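The parbegin–parend structure maps naturally onto creating one thread per branch and joining them all, which is the counterpart of the explicit fork/join pseudocode. A sketch in Java, using a small fixed array in place of the 100 values read (all names are illustrative):

```java
import java.util.Arrays;

public class ParBeginDemo {
    static int x, y;   // results of the two parallel branches

    public static void main(String[] args) throws InterruptedException {
        int[] a = {7, 2, 9, 4, 5};   // stands in for the 100 values read

        // parbegin: spawn both branches
        Thread minThread = new Thread(() -> x = Arrays.stream(a).min().getAsInt());
        Thread maxThread = new Thread(() -> y = Arrays.stream(a).max().getAsInt());
        minThread.start();
        maxThread.start();

        // parend: wait for both branches, like the joins in the fork version
        minThread.join();
        maxThread.join();

        int result = y / x;          // result = max / min
        System.out.println(result);  // 9 / 2 = 4 (integer division)
    }
}
```

Note that parbegin–parend hides the join count and the labels that the fork–join version must manage by hand.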
Signals • A process can send signals to other processes to convey exceptional situations • A process must anticipate signals from other processes • It must provide a signal handler for each such signal. It specifies the handler through a system call. The kernel notes information about the signal handler • The kernel activates the signal handler when a signal is sent to the process • Schematic of signals on the next slide
Signal handling by process Pi: (a) initialization, (b) signal processing
Processes in Unix • A process operates in two modes---user mode and kernel mode. • When a process makes a system call, it enters the kernel mode and itself executes the kernel code for handling the system call • It reenters the user mode after handling the system call • Hence two running states: User running and kernel running • A process in the kernel running state is non-interruptible • A process in kernel running mode gets blocked when it makes an I/O request • Kernel code is written in a reentrant manner so that a process can enter the kernel mode even if other processes are blocked in that mode
Threads in Solaris • Provides three entities for concurrency: • User threads: Managed by a threads library • Lightweight processes (LWPs): A unit of parallelism within a process. The threads library maps user threads onto LWPs. Several LWPs may be created within a process. • Kernel threads: A kernel thread is associated with each LWP. The kernel also creates some kernel threads for its own use, e.g., a thread to handle disk I/O. • The mapping between threads and LWPs influences parallelism (see the hybrid models' schematic)
Processes and threads in Linux • Linux supports kernel-level threads • Threads and processes are treated alike except at creation • A thread shares the information about memory management, current directory, open files and signal handlers of its parent process; a process does not share any information of its parent • A thread or process contains information about • Its parent • Its deemed parent, to whom its termination should be reported
Processes and threads in Linux • Process and thread states: • Task_running: scheduled or waiting to be scheduled • Task_interruptible: sleeping on an event, but may receive a signal • Task_uninterruptible: sleeping and may not receive a signal • Task_stopped: operation has been stopped by a signal • Task_zombie: operation completed, but its parent has not issued a system call to check whether it has terminated • Interruptibility simplifies implementation of signals