Process | Subject: Operating System | Topics: Process & Thread
Process Concept • A computer does several things at the same time (multitasking): editing a document, listening to music, playing a game, and so on. • For example, consider a web server. • When a request arrives to retrieve a web page, the server checks whether that page exists; if it is available, it starts fetching the data.
Another example is a PC: when the system boots, many processes are started automatically that the user does not even know about. • For example, one process waits to receive incoming email while, at the same time, an antivirus process stays active and checks whether new virus definitions are available for update.
Process and Program • The command interpreter is a process. • A process is a dynamic entity: a program in execution. A process is a sequence of instructions and exists for a limited span of time. Two or more processes may be executing the same program, each using its own data and storage. • A program is a static entity made up of program statements. A program contains instructions, exists at a single place in storage, and continues to exist; a program does not perform any action by itself. • The difference, then, is between a program, such as the contents of a file stored on disk, and a process, which has a program counter specifying the next instruction to execute.
int main(void)
{
    int i, prod = 1;

    for (i = 0; i < 100; i++)
        prod = prod * i;

    return 0;
}
• This program contains one multiplication statement (prod = prod * i), but the process executes 100 multiplications, one at a time through the for loop.
Process State • Each process is an independent entity with its own program counter and internal state, yet processes often need to interact with other processes. • One process may generate output that another process uses as input. • As a process executes, it changes state. The process state is defined as the current activity of the process. • There are five process states, and each process is in exactly one of them: • New • Ready • Running • Waiting • Terminated
Process State • New - the process is being created. • Running - instructions are being executed. A running process possesses all the resources needed for its execution, including the processor. • Waiting - the process is waiting for some event to occur, such as the completion of an I/O operation. • Ready - the process is waiting to be assigned to a processor. • Terminated - the process has finished execution.
Process State • A process changes state as it executes. • [State-transition diagram: new -> ready (admitted); ready -> running (scheduler dispatch); running -> ready (interrupt); running -> waiting (I/O or event wait); waiting -> ready (I/O or event completion); running -> terminated (exit).]
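The diagram above can be written down directly as code. The sketch below is illustrative only: the type name process_state_t and the function can_transition are invented here; the function simply encodes the legal transitions from the diagram and can be used to check the questions on the next slide.

#include <stdbool.h>
#include <stdio.h>

/* The five process states from the diagram. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } process_state_t;

/* Returns true if the diagram allows a direct transition from 'from' to 'to'. */
static bool can_transition(process_state_t from, process_state_t to)
{
    switch (from) {
    case NEW:     return to == READY;                 /* admitted                  */
    case READY:   return to == RUNNING;               /* scheduler dispatch        */
    case RUNNING: return to == READY   ||             /* interrupt                 */
                         to == WAITING ||             /* I/O or event wait         */
                         to == TERMINATED;            /* exit                      */
    case WAITING: return to == READY;                 /* I/O or event completion   */
    default:      return false;                       /* terminated: no way out    */
    }
}

int main(void)
{
    printf("running -> ready:   %d\n", can_transition(RUNNING, READY));   /* 1 */
    printf("waiting -> running: %d\n", can_transition(WAITING, RUNNING)); /* 0 */
    return 0;
}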
Example • Explain whether the following state transitions are possible or not; if possible, give an example. • 1) Running - Ready • 2) Running - Waiting • 3) Waiting - Running • 4) Running - Terminated
Running - Ready is possible: for example, when a process's time slice expires. • Running - Waiting is also possible: when a process issues an I/O request. • Waiting - Running is not possible: from the waiting state the process first goes to the ready state, and only from there can it be dispatched to running. • Running - Terminated is possible: when a process terminates itself.
Reasons for process termination: • Normal completion: the process finishes its work. • Time limit exceeded: the process has run longer than the specified total time limit. • Memory unavailable: the process requires more memory than the system can provide. • Bounds violation: the process tries to access a memory location it is not allowed to access. • Protection error: the process attempts to use a resource it is not allowed to use. • Invalid instruction: the process attempts to execute a nonexistent instruction. • Privileged instruction: the process attempts to use an instruction reserved for the operating system. • Data misuse: a piece of data is of the wrong type or is not initialized.
Process Creation • There are four principal events that cause a new process to be created: • System initialization. • Execution of a process-creation system call by a running process. • A user request to create a new process. • Initiation of a batch job.
Process • New batch job: the operating system is provided with a batch job control stream, usually on tape or disk. • Interactive logon: a user at a terminal logs on to the system. • Created by the operating system: to perform a function on behalf of a program, without the user having to wait.
When an operating system is booted, typically several processes are created. Some of these are foreground processes, that is, processes that interact with (human) users and perform work for them. Others are background processes, which are not associated with particular users but instead have some specific function. For example, one background process may be designed to accept incoming email, sleeping most of the day but suddenly springing to life when email arrives. • Another background process may be designed to accept incoming requests for Web pages hosted on that machine, waking up when a request arrives to service it. • Processes that stay in the background to handle some activity such as email, Web pages, news, printing, and so on are called daemons.
Process Control Block • Information about every process in the system is stored in a Process Control Block (PCB), also known as a Task Control Block. The PCB stores a great deal of information about the process. • This entry contains the process state, its program counter, stack pointer, memory allocation, the status of open files, and its accounting and scheduling information.
Process Control Block • The PCB is a data structure holding, for each process: • PC and CPU registers • memory-management information • accounting information (time used, ID, ...) • I/O status (such as file resources) • scheduling data (relative priority, etc.) • process state (so running, suspended, etc. is simply a field in the PCB).
Pointer: points to another process control block; the pointer is used for maintaining the scheduling list. • Process state: may be new, ready, running, waiting, and so on. • Program counter: indicates the address of the next instruction to be executed for this process. • Event information: for a process in the blocked state, this field contains information about the event for which the process is waiting. • CPU registers: general-purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture. • Memory-management information: may include the values of the base and limit registers. This information is used for deallocating memory when the process terminates.
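Putting those fields together, a PCB might be declared roughly as follows. This is only an illustrative sketch: the name pcb_t and the exact field layout are invented here (the state field reuses the process_state_t enum from the state-machine sketch earlier), and real kernels define far more elaborate structures.

#define MAX_OPEN_FILES 16

typedef struct pcb {
    struct pcb     *next;            /* pointer: links PCBs into scheduling lists */
    process_state_t state;           /* new, ready, running, waiting, terminated  */
    int             pid;             /* process identifier                        */
    unsigned long   program_counter; /* address of the next instruction           */
    unsigned long   registers[16];   /* saved general-purpose registers           */
    unsigned long   stack_pointer;   /* saved stack pointer                       */
    int             waiting_event;   /* event information for a blocked process   */
    unsigned long   mem_base;        /* memory-management information:            */
    unsigned long   mem_limit;       /*   base and limit register values          */
    int             open_files[MAX_OPEN_FILES]; /* status of open files           */
    long            cpu_time_used;   /* accounting information                    */
    int             priority;        /* scheduling information                    */
} pcb_t;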
Schedulers • In a multiprogramming system, more than one process is in memory ready for execution, but there is only one CPU, so the CPU has to switch from one process to another. • While one process is executing (in the running state), the other processes have to wait (in the waiting or ready state). • If more than one process is waiting for the CPU, one of them must be selected for execution out of the many processes waiting in memory. Which process is selected is decided by process scheduling.
Schedulers • 1. Scheduling Queue (Job Queue): when we start executing a program, it becomes a process and is placed in memory. All processes waiting for execution are kept in a queue, generally known as the job queue. The queue is usually stored as a linked list: it holds pointers to the PCBs of the first and last processes waiting in the queue, and each PCB contains a pointer to the next PCB, and so on (see the sketch below). • There are two categories of queue: • Ready queue. • Device queue.
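A minimal sketch of such a queue of PCBs, assuming the hypothetical pcb_t structure from the previous sketch (with its next pointer); the names ready_queue_t, enqueue, and dequeue are illustrative, not taken from any real kernel.

#include <stddef.h>

typedef struct {
    pcb_t *head;   /* PCB of the first process waiting in the queue */
    pcb_t *tail;   /* PCB of the last process waiting in the queue  */
} ready_queue_t;

/* Append a process at the tail of the queue. */
void enqueue(ready_queue_t *q, pcb_t *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove and return the process at the head (NULL if the queue is empty). */
pcb_t *dequeue(ready_queue_t *q)
{
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}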
Schedulers • A process moves between various queues from its start to its end. The operating system has to select one process from a queue for execution; this selection is done by schedulers. • There are three types of scheduler: • Long-term scheduler • Short-term scheduler • Medium-term scheduler
Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue. It determines which programs are admitted to the system for processing, selecting processes from the job queue and loading them into memory for execution. • Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU (see the dispatch sketch below). • It is sometimes the only scheduler in a system.
The medium-term scheduler removes processes from memory and thus reduces the degree of multiprogramming. • After some time, a process can be reintroduced into memory and its execution continued where it left off. This scheme is called swapping. • Swapping may be necessary to improve the process mix, or because a change in memory requirements demands that memory be freed up.
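The short-term scheduler's selection step can be pictured using the earlier hypothetical sketches (pcb_t, ready_queue_t, dequeue, and the RUNNING state); schedule_next is an invented name, and taking the head of the ready queue (first-come, first-served) is only one of many possible policies.

/* Hypothetical short-term scheduler step, building on the earlier sketches. */
pcb_t *schedule_next(ready_queue_t *ready)
{
    pcb_t *next = dequeue(ready);   /* FCFS: take the first waiting PCB */
    if (next)
        next->state = RUNNING;      /* it now holds the CPU             */
    return next;
}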
Context Switch • When the CPU switches from one process to another, it saves the state of the old process into its PCB (Process Control Block) and then loads the saved state of the new process. This is known as a context switch. • A context switch is pure overhead, because the system does no useful work while switching, and its speed differs from one computer system to another. • The speed of a context switch depends on the memory speed and on the number of registers in the CPU. If the CPU has multiple register sets, the register values do not have to be copied into the PCB; the CPU simply switches to another register set for the new process. A more complex operating system also makes the context switch slower.
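Inside a kernel the saving and restoring of CPU state is done in assembly, but the idea can be imitated in user space with the POSIX ucontext calls (getcontext, makecontext, swapcontext). The sketch below is only an analogy for a kernel context switch: main saves its own state, switches to a second flow of control, and is later resumed exactly where it left off.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_other;   /* two saved execution states */

static void other_task(void)
{
    puts("other task: running after the switch");
    /* Returning ends this context; uc_link sends control back to ctx_main. */
}

int main(void)
{
    static char stack[64 * 1024];        /* stack for the second context   */

    getcontext(&ctx_other);              /* initialise the context          */
    ctx_other.uc_stack.ss_sp   = stack;
    ctx_other.uc_stack.ss_size = sizeof stack;
    ctx_other.uc_link          = &ctx_main;   /* where to go when it ends   */
    makecontext(&ctx_other, other_task, 0);

    puts("main: saving my state and switching");
    swapcontext(&ctx_main, &ctx_other);  /* save main's state, load other's */
    puts("main: back again, my state was restored");
    return 0;
}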
Thread • When we start executing a program, it becomes a process. In a time-sharing system, more than one process can exist in memory at a time. • In modern operating systems a process is further divided into multiple parts, and those parts are known as threads. • A thread, sometimes called a lightweight process (LWP), is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. • It shares with the other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. • A traditional (heavyweight) process has a single thread of control. If a process has multiple threads of control, it can do more than one task at a time.
Much of the software that runs on desktop PCs is multithreaded. An application is typically implemented as a separate process with several threads of control. • A web browser may have one thread displaying images or text while another thread retrieves data from the network. • As another example, a word processor may have one thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
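As a concrete sketch of such a design, the program below uses the POSIX threads API (pthread_create, pthread_join) to run a background task while the main thread continues with its own work; the worker function background_check and the file name are made up for illustration. Both threads share the same address space, which is why the worker can see the same data the main thread works on.

#include <pthread.h>
#include <stdio.h>

/* Background worker, e.g. a spell checker running alongside the UI thread. */
static void *background_check(void *arg)
{
    const char *doc = arg;
    printf("worker thread: checking \"%s\" in the background\n", doc);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Create a second thread sharing this process's address space. */
    pthread_create(&tid, NULL, background_check, "draft.txt");

    printf("main thread: still responding to keystrokes\n");

    pthread_join(tid, NULL);   /* wait for the worker to finish */
    return 0;
}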
Thread Benefits • Responsiveness: multithreading an application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser can still allow user interaction in one thread while an image is being loaded in another thread. • Resource sharing: by default, threads share the memory and the resources of the process to which they belong. The benefit of code sharing is that it allows an application to have several different threads of activity all within the same address space.
3. Economy: allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Maintaining a process is much more time-consuming than maintaining a thread. For example, in Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower. 4. Utilization of multiprocessor architectures: the benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor. A single-threaded process can run on only one CPU, no matter how many are available.
User Threads • User threads are supported above the kernel and are implemented by a thread library at user level. • The library provides support for thread creation, scheduling, and management with no support from the kernel, because the kernel is unaware of user-level threads. • All thread creation and scheduling are done in user space without the need for kernel intervention. • User-level threads are generally fast to create and manage.
User Threads (continued) • Drawback of user threads: if the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block, even if other threads are available to run within the application.
Kernel Threads • Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling, and management in kernel space. • Because thread management is done by the operating system, kernel threads are slower to create and manage than user-level threads. • Since the kernel is managing the threads, if a thread performs a blocking system call the kernel can schedule another thread in the application for execution.
Multithreading Models • Many systems provide support for both user and kernel threads, resulting in different multithreading models. • There are three common types of threading implementation: • Many-to-One Model • One-to-One Model • Many-to-Many Model
The many-to-one model maps many user-level threads to one kernel thread. • Thread management is done in user space, so it is efficient, but the entire process will block if a thread makes a blocking system call. • Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
The one-to-one model maps each user thread to a kernel thread. • It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. • It also allows multiple threads to run in parallel on multiprocessors. • The drawback of this model is that creating a user thread requires creating the corresponding kernel thread, and the overhead of creating kernel threads can burden the performance of an application.
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. • The number of kernel threads may be specific to either a particular application or a particular machine. • Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. • Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Threading Issues • (1) The fork and exec system calls • In a multithreaded program, the semantics of the fork and exec system calls change. Some UNIX systems have two versions of fork: one duplicates all the threads of the process, and the other duplicates only the thread that invoked the fork system call. • Which version to use depends entirely on the application: duplicating all threads is unnecessary if exec is called immediately after forking, because exec replaces the entire process image.
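A minimal single-threaded sketch of the fork-then-exec pattern mentioned above, using the standard UNIX calls fork, execlp, and waitpid; the command run by the child (ls -l) is just an example.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();             /* create a child process             */

    if (pid == 0) {
        /* Child: replace the whole process image, so duplicating every
           thread of the parent would have been wasted work anyway. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");           /* reached only if exec fails         */
        return 1;
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);      /* parent waits for the child to exit */
    } else {
        perror("fork");
    }
    return 0;
}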
(2) Cancellation • Thread cancellation is the task of terminating a thread before it has completed. • For example, a web page is often loaded in a separate thread; when the user presses the stop button in the browser, the thread loading the page is cancelled. • A thread that is to be cancelled is often referred to as the target thread. Cancellation of a target thread may occur in two different ways:
Asynchronous cancellation: one thread immediately terminates the target thread. • Deferred cancellation: the target thread periodically checks whether it should terminate, giving it an opportunity to terminate itself in an orderly fashion. • The difficulty with cancellation arises when resources have been allocated to a cancelled thread, or when a thread is cancelled while in the middle of updating data; this is especially troublesome with asynchronous cancellation. • The operating system will often reclaim system resources from a cancelled thread, but not necessarily all of them. • Therefore, cancelling a thread asynchronously may not free all necessary resources.
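A small sketch of deferred cancellation with POSIX threads: the worker checks for a pending cancellation request at a point of its own choosing with pthread_testcancel, and main requests cancellation with pthread_cancel. The loop body and timings are made up for illustration.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    /* Deferred cancellation is the default cancel type for POSIX threads. */
    for (;;) {
        /* ... do one unit of work, leaving shared data consistent ... */
        pthread_testcancel();   /* safe cancellation point chosen by the thread */
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);
    sleep(2);
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* wait until it has actually terminated     */
    puts("worker cancelled in an orderly fashion");
    return 0;
}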
(3) Signal Handling • A signal may be received either synchronously or asynchronously, depending upon the source and the reason for the event being signalled. • Whether a signal is synchronous or asynchronous, all signals follow the same pattern: • A signal is generated by the occurrence of a particular event. • A generated signal is delivered to a process. • Once delivered, the signal must be handled.
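As an illustration of that generate / deliver / handle pattern, the sketch below registers a handler for SIGINT with the standard sigaction call: pressing Ctrl-C generates the signal, the kernel delivers it to the process, and the handler runs. The handler name and messages are invented for the example.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* Step 3: once delivered, the signal is handled here. */
static void handle_sigint(int signo)
{
    (void)signo;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = handle_sigint;
    sigaction(SIGINT, &sa, NULL);   /* register the handler for SIGINT      */

    puts("press Ctrl-C to generate SIGINT...");
    while (!got_signal)
        pause();                    /* wait until a signal is delivered     */

    puts("SIGINT was delivered and handled");
    return 0;
}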