Summary: Distributed Process Scheduling
Prepared by: Jaya Kalidindi
Summary of Chapter 5
• A system performance model
• Static process scheduling
• Dynamic load sharing and balancing
• Distributed process implementation
• Real-time scheduling
A system performance model [2]
• Depicts the relationship among the algorithm, the scheduling mechanism, and the system architecture in order to describe interprocess communication.
• There are three basic types of models:
• Precedence process model (DAG): directed edges represent the precedence relationships among processes.
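As a rough illustration of the precedence process model, the sketch below builds a small task DAG and derives one valid execution order; the task names and durations are made-up values, not from the chapter.

# Illustrative sketch only: a precedence process model represented as a DAG.
# Task names and durations are made-up values.
from graphlib import TopologicalSorter

durations = {"A": 2, "B": 3, "C": 1, "D": 4, "E": 2}
# predecessors: task -> set of tasks that must finish before it starts
predecessors = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}}

order = list(TopologicalSorter(predecessors).static_order())
print("one valid execution order:", order)

# earliest finish times, assuming unlimited processors and no communication cost
finish = {}
for task in order:
    start = max((finish[p] for p in predecessors[task]), default=0)
    finish[task] = start + durations[task]
print("makespan:", max(finish.values()))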
Communication process model
• In this model, processes coexist and communicate synchronously.
• Edges in this model represent the need for communication between processes.
Disjoint process model
• In this model, processes run independently and complete in finite time.
• Processes are mapped to processors to maximize the utilization of the processors and minimize the turnaround time of the processes.
Speedup
• Speedup S = (RC / RP) × N × E, where:
• N: number of processes
• E: factor accounting for the efficiency loss when the algorithm is implemented on a real machine
• RC: relative concurrency
• RP: relative processing requirement
• Speedup depends on:
• Design and efficiency of the scheduling algorithm
• Architecture of the system
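Assuming the decomposition written above, a tiny worked example with made-up numbers shows how the factors combine.

# Made-up numbers, only to illustrate the decomposition S = (RC / RP) * N * E.
RC = 0.8    # relative concurrency of the concurrent algorithm
RP = 1.2    # relative processing requirement vs. the best sequential algorithm
N = 4       # number of processes
E = 0.75    # factor accounting for the efficiency loss on a real machine

ideal_speedup = (RC / RP) * N        # what the algorithm alone could achieve
real_speedup = ideal_speedup * E     # after scheduling and communication losses
print(ideal_speedup, real_speedup)   # about 2.67 and 2.0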
Static Process Scheduling
• Aims to find an optimal mapping of processes to processors.
• There are two extreme cases of work assignment.
• The mapping of processes is done before execution; once a process has started, it stays at its processor until completion and is never preempted.
• Its scheduling decisions are centralized and non-adaptive, which are the main drawbacks of static process scheduling.
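As a generic illustration of a static mapping decided before execution (a simple longest-processing-time-first heuristic, not the chapter's specific algorithm):

# Illustration only: longest-processing-time-first static assignment.
# Processes are mapped to processors once, up front, and never moved.
import heapq

def static_map(process_times, n_processors):
    loads = [(0, p) for p in range(n_processors)]    # (current load, processor id)
    heapq.heapify(loads)
    mapping = {}
    for proc, t in sorted(process_times.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)               # least-loaded processor so far
        mapping[proc] = p
        heapq.heappush(loads, (load + t, p))
    return mapping

print(static_map({"P1": 4, "P2": 2, "P3": 3, "P4": 1}, 2))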
Dynamic load sharing and balancing
• Load balancing can be defined as a technique to distribute work among many computers, processes, or other resources to achieve optimal resource utilization.
• A controller reduces process idling through load sharing, i.e., by having new work join the shortest queue, and equalizes queue sizes through load balancing.
• Further, processes can be allowed to move from a longer queue to a shorter queue through load redistribution.
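A minimal sketch of the join-the-shortest-queue idea; the queues are plain in-memory lists here, whereas in a real system they would live on separate nodes.

# Load sharing sketch: each new job joins the shortest queue.
from collections import deque

queues = [deque(), deque(), deque()]

def dispatch(job):
    min(queues, key=len).append(job)   # join the shortest queue

for j in range(10):
    dispatch(f"job-{j}")
print([len(q) for q in queues])        # queue lengths stay roughly equal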
Sender Initiated Algorithm
• It is activated by a sender that wishes to off-load some of its computation by migrating processes from a heavily loaded sender to a lightly loaded receiver.
• Transfer of a process from a sender to a receiver requires three basic decisions:
• Transfer policy: when does the node become a sender?
• Selection policy: how does the sender choose a process for transfer?
• Location policy: which node should be the target receiver?
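A hypothetical threshold-based sketch of these three policies; the threshold values, probe limit, and the Node class are illustrative assumptions, not from the slides.

# Hypothetical sketch of a sender-initiated policy with random probing.
import random

SENDER_THRESHOLD = 4      # transfer policy: node becomes a sender above this load
RECEIVER_THRESHOLD = 2    # a probed node accepts only if its load is below this
PROBE_LIMIT = 3           # location policy: number of random nodes to probe

class Node:
    def __init__(self, name):
        self.name, self.queue = name, []

def try_offload(sender, others):
    if len(sender.queue) <= SENDER_THRESHOLD:              # transfer policy
        return None
    job = sender.queue[-1]                                 # selection policy: newest job
    for node in random.sample(others, min(PROBE_LIMIT, len(others))):
        if len(node.queue) < RECEIVER_THRESHOLD:           # location policy
            sender.queue.pop()
            node.queue.append(job)
            return node.name
    return None                                            # no receiver found; keep the job

a, b, c = Node("a"), Node("b"), Node("c")
a.queue = [f"job-{i}" for i in range(6)]
print(try_offload(a, [b, c]))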
Receiver Initiated Algorithm
• These are pull models in which a receiver can pull a process from another node to its own site for execution.
• They are more stable than sender-initiated algorithms.
• At high system load, process migrations are few and a sender can be found easily, so receiver-initiated algorithms perform better than sender-initiated algorithms.
• Both algorithms can be combined, switching between them depending on RT and ST (the receiver and sender thresholds).
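The pull side can be sketched in the same style; again the thresholds and data structures are illustrative assumptions.

# Hypothetical sketch of a receiver-initiated (pull) policy.
import random

RECEIVER_THRESHOLD = 1    # node starts pulling when its own load drops this low
SENDER_THRESHOLD = 3      # a probed node gives up work only if its load exceeds this
PROBE_LIMIT = 3

def try_pull(my_queue, other_queues):
    if len(my_queue) > RECEIVER_THRESHOLD:                 # not idle enough to pull
        return False
    for q in random.sample(other_queues, min(PROBE_LIMIT, len(other_queues))):
        if len(q) > SENDER_THRESHOLD:                      # found a heavily loaded sender
            my_queue.append(q.pop())                       # migrate one job here
            return True
    return False

busy, idle = [f"job-{i}" for i in range(5)], []
print(try_pull(idle, [busy]), len(busy), len(idle))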
Distributed Process Implementation
Depending on how the request messages are interpreted, there are three main application scenarios:
• Remote Service: the message is interpreted as a request for a known service at the remote site.
• Remote Execution: the message contains a program to be executed at the remote site.
• Process Migration: the message represents a process being migrated to a remote site, where it continues execution.
Remote Service
Remote service can be implemented at three different software levels:
• As remote procedure calls at the language level.
• As remote commands at the operating system level.
• As interpretive messages at the application level.
Remote Execution
• Unlike remote service, whose purpose is to access a known service on the remote host, a remotely executed process maintains the view of its originating system.
• Some implementation issues:
• Load-sharing algorithms.
• Location independence.
• System heterogeneity.
• Protection and security.
Load-Sharing Algorithm
• Each process server is responsible for maintaining load information.
• The list of participating hosts is broadcast, or the selection can be handled by a centralized broker process.
• Once a remote host is selected:
• The client process server indicates the resource requirements to the process server at the remote site.
• If the client is authenticated and its resource requirements can be met, the server grants permission for remote execution.
• The transfer of the code image follows, and the server creates the remote process and its stub.
• The client initializes the process forked at the remote site.
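A toy sketch of this handshake; the class and method names below are hypothetical placeholders, not a real API.

# Hypothetical sketch of the remote-execution handshake described above.
class RemoteProcessServer:
    def __init__(self, free_memory_mb):
        self.free_memory_mb = free_memory_mb

    def request(self, client_id, needed_mb, authenticated):
        # grant permission only if the client is authenticated and resources fit
        return authenticated and needed_mb <= self.free_memory_mb

    def create_process(self, code_image):
        # server side: create the remote process and a stub for it
        return {"process": code_image, "stub": f"stub-for-{code_image}"}

server = RemoteProcessServer(free_memory_mb=512)
if server.request("client-1", needed_mb=128, authenticated=True):
    remote = server.create_process("a.out")   # after the code image transfer
    print("client initializes", remote["process"], "via", remote["stub"])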
Location Independence
• Processes created by remote execution require coordination to accomplish a common task.
• So it is necessary to support a logical view for these processes.
• Each remote process is represented by an agent process at the originating host.
• It appears as though the process is running on a single machine.
System Heterogeneity
• If remote execution is invoked on a heterogeneous host, the program must be recompiled.
• This is an overhead issue.
• Solution: use a canonical, machine-independent intermediate language for program execution.
Process Migration
• The message represents a process being migrated to the remote site, where it continues execution.
• Process migration facility:
• State and context transfer: transfers the computation state information and the communication state information.
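Purely as an illustration of what such a transfer might carry, the fields below separate computation state from communication state; the names are hypothetical.

# Illustrative only: computation state vs. communication state in a migration message.
from dataclasses import dataclass, field

@dataclass
class MigrationState:
    registers: dict = field(default_factory=dict)          # computation state
    stack_image: bytes = b""                                # computation state
    pending_messages: list = field(default_factory=list)   # communication state
    open_links: list = field(default_factory=list)          # communication state

msg = MigrationState(registers={"pc": 0x400, "sp": 0x7FF0},
                     pending_messages=["reply from server-A"])
print(msg)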
Real-Time Scheduling
• Systems that ensure certain actions are taken within specified time constraints are called real-time systems.
• Real-time scheduling can be classified as:
• Static vs. dynamic
• Preemptive vs. non-preemptive
• Global vs. local
Rate Monotonic
• It is easy to implement.
• It sorts the tasks by the lengths of their periods.
• It also makes very good priority assignments.
• Rate monotonic is an optimal priority assignment algorithm.
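A small sketch of rate-monotonic priority assignment with made-up tasks; the utilization check at the end is the classic Liu and Layland sufficient test, included only for context.

# Rate-monotonic sketch: shorter period => higher priority. Task values are made up.
tasks = [("sensor", 10, 2), ("control", 20, 5), ("logger", 50, 8)]  # (name, period, wcet)

for prio, (name, period, wcet) in enumerate(sorted(tasks, key=lambda t: t[1])):
    print(f"priority {prio}: {name} (period={period}, wcet={wcet})")

n = len(tasks)
utilization = sum(wcet / period for _, period, wcet in tasks)
bound = n * (2 ** (1 / n) - 1)                 # Liu & Layland sufficient bound
print("schedulable by the sufficient test:", utilization <= bound)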
• Deadline Monotonic: in a real-time system, some tasks need to complete execution a short time after being requested.
• Earliest Deadline First: applies dynamic priority scheduling to achieve better CPU utilization (see the sketch below).
• Real-Time Synchronization: a set of tasks that cooperate to achieve a goal need to share information and resources, in other words synchronize with other tasks.
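A minimal EDF sketch with illustrative deadlines, using a priority queue ordered by absolute deadline.

# EDF sketch: at each step, run the ready task with the earliest absolute deadline.
import heapq

ready = []                                   # min-heap ordered by absolute deadline
heapq.heappush(ready, (30, "task-A"))
heapq.heappush(ready, (12, "task-B"))
heapq.heappush(ready, (25, "task-C"))

while ready:
    deadline, task = heapq.heappop(ready)    # earliest deadline first
    print(f"run {task} (deadline {deadline})")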
References
[1] http://en.wikipedia.org/wiki/
[2] Randy Chow and Theodore Johnson, "Distributed Operating Systems & Algorithms", Addison Wesley. (All diagrams.)
[3] Dejan S. Milojicic, Fred Douglis, Yves Paindaveine, Richard Wheeler, and Songnian Zhou, "Process Migration", ACM Computing Surveys (CSUR), Volume 32, Issue 3 (September 2000).
[4] S. Cheng, J. A. Stankovic, and K. Ramamritham, "Scheduling Algorithms for Hard Real-Time Systems: A Brief Survey", pages 6-7 in Hard Real-Time Systems: Tutorial, IEEE (1988).
[5] Rajgopal Kannan, "Distributed Process Scheduling: Issues in Distributed Scheduling", Advanced Operating Systems, Louisiana State University. www.csc.lsu.edu/
THANK YOU