Operating System Concepts and Techniques Lecture 6: Scheduling-2* M. Naghibzadeh
Reference: M. Naghibzadeh, Operating System Concepts and Techniques, First ed., iUniverse Inc., 2011. To order: www.iUniverse.com, www.barnesandnoble.com, or www.amazon.com
* If your timetable does not allow covering all lectures, you can skip this lecture.
Analytical approach to scheduling • [Figure: users 1 through n submitting requests to a single processor] • Modelling is one scientific approach to analytical investigation • The following is a simple model of a uniprocessor multiuser computing system • It is based on queuing methodology
Interarrival distribution • Arrival and service patterns are often assumed to obey a Poisson distribution • It can be shown that if the following two conditions are satisfied, the arrival process is Poisson and the interarrival time is exponentially distributed • Arrival of new requests is independent of the history of the system and of the current status of the queue • We can always define a time interval dt so small that the probability of more than one arrival within any period (t, t+dt) is negligible; the probability of exactly one arrival in such an interval is λ·dt, where λ is a constant called the arrival rate
Interarrival distribution… To show this, let P0(t) represent the probability that there is no arrival within the interval (0, t). Then P0(t+dt) = Pr[N(0, t+dt) = 0] = Pr[N(0, t) = 0 AND N(t, t+dt) = 0]. From Assumption 1, Pr[N(0, t) = 0 AND N(t, t+dt) = 0] = Pr[N(0, t) = 0] Pr[N(t, t+dt) = 0] = P0(t)(1 - λdt). Therefore, P0(t+dt) = P0(t)(1 - λdt), or (P0(t+dt) - P0(t)) / dt = -λP0(t) (6.3). The left side of equation (6.3) is the definition of the derivative when dt approaches zero, thus: dP0(t)/dt = -λP0(t)
Interarrival distribution… Or, from differential equations, P0(t) = e^(-λt + c) (6.4). But P0(0) = 1; replacing t by zero in (6.4), we get 1 = e^c, which leads to c = 0, hence P0(t) = e^(-λt). Or F(t) = 1 - P0(t) = 1 - e^(-λt). The probability density function (pdf) of the above distribution is: f(t) = λe^(-λt), t > 0, λ > 0 (6.6)
Interarrival distribution… The expected value of this distribution is 1/λ. Hence, the inter-arrival time is exponential and we expect to receive a new arrival every 1/λ time units. A similar discussion shows that the system completes serving a request every 1/μ time units, where μ is the service rate. This argument shows that many aspects of operating systems can be investigated analytically.
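As a quick illustration of this result (not from the reference, and with an arbitrary arrival rate), the following Python sketch samples exponential interarrival times and checks that their observed mean approaches 1/λ:

```python
import random

def simulate_interarrivals(arrival_rate, n_samples=100_000):
    """Sample interarrival times from an exponential distribution with the
    given arrival rate (lambda) and return their mean, which should be
    close to 1/lambda as derived above."""
    samples = [random.expovariate(arrival_rate) for _ in range(n_samples)]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    lam = 2.0  # hypothetical rate: 2 requests per time unit
    mean = simulate_interarrivals(lam)
    print(f"observed mean interarrival time: {mean:.4f}")
    print(f"expected 1/lambda:               {1 / lam:.4f}")
```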
Multiprocessor scheduling • Processor types • Symmetric Multi-Processor (SMP); homogeneous system • asymmetric multiprocessor; heterogeneous system • Processor Affinity • hard affinity • soft affinity • Synchronization Frequency • Independent parallelism • Coarse-grain parallelism • Fine-grain parallelism • Assignment • static • dynamic
Multiprocessor schedulers • First-Come-First-Served • Shortest Job Next • Shortest Remaining Time Next • Fair-Share Scheduling • Round Robin • Gang Scheduling, i.e., coscheduling
SMP Process Scheduling in Linux • For Symmetric Multi-Processor (SMP) systems • Usually assigns the processor that the process used last time • There are preemptable and non-preemptable processes • Preempt if the new process has higher priority and the hardware-cache rewrite time is less than the time quantum • Respects processor affinity • Uses the SCHED_FIFO, SCHED_RR, and SCHED_OTHER policies
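The following is a minimal, hypothetical sketch of the soft-affinity idea above, not the actual Linux scheduler code: a process is preferably dispatched to the CPU it ran on last, so that its hardware-cache contents can be reused. The names and the tie-breaking rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    last_cpu: int | None = None  # CPU the process ran on last, if any

def pick_cpu(proc: Process, idle_cpus: set[int]) -> int:
    """Soft-affinity dispatch sketch: prefer the CPU the process used last
    so its cached data can be reused; otherwise take any idle CPU."""
    if proc.last_cpu is not None and proc.last_cpu in idle_cpus:
        return proc.last_cpu
    return min(idle_cpus)  # arbitrary choice among the remaining idle CPUs

# Example: process 7 last ran on CPU 2, which is idle again.
p = Process(pid=7, last_cpu=2)
print(pick_cpu(p, idle_cpus={0, 2, 3}))  # -> 2
```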
Real-time scheduling • Hard real-time system • Do it in time or catastrophe • Soft real-time system • No catastrophe, but inaccuracy • Request period • Periodic • Aperiodic • Sporadic • Most common hard real-time systems are periodic
Rate Monotonic (RM) • Periodic tasks • Static priority • A task with a higher request rate, i.e., a shorter request interval, is assigned a higher priority • Safety verification • Safe if U ≤ Ulub = n(2^(1/n) - 1), where U = Σ ei/pi is the total processor utilization (ei is the execution time and pi the request period of task i) • Optimal static-priority policy for single processors
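A small sketch of the utilization-bound test above; the task set is hypothetical, and tasks are given as (execution time, period) pairs:

```python
def rm_utilization_test(tasks):
    """Sufficient (not necessary) RM safety test: a set of n periodic tasks
    is safe if U = sum(e_i / p_i) <= n * (2**(1/n) - 1).
    `tasks` is a list of (execution_time, period) pairs."""
    n = len(tasks)
    u = sum(e / p for e, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

# Hypothetical task set
u, bound, safe = rm_utilization_test([(1, 4), (2, 6), (1, 8)])
print(f"U = {u:.3f}, bound = {bound:.3f}, safe = {safe}")  # U ≈ 0.708 <= 0.780
```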
Earliest Deadline First (EDF) • Periodic tasks • Dynamic priority • Works like this: • If the system has just started, pick the request with the closest deadline • When the execution of a request is completed, the request with the closest deadline is picked from the ready queue • If the processor is running a process and a new request with a closer deadline arrives, a process switch takes place • Optimal dynamic-priority policy for single processors
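A minimal sketch of the EDF selection step, assuming (as an illustration) that the ready queue holds (absolute deadline, task name) pairs kept in a heap so the closest deadline is always at the front:

```python
import heapq

def edf_pick_next(ready_queue):
    """Remove and return the ready request with the closest absolute deadline.
    `ready_queue` is a heapified list of (deadline, task_name) pairs."""
    return heapq.heappop(ready_queue)

# Hypothetical ready queue of (deadline, task) pairs
queue = [(12, "A"), (7, "B"), (9, "C")]
heapq.heapify(queue)
deadline, task = edf_pick_next(queue)
print(f"run {task} (deadline {deadline})")  # -> run B (deadline 7)
```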
Least Laxity First (LLF) • Periodic tasks • Dynamic priority • Works like this: • The laxity of a request at any given moment is the time span it can tolerate before it must be picked for execution, i.e., L = D - T - (E - C), where D is the deadline, T the current time, E the total execution time, and C the execution time completed so far • Always run the task with the least laxity • Disadvantage: for two processes with equal and least laxity, the processor has to continuously switch between them, which is impractical • Optimal dynamic priority
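A minimal sketch of the laxity formula and selection rule above, with a hypothetical task representation (name, D, E, C):

```python
def laxity(deadline, now, exec_time, completed):
    """Laxity as defined above: L = D - T - (E - C)."""
    return deadline - now - (exec_time - completed)

def pick_least_laxity(tasks, now):
    """Run the task whose laxity is currently smallest.
    `tasks` is a list of (name, D, E, C) tuples (illustrative format)."""
    return min(tasks, key=lambda t: laxity(t[1], now, t[2], t[3]))

tasks = [("A", 20, 6, 2), ("B", 15, 5, 1)]
print(pick_least_laxity(tasks, now=8))  # A has laxity 8, B has laxity 3 -> B
```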
Summary • A scheduling strategy is usually designed to attain a defined objective, although multi-objective strategies are also possible • Average turnaround time (ATT) may be used to estimate the expected length of time in which a request is completed after being submitted to the system; this can be a good measure of performance • Based on ATT, different scheduling algorithms were investigated • In addition, I/O scheduling was studied in this chapter and schedulers such as FIFO, LIFO, SSTF, Scan, and C-Scan were introduced
Find out • How average response time is computed in a single-processor multiprogramming system • How many symmetric processors your laptop supports • The reasons behind processor affinity • Sample fine-grain parallelism applications • The reasons behind gang scheduling • Actual hard and soft real-time applications • Disadvantages of the earliest deadline first scheduling strategy • Systems which are not safe under RM but are safe under relative urgency (RU)