

  1. Today • Welcome • Reading: Chapter 2, 6 MOS; Lottery Scheduling paper • P1: Process Scheduling, Lottery Scheduling • Break • P2: Project #1, Deadlock (if time) • End early at ~3pm (have to run) • Away next week – Andy will hold extra office hours

  2. CSci 5103 Operating Systems: Process Scheduling

  3. Process (CPU) Scheduling • We’ve talked a little bit about thread scheduling • Similar concept • Selects from among the processes in memory that are ready-to-run and allocates the CPU to one of them. • Scheduling can be nonpreemptive or preemptive • Process alternates between CPU bursts (executing) and I/O bursts (blocked) • Q: suppose process is multi-threaded, how is scheduling done then?

  4. Process (CPU) Scheduling • Q: suppose process is multi-threaded, how is scheduling done then? • Pick process to run • Pick threads to run within it • If kernel threads, OS should pick only those associated with that process • If user threads, user library picks

  5. Different Scheduling Criteria • CPU utilization – keep the CPU as busy as possible • Throughput – # of processes that complete their execution per time unit – maximize this • Waiting time – total amount of time a process has been waiting in the ready queue – minimize this • Turnaround time – amount of time to execute a particular process (waiting + actual exec time) – minimize this • Response time – amount of time from when a request was submitted until the first response is produced, not final output – minimize this

  6. First-Come, First-Served (FCFS) Scheduling • Example: Process / Burst Time • P1: 24 • P2: 3 • P3: 3 • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: | P1 (0–24) | P2 (24–27) | P3 (27–30) | • Waiting time: P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17

  7. FCFS Scheduling (Cont.) • Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: | P2 (0–3) | P3 (3–6) | P1 (6–30) | • Waiting time: P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3 • Much better than the previous case. • Avoids the convoy effect: short processes stuck behind a long process. • What about avg turnaround time? Relationship?
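The two FCFS orderings above can be checked with a tiny helper. This is an illustrative sketch, not part of the slides; the name `fcfs_waiting` is invented here, and it assumes all processes arrive at time 0.

```python
def fcfs_waiting(order, bursts):
    """Average waiting time under FCFS for a given service order (all arrive at t=0)."""
    t, total = 0, 0
    for pid in order:
        total += t          # this process waited until now
        t += bursts[pid]    # then runs its full burst to completion
    return total / len(order)

bursts = {"P1": 24, "P2": 3, "P3": 3}
print(fcfs_waiting(["P1", "P2", "P3"], bursts))  # 17.0
print(fcfs_waiting(["P2", "P3", "P1"], bursts))  # 3.0
```

Same three bursts, same total work, but the long job at the head of the line makes average waiting almost six times worse.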

  8. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. • Two schemes: • nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst. • preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). • SJF is optimal – gives minimum average waiting time for a given set of processes. • Sounds great – problems? • fairness/starvation; know SRTF!

  9. Example of Non-Preemptive SJF • Process / Arrival Time / Burst Time • P1: 0.0, 7 • P2: 2.0, 4 • P3: 4.0, 1 • P4: 5.0, 4 • SJF (non-preemptive) Gantt chart: | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) | • Waiting time: P1 = 0; P2 = 6; P3 = 3; P4 = 7 • Average waiting time: (0 + 6 + 3 + 7)/4 = 4

  10. Example of Preemptive SJF • Process / Arrival Time / Burst Time • P1: 0.0, 7 • P2: 2.0, 4 • P3: 4.0, 1 • P4: 5.0, 4 • Gantt chart: | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) | • Waiting time: P1 = 9; P2 = 1; P3 = 0; P4 = 2 • Average waiting time: (9 + 1 + 0 + 2)/4 = 3 • Looks great – any problems? • context switches, greater risk of starvation
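The preemptive-SJF (SRTF) numbers on this slide can be reproduced with a small unit-step simulation. An illustrative sketch, not part of the slides; `srtf` and its argument layout are invented here, and it assumes some process is ready at every tick.

```python
def srtf(procs):
    """Unit-step SRTF simulation. procs: pid -> (arrival, burst).
    Returns pid -> waiting time (completion - arrival - burst)."""
    remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
    t, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        # run the arrived process with the shortest remaining time for one tick
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            del remaining[pid]
            arrival, burst = procs[pid]
            waiting[pid] = t - arrival - burst
    return waiting

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
print(srtf(procs))  # {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}
```

The average, (9 + 1 + 0 + 2)/4 = 3, matches the slide.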

  11. Determining Length of Next CPU Burst • Can only estimate the length. • Can be done by using the length of previous CPU bursts, with exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the most recent measured burst and τ(n) is the previous estimate. The recent past is more heavily weighted.
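The exponential-averaging estimator is a couple of lines of code. A minimal sketch, assuming a weight α = 1/2; the name `update_estimate` is made up here.

```python
def update_estimate(tau, t, alpha=0.5):
    """Next-burst estimate: tau' = alpha*t + (1 - alpha)*tau.
    Larger alpha weights the most recent burst t more heavily."""
    return alpha * t + (1 - alpha) * tau

tau = 10.0                      # initial guess for the next burst length
for t in [6, 4, 6, 4]:          # observed CPU bursts
    tau = update_estimate(tau, t)
print(tau)  # 5.0 -- the stale initial guess has mostly decayed away
```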

  12. Accuracy? • For long-running jobs, the past is a pretty good predictor: a job that has been running for time T is likely to run for another T. • Will work great for regular patterns like: for each record in the file { get next file record; compute on the record } • What about responsive, think-time applications?

  13. Priority Scheduling • A priority number (integer) is associated with each process • The CPU is allocated to the process with the highest priority (smallest integer => highest priority). • SJF is priority scheduling where priority is the predicted next CPU burst time. • Problems with priorities? • Starvation – low-priority processes may never execute. • priority inversion • One solution => Aging – as time progresses, increase the priority of the process.

  14. Round Robin (RR) • Each process gets a small unit of CPU time (time quantum), e.g. 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. • If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. • Performance • q large => behaves like which scheduling algorithm? • FCFS • q small => more context switches -> q must be large with respect to context-switch cost, otherwise overhead is too high.

  15. Example: RR with Time Quantum = 20 • Process / Burst Time • P1: 53 • P2: 17 • P3: 68 • P4: 24 • The Gantt chart is: | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) | • Compare to SJF: avg turnaround time and response • Typically, higher average turnaround than SJF, but better response. • 113 vs. 79 (not incl. context-switching overhead)
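The RR turnaround number on this slide can be checked with a short simulation (context-switch cost ignored, everything arriving at t = 0). A sketch, not starter code; `round_robin` is a made-up name.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR; bursts: pid -> burst length. Returns pid -> completion time
    (equal to turnaround time when all processes arrive at t = 0)."""
    ready = deque(bursts.items())
    t, done = 0, {}
    while ready:
        pid, rem = ready.popleft()
        run = min(quantum, rem)
        t += run
        if rem > run:
            ready.append((pid, rem - run))  # preempted: back of the queue
        else:
            done[pid] = t
    return done

done = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
print(done)                            # {'P2': 37, 'P4': 121, 'P1': 134, 'P3': 162}
print(sum(done.values()) / len(done))  # 113.5 -- the slide's "113"
```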

  16. How a Smaller Time Quantum Increases Context Switches

  17. Turnaround Time Varies With The Time Quantum • As the quantum increases: what happens to response time?

  18. Policy versus Mechanism • Separate what from how • priority scheduling is the mechanism; how to set priorities => policy • RR is the mechanism; how to set quanta => policy • Other examples? • upcall/"need a processor" => • preempt a virtual processor held by a process • which process, which virtual processor

  19. Multilevel Queue Scheduling • Ready queue is partitioned into separate queues, e.g.: foreground (interactive) – I/O-bound; background (batch) – CPU-bound • Each queue has its own scheduling algorithm, e.g. foreground – RR (why?); background – FCFS • Scheduling must also be done between the queues. • Fixed-priority scheduling; i.e., serve all from foreground, then from background. Problem? • starvation. • Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground jobs in RR, 20% to background jobs in FCFS

  20. Multilevel Queue Scheduling

  21. Multilevel Feedback Queue • A process can move between the various queues; aging can be implemented this way. Feedback -> move between queues • Multilevel-feedback-queue scheduler defined by the following parameters: • number of queues • scheduling algorithm for each queue • method used to determine when to upgrade a process • method used to determine when to demote a process • method used to determine which queue a process will enter when that process begins

  22. Example of Multilevel Feedback Queue • Three queues: • Q0 – time quantum 8 milliseconds • Q1 – time quantum 16 milliseconds • Q2 – FCFS • Priority rule: execute all in Q0, then Q1, then Q2; preempt if necessary; new jobs enter Q0 • Scheduling: suppose a job requires 24 milliseconds, but this is not known to the OS • A new job enters queue Q0 (why?). When it gains the CPU, it receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. • At Q1 the job is again served and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

  23. MLFQ • What is the goal of MLFQ in this example? • Good response for short interactive jobs • Fairness for long-running jobs (at least with respect to each other)
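The 24 ms job's path through the three queues can be traced mechanically. An illustrative sketch under the slide's parameters (Q0 = 8 ms, Q1 = 16 ms, Q2 = FCFS); `mlfq_trace` is invented here and ignores competing jobs.

```python
def mlfq_trace(burst, quanta=(8, 16)):
    """Return (queue level, time granted) slices for one job with no competition.
    The job demotes one level each time it exhausts a quantum; the last queue is FCFS."""
    slices, remaining = [], burst
    for level, q in enumerate(quanta):
        run = min(q, remaining)
        slices.append((level, run))
        remaining -= run
        if remaining == 0:
            return slices
    slices.append((len(quanta), remaining))  # FCFS queue: runs to completion
    return slices

print(mlfq_trace(24))  # [(0, 8), (1, 16)] -- finishes in Q1, as on the slide
print(mlfq_trace(40))  # [(0, 8), (1, 16), (2, 16)] -- a long job sinks to FCFS
```

Short jobs finish in the high-priority queues (good response); long jobs end up together in Q2 (fair to each other).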

  24. Lottery Paper

  25. Scheduling Issues • Context • multiple scarce resources: CPU, I/O bandwidth, memory, locks • concurrently executing tasks • resource requests of varying importance and characteristics • Quality of Service • time-scales for responsiveness differ (think about an editor, browser, mpeg player, or simulation task …) • until now, what is the most responsive scheduling policy? – why doesn't it achieve this objective?

  26. Conventional Scheduling • Priority Scheduling • absolute control (but crude) • does p=1 vs. p=2 mean p=1 always gets the CPU or simply 2/3? • Problems with priorities • often ad hoc • unable to control service rates • possibility of starvation, inversion, …

  27. Solution: Lottery Scheduling • Type of proportional share scheduling • resource consumption rate proportional to share allocated • coupled with notion of scheduling quanta • Flexible control over service rates • current schedulers are rigid • example? • Modular Abstraction • multiple resource management policies enabled by tickets + lottery mechanism

  28. Lottery Scheduling Basics • Randomized mechanism with statistical guarantees • Lottery tickets • encapsulate resource rights • issued in different amounts • Lotteries • randomly select winning ticket • grant resource to client holding winning ticket

  29. Example Lottery 5 tasks or processes

  30. Lottery Scheduling Advantages • Probabilistic guarantees • n lotteries, client holds t tickets, T total tickets • Probability of winning a lottery? • p = t/T • amount of resource after n lotteries? • E[wins] = np • response time inversely proportional to ticket allocation expected # of lotteries n that client must wait to win: • E[n] = 1/p = T/t • How often are lotteries held?
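The p = t/T guarantee is easy to see empirically. A sketch of the core lottery draw (names invented here, not the paper's code); over many lotteries a client holding 3 of 10 tickets wins about 30% of the time.

```python
import random

def hold_lottery(tickets, rng):
    """tickets: client -> ticket count. Draw a uniform winning ticket in [0, T)."""
    draw = rng.randrange(sum(tickets.values()))
    for client, count in tickets.items():
        if draw < count:
            return client
        draw -= count

rng = random.Random(42)                    # fixed seed for a repeatable demo
tickets = {"short_job": 3, "long_job": 7}  # p = 3/10 for short_job
wins = sum(hold_lottery(tickets, rng) == "short_job" for _ in range(10_000))
print(wins / 10_000)  # close to 0.3
```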

  31. Lottery Scheduling Advantages • Proportional-share fairness • direct control over service rates • Supports dynamic environments • immediately adapts to changes • fair chance to win each allocation • No starvation • hold a non-zero # of tickets

  32. Flexible Resource Management • Ticket transfers • explicit transfer between clients • when is this useful? • Blocking, prevent priority inversion • Ticket inflation/deflation • client creates/removes tickets • violates modularity and load insulation • convenient among mutually trusting clients: no communication is needed (unlike ticket transfer)

  33. Flexible Resource Management • Compensation tickets • if a client doesn't use its full quantum, its tickets are inflated in value … to ensure the rate of consumption guaranteed by its share
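Concretely: a client that consumed only a fraction f of its quantum has its tickets inflated by 1/f until it next wins, so its expected consumption rate still matches its ticket share. A made-up one-line helper to illustrate:

```python
def compensation_factor(used, quantum):
    """Ticket-value inflation for a client that ran `used` out of its `quantum`."""
    f = used / quantum          # fraction of the quantum actually consumed
    return 1 / f                # e.g. using 1/5 of the quantum -> tickets worth 5x

print(compensation_factor(2, 10))  # 5.0
```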

  34. Ticket Currencies • Tickets denominated in currencies • Modular resource management • locally contain effects of inflation • isolates loads across logical trust boundaries

  35. Currency Implementation • Suppose task2 is blocked … what is the value of task1's funding now?

  36. Kernel Implementation • Objects: ticket, currency • Operations • create/destroy ticket, currency • fund/unfund currency • compute value of ticket
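The "compute value of ticket" operation bottoms out in the base currency: a currency is worth the sum of the base values of the tickets funding it, and each ticket issued in that currency gets a proportional share. A recursive sketch; the data layout and the example amounts are invented here.

```python
# currency -> (tickets issued in it, list of (amount, currency) tickets funding it)
currencies = {"alice": (300, [(3000, "base")])}

def ticket_value(amount, currency):
    """Base value of a ticket of `amount` denominated in `currency`."""
    if currency == "base":
        return amount                       # base tickets have face value
    issued, backing = currencies[currency]
    funding = sum(ticket_value(a, c) for a, c in backing)
    return amount * funding / issued        # proportional share of the funding

print(ticket_value(100, "alice"))  # 1000.0 -- 100 of alice's 300 tickets, backed by 3000 base
```

The recursion is what localizes inflation: issuing more tickets inside "alice" dilutes only alice's clients, never the base currency.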

  37. Relative Rates

  38. Fairness Over Time Looks a bit bumpy …

  39. Query Processing Rates Server has no tickets, clients transfer their tickets to it for a query

  40. Currencies Insulate Loads local inflation of B2 does not impact A

  41. Impressions?

  42. Impressions? • +: isolation between service classes • useful to control service rates for tasks within an application • ticket transfer • -: how would tickets be allocated across different applications? • this is about mechanism – what about policy?

  43. BREAK

  44. Project #1 Simulate a simple CPU Build kernel support for managing such processes - processes will be for a very simple “machine” - processes can make system calls (I/O, exit, create a process) All about process mgmt – create, terminate, schedule, bookkeep processes

  45. Project #1 • Machine has two registers, PC (program counter) and VC (value register); no memory operations • S x: store x to VC • A x: add x to VC • D x: decrement VC by x • C <priority> <file-name>: create a process with the given priority using file-name (a code file) • E: terminate the current process • I <dev-class> (B/C): I/O syscall request to the given device class • P: privileged instruction (exception if mode is not supervisor) – just a no-op
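A fetch-execute loop for the non-trapping instructions might look like the sketch below. This is one possible reading of the spec, not starter code; `run_user` is an invented name, and C, I, and P are shown only as traps back to the kernel.

```python
def run_user(program):
    """Toy CPU: PC selects the next instruction, VC holds a value; no memory.
    Returns the final VC; C/I/P would transfer control to the kernel in the real project."""
    pc, vc = 0, 0
    while pc < len(program):
        op, *args = program[pc].split()
        pc += 1
        if op == "S":
            vc = int(args[0])        # store x to VC
        elif op == "A":
            vc += int(args[0])       # add x to VC
        elif op == "D":
            vc -= int(args[0])       # decrement VC by x
        elif op == "E":
            break                    # terminate the current process
        else:
            raise NotImplementedError(f"{op}: syscall/privileged -> kernel")
    return vc

print(run_user(["S 5", "A 3", "D 2", "E"]))  # 6
```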

  46. Project #1 • Kernel: • Maintain list of processes and their status • Per-process accounting • I/O device status • Process "machine" state (PC, VC) – store in kernel or in process VAS (if you want) • Scheduling: RR, lottery scheduling • Boot time: set up h/w, run initial user program, loop until no more processes to schedule

  47. Project #1 • Components: OS kernel, CPU, I/O devices • OS->CPU: run user program; run driver code (some P instrs) • CPU->OS: P instr (if exception), C, E, I syscalls • Device->OS: I/O completes via interrupt/signal • OS->Device: schedule future I/O • Goal: OS to create, context-switch, terminate, and run user processes and deal with the hardware

  48. Project #1 CPU: Mode bit PC: fetch instruction at this address, execute, …. VC: value register to be updated

  49. Project #1 Process: Kernel state VAS: text segment local PC, local value (~ VC) local PC starts at 0

  50. Project #1 • I instruction: yes, it is kludgey…. Two parts: • "execute" a fixed # of P instrs (~ driver) • do normal kernel bookkeeping: set up I/O tables • I/O table – process P is waiting on an I/O completion event at future time T
