UNIX Scheduling
Traditional UNIX Scheduling • Scheduling algorithm objectives • Provide good response time for interactive users • Ensure that low-priority background jobs do not starve • Scheduling algorithm implementation • Multilevel feedback using round robin within each of the priority queues • One-second preemption: a running process that does not block or complete within one second is preempted • Priority based on process type and execution history • Priorities are recomputed once per second • Base priority divides all processes into fixed bands of priority levels • Bands are used to optimize access to block devices (e.g., disk) and to allow the OS to respond quickly to system calls
Traditional UNIX Scheduling (cont.) • CPU_j(i) = CPU_j(i−1) / 2 • P_j(i) = Base_j + CPU_j(i) / 2 + nice_j • CPU_j(i) = measure of processor utilization by process j through interval i • P_j(i) = priority of process j at the beginning of interval i; lower values equal higher priorities • Base_j = base priority of process j • nice_j = user-controllable adjustment factor
Traditional UNIX Scheduling (cont.) • Bands in decreasing order of priority • Swapper • Block I/O device control • File manipulation • Character I/O device control • User processes • Goals • Provide the most efficient use of I/O devices • Within the user-process band, use the execution history to penalize processor-bound processes in favor of I/O-bound processes • Example of process scheduling • Processes A, B, and C are created at the same time with base priorities of 60 • The clock interrupts the system 60 times a second and increments the CPU counter of the running process
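To make the example concrete, here is a minimal, illustrative C simulation of the recomputation above for three CPU-bound processes A, B, and C (base priority 60, nice 0). It is a sketch of the textbook scenario, not kernel code:

```c
#include <stdio.h>

#define NPROC 3
#define HZ    60   /* clock interrupts per second */

int main(void)
{
    const char *name[NPROC] = { "A", "B", "C" };
    int cpu[NPROC]  = { 0, 0, 0 };
    int prio[NPROC] = { 60, 60, 60 };

    for (int sec = 1; sec <= 6; sec++) {
        /* Dispatch the highest-priority process (lowest value, ties by index). */
        int run = 0;
        for (int j = 1; j < NPROC; j++)
            if (prio[j] < prio[run])
                run = j;

        cpu[run] += HZ;   /* the running process accumulates 60 ticks in one second */

        /* Once-per-second recomputation for every process:
         * CPU_j(i) = CPU_j(i-1)/2,  P_j(i) = Base_j + CPU_j(i)/2 + nice_j */
        printf("after second %d (ran %s):", sec, name[run]);
        for (int j = 0; j < NPROC; j++) {
            cpu[j] /= 2;
            prio[j] = 60 + cpu[j] / 2;   /* base 60, nice 0 */
            printf("  %s: CPU=%2d P=%2d", name[j], cpu[j], prio[j]);
        }
        printf("\n");
    }
    return 0;
}
```

Running the sketch shows the processor rotating among A, B, and C as each one's recent CPU usage drives its priority value up and the decay brings it back down.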
Linux Scheduling • Enhances the traditional UNIX scheduling by adding two new scheduling classes for real-time processing • Scheduling classes • SCHED_FIFO: First-in-first-out real-time threads • SCHED_RR: Round-robin real-time threads • SCHED_OTHER: Other, non-real-time threads
Linux Scheduling (cont.) • Scheduling for FIFO threads follows these rules • An executing FIFO thread can be interrupted only when • Another FIFO thread of higher priority becomes ready • The executing FIFO thread becomes blocked, waiting for an event • The executing FIFO thread voluntarily gives up the processor (sched_yield) • When an executing FIFO thread is interrupted, it is placed in a queue associated with its priority • When a FIFO thread becomes ready with a higher priority than the currently running thread, the running thread is preempted and the higher-priority thread is executed; if several threads have the same, higher priority, the one that has been waiting the longest is assigned the processor
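As an illustration, a process can request the FIFO class and yield voluntarily with the standard POSIX calls. A minimal sketch (the priority value 10 is only an example, and the call normally requires root privileges):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sched_param param;
    param.sched_priority = 10;   /* example value; the valid range is given by
                                    sched_get_priority_min/max(SCHED_FIFO) */

    /* Switch the calling process to the FIFO real-time class. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }

    /* ... real-time work ... */

    /* Voluntarily give up the processor, as described above. */
    sched_yield();
    return 0;
}
```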
Linux Scheduling (cont.) • Scheduling for Round-Robin threads is similar, except that a time quantum is associated with each thread • At the end of its time quantum, the thread is suspended and a thread of equal or higher priority is assigned the processor
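For reference, the size of the round-robin quantum can be queried with sched_rr_get_interval(); a brief sketch (error handling kept minimal):

```c
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec quantum;

    /* Ask the kernel for the SCHED_RR time quantum of this process (pid 0 = self). */
    if (sched_rr_get_interval(0, &quantum) == 0)
        printf("RR quantum: %ld.%09ld s\n",
               (long)quantum.tv_sec, quantum.tv_nsec);
    return 0;
}
```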
UNIX Memory Management
Logical vs. Physical Address Space • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. • Logical address – generated by the CPU; also referred to as virtual address. • Physical address – address seen by the memory unit.
Paging • Logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available. • Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8192 bytes). • Divide logical memory into blocks of the same size called pages. • Keep track of all free frames. • To run a program of size n pages, find n free frames and load the program. • Set up a page table to translate logical to physical addresses. • Internal fragmentation: on average, half of a process's last page is wasted.
Address Translation Scheme • Address generated by the CPU is divided into: • Page number (p) – used as an index into a page table which contains the base address of each page in physical memory. • Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
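For instance, with 4 KB pages the low 12 bits of the logical address are the offset d and the remaining bits are the page number p. A minimal sketch of the translation, using a purely illustrative page-table array:

```c
#include <stdio.h>

#define PAGE_SIZE   4096u    /* 4 KB pages (a power of 2) */
#define OFFSET_BITS 12       /* log2(PAGE_SIZE) */

int main(void)
{
    /* Illustrative page table: page_table[p] holds the frame number of page p. */
    unsigned page_table[] = { 5, 9, 7, 2 };

    unsigned logical  = 0x2ABC;                       /* example logical address */
    unsigned p        = logical >> OFFSET_BITS;       /* page number: index into the page table */
    unsigned d        = logical & (PAGE_SIZE - 1);    /* page offset within the page */
    unsigned physical = (page_table[p] << OFFSET_BITS) | d;

    printf("p=%u d=0x%X -> physical=0x%X\n", p, d, physical);  /* p=2 d=0xABC -> 0x7ABC */
    return 0;
}
```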
Implementation of Page Table • Page table is kept in main memory. • Page-table base register (PTBR) points to the page table. • Page-table length register (PTLR) indicates the size of the page table. • In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. • The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB)
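The cost of the extra page-table access is usually expressed as an effective access time. A small worked sketch with assumed figures (20 ns TLB lookup, 100 ns memory access, 90% hit ratio), not measured values:

```c
#include <stdio.h>

int main(void)
{
    double tlb = 20.0, mem = 100.0, hit = 0.90;   /* assumed example figures (ns) */

    /* TLB hit: one memory access; TLB miss: page-table access + data access. */
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
    printf("effective access time = %.1f ns\n", eat);   /* 130.0 ns with these figures */
    return 0;
}
```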
Region • A contiguous area of the virtual address space of a process • Stack, data, and text are regions of every process
Per-Process Region Table (pregion) • For every region: • Starting virtual address • Number of pages in the region • Access mode • Pointer to the region's page table (there is a page table for every region)
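A sketch of what a pregion entry and its page-table entries might look like; the structures and field names are illustrative simplifications, not the actual System V kernel definitions:

```c
#include <stddef.h>

/* Access mode of a region (read/write/execute), illustrative. */
enum reg_mode { REG_READ = 1, REG_WRITE = 2, REG_EXEC = 4 };

/* Illustrative page-table entry. */
struct pte {
    unsigned frame     : 20;  /* physical frame number */
    unsigned valid     : 1;   /* page is in memory */
    unsigned modify    : 1;   /* page was written since it was loaded */
    unsigned reference : 1;   /* page was referenced recently */
};

/* One entry of the per-process region table (pregion), as described above. */
struct pregion {
    unsigned long start_vaddr;  /* starting virtual address of the region */
    size_t        npages;       /* number of pages in this region */
    int           mode;         /* access mode (REG_* flags) */
    struct pte   *page_table;   /* page table for this region */
};
```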
Kernel processes for Virtual Memory • Pager • Runs every time the OS needs to swap a virtual page into physical memory • Gets the first frame from the list of free pages; if the modify bit is set, writes its contents to the disk • Loads the page into this frame • Sets the valid bit • Resumes the waiting process • Page Stealer (page daemon) • Runs periodically • Moves old frames to the list of free pages • Updates the age of every valid frame whose reference bit is 0 • Clears the reference bit • The result approximates LRU with second chance
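A minimal sketch of the page stealer's second-chance scan, reusing the illustrative struct pte from the sketch above; ageing is reduced to a simple counter, and the write-back of dirty pages is only noted in a comment:

```c
/* Periodic scan by the page stealer: frames whose reference bit stays 0
 * grow "old"; sufficiently old frames are handed to the free list. */
#define OLD_AGE 3

struct frame {
    struct pte *pte;   /* page-table entry currently mapped to this frame */
    int         age;   /* scans since the page was last referenced */
};

void page_stealer_scan(struct frame *frames, int nframes,
                       void (*add_to_free_list)(struct frame *))
{
    for (int i = 0; i < nframes; i++) {
        struct pte *pte = frames[i].pte;
        if (pte == NULL || !pte->valid)
            continue;

        if (pte->reference) {
            /* Recently used: give it a second chance and clear the bit. */
            pte->reference = 0;
            frames[i].age = 0;
        } else if (++frames[i].age >= OLD_AGE) {
            /* Old enough: release the frame (a dirty page would be written
             * to swap before the frame is reused). */
            pte->valid = 0;
            add_to_free_list(&frames[i]);
        }
    }
}
```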
Page Fault • Some of the possibilities: • The page is in the swap file • The page is in the list of free pages • [Figure: page frames, page-table entries, and disk blocks after swapping the page into memory]
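An outline of how those two cases might be handled on a page fault; the helper functions are hypothetical placeholders, not real kernel interfaces:

```c
struct pte;    /* illustrative page-table entry (see the earlier sketch) */
struct frame;  /* illustrative physical frame descriptor */

/* Hypothetical helpers, named only for this outline. */
struct frame *find_on_free_list(struct pte *pte);       /* is the page still on the free list? */
struct frame *take_free_frame(void);                    /* pop a frame from the free list */
void read_from_swap(struct pte *pte, struct frame *f);  /* read the page from the swap file */
void map_page(struct pte *pte, struct frame *f);        /* set the frame number and valid bit */

void handle_page_fault(struct pte *pte)
{
    /* Case 1: the faulted page is still in the list of free pages and has not
     * been reused, so it can be reclaimed without any disk I/O. */
    struct frame *f = find_on_free_list(pte);
    if (f != NULL) {
        map_page(pte, f);
        return;
    }

    /* Case 2: the page is in the swap file; take a free frame and have the
     * pager read the page back in, then resume the waiting process. */
    f = take_free_frame();
    read_from_swap(pte, f);
    map_page(pte, f);
}
```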