W4118 Operating Systems Instructor: Junfeng Yang
Logistics • Homework 4 out, due 3:09pm 3/26 • You will add a new scheduling policy to Linux • This is likely the most difficult programming assignment of this course, so start today
Last lecture: Advanced Scheduling • Advanced Scheduling Concepts • Multilevel Queue Scheduling • Multiprocessor Scheduling • Real-time Scheduling • Linux Scheduling • Goals • Data structures: runqueue, task_struct • Procedures: schedule(), scheduler_tick()
Today: Memory Management • Dynamic memory allocation • Stack and Heap • Intro to Memory management
Motivation for Dynamic Memory Allocation • Static (compile-time) allocation is not possible for all data • Recursive calls • Runtime input from user • Complex data structures • … • Two ways of dynamic allocation • Stack allocation • Restricted, but simple and efficient • Heap allocation • More general, but less efficient • More difficult to implement
Stack Organization • Stack: memory is freed in opposite order from allocation • Last In First Out (LIFO) • When useful? • Memory usage pattern follows LIFO • Example • Function call frames • Implementation • Pointer separating allocated and free space (see the sketch below) • Allocate: increment pointer • Free: decrement pointer • Advantage • Simple and efficient • Keeps all free space contiguous • Disadvantage • Not suited for data whose lifetimes do not follow LIFO order
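A minimal sketch of the pointer-bump idea, assuming a fixed-size region; the names stack_region, stack_top, stack_alloc, and stack_free are illustrative, not from the slides:

```c
#include <stddef.h>

#define STACK_REGION_SIZE 4096

static char stack_region[STACK_REGION_SIZE]; /* fixed region to allocate from */
static char *stack_top = stack_region;       /* separates allocated and free space */

/* Allocate: just bump the pointer; no per-block bookkeeping is needed. */
void *stack_alloc(size_t n)
{
    if (stack_top + n > stack_region + STACK_REGION_SIZE)
        return NULL;                /* out of space */
    void *p = stack_top;
    stack_top += n;
    return p;
}

/* Free: must be called in LIFO order; simply moves the pointer back. */
void stack_free(void *p)
{
    stack_top = p;
}
```

Because freeing is just a pointer move, all remaining free space stays in one contiguous piece, which is exactly why the stack never fragments.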
Heap Organization • Heap: allocate from arbitrary locations • Memory consists of allocated areas and free areas (holes) • When is it useful? • Allocation and free are unpredictable • Complex data structures • new in C++, malloc in C, kmalloc in Linux kernel • Advantage: general, works for arbitrary allocation and free patterns • Disadvantage: ends up with small chunks of free space
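As a quick illustration of the unpredictable allocate/free pattern the heap must support, a small C snippet (the sizes and variable names are arbitrary):

```c
#include <stdlib.h>

int main(void)
{
    /* Allocation and free order is unpredictable, unlike a stack. */
    int    *a = malloc(100 * sizeof *a);
    double *b = malloc(50 * sizeof *b);
    char   *c = malloc(4096);

    free(b);               /* freeing the middle block leaves a hole */
    char *d = malloc(10);  /* may (or may not) reuse part of that hole */

    free(a);
    free(c);
    free(d);
    return 0;
}
```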
Fragmentation • Small chunks of free memory, too small for future allocations • External: visible to system • Internal: visible to process (e.g. if allocating at some fixed granularity, a 20-byte request rounded up to a 32-byte block wastes 12 bytes inside the block) • Goal • Reduce number of holes • Keep holes large • Stack: all free space stays together as one big hole
Heap Implementation • Data structure: linked list of free blocks • free list: chains free blocks together • Allocation • Choose block large enough for request • Update free list • Free • Add block back to list • Merge adjacent free blocks
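A simplified free-list sketch in C, assuming each free block carries a small header with its size; the names (struct free_block, heap_free) are illustrative, and real allocators such as glibc malloc are far more elaborate:

```c
#include <stddef.h>

/* Each free block starts with a header; the free list chains them together. */
struct free_block {
    size_t size;              /* size of this free block, including header */
    struct free_block *next;  /* next free block, kept in address order */
};

static struct free_block *free_list; /* head of the free list */

/* Return a block to the free list, merging with an adjacent successor. */
void heap_free(struct free_block *blk)
{
    struct free_block **pp = &free_list;

    /* Keep the list sorted by address so adjacent blocks become neighbors. */
    while (*pp && *pp < blk)
        pp = &(*pp)->next;

    blk->next = *pp;
    *pp = blk;

    /* Merge with the next block if the two are physically adjacent. */
    if (blk->next && (char *)blk + blk->size == (char *)blk->next) {
        blk->size += blk->next->size;
        blk->next = blk->next->next;
    }
}
```

A complete allocator would also merge with the preceding block and split oversized blocks on allocation; the allocation side differs only in which block it picks, which is the subject of the next slide.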
Best vs. First vs. Worst • Best fit • Search the whole list on each allocation • Choose the smallest block that can satisfy the request • Can stop search early if an exact match is found • First fit • Choose the first block that can satisfy the request • Worst fit • Choose the largest block (most leftover space) Which is better?
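The three policies differ only in which candidate the search keeps. A sketch reusing the hypothetical struct free_block header from the earlier free-list example:

```c
#include <stddef.h>

struct free_block { size_t size; struct free_block *next; };

/* Scan the free list and pick a block according to the chosen policy:
 * 'f' = first fit, 'b' = best fit, 'w' = worst fit. */
struct free_block *find_block(struct free_block *head, size_t want, char policy)
{
    struct free_block *chosen = NULL;

    for (struct free_block *b = head; b != NULL; b = b->next) {
        if (b->size < want)
            continue;                   /* too small for this request */
        if (policy == 'f')
            return b;                   /* first fit: stop at first match */
        if (policy == 'b' && (!chosen || b->size < chosen->size))
            chosen = b;                 /* best fit: smallest block that fits */
        if (policy == 'w' && (!chosen || b->size > chosen->size))
            chosen = b;                 /* worst fit: largest block that fits */
    }
    return chosen;                      /* NULL if nothing fits */
}
```

Best fit and worst fit must scan the whole list (unless best fit finds an exact match), which is why first fit tends to be faster per allocation.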
Examples • Best algorithm: depends on sequence of requests • Example: free list has 2 blocks of size 20 and 15 bytes • Allocation requests: 10 then 20 • Best fit serves the 10 from the 15-byte block and keeps the 20-byte block intact for the second request; first fit splits the 20-byte block and then cannot satisfy the request for 20 • Allocation requests: 8, 12, then 12 • Here the pattern reverses: best fit leaves only 8- and 7-byte holes after the first two requests and fails on the last 12, while first fit (taking the 20-byte block first) satisfies all three
Comparison of Allocation Strategies • Best fit • Tends to leave very large holes and very small holes • Disadvantage: very small holes may be useless • First fit • Tends to leave “average” size holes • Advantage: faster than best fit • Worst fit • Simulations show that worst fit performs worst in terms of storage utilization
Today: Memory Management • Dynamic memory allocation • Stack • Heap • Allocation strategies • Intro to Memory management
Motivation for Memory Management [Diagram: physical memory holding the OS and a single user process] • Simple uniprogramming with a single segment per process • Early batch systems • Early personal computers • Disadvantages • Only one process can run at a time • Process can destroy the OS
Multiple Processes in Memory [Diagram: Process A and Process B resident in memory at the same time]
Multiprogramming Wish-list • Sharing • multiple processes coexist in main memory • Transparency • Processes not aware that memory is shared • Run regardless of number and/or locations of processes • Protection • Cannot corrupt OS or other processes • Privacy: cannot read data of other processes • Efficiency: should have reasonable performance • Purpose of sharing is to increase efficiency • Do not waste CPU or memory resources
Relocation • Transparency requires relocation: a process can run anywhere in memory (its location can't be predicted in advance) • How?
Background • Compiler compiles source files into object files • Linker links object files and system libraries into a program • Loader loads the program and dynamically-linked libraries from disk into memory and places them within a process so it can run
Dynamic Linking • Linking postponed until execution time • Implementation • Small piece of code, the stub, used to locate the appropriate memory-resident library routine • Stub replaces itself with the address of the routine, then executes the routine • Advantage • Useful for libraries: updating the library updates all programs that use it (shared libraries) • Saves space • Disadvantage • Difficult to manage dependencies • Runtime failures if a library is missing or incompatible
Dynamic Loading • Routine is not loaded until it is called • Better memory-space utilization; unused routine is never loaded • Useful when large amounts of code are needed to handle infrequently occurring cases
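On POSIX systems this is typically done with dlopen/dlsym. A small hedged example: the library name libmath_heavy.so and the symbol big_computation are made up for illustration, and the program would be linked with -ldl on older glibc:

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the library only when the rare case actually occurs. */
    void *handle = dlopen("./libmath_heavy.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name and call it through a function pointer. */
    double (*big_computation)(double) =
        (double (*)(double))dlsym(handle, "big_computation");
    if (big_computation)
        printf("result: %f\n", big_computation(42.0));

    dlclose(handle);
    return 0;
}
```

If the rare case never occurs, the library is never mapped into memory, which is the space saving the slide refers to.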
When to Relocate? • Compile time • Hardwire physical location at compile time (absolute code) • Problem • Each program needs to be written with others in mind • Not really transparent • Load time • Compile code as relocatable (relocatable code) • How? All addresses relative to a start address; change the start address to relocate • Problem • Once loaded, can’t change or move • Execution time • Can move during execution • This is what’s generally done, but it needs hardware support
Relocation at Execution Time [Diagram: CPU issues logical addresses → MMU translates → physical addresses go to memory] • Map program-generated address to hardware address dynamically at every reference • MMU: Memory Management Unit • Controlled by OS • Program: logical (virtual) addresses • Hardware: physical (real) addresses • Address space: each process’s view of memory
Address Spaces [Diagram: logical view vs. physical view of the OS and address spaces AS1, AS2, AS3]
Hardware Support • Two operating modes • Privileged (protected, kernel) mode: when the OS runs • Entered when trapping into the OS (system calls, interrupts, exceptions) • Allows certain instructions to be executed • Allows the OS to access all of physical memory • User mode: when user processes run • Hardware translates every logical address to a physical address • Protects the OS and other processes’ physical memory • How to implement protection? • Base register: start physical location of the address space • Limit register: last valid address the process may access • Process appears to have private memory of size equal to the limit register
Implementation [Diagram: CPU emits a logical address; hardware compares it with the limit register and, if the check passes, adds the base register to form the physical address; otherwise it raises an exception] • Translation on every memory access • Compare logical address to limit register • If greater, generate an exception • Otherwise, add base register to logical address to generate the physical address
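The hardware check can be summarized in a few lines of C. This is only a conceptual sketch: struct mmu, translate, and the fault flag stand in for MMU registers and the exception mechanism, and are not a real API:

```c
#include <stdint.h>

struct mmu {
    uint32_t base;   /* start physical location of the address space */
    uint32_t limit;  /* last valid logical address */
};

/* What the hardware does on every memory reference (conceptually). */
uint32_t translate(const struct mmu *m, uint32_t logical, int *fault)
{
    if (logical > m->limit) {   /* outside the process's address space */
        *fault = 1;             /* hardware would raise a protection exception */
        return 0;
    }
    *fault = 0;
    return m->base + logical;   /* relocate by adding the base register */
}
```

In real hardware the comparison and the addition happen in parallel, which is why the scheme adds essentially no latency.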
Managing Processes with Base and Limit • Context switch: save and restore the base and limit registers as part of each process’s state (see the sketch below) • Protection requirements • User process cannot change the base and limit registers • User process cannot switch to privileged mode
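A sketch of the bookkeeping the OS does at a context switch, assuming hypothetical per-process fields and privileged helpers load_base_register/load_limit_register (not a real kernel API):

```c
#include <stdint.h>

struct process {
    uint32_t base;   /* start of this process's physical memory */
    uint32_t limit;  /* size of its address space */
    /* ... saved CPU registers, scheduling state, etc. ... */
};

/* Hypothetical privileged operations; only the OS may execute these,
 * which is exactly the protection requirement above. */
void load_base_register(uint32_t base);
void load_limit_register(uint32_t limit);

/* On a context switch the OS reprograms the translation hardware
 * so the next process sees only its own address space. */
void switch_address_space(const struct process *next)
{
    load_base_register(next->base);
    load_limit_register(next->limit);
}
```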
Pros and Cons of Base and Limit • Contiguous allocation: each process occupies one contiguous block of memory • Advantages • Supports dynamic relocation of address spaces • Supports protection across multiple address spaces • Cheap: few registers and little logic • Fast: add and compare can be done in parallel • Disadvantages • Each process must be allocated contiguously in real memory • Fragmentation: may be unable to allocate a new process even when enough total memory is free • Solution: swapping (next) • Must allocate memory that may not be used • No sharing: cannot share selected parts of an address space • e.g. cannot share code while keeping data private
Swapping • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution • Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images • Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed • Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped • Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)