Operating System Concepts
Ku-Yaw Chang
canseco@mail.dyu.edu.tw
Assistant Professor, Department of Computer Science and Information Engineering
Da-Yeh University
Chapter 9 Memory Management
• Keep several processes in memory to increase CPU utilization
• Memory management algorithms
  • Require hardware support
• Common strategies
  • Paging
  • Segmentation
• Background
• Swapping
• Contiguous Memory Allocation
• Paging
• Segmentation
• Segmentation with Paging
• Summary
• Exercises
9.1 Background
• A program must be brought (loaded) into memory and placed within a process for it to be run
• Address binding
  • A mapping from one address space to another
• A typical instruction-execution cycle
  • Fetch an instruction from memory
  • Decode the instruction
    • May cause operands to be fetched from memory
  • Execute the instruction
  • Store results back into memory
9.1.1 Address Binding
• Input queue
  • The collection of processes on the disk that are waiting to be brought into memory to run
• A user program goes through several steps before being executed
  • Addresses in the source program are symbolic
  • A compiler binds these symbolic addresses to relocatable addresses
  • A loader binds these relocatable addresses to absolute addresses
9.1.1 Address Binding
• Compile time
  • Absolute code can be generated
  • It is known at compile time where the process will reside in memory
  • MS-DOS .COM-format programs are absolute code
9.1.1 Address Binding
• Load time
  • Relocatable code can be generated
  • It is not known at compile time where the process will reside in memory
  • Final binding is delayed until load time
9.1.1 Address Binding
• Execution time
  • The process can be moved from one memory segment to another during its execution
  • Binding must be delayed until run time
  • Special hardware must be available
9.1.2 Logical- Versus Physical-Address Space
• Logical address
  • An address generated by the CPU
  • Identical to the physical address under compile-time and load-time binding
  • Also called a virtual address
• Logical-address space
  • The set of all logical addresses generated by a program
• Physical address
  • An address seen by the memory unit
  • The one loaded into the memory-address register
  • Differs from the logical address under execution-time binding
• Physical-address space
  • The set of all physical addresses corresponding to those logical addresses
9.1.2 Logical- Versus Physical-Address Space
• Memory-Management Unit (MMU)
  • A hardware device that performs the run-time mapping from virtual to physical addresses
  • Different methods can be used to accomplish such a mapping
• With a relocation register holding value R
  • Logical addresses: 0 to max
  • Physical addresses: R + 0 to R + max
  • (a minimal sketch of this mapping follows the figure below)
Dynamic Relocation Using a Relocation Register
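The following is a minimal sketch, not taken from the slides, of the mapping shown in the figure above: the MMU simply adds the value of the relocation register to every logical address the CPU generates. The register value (14000) and the sample logical address (346) are illustrative assumptions.

```c
/* Sketch of dynamic relocation with a single relocation register.
 * The register value and the sample address are assumptions. */
#include <stdio.h>

#define RELOCATION_REGISTER 14000u   /* value R loaded by the OS */

/* physical address = R + logical address */
unsigned mmu_translate(unsigned logical)
{
    return RELOCATION_REGISTER + logical;
}

int main(void)
{
    unsigned logical = 346;          /* address generated by the CPU */
    printf("logical %u -> physical %u\n", logical, mmu_translate(logical));
    return 0;
}
```

Note that the user program never sees the physical address; it deals only with logical addresses in the range 0 to max.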
9.1.3 Dynamic Loading
• So far, the entire program and data must be in memory for execution
  • The size of a process is limited to the size of physical memory
• Dynamic loading
  • All routines are kept on disk in a relocatable load format
  • The main program is loaded into memory and executed
  • A routine is not loaded until it is called
• Advantage
  • An unused routine is never loaded
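As a concrete illustration of loading a routine only when it is first needed, the POSIX dlopen/dlsym interface can be used; the file name "libstats.so" and the symbol "mean" are hypothetical, and the program would be compiled with something like `cc demo.c -ldl` on Linux.

```c
/* A minimal POSIX sketch of loading a routine at run time: nothing from
 * the shared object is in memory until dlopen is called.  The library
 * name and symbol are hypothetical. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Bring the routine's object file into memory on demand. */
    void *handle = dlopen("./libstats.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine and call it through a function pointer. */
    double (*mean)(const double *, int) =
        (double (*)(const double *, int))dlsym(handle, "mean");
    double data[] = { 1.0, 2.0, 3.0 };
    if (mean)
        printf("mean = %f\n", mean(data, 3));

    dlclose(handle);
    return 0;
}
```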
9.1.4 Dynamic Linking and Shared Libraries
• Dynamic linking
  • Linking is postponed until execution time
• A small piece of code, called a stub, is used to locate the appropriate memory-resident library routine
  • The OS checks whether the routine is already in the process's memory space
    • If not, it loads the routine into memory
  • The stub replaces itself with the address of the routine and executes the routine
• Dynamic linking is particularly useful for libraries
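The stub idea can be sketched with a self-patching function pointer: the first call runs the stub, which locates the routine and overwrites the call slot with its address, so later calls bypass the stub entirely. This is a toy model of the mechanism, not the actual linker implementation.

```c
/* Toy illustration of a dynamic-linking stub. */
#include <stdio.h>

static void library_routine(void)       /* the memory-resident routine */
{
    puts("library routine executing");
}

static void stub(void);                 /* first-call stub */
static void (*entry)(void) = stub;      /* every call goes through this slot */

static void stub(void)
{
    /* Resolve the routine (trivially here; a real stub would search the
     * library), then patch the slot so the stub is bypassed from now on. */
    entry = library_routine;
    entry();
}

int main(void)
{
    entry();   /* first call: runs the stub, which resolves and calls */
    entry();   /* later calls: go directly to the library routine */
    return 0;
}
```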
9.1.5 Overlays
• Keep in memory only those instructions and data that are needed at any given time
• Needed when a process is larger than the amount of memory allocated to it
• Features
  • Implemented by the user
  • No special support needed from the operating system
  • Programming design is complex
9.1.5 Overlays (figure)
Swapping
• A process can be
  • Swapped temporarily out of memory to a backing store
    • Commonly a fast disk
  • Brought back into memory for continued execution
• A swapped-out process can be brought back into
  • The same memory space it occupied before, if binding is done at assembly or load time
  • A different memory space, if execution-time binding is used
Swapping of two processes
Swapping
• Context-switch time in such a system is fairly high
• Example
  • User process size: 1 MB
  • Transfer rate: 5 MB per second
  • Actual transfer time: 1000 KB / 5000 KB per second = 200 milliseconds
  • Average latency: 8 ms
  • Total swap time (swap out + swap in): 208 + 208 = 416 ms
• The time quantum should be substantially larger than 0.416 seconds
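A quick check of the arithmetic above, using the slide's example numbers (1 MB process, 5 MB/s transfer rate, 8 ms average latency):

```c
/* Verify the swap-time example from the slide. */
#include <stdio.h>

int main(void)
{
    double size_kb       = 1000.0;   /* user process size: 1 MB */
    double rate_kb_per_s = 5000.0;   /* transfer rate: 5 MB/s */
    double latency_ms    = 8.0;      /* average disk latency */

    double transfer_ms = size_kb / rate_kb_per_s * 1000.0;   /* 200 ms */
    double one_way_ms  = transfer_ms + latency_ms;           /* 208 ms */
    double total_ms    = 2.0 * one_way_ms;       /* swap out + swap in */

    printf("one way: %.0f ms, total swap time: %.0f ms\n",
           one_way_ms, total_ms);
    return 0;
}
```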
Swapping
• The major part of the swap time is transfer time
  • Directly proportional to the amount of memory swapped
  • Swapping only the memory that is actually used reduces swap time
• To be swapped, a process must be completely idle
  • In particular, it must have no pending I/O
Contiguous Memory Allocation
• Memory is usually divided into two partitions
  • One for the resident operating system
    • Placed in either low or high memory, usually depending on the location of the interrupt vector
  • One for the user processes
• Contiguous memory allocation
  • Each process is contained in a single contiguous section of memory
Memory Protection
• A relocation register together with a limit register
  • The limit register holds the range of logical addresses; each logical address must be less than the limit
  • The MMU then maps the logical address dynamically by adding the value in the relocation register
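A minimal sketch of the protection check described above, assuming example values for the limit and relocation registers: each CPU-generated address is compared with the limit register, and only in-range addresses are relocated; an out-of-range address would cause a trap to the operating system.

```c
/* Sketch of the relocation + limit check; register values are assumed. */
#include <stdio.h>
#include <stdlib.h>

static const unsigned limit_reg      = 120000u;  /* size of the process */
static const unsigned relocation_reg = 300040u;  /* base of the process */

unsigned translate_or_trap(unsigned logical)
{
    if (logical >= limit_reg) {
        /* Hardware would trap to the OS: addressing error. */
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(EXIT_FAILURE);
    }
    return relocation_reg + logical;  /* within bounds: relocate */
}

int main(void)
{
    printf("%u\n", translate_or_trap(100));     /* OK */
    printf("%u\n", translate_or_trap(200000));  /* traps and exits */
    return 0;
}
```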
Memory Allocation
• Fixed-sized partitions
  • One of the simplest methods
  • Each partition contains exactly one process
  • The degree of multiprogramming is bounded by the number of partitions
• Strategies for choosing a hole (see the sketch below)
  • First fit
  • Best fit
  • Worst fit
• Problems
  • External fragmentation
  • Internal fragmentation
  • 50-percent rule: given N allocated blocks, another 0.5 N blocks will be lost to fragmentation
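Below is a minimal first-fit sketch over a fixed free list; the hole sizes and the data structure are assumptions for illustration. Best fit would scan the entire list for the smallest hole that is large enough, and worst fit for the largest hole.

```c
/* First-fit allocation over a small, fixed set of free holes. */
#include <stddef.h>
#include <stdio.h>

#define NHOLES 4

typedef struct { size_t base; size_t size; } hole_t;

static hole_t holes[NHOLES] = {
    { 1000, 100 }, { 4000, 500 }, { 9000, 200 }, { 12000, 300 }
};

/* Return the base address of the allocated block, or 0 on failure. */
size_t first_fit(size_t request)
{
    for (int i = 0; i < NHOLES; i++) {
        if (holes[i].size >= request) {       /* first hole big enough */
            size_t base = holes[i].base;
            holes[i].base += request;         /* shrink the hole */
            holes[i].size -= request;
            return base;
        }
    }
    return 0;   /* no hole fits: external fragmentation or out of memory */
}

int main(void)
{
    printf("allocated at %zu\n", first_fit(250));  /* uses the 500-byte hole */
    return 0;
}
```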
Memory Allocation
• Possible solutions to external fragmentation
  • Compaction: shuffle the memory contents so as to place all free memory together in one large block
  • Permit the logical-address space of a process to be noncontiguous
(figure: four memory snapshots showing the OS, process 5, process 2, and processes 8, 9, 10 entering and leaving memory)
Paging
• A memory-management scheme that permits the physical-address space of a process to be noncontiguous
• Frames (physical memory)
  • Fixed-sized blocks
• Pages (logical memory)
  • Blocks of the same size
• Every address generated by the CPU is divided into
  • A page number (p): an index into a page table
  • A page offset (d)
  • For a logical-address space of 2^m and a page size of 2^n, the high-order m-n bits give the page number and the low-order n bits give the page offset
  • (see the translation sketch after the hardware figure below)
Paging hardware
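A minimal sketch of the translation performed by the paging hardware above: the logical address is split into page number p and offset d, p indexes the page table to obtain a frame number, and the physical address is frame * page size + d. The 4-entry page table and the 1 KB page size are illustrative assumptions.

```c
/* Paging translation sketch; the page table and page size are assumed. */
#include <stdio.h>

#define PAGE_BITS 10u                     /* page size = 2^10 = 1024 bytes */
#define PAGE_SIZE (1u << PAGE_BITS)

static const unsigned page_table[4] = { 5, 6, 1, 2 };   /* page -> frame */

unsigned translate(unsigned logical)
{
    unsigned p = logical >> PAGE_BITS;          /* page number  (high bits) */
    unsigned d = logical & (PAGE_SIZE - 1);     /* page offset  (low bits)  */
    unsigned frame = page_table[p];             /* page-table lookup        */
    return frame * PAGE_SIZE + d;               /* physical address         */
}

int main(void)
{
    unsigned logical = 2 * PAGE_SIZE + 20;      /* page 2, offset 20 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```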
Paging Model
Paging Example
Free frames: (a) before allocation, (b) after allocation