Memory Management - Asiri Rathnayake
Lecture content • Memory management (Basics) • Introduction • Relocation / Addressing • Protection / Sharing • Placement / Replacement • Memory management techniques • Partitioning • Fixed partitioning • Dynamic partitioning • Buddy system • Simple paging • Simple segmentation • Virtual memory
From the text (Chapter 07/08) • 7.1 Memory management requirements • 7.2 Memory partitioning • 7.3 Paging • 7.4 Segmentation • 8.1 Hardware and control structures (read) • Inverted page tables • Page size • 8.2 Operating system software (read) • Basic page replacement algorithms • 8.3 Unix and Solaris memory management (optional) • 8.4 Linux memory management (optional) • 8.5 Windows memory management (optional)
Memory management (Basics) • Introduction • Relocation / Addressing • Protection / Sharing • Placement / Replacement
Introduction • A program needs to be brought into memory for execution • In a single-tasking operating system memory management is trivial. But in a multi-tasking operating system it is a major concern (why?) • In memory management we are concerned about subdividing main memory to accommodate multiple processes • It is important to manage memory in such a way that there is a steady supply of ready processes for the processor (why?)
Introduction… • Usually the task of memory management involves both software and hardware components • Note: • Until the discussion of virtual memory, we assume that the whole process image should be loaded in memory for execution
Relocation / Addressing • A program consists of a sequence of instructions, and some of these instructions refer to other memory locations • A function within the same program • A variable pointing to a data structure in the heap • A global variable • More…
Relocation / Addressing… • What if these memory references contained actual physical memory addresses (locations in RAM)? • The process image would then have to be loaded into one specific area of main memory to be executed. • Therefore, memory references within a process use logical addresses as opposed to physical addresses • The logical addresses generated by the process at runtime (addresses emitted by the CPU) are translated into physical addresses by the MMU • This scheme allows process images to be relocated into any area of main memory as required
Protection / Sharing • A memory management scheme should be capable of providing protection between processes • A process should not be allowed to read/write memory locations of some other process without permission • Still, the memory management scheme should be flexible enough to provide sharing of memory between processes when required – there are advantages of sharing memory between processes
Placement / Replacement • A memory management scheme usually keeps track of free memory slots currently available in the system • The algorithm / policy used in allocating a free memory slot to a newly admitted process is known as the placement scheme • The algorithm / policy used in replacing (swapping out) an existing process in order to provide room for a higher priority process is known as the replacement scheme
Memory management techniques • Partitioning • Fixed partitioning • Dynamic partitioning • Buddy system • Simple paging • Simple segmentation
Partitioning – fixed partitioning • A very simplistic memory management scheme • Main memory is divided into partitions at system generation time (hardware) • Either equal-size partitions or unequal-size partitions may be used
Partitioning – fixed partitioning… • Having equal-size partitions poses two major problems: • A program may be too big to fit in a partition. • Main memory utilization is extremely inefficient due to internal fragmentation – no matter how small the process image is, it will still occupy a whole partition (internally un-used space). • Unequal-size partitions can lessen both of these issues (how?)
Partitioning – fixed partitioning… • Placement algorithm: • With equal-size partitions it’s a trivial task; simply allocate a free partition. • With unequal-size partitions the smallest available partition that can hold the process image is selected. • Replacement algorithm: • With equal-size partitions it’s a scheduling decision among all partitions. (provided: relocation is possible) • With unequal-size partitions it’s a scheduling decision among all large-enough partitions. (provided: relocation is possible)
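As an illustration of the placement rule for unequal-size partitions, here is a minimal sketch in C; the partition sizes and the array-based bookkeeping are assumptions made for the example, not part of the lecture:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical set of unequal-size fixed partitions (sizes in KB). */
static int  partition_size[] = { 128, 256, 512, 1024 };
static bool partition_used[] = { false, false, false, false };
#define NUM_PARTITIONS 4

/* Placement for unequal-size fixed partitioning: pick the smallest
 * free partition that can hold the process image.
 * Returns the partition index, or -1 if none fits. */
int place_fixed(int process_size)
{
    int best = -1;
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (!partition_used[i] && partition_size[i] >= process_size) {
            if (best == -1 || partition_size[i] < partition_size[best])
                best = i;
        }
    }
    if (best != -1)
        partition_used[best] = true;  /* the whole partition is occupied:
                                         the unused remainder is internal
                                         fragmentation */
    return best;
}

int main(void)
{
    printf("200 KB process -> partition %d\n", place_fixed(200)); /* 256 KB */
    printf("600 KB process -> partition %d\n", place_fixed(600)); /* 1024 KB */
    return 0;
}
```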
Partitioning – dynamic partitioning • When a process is brought into memory, it is allocated a contiguous chunk of memory to suit its need • The operating system keeps track of the free memory slots available for new processes • Dynamic partitioning suffers from external fragmentation – as time goes on, the memory external to all partitions (free memory) becomes increasingly fragmented • A point arrives where a new process cannot be admitted even though there is enough total free memory in the system • Therefore it is necessary to perform a costly compaction operation from time to time to make sure that the available free memory forms a contiguous chunk
Partitioning – dynamic partitioning… • Placement algorithm: the placement algorithm has a huge impact on the level of external fragmentation (see the sketch below). • Best-fit: Select the smallest free memory slot that is large enough. • External fragmentation is high (why?) – worst performer • First-fit: Select the first free memory slot that is large enough. • Leaves small fragments near the front of the free memory list but keeps the large blocks at the end intact – simplest and usually the best performer • Next-fit: Start from the previous allocation point and select the next free memory slot that is large enough. • Tends to break up the large block of free memory at the end of the list – slightly behind first-fit • Replacement algorithm: not discussed
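A minimal sketch of first-fit and best-fit over a free-block list; the array-based free list and the block sizes are assumptions made for the example:

```c
#include <stdio.h>

/* Hypothetical free-block list for dynamic partitioning:
 * each entry is (start address, length). */
struct free_block { int start; int length; };

static struct free_block free_list[] = {
    { 0, 200 }, { 300, 80 }, { 500, 400 }, { 1200, 100 }
};
#define NUM_FREE 4

/* First-fit: scan from the front, take the first block that is
 * large enough. */
int first_fit(int request)
{
    for (int i = 0; i < NUM_FREE; i++)
        if (free_list[i].length >= request)
            return i;
    return -1;
}

/* Best-fit: scan the whole list, take the smallest block that is
 * large enough (leaves the smallest, often useless, fragment). */
int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NUM_FREE; i++)
        if (free_list[i].length >= request &&
            (best == -1 || free_list[i].length < free_list[best].length))
            best = i;
    return best;
}

int main(void)
{
    printf("first-fit(60) -> block %d\n", first_fit(60)); /* block 0 (200) */
    printf("best-fit(60)  -> block %d\n", best_fit(60));  /* block 1 (80)  */
    return 0;
}
```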
Partitioning – buddy system • A reasonable compromise between the fixed and dynamic partitioning schemes: • Memory is allocated in blocks whose sizes are powers of 2 • 2^L = smallest block size that can be allocated • 2^U = largest block size that can be allocated (the size of the entire memory available for processes) • When a process of size S needs to be allocated: • If 2^(U-1) < S ≤ 2^U the entire memory block is allocated • Else the block is split into two buddies of size 2^(U-1) • If 2^(U-2) < S ≤ 2^(U-1) one of those two buddies is allocated • Otherwise one buddy is split again, and the process continues until the smallest block that can hold the process is found
Partitioning – buddy system… • The buddy system is a reasonable compromise between the fixed and dynamic partitioning schemes • It reduces internal fragmentation – we allocate the smallest power-of-2 block that can hold the process • It causes less external fragmentation – when both buddies of a pair become free, they are merged back into a single larger block
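A minimal sketch of how the buddy system sizes an allocation; the exponents L and U below are assumed values, and the recursive splitting is only summarised in the comments:

```c
#include <stdio.h>

/* Buddy-system sizing sketch: blocks come only in powers of two,
 * between 2^L (smallest) and 2^U (largest).  The exponents are
 * assumptions chosen for illustration. */
#define L 4   /* smallest block: 2^4  = 16   */
#define U 10  /* largest block:  2^10 = 1024 (whole memory) */

/* Return the size of the buddy block that would satisfy a request
 * of 'size' units: the smallest power of two >= size, but never
 * smaller than 2^L.  Returns 0 if the request exceeds 2^U. */
unsigned buddy_block_size(unsigned size)
{
    unsigned block = 1u << L;
    while (block < size && block < (1u << U))
        block <<= 1;            /* stop splitting one level earlier,
                                   i.e. use a block twice as large */
    return (block >= size) ? block : 0;
}

int main(void)
{
    /* A 100-unit request gets a 128-unit block: 1024 is split into two
     * 512 buddies, one 512 into two 256s, one 256 into two 128s. */
    printf("request 100  -> block of %u\n", buddy_block_size(100));
    printf("request 600  -> block of %u\n", buddy_block_size(600));  /* 1024 */
    printf("request 2000 -> %u (cannot be satisfied)\n", buddy_block_size(2000));
    return 0;
}
```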
Partitioning (relocation) • In all three schemes discussed above (fixed, dynamic, buddy), relocation can be achieved by using a relative addressing scheme • A relative address is a form of logical address, expressed as an offset from some known reference point • e.g. the relative address 0x0001 within the code segment corresponds to the physical address 1 + (the physical address at which the code segment was loaded).
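A minimal sketch of relative addressing with a base / limit pair; the load address and partition length used here are made-up values:

```c
#include <stdio.h>
#include <stdlib.h>

/* Relocation sketch: a relative (logical) address is translated at
 * run time by adding the base (load address) of the partition, after
 * a bounds check against its length. */
struct partition { unsigned base; unsigned limit; };

unsigned translate(struct partition *p, unsigned relative)
{
    if (relative >= p->limit) {            /* protection check */
        fprintf(stderr, "address out of bounds\n");
        exit(1);
    }
    return p->base + relative;             /* relocation */
}

int main(void)
{
    /* Process image loaded at physical address 0x40000, 64 KB long. */
    struct partition code = { 0x40000, 0x10000 };
    printf("relative 0x0001 -> physical 0x%x\n", translate(&code, 0x0001));
    return 0;
}
```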
Simple paging • Main memory is partitioned into relatively small equal-sized chunks known as frames • Process images are also partitioned into equal-sized chunks of the same size known as pages • The operating system maintains a list of free frames available for allocation • With paging, internal fragmentation is minimal (on average half the page size) and external fragmentation is not possible
Simple paging… • Unlike the previous memory management schemes, paging does not require the whole process image to occupy a contiguous region of memory • Instead, a page-to-frame mapping table (the page table) is maintained by the operating system for each process
Simple paging… • A logical memory address consists of two parts; the page number and the offset within that page
Simple paging… • By setting the page size to a power of 2, address translation (which is done in hardware) becomes very simple: the high-order bits of the logical address give the page number and the low-order bits give the offset, as sketched below:
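A minimal sketch of this translation, assuming a 1 KB page size and a tiny hard-coded page table:

```c
#include <stdio.h>

/* Paged address translation sketch for a power-of-two page size. */
#define OFFSET_BITS 10
#define PAGE_SIZE   (1u << OFFSET_BITS)        /* 1024 bytes  */
#define OFFSET_MASK (PAGE_SIZE - 1)

static unsigned page_table[] = { 5, 9, 2, 7 }; /* page -> frame */

unsigned translate(unsigned logical)
{
    unsigned page   = logical >> OFFSET_BITS;  /* top bits: page number */
    unsigned offset = logical &  OFFSET_MASK;  /* low bits: offset      */
    unsigned frame  = page_table[page];        /* page-table lookup     */
    return (frame << OFFSET_BITS) | offset;    /* frame number, offset  */
}

int main(void)
{
    /* Logical address 0x0478 = page 1, offset 0x078 -> frame 9 -> 0x2478. */
    printf("0x%04x -> 0x%04x\n", 0x0478, translate(0x0478));
    return 0;
}
```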
Simple segmentation • The process image is divided into unequal-sized memory chunks known as segments • These segments are loaded into memory independently of one another • Somewhat similar to dynamic partitioning – not the whole process image but the individual segments are allocated contiguous chunks of memory • While paging is transparent to the programmer, segmentation is a visible mechanism; it allows programmers to organize programs and data into individual modules that can be treated differently • e.g. share library code between two processes by loading it into a separate, read-only shared segment
Simple segmentation… • A logical address consists of two parts; the segment number and the offset within the segment
Simple segmentation… • Since different segments can be of different lengths, the segment table (and hence the address translation process) is a bit more complex: each entry holds the segment's base address and its length, as sketched below:
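A minimal sketch of segmented address translation, assuming a small hard-coded segment table:

```c
#include <stdio.h>
#include <stdlib.h>

/* Segmented address translation sketch: unlike paging, each segment
 * has its own length, so the offset must be checked against it. */
struct segment { unsigned base; unsigned length; };

static struct segment seg_table[] = {
    { 0x10000, 0x3000 },   /* segment 0: code  */
    { 0x50000, 0x1000 },   /* segment 1: data  */
    { 0x80000, 0x0800 },   /* segment 2: stack */
};

unsigned translate(unsigned seg, unsigned offset)
{
    if (offset >= seg_table[seg].length) {   /* length check, per segment */
        fprintf(stderr, "segmentation violation\n");
        exit(1);
    }
    return seg_table[seg].base + offset;     /* base + offset */
}

int main(void)
{
    printf("(seg 1, offset 0x0200) -> 0x%x\n", translate(1, 0x0200));
    return 0;
}
```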
Virtual memory • Both paging and segmentation have two properties that make virtual memory possible: • Processes use logical addresses – relocation is possible • A process image is divided into multiple pieces • The basic observation behind virtual memory is very simple: • It is not necessary that all of the pages or all of the segments of a process be in main memory during execution • Also, memory references within a program tend to be local; they cluster with respect to time (the principle of locality) – so it is only necessary to keep those pages or segments in memory for that duration
Virtual memory… • Operation: • The operating system begins by bringing in one or a few pages / segments containing the initial program and data sections • Execution begins and things proceed smoothly as long as all referenced memory locations are in the working set / resident set (the set of pages currently in memory) • When the processor encounters a logical address that is not in main memory, an interrupt is generated indicating a memory access fault • The process is put in the blocked state until the required page / segment is brought into memory (I/O) • When the requested page / segment has been brought into memory, the process becomes ready to execute again
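A sketch of the sequence above in C; all of the helper names (allocate_frame, read_page_from_disk) are hypothetical placeholders, since real fault handling involves the MMU and the operating system kernel:

```c
#include <stdbool.h>
#include <stdio.h>

/* Demand-paging sketch of the steps described above. */
struct pte { bool present; unsigned frame; };   /* one page-table entry */

#define NUM_PAGES 8
static struct pte page_table[NUM_PAGES];        /* all pages start non-resident */

/* Hypothetical helpers standing in for the OS services involved. */
static unsigned allocate_frame(void) { return 3; }  /* may trigger replacement */
static void read_page_from_disk(unsigned page, unsigned frame)
{ printf("I/O: load page %u into frame %u (process blocked)\n", page, frame); }

unsigned access(unsigned page)
{
    if (!page_table[page].present) {           /* memory access fault */
        unsigned frame = allocate_frame();
        read_page_from_disk(page, frame);      /* process blocks on I/O */
        page_table[page].present = true;       /* process becomes ready again */
        page_table[page].frame   = frame;
    }
    return page_table[page].frame;             /* resident: normal access */
}

int main(void)
{
    access(2);   /* first touch: page fault, page brought in */
    access(2);   /* already resident: no fault */
    return 0;
}
```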
Virtual memory… • When a new page / segment is brought into memory, it might be necessary to swap out an existing page / segment (why?) • If the swapped-out page / segment is re-referenced shortly afterwards, it has to be brought back in again! • The operating system should be clever enough not to swap out such a page / segment • A badly designed operating system can lead to a situation known as thrashing, where the operating system spends most of its time swapping pages in and out rather than doing useful processing • This is the subject of replacement algorithms; these algorithms try to exploit the principle of locality (see the sketch below)
Virtual memory… • Despite all the technical issues that come with virtual memory, two distinct advantages make it an attractive concept: • More processes may be maintained in main memory – a higher degree of multiprogramming • A process can be larger than the entire main memory available
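A sketch of one of the simplest replacement policies, FIFO (the lecture defers the real discussion to section 8.2); the frame count and reference string are made-up, and the example shows how ignoring locality causes a just-evicted page to fault straight back in:

```c
#include <stdio.h>

/* FIFO replacement sketch: the page that has been resident longest
 * is swapped out, regardless of how recently it was used. */
#define NUM_FRAMES 3

int main(void)
{
    int frames[NUM_FRAMES] = { -1, -1, -1 };   /* -1 = empty frame */
    int next_victim = 0;                       /* FIFO pointer     */
    int refs[] = { 1, 2, 3, 1, 4, 1, 2 };      /* page reference string */
    int faults = 0;

    for (int i = 0; i < (int)(sizeof refs / sizeof refs[0]); i++) {
        int hit = 0;
        for (int f = 0; f < NUM_FRAMES; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {                            /* page fault: evict FIFO victim */
            frames[next_victim] = refs[i];
            next_victim = (next_victim + 1) % NUM_FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);       /* 6 faults for this string */
    return 0;
}
```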