This chapter delves into advanced memory allocation schemes, such as demand paging and segmentation, that enhance system efficiency and reduce overhead. Learn about key tables, like JT, PMT, and MMT, used for memory mapping. Understand the advantages and disadvantages of paged memory allocation and demand paging. Explore the complexities of page resolution and address translation. Discover how page replacement policies impact system performance.
Chapter 3 - Memory Management, Recent Systems CIS106 Microcomputer Operating Systems Gina Rue CIS Faculty Ivy Tech State College Northwest Region 01
Memory Manager Early memory allocation schemes: • required storing entire programs in main memory in contiguous locations This caused problems: • fragmentation • overhead of relocation
Memory Manager More sophisticated memory allocation schemes: • remove the restriction of storing programs contiguously • eliminate the need for an entire program to reside in memory during execution
Memory Manager Four types of more sophisticated memory allocation schemes • paged • demand paging • segmented • segmented/demand paged See Fig. 3.1 p.43
Paged Memory Allocation • Based on the concept of dividing each incoming job into pages of equal size • page size is usually the same as the memory block size and the same as the sections of the disk (sectors) where a job is stored • page frames are the sections of main memory
Paged Memory Allocation • One sector will hold one page of job instructions and fit into one page frame of memory Before executing a program, the Memory Manager prepares it by: • determining the # of pages in the program; • locating enough empty page frames in main memory; • loading all of the program's pages into them (in “static” paging the pages need not be contiguous)
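A minimal Python sketch of the first preparation step; the 100-line page size and 350-line job are assumed values for illustration, not figures from the text:

```python
# Sketch: determining how many page frames an incoming job needs.
import math

PAGE_SIZE = 100   # lines per page -- assumed for illustration
job_size = 350    # total lines in the incoming job -- assumed

num_pages = math.ceil(job_size / PAGE_SIZE)   # 4 pages; the last page is only half full
print(f"A {job_size}-line job needs {num_pages} page frames of {PAGE_SIZE} lines each")
```

The partly empty last page is the internal fragmentation listed under the disadvantages below.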
Paged Memory Allocation Memory Manager uses tables to keep track of how a job’s pages fit into page frames in memory • Job Table (JT) • Page Map Table (PMT) • Memory Map Table (MMT) All three tables reside in the part of main memory that’s reserved for the OS See Fig. & Table 3.1 p.45
Paged Memory Allocation • Job Table (JT) Contains two entries for each active job: • Job size • memory location where the Page Map Table (PMT) is stored This dynamic list grows as jobs are loaded into the system and shrinks as they’re later completed See Table 3.1 p.45
Paged Memory Allocation • Page Map Table (PMT) Each active job has its own PMT that contains: • page # & its corresponding page frame memory address • one entry per page • page #s are sequential (Page 0, Page 1, Page 2…) The first entry in the PMT lists the page frame memory address for Page 0
Paged Memory Allocation • Memory Map Table (MMT) One entry for each page frame • lists the location and free/busy status of each one • at compilation time every job is divided into pages Example: Job1 - Page 0 contains the first hundred lines The displacement, or offset, of a line (how far it is from the beginning of its page) is used to locate that line within its page frame See Fig. 3.2 p.47
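An illustrative sketch of the three bookkeeping tables; the entries are invented examples, not values from the text or figures:

```python
# Job Table (JT): one entry per active job -> job size and where its PMT is stored
job_table = {
    "Job1": {"size": 400, "pmt_location": 3096},
}

# Page Map Table (PMT) for Job1: page # -> page frame holding that page
pmt_job1 = {0: 8, 1: 10, 2: 5, 3: 11}

# Memory Map Table (MMT): one entry per page frame -> free/busy status
memory_map = {frame: "free" for frame in range(12)}
for frame in pmt_job1.values():
    memory_map[frame] = "busy"
```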
Paged Memory Allocation Each time an instruction is executed or a data value is used, the OS (or hardware) must: • translate the job space address, which is relative • into a physical address, which is absolute This is known as “resolving the address” or address resolution
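A sketch of address resolution, assuming 100-line pages and the hypothetical PMT from the previous sketch:

```python
PAGE_SIZE = 100

def resolve(job_address, pmt, page_size=PAGE_SIZE):
    """Translate a relative job-space address into an absolute memory address."""
    page_number = job_address // page_size        # which page the line is on
    displacement = job_address % page_size        # offset within that page
    page_frame = pmt[page_number]                 # frame that holds the page
    return page_frame * page_size + displacement  # absolute address

pmt_job1 = {0: 8, 1: 10, 2: 5, 3: 11}
print(resolve(214, pmt_job1))   # line 214 -> page 2, offset 14 -> frame 5 -> 514
```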
Paged Memory Allocation • Advantages • allows jobs to be allocated in noncontiguous memory locations • memory is used more efficiently, more jobs can fit into main memory • Disadvantages • overhead is increased • internal fragmentation • difficult to determine best page size for optimal use of resources
Demand Paging • First widely used scheme to load only part of the program into memory for processing • Jobs are divided into equal-sized pages that initially reside in secondary storage • As a job begins to run, its pages are brought into memory only as they are needed • While one section of a job is being processed, all other modules are idle
Demand Paging • Has made virtual memory widely available • Allows users to run jobs with less memory • Gives the appearance of an almost-infinite (nonfinite) memory • Successful implementation requires the use of a high-speed direct access storage device that can work directly with the CPU
Demand Paging How and when are pages passed (or “swapped”)? • depends on predefined policies that determine when to make room for needed pages and how to do so • OS relies on tables (JT, PMT, MMT) to implement the algorithm
Demand Paging To move in a new page, a resident page may have to be swapped back into secondary storage; this requires close interaction between: • hardware components • software algorithms • policy scheme When a page is in secondary storage, but not in memory, the OS takes over to solve the problem • Page Interrupt Handler: if all page frames are busy, decides which page will be swapped out See Fig. 3.5 p.52
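A hypothetical sketch of the handler's decision flow; the table layout and the choose_victim policy are stand-ins for illustration, not a real OS interface:

```python
def handle_page_fault(page, pmt, memory_map, choose_victim):
    """Bring a missing page into memory, evicting a resident page if necessary."""
    free_frames = [f for f, status in memory_map.items() if status == "free"]
    if free_frames:
        frame = free_frames[0]                 # an empty page frame is available
    else:
        victim = choose_victim(pmt)            # replacement policy decides (e.g. FIFO, LRU)
        if pmt[victim]["modified"]:
            pass                               # victim was altered: write it back to secondary storage first
        frame = pmt[victim]["frame"]
        pmt[victim]["status"] = 0              # victim is no longer resident
    # load the requested page from secondary storage into the chosen frame
    pmt[page] = {"frame": frame, "status": 1, "modified": 0}
    memory_map[frame] = "busy"
    return frame
```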
Demand Paging Solution to inefficient memory utilization, but not problem free: • Thrashing occurs • when an excessive amount of page swapping takes place between main memory & secondary storage • when a page is removed from memory but is called back shortly after • Page Fault • a failure to find a page in memory See Fig. 3.6 p.54
Page Replacement Policies & Concepts Crucial to the efficiency of the system & the algorithm must be carefully selected Two most well-known algorithms • FIFO: first-in first-out policy based on the theory that the best page to remove is the one that has been in memory the longest • LRU: least recently used policy chooses the page that was least recently accessed to be swapped out
Page Replacement Policies & Concepts Two most well-known algorithms • FIFO: first-in first-out policy • There is no guarantee that buying more memory will always result in better performance (the FIFO or Belady’s anomaly) • LRU: least recently used policy • A stack algorithm removal policy, which functions in such a way that increasing main memory causes either a decrease in or the same # of page interrupts
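A sketch comparing the two policies on an invented page-reference string; the string and frame count are assumptions for illustration:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page interrupts under FIFO: evict the page resident the longest."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)              # oldest resident page leaves first
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page interrupts under LRU: evict the least recently used page."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False) # least recently used page leaves
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # hypothetical reference string
print("FIFO:", fifo_faults(refs, 3), "LRU:", lru_faults(refs, 3))
```

Rerunning the FIFO version with 4 frames on this same string yields more page interrupts than with 3 frames, which is exactly the FIFO (Belady's) anomaly noted above; LRU, as a stack algorithm, cannot behave that way.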
Page Replacement Policies & Concepts • Mechanics of Paging To determine which pages will be swapped, Memory Manager needs specific information included in the PMT See Table 3.5 p.58
Page Replacement Policies & Concepts • Each PMT entry includes 3 bits: • status: indicates whether the page is currently in memory • referenced: indicates whether the page has been “called” (referenced) recently • modified: indicates whether the contents of the page have been altered & is used to determine if the page must be written to secondary storage See Table 3.6 p.58
Page Replacement Policies & Concepts • Swapping pages • The FIFO algorithm uses only the modified and status bits • LRU looks at all three (status, modified, & referenced bits) before deciding which pages to swap out See Table 3.7 p.59
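An illustrative PMT entry carrying the three bits; the field names are invented for the sketch:

```python
pmt_entry = {
    "page_frame": 5,
    "status": 1,       # 1 = page currently resides in memory
    "referenced": 0,   # 1 = page was accessed ("called") recently
    "modified": 1,     # 1 = page contents were altered while in memory
}

def needs_write_back(entry):
    """A modified resident page must be copied back to secondary storage before eviction."""
    return entry["status"] == 1 and entry["modified"] == 1
```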
Page Replacement Policies & Concepts • The Working Set Improves the performance of demand paging schemes Is the set of pages residing in memory that can be accessed directly without incurring a page fault See Fig. 3.7 p.60
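A minimal sketch that treats a job's working set as the distinct pages it referenced within a recent window of its reference string; the window size is an assumed parameter:

```python
def working_set(refs, window=5):
    """Pages the job has touched in its last `window` references."""
    return set(refs[-window:])

refs = [1, 2, 3, 2, 1, 4, 4, 1, 2]   # hypothetical page-reference string
print(working_set(refs))             # pages that should stay resident: {1, 2, 4}
```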
Segmented Memory Allocation Segmenting is based on the common practice of programmers structuring their programs in modules (logical groupings of code) • each job is divided into several segments • a subroutine is an example of a logical group • memory segments are of different sizes • when a program is compiled or assembled, the segments are set up according to the program’s structural modules • each segment is numbered and a Segment Map Table (SMT) is generated for each job
Segmented Memory Allocation The Memory Manager needs to keep track of segments in memory with three tables, combining aspects of dynamic & demand paging memory management: • The Job Table lists every job in process (one for the whole system) • The Segment Map Table lists details about each segment (one for each job) • The Memory Map Table monitors the allocation of main memory (one for the whole system)
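A sketch of address resolution with a Segment Map Table; the segment names, base addresses, and sizes are invented values:

```python
smt_job1 = {
    0: {"base": 4000, "size": 350},   # main program
    1: {"base": 7000, "size": 200},   # subroutine A
    2: {"base": 9000, "size": 100},   # subroutine B
}

def resolve_segment(segment, displacement, smt):
    """Translate (segment #, displacement) into an absolute memory address."""
    entry = smt[segment]
    if displacement >= entry["size"]:             # bounds check: stay inside the segment
        raise ValueError("displacement outside segment")
    return entry["base"] + displacement

print(resolve_segment(1, 25, smt_job1))   # 7000 + 25 = 7025
```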
Segmented/Demand Paged Memory Allocation A combination of segmentation and demand paging • offers the logical benefits of segmentation • has the physical benefits of paging This scheme requires 4 tables: • JT lists every job in process • SMT lists details about each segment • PMT lists details about every page • MMT monitors the allocation of the page frames in main memory Associative memory is used to speed up the process
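A sketch of the two-level lookup this scheme implies, where an address is (segment #, page #, displacement); the tables and sizes are invented:

```python
PAGE_SIZE = 100

smt = {
    0: {"pmt": {0: 8, 1: 3}},   # segment 0's pages sit in frames 8 and 3
    1: {"pmt": {0: 5}},         # segment 1's single page sits in frame 5
}

def resolve(segment, page, displacement):
    frame = smt[segment]["pmt"][page]        # SMT lookup, then PMT lookup
    return frame * PAGE_SIZE + displacement  # absolute address within the frame

print(resolve(0, 1, 42))   # frame 3 -> absolute address 342
```

The associative memory mentioned above caches recent segment/page translations so that most references avoid repeating both table lookups.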
Virtual Memory • Gives the user the appearance that programs are completely loaded in main memory during the entire processing time • During the 2nd generation, programmers started dividing their programs into sections that resembled working sets, segments called “overlays” • the program could begin with only the first overlay loaded into memory
Virtual Memory • Works well in a multiprogramming environment • Waiting time isn’t lost • the CPU moves on to another job • Expands programming techniques for the development of large software systems because individual pieces can be created independently and linked together later
Virtual Memory Advantages • job size no longer restricted • memory used more efficiently • unlimited amount of multiprogramming • eliminates external fragmentation & minimizes internal fragmentation • allows the sharing of code and data • facilitates dynamic linking of program segments
Virtual Memory Disadvantages • Increased hardware costs • Increased overhead for handling paging interrupts • Increased software complexity to prevent thrashing
Summary • The Memory Manager has the task of allocating memory to each job to be executed and reclaiming it when execution is completed • Each scheme was designed to address a different set of processing problems, but as some were solved, others were created See Table 3.8 p.69 for a comparison