“Memory Management Function” Presented By Lect. Rimple Bala GPC, Amritsar
Contents • Basic memory management • Swapping • Virtual memory • Paging • Page replacement algorithms • Modeling page replacement algorithms • Design issues for paging systems • Implementation issues • Segmentation
Memory Management • Memory – a linear array of bytes • Holds O.S. and programs (processes) • Each cell (byte) is named by a unique memory address • Recall, processes are defined by an address space, consisting of text, data, and stack regions • Process execution • CPU fetches instructions from the text region according to the value of the program counter (PC) • Each instruction may request additional operands from the data or stack region
Memory Management • Ideally programmers want memory that is • large • fast • non-volatile • Memory hierarchy • small amount of fast, expensive memory – cache • some medium-speed, medium-priced main memory • gigabytes of slow, cheap disk storage • Memory manager handles the memory hierarchy
Multiprogramming with Fixed Partitions • Fixed memory partitions • separate input queues for each partition • single input queue
Modeling Multiprogramming CPU utilization as a function of number of processes in memory Degree of multiprogramming
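A minimal sketch of this model, assuming the standard approximation CPU utilization = 1 - p^n, where p is the fraction of time a process waits for I/O and n is the degree of multiprogramming; the 80% I/O-wait figure is only illustrative:

    # CPU utilization approximated as 1 - p**n, assuming independent I/O waits.
    def cpu_utilization(io_wait_fraction, n_processes):
        return 1 - io_wait_fraction ** n_processes

    # Illustrative run: processes that wait for I/O 80% of the time.
    for n in (1, 2, 4, 8):
        print(n, round(cpu_utilization(0.8, n), 2))   # utilization rises with n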
Names and Binding • Symbolic names → Logical names → Physical names • Symbolic Names: known in a context or path • file names, program names, printer/device names, user names • Logical Names: used to label a specific entity • job number, major/minor device numbers, process id (pid), uid • Physical Names: address of entity • inode address on disk or memory • entry point or variable address • PCB address
Address Binding • Address binding • fixing a physical address to the logical address of a process’ address space • Compile time binding • if program location is fixed and known ahead of time • Load time binding • if program location in memory is unknown until run-time AND location is fixed • Execution time binding • if processes can be moved in memory during execution • Requires hardware support
Compile Time, Load Time, and Execution Time Address Binding [Figure: the same program shown unrelocated (library routines at 0–175, jmp 175) and relocated to 1000–1175 (jmp 1175); with compile-time binding the target 1175 is fixed in the binary, with load-time binding jmp 175 is rewritten to 1175 when the program is loaded at 1000, and with execution-time binding the code keeps jmp 175 and a base register of 1000 supplies the relocation]
Dynamic relocation with a base register • Memory Management Unit (MMU) – dynamically converts logical addresses into physical addresses • MMU contains the base (relocation) register for the running process [Figure: the program-generated address is added to the relocation register for process i to form the physical memory address]
Protection using base & limit registers • Memory protection • Base register gives the starting physical address for the process • Limit register bounds the offsets accessible beyond the base (relocation) register [Figure: the logical address is compared with the limit register; if it is smaller, it is added to the base register to form the physical address, otherwise an addressing error is raised]
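A minimal sketch of the check-then-relocate step described above (the register values are illustrative assumptions):

    # Relocation with base/limit registers: reject out-of-range logical addresses,
    # otherwise add the base to form the physical address.
    def translate(logical_addr, base, limit):
        if logical_addr >= limit:          # offset beyond the process's region
            raise MemoryError("addressing error: logical address out of range")
        return base + logical_addr         # dynamic relocation by the MMU

    print(translate(175, base=1000, limit=500))   # -> 1175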
Logical vs. Physical Address Space • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management • Logical Address (virtual address): generated by the CPU • Physical Address: the address seen by the memory unit • Logical and physical addresses are the same under compile-time and load-time binding schemes • Logical and physical addresses differ under the execution-time address-binding scheme
Features of Memory Management • Relocation 1. Static 2. Dynamic • Protection • Sharing • Logical organization • Physical organization • Memory Compaction
Various Methods used by OS • Swapping • Virtual Memory • Paging • Segmentation
Swapping Memory allocation changes as • processes come into memory • processes leave memory Shaded regions are unused memory
Swapping • Allocating space for growing data segment • Allocating space for growing stack & data segment
Memory Management with Bit Maps • Part of memory with 5 processes, 3 holes • tick marks show allocation units • shaded regions are free • Corresponding bit map • Same information as a list
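A minimal sketch of allocation with a bit map (1 = allocated unit, 0 = free), assuming a simple scan for a run of k free allocation units:

    # Scan the bit map for k consecutive free (0) allocation units.
    def find_free_run(bitmap, k):
        run_start, run_len = 0, 0
        for i, bit in enumerate(bitmap):
            if bit == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len == k:
                    return run_start           # index of first unit in the hole
            else:
                run_len = 0
        return None                            # no hole large enough

    bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 1]
    print(find_free_run(bitmap, 3))            # -> 2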
Memory Management with Linked Lists Four neighbor combinations for the terminating process X
Managing memory with linked lists Searching the list for space for a new process • First Fit • Next Fit • Start from current location in the list • Best Fit • Find the smallest hole that will work • Tends to create lots of really small holes • Worst Fit • Find the largest hole • Remainder will be big • Quick Fit • Keep separate lists for common sizes
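A minimal sketch of first fit over a free list, with holes kept as (start, length) pairs; best fit would instead scan the whole list for the smallest adequate hole:

    # First fit: take the first hole large enough, split off the remainder.
    def first_fit(free_list, size):
        for i, (start, length) in enumerate(free_list):
            if length >= size:
                if length == size:
                    free_list.pop(i)                       # hole used up exactly
                else:
                    free_list[i] = (start + size, length - size)
                return start
        return None                                        # no hole big enough

    holes = [(0, 5), (14, 4), (29, 3)]
    print(first_fit(holes, 4), holes)          # -> 0 [(4, 1), (14, 4), (29, 3)]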
Virtual Memory • Virtual Memory • Separation of user logical memory from physical memory. • Only PART of the program needs to be in memory for execution. • Logical address space can be much larger than physical address space. • Need to allow pages to be swapped in and out. • Virtual Memory can be implemented via • Paging • Segmentation
Virtual Memory The position and function of the MMU
Paging The relation between virtual addresses and physical memory addresses, given by the page table
Page Tables Internal operation of MMU with 16 4-KB pages
Page Tables • 32 bit address with 2 page table fields • Two-level page tables Top-level page table
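A minimal sketch of how a 32-bit virtual address is split for a two-level table; the common 10/10/12 split with 4-KB pages is assumed here:

    # Split a 32-bit virtual address into two page-table indices and an offset.
    PT1_BITS, PT2_BITS, OFFSET_BITS = 10, 10, 12   # assumed field widths (4-KB pages)

    def split_virtual_address(va):
        offset = va & ((1 << OFFSET_BITS) - 1)
        pt2    = (va >> OFFSET_BITS) & ((1 << PT2_BITS) - 1)
        pt1    =  va >> (OFFSET_BITS + PT2_BITS)
        return pt1, pt2, offset

    print(split_virtual_address(0x00403004))       # -> (1, 3, 4)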
Page Tables Typical page table entry
TLBs – Translation Lookaside Buffers A TLB to speed up paging
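A minimal sketch of the lookup order: the TLB is consulted first, and the page table is walked only on a TLB miss (dictionaries stand in for the hardware structures; all numbers are assumptions):

    # Translate a virtual page number, consulting the TLB before the page table.
    def translate_page(vpn, tlb, page_table):
        if vpn in tlb:                       # TLB hit: no page-table walk needed
            return tlb[vpn]
        frame = page_table[vpn]              # TLB miss: walk the page table
        tlb[vpn] = frame                     # cache the translation for next time
        return frame

    tlb = {}
    page_table = {0: 2, 1: 1, 2: 6}
    print(translate_page(1, tlb, page_table))   # miss, then cached
    print(translate_page(1, tlb, page_table))   # hit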
Inverted Page Tables Comparison of a traditional page table with an inverted page table
Page Replacement Algorithms • Page fault forces choice • which page must be removed • make room for incoming page • Modified page must first be saved • unmodified just overwritten • Better not to choose an often used page • will probably need to be brought back in soon
Page Replacement Algorithms • Optimal Page Replacement • Not Recently Used (NRU) Page Replacement Algorithm • FIFO Page Replacement Algorithm • Second Chance Algorithm • Clock Page Replacement Algorithm • Least Recently Used (LRU)
Optimal Page Replacement Algorithm • Replace the page that will not be needed until the farthest point in the future • Optimal but unrealizable • Estimate by … • logging page use on previous runs of the process • although this is impractical
Not Recently Used Page Replacement Algorithm • Each page has a Reference bit and a Modified bit • bits are set when the page is referenced or modified • Pages are classified • not referenced, not modified • not referenced, modified • referenced, not modified • referenced, modified • NRU removes a page at random • from the lowest-numbered non-empty class
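A minimal sketch of NRU's classification by the (R, M) bits and a random pick from the lowest non-empty class; the page records are assumed to be (page, R, M) tuples:

    import random

    # Class number: 0 = not referenced/not modified ... 3 = referenced/modified.
    def nru_victim(pages):                    # pages: list of (page_id, R, M)
        classes = {0: [], 1: [], 2: [], 3: []}
        for page_id, r, m in pages:
            classes[2 * r + m].append(page_id)
        for c in range(4):                    # lowest-numbered non-empty class wins
            if classes[c]:
                return random.choice(classes[c])

    print(nru_victim([("A", 1, 1), ("B", 0, 1), ("C", 1, 0)]))   # -> "B"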
FIFO Page Replacement Algorithm • Maintain a linked list of all pages • in the order they came into memory • Page at the beginning of the list is replaced • Disadvantage • the page in memory the longest may be one that is often used
Second Chance Page Replacement Algorithm • Operation of second chance • pages sorted in FIFO order • Page list if a fault occurs at time 20 and A has its R bit set
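A minimal sketch of second chance: pages stay in FIFO order, but a page whose R bit is set has the bit cleared and is moved to the tail instead of being evicted (page records assumed as [page_id, R] pairs):

    from collections import deque

    # Second chance: FIFO, but a referenced page is given another pass.
    def second_chance_victim(queue):          # queue of [page_id, r_bit], oldest first
        while True:
            page_id, r_bit = queue[0]
            if r_bit:                         # referenced: clear R and move to tail
                queue.popleft()
                queue.append([page_id, 0])
            else:
                return queue.popleft()[0]     # oldest unreferenced page is evicted

    q = deque([["A", 1], ["B", 0], ["C", 1]])
    print(second_chance_victim(q))            # -> "B"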
Least Recently Used (LRU) • Assume pages used recently will be used again soon • throw out the page that has been unused for the longest time • Must keep a linked list of pages • most recently used at front, least at rear • update this list on every memory reference !! • Alternatively keep a counter in each page table entry • choose the page with the lowest counter value • periodically zero the counter
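A minimal sketch of LRU using an ordered structure that is updated on every reference; in hardware this bookkeeping would have to happen per memory reference, which is why pure LRU is expensive:

    from collections import OrderedDict

    # LRU: on a fault with full memory, evict the least recently referenced page.
    def lru_faults(references, frames):
        memory, faults = OrderedDict(), 0
        for page in references:
            if page in memory:
                memory.move_to_end(page)        # referenced again: now most recent
            else:
                faults += 1
                if len(memory) == frames:
                    memory.popitem(last=False)  # evict least recently used page
                memory[page] = True
        return faults

    print(lru_faults([0, 1, 2, 0, 3, 0, 4], frames=3))   # -> 5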
Modeling Page Replacement Algorithms Belady's Anomaly • FIFO with 3 page frames • FIFO with 4 page frames • P's show which page references cause page faults
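A minimal sketch that reproduces the anomaly by counting FIFO page faults for the classic reference string with 3 and then 4 frames, where more frames produce more faults:

    from collections import deque

    # Count FIFO page faults for a reference string with a given number of frames.
    def fifo_faults(references, frames):
        memory, faults = deque(), 0
        for page in references:
            if page not in memory:
                faults += 1
                if len(memory) == frames:
                    memory.popleft()            # evict the oldest resident page
                memory.append(page)
        return faults

    refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
    print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # -> 9 10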
Stack Algorithms State of the memory array, M, after each item in the reference string is processed
Page Size Small page size • Advantages • less internal fragmentation • better fit for various data structures, code sections • less unused program in memory • Disadvantages • programs need many pages, larger page tables
Page Size • Overhead due to page table and internal fragmentation: overhead = s·e/p + p/2, where the first term is the page table space and the second is the internal fragmentation loss • Optimized when p = √(2·s·e) • Where • s = average process size in bytes • p = page size in bytes • e = size of a page table entry in bytes
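A worked instance of the formula above (the numbers are only an illustrative assumption):

    from math import sqrt

    # Overhead = s*e/p + p/2, minimized at p = sqrt(2*s*e).
    s, e = 1_048_576, 8                 # assumed: 1-MB average process, 8-byte entries
    p_opt = sqrt(2 * s * e)
    print(p_opt)                        # -> 4096.0, i.e. a 4-KB page minimizes overhead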
Shared Pages Two processes sharing the same program and its page table
Implementation Issues Four times when OS involved with paging • Process creation • determine program size • create page table • Process execution • MMU reset for new process • TLB flushed • Page fault time • determine virtual address causing fault • swap target page out, needed page in • Process termination time • release page table, pages
Backing Store (a) Paging to static swap area (b) Backing up pages dynamically
Separation of Policy and Mechanism Page fault handling with an external pager
Segmentation • One-dimensional address space with growing tables • One table may bump into another
Segmentation Allows each table to grow or shrink independently
Segmentation with Paging: MULTICS A 34-bit MULTICS virtual address
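A minimal sketch of the split of the 34-bit MULTICS virtual address described here: an 18-bit segment number and, within the segment, a 6-bit page number and a 10-bit offset:

    # Split a 34-bit MULTICS virtual address: 18-bit segment, 6-bit page, 10-bit offset.
    SEG_BITS, PAGE_BITS, OFFSET_BITS = 18, 6, 10

    def split_multics_address(va):
        offset = va & ((1 << OFFSET_BITS) - 1)
        page   = (va >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
        seg    =  va >> (OFFSET_BITS + PAGE_BITS)
        return seg, page, offset

    print(split_multics_address((5 << 16) | (3 << 10) | 42))   # -> (5, 3, 42)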
Segmentation with Paging: MULTICS Conversion of a 2-part MULTICS address into a main memory address