Phones OFF Please
Memory Management
Parminder Singh Kang
Home: www.cse.dmu.ac.uk/~pkang  Email: pkang@dmu.ac.uk
Concept of Memory Management • CPU scheduling improves both CPU utilization and system response time. • Hence, to maintain performance, there must be several processes in memory. • The choice of memory management algorithm for a specific system depends on many factors, especially the hardware design of the system. • To execute an instruction, the CPU follows a defined fetch, decode and execute cycle; once the instruction has executed, results may be stored back in memory.
2. Logical and Physical Address Space • A logical address (virtual address) is the address generated by the CPU, whereas a physical address is the address loaded into the memory address register; in other words, the address seen by the memory unit. • Compile-time and load-time binding generate identical logical and physical addresses. • Execution-time address binding results in different logical and physical addresses. • The memory management unit (MMU) performs the run-time mapping from virtual to physical addresses. • How does logical-to-physical address mapping work?
[Diagram: the CPU issues a logical address; the MMU adds the relocation register (memory offset) to it, producing the physical address sent to memory.]
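The relocation-register mapping can be sketched in a few lines. This is a minimal illustration, not the slides' own code; the register value and function name are made up for the example.

```python
# Minimal sketch of logical-to-physical mapping with a relocation
# (base) register. The register value 14000 is chosen for illustration.
RELOCATION_REGISTER = 14000

def to_physical(logical_address):
    """The MMU adds the relocation register to every CPU-generated address."""
    return RELOCATION_REGISTER + logical_address

# A logical address of 346 maps to physical address 14000 + 346 = 14346.
```

Every address the process uses is relative to zero; only the MMU ever sees the real base, so the process can be relocated by changing one register.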
3. Dynamic Loading • Why dynamic loading? • Data and program must be in physical memory for execution, but the size of a process is limited by the amount of physical memory available at a given instant. • Dynamic loading is used for better space utilization: a routine is not loaded until it is called. • How does it work? • All routines are kept on disk in relocatable format. When a routine is needed, the calling routine first checks whether the called routine is already loaded. • If not, the required routine is loaded by the relocatable link loader, and the process table is updated to reflect the change. • The advantage of dynamic loading is that an unused routine is never loaded.
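The check-then-load sequence above can be sketched as follows. The names (`load_from_disk`, `process_table`, `call_routine`) are hypothetical stand-ins, and the "loader" is simulated; the point is only the load-on-first-call control flow.

```python
# Sketch of dynamic loading: a routine is loaded from disk only on its
# first call. A dict stands in for the process table of loaded routines.
process_table = {}          # routine name -> loaded routine

def load_from_disk(name):
    # Stand-in for the relocatable link loader.
    return lambda: f"{name} result"

def call_routine(name):
    if name not in process_table:                   # not loaded yet?
        process_table[name] = load_from_disk(name)  # load and record it
    return process_table[name]()                    # execute the routine
```

A routine that is never called (say `"search"`) never appears in the table, which is exactly the space advantage the slide describes.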
4. Swapping • Concept: • A process needs to be in memory to be executed. A process can be swapped temporarily out of memory to backing storage (e.g. a hard disk drive) and later loaded into memory again for continued execution. • Selecting a process for swapping: • selection according to priorities: a higher-priority process executes before a lower-priority process. • Selecting the memory location: • the same location (if binding is done at load time or assembly time); • a different location (if binding is done at execution time).
[Diagram: process P1 is swapped out of user space to the hard disk drive; process P2 is swapped in.] • The major part of swap time is transfer time. • To reduce swap time, one can limit swapping to the processes that require it, instead of swapping all processes.
5. Virtual Memory • Need for virtual memory: • Sophisticated computer applications (e.g. engineering CAD) have always required more main memory; the idea is to give each user, and each of their processes, as much memory as they like - almost. • Concept: • Virtual memory exploits a phenomenon known as locality of reference, in which memory references for both instructions and data tend to cluster: • (a) instruction execution is localized within loops or heavily used subroutines, and • (b) data manipulation works on local variables or on tables or arrays of information.
Implementation: • Virtual memory uses a technique called paging: program and data are broken down into 'pages'. • Pages are stored on the HDD and brought into main memory as required, then 'swapped' out when main memory is full. • Problem (thrashing): • As the number and/or size of concurrent programs increases, the system spends all its time swapping pages to and from disk. • It is therefore important to configure sufficient physical memory even under a virtual memory environment. • This problem often becomes apparent over a period of time as new releases of software (including the operating system) are installed. • Note: an OS specification always defines the minimum memory required.
Example: • A process may access a very large address space, with portions of this address space brought into real memory as required. [Diagram: pages in the virtual address space, some mapped to frames in real memory.] • The key aspect of virtual memory is that the addresses the program references - instruction addresses and data addresses - are virtual addresses, mapped onto real addresses at run time.
Role of the MMU • Convert addresses in the user's program (in the virtual address space) into real addresses in physical memory. • Generate a 'page fault' when the user's program, executing in real memory, attempts to access a logical address which is not in physical memory; the operating system then reads the page in from disk (swapping out a page if necessary, e.g. using a replacement strategy such as least recently used).
6. Paging • The virtual address space is split into fixed-size pages. • The page table, indexed by page number, keeps track of which pages are present. • Physical memory is divided into fixed-size blocks called frames, and virtual memory is divided into blocks of the same size called pages. • When a process executes, its pages are loaded into available memory frames. • Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). • The page number is an index into the page table, which contains the base address of each page in physical memory. • This base address is combined with the page offset to form the physical memory address that is sent to the memory unit.
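The (p, d) split and table lookup described above can be sketched directly. The page size and the page-table contents here are invented for the example; real hardware does the same arithmetic with shifts and masks.

```python
# Sketch of paged address translation: split the logical address into
# page number p and offset d, look up the frame, recombine.
PAGE_SIZE = 1024

page_table = {0: 7, 1: 3, 2: 9}   # page number -> frame number (illustrative)

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)   # split into (p, d)
    frame = page_table[p]                       # index into the page table
    return frame * PAGE_SIZE + d                # frame base combined with offset
```

For example, logical address 1030 is page 1, offset 6; page 1 lives in frame 3, so the physical address is 3 x 1024 + 6 = 3078.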
7. Segmentation • In segmentation, a program is logically partitioned, with segments representing procedures/functions or data structures (of varying length). • Advantages: • Segments tend to be referenced as a whole for a period of time; hence logical units are swapped in/out as a whole. • This also facilitates sharing of objects among multiple processes. • Access restrictions apply to whole segments: • read-only for procedures and constant data areas; • read/write for variable data. • Many memory management strategies use a combination of segmentation and paging: the program is first divided into logical segments, and then fixed-length paging is applied to the segments.
[Diagram: the CPU issues a logical address (s, d); segment number s indexes the segment table, which holds a limit and a base for each segment. If d < limit, the physical address is base + d; otherwise an addressing error (trap) occurs.]
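The segment-table check drawn above amounts to a bounds test followed by an add. This is a sketch with an invented segment table; the choice of exception type is also just for illustration.

```python
# Sketch of segmented address translation: segment number s selects a
# (limit, base) pair; the offset d must be within the segment's limit.
segment_table = {0: (1000, 1400), 1: (400, 6300)}   # s -> (limit, base)

def translate(s, d):
    limit, base = segment_table[s]
    if d >= limit:                      # offset beyond the segment's length
        raise MemoryError("addressing error (trap to OS)")
    return base + d                     # valid: physical address = base + d
```

Because limits are per segment, the hardware enforces the per-segment access boundaries the slide mentions as a side effect of translation.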
Keeping Track of Free Memory • Using linked lists. • First fit: • This technique uses two linked lists arranged in ascending address order: • the free list: each node holds the start address and size of a free memory block; • the in-use list: each node holds the start address and size of a used block. • A process requests the allocation of a memory block by calling a function allocate_memory; an outline algorithm would be: • search the free list for a block >= the required size • if found: • subtract the required size from the size of the free block • if the remainder = 0: remove the block from the free list • else: update the start address and size of the new free block • create a new node in the in-use list
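The outline above can be sketched with ordinary Python lists standing in for the two linked lists (each entry a `(start, size)` pair). The initial list contents are invented for the example, and failure handling is reduced to returning `None`.

```python
# Sketch of first-fit allocation over a free list and an in-use list,
# both kept in ascending address order.
free_list = [(0, 100), (300, 50)]   # (start, size) of each free block
in_use_list = []                    # (start, size) of each allocated block

def allocate_memory(size):
    for i, (start, block_size) in enumerate(free_list):
        if block_size >= size:                  # first block big enough
            if block_size == size:
                free_list.pop(i)                # remainder = 0: remove node
            else:                               # shrink the free block
                free_list[i] = (start + size, block_size - size)
            in_use_list.append((start, size))   # new node in the in-use list
            return start
    return None                                 # no block fits
```

"First fit" simply takes the lowest-addressed block that is large enough; best-fit and worst-fit differ only in how this loop chooses among candidates.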
• Similarly, to de-allocate a memory block in the in-use list (assuming the complete block is de-allocated), a call to release_memory is made: • remove the node from the in-use list • look down the free list and create a new node • if possible, merge with the previous node • if possible, merge with the next node
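The release-and-merge steps can be sketched as below. This version is self-contained (the lists are passed in as `(start, size)` pairs) and coalesces neighbours in a single pass after inserting the freed block in address order.

```python
# Sketch of release_memory: move a block from the in-use list back to
# the free list, then merge any adjacent free blocks that touch.
def release_memory(free_list, in_use_list, start):
    size = dict(in_use_list)[start]        # find the block being freed
    in_use_list.remove((start, size))      # remove node from in-use list
    free_list.append((start, size))
    free_list.sort()                       # keep ascending address order
    merged = []
    for s, sz in free_list:
        if merged and merged[-1][0] + merged[-1][1] == s:
            prev_s, prev_sz = merged.pop() # previous block ends where this starts:
            merged.append((prev_s, prev_sz + sz))  # merge the two
        else:
            merged.append((s, sz))
    free_list[:] = merged
```

Merging with both the previous and the next neighbour is what prevents the free list from fragmenting into many tiny adjacent blocks.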
Page Replacement Algorithms • When a process attempts to access a logical page which is not in physical memory, a page fault occurs. • The process is stopped, the OS reads the required page into physical memory, and the process then continues. • If memory is full, the OS must first discard a page or swap it to disk before reading the required page from disk. • The optimal solution is to replace the page that will not be used for the longest time; since this requires knowledge of the future, practical algorithms approximate it. • How to select a page for replacement: • Each page in memory usually has two bits associated with it: • R (referenced bit): set when the page is accessed (read/written); • M (modified bit): set when the page is written to.
• These bits are set by the MMU hardware when a page is read/written and cleared by the OS; in particular, the OS clears the R bits on every clock interrupt. • When a page fault occurs there are four possible classes of page: • R = 0, M = 0: not referenced, not modified • R = 0, M = 1: not referenced, but modified • R = 1, M = 0: referenced, but not modified • R = 1, M = 1: referenced and modified • NRU (Not Recently Used) algorithm: • if there are any R = 0, M = 0 pages, swap one of these • else if there are any R = 0, M = 1 pages, swap one of these • else if there are any R = 1, M = 0 pages, swap one of these • else swap one of the R = 1, M = 1 pages • Note that any page with M = 0 can simply be overwritten, since it has not been written to since being read from disk.
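The four-class NRU selection above is a short cascade of checks. In this sketch `pages` is a hypothetical map from page number to its `(R, M)` bit pair; the hardware bits themselves are not modelled.

```python
# Sketch of NRU victim selection: pages fall into four classes by their
# (R, M) bits; evict any page from the lowest non-empty class.
def nru_victim(pages):
    # Class preference order: (0,0) < (0,1) < (1,0) < (1,1)
    for cls in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        candidates = [p for p, bits in pages.items() if bits == cls]
        if candidates:
            return candidates[0]   # any page in this class will do
```

Because the OS clears R on every clock interrupt, class (0, 1) genuinely can occur: a page modified earlier but not touched in the current clock interval.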
• FIFO (First In, First Out) algorithm: swap the page that has been in memory longest. • LRU (Least Recently Used) algorithm: • The idea is to swap out or discard the page that has been least recently used. • On a page fault, swap the page with the lowest count. • Implemented using the MMU. • NFU (Not Frequently Used) algorithm: • Similar to LRU but implemented in software: • the OS adds the R bit to the page's counter on each clock interrupt; • on a page fault, swap the page with the lowest count. • LRU and NFU problem: • NFU never forgets, so a page that was heavily used long ago keeps a high count and is never swapped. • A modified LRU/NFU introduces aging: on every clock interrupt the counter is shifted right one bit (dividing it by two) and the R bit is then added at the top. The penalty is that this takes processor power.
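The aging variant can be sketched as two small functions. The counter width, the dict-based state, and the function names are all assumptions for the example; a real OS would keep these counters per page-table entry.

```python
# Sketch of NFU with aging: on each clock tick every counter is shifted
# right one bit and the page's R bit is added as the new top bit; on a
# page fault the page with the lowest counter is the victim.
COUNTER_BITS = 8

def tick(counters, r_bits):
    for page in counters:
        counters[page] >>= 1                         # age: divide count by two
        if r_bits.get(page):                         # referenced this interval?
            counters[page] |= 1 << (COUNTER_BITS - 1)
        r_bits[page] = 0                             # OS clears R each interrupt

def victim(counters):
    return min(counters, key=counters.get)           # lowest count = least used
```

The right-shift is what makes old references fade: a page referenced in every recent tick always outranks one referenced often but long ago.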
8. Thrashing • If a process does not have enough frames at a given time t, it will quickly page fault. At that point it must replace some page. • Since all its pages are in active use, it quickly faults again, and again. • The process continues to fault, replacing pages that it then faults on and brings back in right away. • This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing. • The operating system monitors CPU utilization. If CPU utilization is too low, introducing a new process can increase the degree of multiprogramming. • However, the degree of multiprogramming is limited by thrashing.
[Figure: CPU utilization plotted against degree of multiprogramming; utilization rises with more processes, peaks, then collapses once thrashing sets in.]