Operating Systems Memory Management
Purpose of Memory Management • Provide the memory space to enable several processes to be executed at the same time. • Provide a satisfactory level of performance for the system users. • Protect each process from one another. • Enable sharing of memory space between processes. • Make the addressing of memory space transparent to programmers.
Caching • Main memory is the primary area for holding the instructions and data that processes are using. • The cache memory is smaller (< 64 kB) and faster than main memory. • Cache memory is used to store data and instructions that are currently being used, or are predicted to be used shortly. • The processor will look for information in the cache before looking in main memory.
Caching • Modern computer systems can make use of a cache in three places: • The Level One (L1) Cache is inside the CPU chip. • The Level Two (L2) Cache is external to the CPU chip but sits between the L1 Cache and the main memory. • A Hard Drive Cache is between the Hard Drive and the CPU.
Memory Management Algorithms • Single-Process System • The process to be executed is loaded into the free space area of the memory. • Used by MS-DOS. • Fixed Partition Memory • Divide memory into a number of fixed areas of different sizes. • Each area holds an active process. • Each area may contain unused space, resulting in internal fragmentation. • A process may not be able to run because it cannot find a large enough partition. • Used by IBM 360s.
Memory Management Algorithms • Variable Partition Memory • A process is allocated the exact amount of memory it requires. • Processes are loaded into consecutive areas until memory is full. • As a process terminates, the space it occupies is available for the use of a new process. • A new process may not need the entire free area, leaving holes, resulting in external fragmentation. • A process may not be able to run because there are not enough consecutive free areas.
Memory Management Algorithms • Simple Paging • Each process is divided into a number of fixed chunks, called pages. • Memory is divided into a set of page frames of the same size. • The size of a page is typically 4 kB. • A process does not have to be loaded into consecutive page frames. • Simple Paging is the most common algorithm in use.
Virtual Memory • Using simple paging a process can be loaded in parts. • Only the portions of a process that are being referenced at an instant need to be present in memory. • Therefore a process can have available more memory than is physically present. • The memory that a process has available to it is referred to as its virtual memory space.
Virtual Memory • The size of the virtual memory space is determined by the processor’s bit-width. • A 32-bit processor, like the Pentium IV, has 2^32 bytes or 4 GB of virtual memory. A 64-bit processor can have 2^64 bytes or 16 exabytes of virtual memory. • Most desktop computers have 512 MB (2^29 bytes) of physical memory (RAM) or less. • The operating system uses an area on disk called the paging file or swap file to allocate additional memory to processes.
Managing the Swap File in Windows • Microsoft recommends that the paging file size be at least 1.5 times the size of real memory. • On Windows 95/98 there was a choice between letting the OS automatically control the paging file size or specifying a custom size. • On Windows 2000, a custom paging file size is created by setting an initial and maximum size. • On Windows XP, either a custom, a system-managed, or no paging file can be selected.
Memory Addressing • A virtual memory address is in the form (p,o) • p is the number of the page containing the memory location • o is the offset of the location from the start of the page • For a 32-bit address space with a 4 kB page size: • o will occupy 12 bits (4 kB = 2^12 bytes) • p will occupy the remaining 20 bits.
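The (p,o) split above is just a bit-shift and a mask. A minimal sketch in Python (the helper name `split_address` is illustrative, not from any real API):

```python
PAGE_SIZE = 4096        # 4 kB page => 12-bit offset
OFFSET_BITS = 12

def split_address(vaddr):
    """Split a 32-bit virtual address into (page number p, offset o)."""
    p = vaddr >> OFFSET_BITS       # upper 20 bits: page number
    o = vaddr & (PAGE_SIZE - 1)    # lower 12 bits: offset within the page
    return p, o
```

For example, virtual address 0x00403ABC splits into page 0x00403 with offset 0xABC.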
The Memory Management Unit (MMU) • When a page is loaded into real memory, the virtual page number is translated into a physical page number. • This translation is done by a module of the CPU called the Memory Management Unit (MMU). • The MMU maintains page tables that map how the virtual page translates to a physical page. • The page tables also keep track of whether a virtual page is associated with a physical page and when it was last accessed.
Address Translation • The page number portion of the address is further sub-divided into a page directory index and a page table index. • The MMU first locates a table known as the page directory. • Each process has its own private page directory, and the address of that directory is stored in its PCB. • The MMU uses the page directory index to locate an entry in the page directory table. • The MMU retrieves from the page directory the location of the page table.
Address Translation • The MMU uses the page table index to locate an entry in the page table. • The MMU retrieves from the page table the address of the page frame in physical memory. • The MMU uses the page byte offset as an index into the physical page and isolates the data that the process wants to reference.
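The two-level walk described above can be sketched with Python dictionaries standing in for the hardware tables. This is a simplified model (a 10/10/12-bit split, with the tables as plain dicts), not a cycle-accurate description of any particular MMU:

```python
def translate(vaddr, page_directory):
    """Walk a two-level page table and return a physical address."""
    dir_index   = (vaddr >> 22) & 0x3FF   # top 10 bits: page directory index
    table_index = (vaddr >> 12) & 0x3FF   # next 10 bits: page table index
    offset      = vaddr & 0xFFF           # bottom 12 bits: byte offset

    page_table = page_directory[dir_index]   # step 1: locate the page table
    frame_base = page_table[table_index]     # step 2: locate the page frame
    return frame_base + offset               # step 3: index into the frame

# A process whose first virtual page maps to the frame at physical 0x5000:
directory = {0: {0: 0x5000}}
```

With this mapping, `translate(0x00000ABC, directory)` yields physical address 0x5ABC.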
Address Translation • Why a 3-step process? • Only a process' page directory must be fully defined. • Page tables are defined only as necessary. • If the majority of a process' 4 GB address space is unallocated, a significant saving in memory results because page tables are not allocated to define the unused space. • Otherwise 4 MB would be required to allocate each process’ page tables in a 32-bit address space.
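The 4 MB figure follows directly from the table sizes. A quick check of the arithmetic, assuming the common 4-byte entry size for a 32-bit system:

```python
ENTRY_SIZE = 4                 # bytes per page table entry (typical for 32-bit)
ENTRIES_PER_TABLE = 1 << 10    # 10-bit page table index => 1024 entries
NUM_TABLES = 1 << 10           # 10-bit page directory index => 1024 tables

# Fully populating every page table for a 4 GB address space:
full_tables_bytes = NUM_TABLES * ENTRIES_PER_TABLE * ENTRY_SIZE
# 1024 tables x 1024 entries x 4 bytes = 4 MB per process
```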
Translation Look-Aside Buffer (TLB) • The three-step translation process would cause a system's performance to be unbearably poor if the process occurred on every memory access. • Instead the processor has a translation look-aside buffer (TLB) which stores the most recent virtual page to physical page translation. • When a process makes a memory reference, the MMU takes the virtual page number and simultaneously compares it with the virtual page number of every translation pair stored in the TLB. • If there is a match, the MMU can bypass the page directory and page table lookups because it has already obtained the page frame number from the TLB.
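The TLB fast path can be sketched as a dictionary consulted before the table walk. The helper names (`lookup`, `walk_page_tables`) and the dict-based tables are illustrative; real TLBs compare all entries in parallel in hardware:

```python
def walk_page_tables(page, page_directory):
    """Stand-in for the two-level directory/table walk (illustrative)."""
    return page_directory[page >> 10][page & 0x3FF]

def lookup(vaddr, tlb, page_directory):
    """Check the TLB first; fall back to the full table walk on a miss."""
    page = vaddr >> 12
    offset = vaddr & 0xFFF
    if page in tlb:                     # TLB hit: skip directory and table
        return tlb[page] + offset
    frame = walk_page_tables(page, page_directory)
    tlb[page] = frame                   # cache the translation for next time
    return frame + offset
```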
Mechanics of Virtual Memory • If a process needs an instruction or data from page p1 which is not already in memory, a page fault is generated and the OS does the following: • Find out where the contents of page p1 are stored on disk. • Use a page replacement algorithm to choose another page p2, mapped to some frame f of physical memory. • Copy the contents of frame f out to disk. • Update the page tables to indicate that p2 is no longer associated with a physical page. • Copy p1's data from disk to frame f. • Update the page tables so that p1 is mapped to frame f.
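The steps above can be sketched as a fault handler. All names here (`handle_page_fault`, `choose_victim`, the dict-based tables) are hypothetical, and the victim policy is deliberately trivial:

```python
def choose_victim(page_table):
    """Trivial stand-in policy: evict an arbitrary resident page."""
    p2, f = next(iter(page_table.items()))
    return p2, f

def handle_page_fault(p1, page_table, frame_table, disk):
    """Sketch of the page-fault steps; page_table maps page -> frame."""
    data = disk[p1]                      # 1. locate p1's contents on disk
    p2, f = choose_victim(page_table)    # 2. pick victim p2 in frame f
    disk[p2] = frame_table[f]            # 3. write frame f back to disk
    del page_table[p2]                   # 4. p2 no longer has a physical page
    frame_table[f] = data                # 5. load p1's data into frame f
    page_table[p1] = f                   # 6. map p1 to frame f
```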
Locality of Reference • Processes exhibit a characteristic known as locality of reference. • Over intervals of time address references made by a process tend to cluster around narrow ranges contained in a few pages. • If memory references jumped around virtual space at random, there would be a disk read and write for each new reference and virtual memory would be as slow as a disk.
Page Replacement Policy • Pages must be removed whenever physical memory is full of in-use pages and a process requires a page not in physical memory. • Two characterizations for replacement policies are global and local. • In a global replacement policy, the MMU considers all pages of physical memory as replacement candidates. • In a local replacement policy, the MMU considers as replacement candidates only pages belonging to the process that is accessing the page to be brought in.
Page Replacement Policy • Page Replacement Policies are further characterized by the algorithm used to choose a page to remove. • First-in First-Out (FIFO) algorithm • Select for removal the page which has been in memory for the longest time. • Implemented as a linked-list queue; the page at the head of the queue is the oldest. • A heavily used page will be periodically removed, thereby resulting in a page fault the next time it is needed.
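FIFO replacement maps naturally onto a queue. A minimal sketch using Python's `collections.deque` (the function names are illustrative):

```python
from collections import deque

def fifo_replace(queue):
    """Evict the page at the head of the queue (the oldest resident)."""
    return queue.popleft()

def fifo_load(queue, page, capacity):
    """Load a page, evicting the oldest if memory is full.
    Returns the evicted page, or None if no eviction was needed."""
    victim = fifo_replace(queue) if len(queue) >= capacity else None
    queue.append(page)
    return victim
```

Note how the eviction decision ignores usage entirely, which is exactly why a heavily used page can still be thrown out.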
Page Replacement Policy • Clock or Second Chance algorithm • Variation of FIFO that uses a circular queue. • An entry in the queue has a “used” bit. • When a page is first loaded, its used bit is set to zero. • When the page frame is subsequently referenced, its used bit is set to one. • When a page replacement is required, go round the list until a used bit of zero is found, changing all intermediate used bits from one to zero. • As its name suggests, second chance gives every page a second chance: an old page that has been referenced is probably still in use, and should not be swapped out ahead of a page that has not been referenced.
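The circular scan can be sketched as a list of pages with a parallel list of used bits and a "clock hand" index. All names here are illustrative:

```python
def clock_replace(frames, used, hand):
    """frames: circular list of resident pages; used: their used bits;
    hand: current clock-hand index. Returns (victim page, its index)."""
    while used[hand]:
        used[hand] = 0                  # spend this page's second chance
        hand = (hand + 1) % len(frames)
    return frames[hand], hand           # first page found with used bit 0
```

A page referenced since the last sweep survives one pass of the hand; a page that stays unreferenced for a full revolution is evicted.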
Page Replacement Policy • Least Recently Used (LRU) algorithm • Select for replacement the page whose time since last reference is greatest. • Requires that a time stamp recording is made for a page frame at the time of each reference. • The overhead of maintaining a time stamp and finding the oldest value is very high.
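A sketch of LRU's bookkeeping, which also makes its cost visible: every reference updates a time stamp, and eviction scans for the minimum. Names are illustrative:

```python
import time

def touch(last_ref, page):
    """Record a reference; this per-access time stamp is the costly part."""
    last_ref[page] = time.monotonic()

def lru_victim(last_ref):
    """Evict the page whose last reference is oldest."""
    return min(last_ref, key=last_ref.get)
```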
Page Replacement Policy • Not Recently Used (NRU) algorithm • Each page frame has associated with it a “page referenced” bit. • At intervals, the OS resets all of these bits to zero. • A subsequent reference to a page sets its page referenced bit to one. • When a page fault occurs, a page with the bit set to zero is selected for replacement. • If a page fault occurs right after the bits are reset, the OS cannot determine which pages were recently used.
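NRU reduces to a single bit per frame. A minimal sketch with a dict mapping page to referenced bit (names are illustrative):

```python
def reset_bits(referenced):
    """Periodic OS tick: clear every page's referenced bit."""
    for page in referenced:
        referenced[page] = 0

def nru_victim(referenced):
    """Pick any page whose referenced bit is 0, or None if all are set."""
    for page, bit in referenced.items():
        if bit == 0:
            return page
    return None
```

The `None` case corresponds to the weakness noted above: immediately after a reset, the bits carry no information about recent use.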
Thrashing • Thrashing occurs when the total memory requirement of all the processes is significantly greater than physical memory. • The CPU spends more time swapping pages than doing productive work. • This is usually an indication that the computer requires more physical memory (RAM).