Learn about organizing main memory, memory management techniques like paging and segmentation, and details of the Intel Pentium processor supporting both. Understand address spaces, relocation, loading, linking, and swapping in memory management.
Chapter 8: Main Memory Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium
Objectives To provide a detailed description of various ways of organizing memory hardware To discuss various memory-management techniques, including paging and segmentation To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
Background Program must be brought (from disk) into memory and placed within a process for it to be run. Main memory and registers are the only storage the CPU can access directly. Register access takes one CPU clock cycle (or less); main memory can take many cycles. A cache sits between main memory and CPU registers. Protection of memory is required to ensure correct operation.
Base and Limit Registers A pair of base and limit registers define the logical address space
Types of Addresses Used in a Program Symbolic addresses: the addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space, e.g. an integer variable called count. Relative addresses: at compile time, the compiler converts symbolic addresses into relative addresses, e.g. 10 bytes from the start of this module. Physical (absolute) addresses: the loader generates these at the time the program is loaded into main memory.
Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can happen at three different stages. Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Load time: relocatable code must be generated if the memory location is not known at compile time. Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps (e.g., base and limit registers).
Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. Logical address – generated by the CPU; also referred to as a virtual address. Physical address – the address seen by the memory unit.
Memory-Management Unit (MMU) Hardware device (MMU) maps virtual to physical address In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory The user program deals with logical addresses; it never sees the real physical addresses
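The relocation-register scheme above can be sketched in a few lines. This is a toy model, not any real MMU interface; the register values are made up for illustration, and the limit check from the protection discussion is included so out-of-range addresses trap:

```python
# Sketch of MMU dynamic relocation (hypothetical values): every logical
# address generated by the CPU is checked against the limit register,
# then the relocation (base) register is added to form the physical address.

RELOCATION_REGISTER = 14000   # assumed base of the process in physical memory
LIMIT_REGISTER = 3000         # assumed size of the process's logical space

def mmu_translate(logical_address):
    """Map a logical address to a physical one, trapping on overflow."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: addressing error beyond limit register")
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))   # 14346: logical address 346 relocated past base 14000
```

The user program only ever sees logical addresses 0..2999; the physical placement can change just by reloading the relocation register.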
Dynamic Loading A routine is not loaded until it is called. Better memory-space utilization: an unused routine is never loaded. Useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; it is implemented through program design.
Dynamic Linking Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. Operating-system support is needed to check whether the routine is in the process's memory space. Dynamic linking is particularly useful for libraries; such a system is also known as shared libraries.
Swapping Say we have Round Robin CPU scheduling, and when the time quantum expires the scheduler calls the dispatcher. The dispatcher checks whether the next process in the ready queue is in memory. If not, and there is no free memory, the dispatcher swaps out a process currently in memory and swaps in the desired process from the backing store. Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
Drawback of Swapping Context-switch time is high with swapping. E.g. process size = 10 MB and the backing store is a hard disk with a transfer rate of 40 MB per second. Transfer time = 10 MB / (40 MB/second) = 1/4 second = 250 milliseconds. With an average latency of 8 milliseconds, the one-way swap time = 250 + 8 = 258 milliseconds, so total swap time (swap out plus swap in) = 258 + 258 = 516 milliseconds. For efficient CPU utilization we need execution time much larger than swap time, i.e. time quantum >> 0.516 seconds. Swapping is used in few systems: it requires too much swap time and provides too little execution time to be a reasonable memory-management solution. In UNIX, swapping is normally disabled; it starts only if many processes are running and using more than a threshold amount of memory.
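The swap-time arithmetic above can be reproduced directly; the numbers are the slide's example values (10 MB process, 40 MB/s disk, 8 ms latency):

```python
# Swap-time arithmetic from the example: transfer time plus disk latency,
# doubled because a swap is one transfer out and one transfer in.

process_size_mb = 10
transfer_rate_mb_per_s = 40
latency_ms = 8

transfer_ms = process_size_mb / transfer_rate_mb_per_s * 1000  # 250.0 ms
one_way_swap_ms = transfer_ms + latency_ms                     # 258.0 ms
total_swap_ms = 2 * one_way_swap_ms                            # out + in

print(transfer_ms, one_way_swap_ms, total_swap_ms)  # 250.0 258.0 516.0
```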
Contiguous Allocation Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory. Relocation registers are used to protect user processes from each other, and to keep them from changing operating-system code and data. The base (relocation) register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register. When the CPU scheduler selects a process for execution, the dispatcher loads these two registers as part of the context switch. Every address issued by the CPU is checked against these registers, protecting the OS and other user programs and data from being modified by this process. The MMU maps logical addresses dynamically.
Contiguous Allocation (Cont) Multiple-partition allocation: allocate available memory to the various processes waiting to be brought into memory. Hole – block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information in a table about: a) allocated partitions, b) free partitions (holes). [Figure: successive memory snapshots showing the OS in low memory, with processes 8, 9, and 10 entering and leaving holes between process 5 and process 2]
Dynamic Storage-Allocation Problem How to satisfy a request of size n from a list of free holes? First-fit: allocate the first hole that is big enough. Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size; produces the smallest leftover hole. Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
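The three placement strategies can be sketched over a free list of (start, size) holes. This is a minimal illustration with made-up hole values; a real allocator would also split the chosen hole and coalesce neighbors on free:

```python
# Placement strategies for the dynamic storage-allocation problem.
# Each returns the index of the chosen hole, or None if none fits.

def first_fit(holes, n):
    for i, (_, size) in enumerate(holes):
        if size >= n:          # take the first hole big enough
            return i
    return None

def best_fit(holes, n):
    fits = [i for i, (_, size) in enumerate(holes) if size >= n]
    return min(fits, key=lambda i: holes[i][1], default=None)  # smallest fit

def worst_fit(holes, n):
    fits = [i for i, (_, size) in enumerate(holes) if size >= n]
    return max(fits, key=lambda i: holes[i][1], default=None)  # largest fit

holes = [(100, 500), (700, 200), (1000, 300)]   # hypothetical free list
print(first_fit(holes, 250), best_fit(holes, 250), worst_fit(holes, 250))
# 0 2 0
```

For a 250-byte request, first fit and worst fit both pick the 500-byte hole, while best fit picks the 300-byte hole, leaving only a 50-byte sliver behind.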
Fragmentation External fragmentation – total memory space exists to satisfy a request, but it is not contiguous. As processes are loaded and removed from memory, the free memory space is fragmented into a large number of small holes. E.g. in figure (a) total external fragmentation = 260 K (too small to satisfy P4 and P5). In figure (c) total external fragmentation = 300 K + 260 K = 560 K. This space is large enough to run P5 (which needs 500 K), but the free memory is not contiguous. Statistical analysis of first fit, for example, shows that about one-third of memory may be unusable due to external fragmentation.
Fragmentation Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. E.g. if we have a hole of 18,464 bytes and the next process requests 18,462 bytes, we would be left with a hole of 2 bytes. The overhead of keeping track of this tiny hole is greater than the hole itself, so the entire 18,464-byte hole is allocated to the process. The difference between the allocated and requested memory (2 bytes) is internal fragmentation.
Fragmentation Reduce external fragmentation by compaction Shuffle memory contents to place all free memory together in one large block Compaction is possible only if relocation is dynamic, and is done at execution time
Compaction Example In (b) we moved 600K In (c) we moved 400K by moving job4 before job3 In (d) we moved 200K by moving job3 after job4 Selecting an optimal compaction strategy is quite difficult
Paging The logical address space of a process can be noncontiguous; a process is allocated physical memory wherever it is available (this is another solution to external fragmentation). Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes). Divide logical memory into blocks of the same size called pages. Keep track of all free frames. To run a program of size n pages, find n free frames and load the program. Set up a page table to translate logical to physical addresses. Internal fragmentation can still occur.
Paging An address generated by the CPU is divided into: Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory. Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit. For a given logical address space of 2^m bytes and page size of 2^n bytes, the page number p occupies the high-order m − n bits and the page offset d the low-order n bits.
Paging Page sizes are always a power of 2. Why? Selecting a power of 2 as the page size makes translating a logical address into a page# and page offset easy: if the logical address space is 2^m and the page size is 2^n, then the high-order m − n bits of a logical address designate the page# and the low-order n bits designate the page offset. Say the logical address space = 8K = 8,192 bytes, i.e. 2^13, and the page size = 1K = 1,024 bytes = 2^10. Then 13 − 10 = 3 bits indicate the page# (2^3 = 8 pages, i.e. pages 0–7).
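Because the page size is a power of 2, the split is just a shift and a mask. A sketch using the slide's numbers (13-bit logical addresses, 1 KB pages, so n = 10 offset bits and m − n = 3 page-number bits):

```python
# Split a logical address into (page number, offset) with shift and mask.
# Works because PAGE_SIZE is a power of two, so PAGE_SIZE - 1 is an
# all-ones offset mask.

PAGE_SIZE = 1024      # 2**10 bytes per page
OFFSET_BITS = 10      # n = 10

def split(logical_address):
    page_number = logical_address >> OFFSET_BITS
    offset = logical_address & (PAGE_SIZE - 1)
    return page_number, offset

print(split(3500))   # (3, 428), since 3500 = 3 * 1024 + 428
```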
Paging Example 32-byte memory and 4-byte pages
Paging There is no external fragmentation with paging, since any free frame can be allocated to a process that needs it. There can be internal fragmentation: if the memory requirements of a process do not fall on page boundaries, the last frame may not be completely full. E.g. page size = 2K = 2,048 bytes and process size = 72,766 bytes. This needs 35 full frames + 1,086 bytes, so the process is allocated 36 frames (36 × 2,048 = 73,728 bytes), and 73,728 − 72,766 = 962 bytes are wasted to internal fragmentation. The worst case is a process needing n frames + 1 byte: it is allocated n + 1 frames, wasting almost an entire frame. On average, half a frame per process is wasted. To reduce internal fragmentation, page sizes should be kept small, but this increases page-table overhead; disk I/O is also more efficient with larger page sizes. So page sizes have grown; they now range from 4 KB to 16 MB.
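The internal-fragmentation arithmetic above, reproduced with the slide's example numbers:

```python
# Internal fragmentation under paging: a 72,766-byte process with
# 2 KB (2,048-byte) pages needs a partially filled last frame.

import math

page_size = 2048
process_size = 72766

frames_needed = math.ceil(process_size / page_size)      # 36 frames
allocated = frames_needed * page_size                    # 73,728 bytes
internal_fragmentation = allocated - process_size        # bytes wasted

print(frames_needed, allocated, internal_fragmentation)  # 36 73728 962
```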
Free Frames [Figure: free-frame list and page table before and after allocating frames to a new process]
Implementation of Page Table The page table is kept in main memory. The page-table base register (PTBR) points to the page table. The page-table length register (PTLR) indicates the size of the page table. In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction, slowing memory access by a factor of 2 (a 100% slowdown). The two-memory-access problem can be solved with a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).
TLBs The TLB is a set of fast associative registers, each containing a page# and a frame#; these hold only a few of the page-table entries. If the page# is found in the TLB, its frame# is available immediately and is used to access memory. This takes less than 10% longer than an unmapped memory reference (one made without a page table). If the page# is not found in the TLB, a reference to the page table must be made, and the page# and frame# are then added to the TLB. The TLB must be flushed at each context switch. The percentage of times that a page# is found in the TLB is called the hit ratio.
Effective Memory Access Time sa = time to search the TLB; ma = time for a memory access; h = hit ratio. Effective Memory Access Time = h(sa + ma) + (1 − h)(sa + 2ma) = h·sa + h·ma + sa + 2ma − h·sa − 2h·ma = sa + 2ma − h·ma = sa + (2 − h)·ma. Example: hit ratio = 80%, sa = 20 nsecs, ma = 100 nsecs. Effective Memory Access Time = 20 + (2 − 0.8)·100 = 20 + (1.2)·100 = 140 nsecs. So there is a 40% slowdown (100 nsecs to 140 nsecs). If the hit ratio = 98%, Effective Memory Access Time = 20 + (2 − 0.98)·100 = 122 nsecs, or a 22% slowdown.
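The formula generalizes to k page-table levels: a hit costs sa + ma, a miss costs sa + (k+1)·ma, giving EMAT = sa + (k + 1 − k·h)·ma. A small sketch (the `levels` parameter is my generalization; the slide's single-level case is levels = 1):

```python
# Effective memory access time for paging with a TLB, for k page-table
# levels: EMAT = h*(sa + ma) + (1-h)*(sa + (k+1)*ma)
#             = sa + (k + 1 - k*h) * ma

def emat(sa, ma, hit_ratio, levels=1):
    return sa + (levels + 1 - levels * hit_ratio) * ma

print(round(emat(20, 100, 0.80), 2))            # 140.0 ns (80% hit ratio)
print(round(emat(20, 100, 0.98), 2))            # 122.0 ns (98% hit ratio)
print(round(emat(20, 100, 0.98, levels=2), 2))  # 124.0 ns (two-level paging)
```

The last line anticipates the two-level result derived later in the chapter: even with an extra page-table access per miss, a 98% hit ratio keeps the slowdown modest.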
Memory Protection Memory protection is implemented by associating a protection bit with each frame. A valid–invalid bit is attached to each entry in the page table: “valid” indicates that the associated page is in the process's logical address space and is thus a legal page; “invalid” indicates that the page is not in the process's logical address space, so the access is illegal and is trapped as such.
Shared Pages Shared code One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes. Private code and data Each process keeps a separate copy of the code and data. The pages for the private code and data can appear anywhere in the logical address space.
Multilevel Paging Modern computer systems support a very large logical address space (2^32 to 2^64), so the page table itself becomes too big to store contiguously. E.g. consider a system with a 32-bit logical address space. If the page size is 4 KB (2^12), then we have 2^32 / 2^12 = 2^20 = 1 million pages. If each page-table entry requires 4 bytes, then 4 MB of physical address space would be needed for the page table alone, for each process. It is not advisable to allocate 4 MB contiguously in memory, so the table is divided into smaller pieces. Partitioning the page table also lets the OS leave partitions unallocated until a process needs them.
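The page-table size arithmetic above, reproduced with the slide's numbers (32-bit addresses, 4 KB pages, 4-byte entries):

```python
# Size of a flat (one-level) page table: one entry per page of the
# logical address space.

address_bits = 32
page_size = 2 ** 12          # 4 KB pages
entry_size = 4               # bytes per page-table entry

num_pages = 2 ** address_bits // page_size   # 2**20 = 1,048,576 pages
table_bytes = num_pages * entry_size         # 4 MB per process

print(num_pages, table_bytes)   # 1048576 4194304
```

Repeating the calculation for a 64-bit address space shows why flat tables are hopeless there, and why multiple levels (or hashed/inverted tables) are used instead.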
Hierarchical Page Tables Break up the logical address space into multiple page tables A simple technique is a two-level page table
Effective Memory Access Time in Two-Level Paging Multilevel paging does not affect performance drastically. For a 2-level page table, converting a logical address into a physical one may take 3 memory accesses (one for the outer page table, one for the inner page table containing the page's entry, and one for the actual frame). Caching (TLBs) helps, and performance remains reasonable. Effective Memory Access Time = h(sa + ma) + (1 − h)(sa + 3ma) = h·sa + h·ma + sa + 3ma − h·sa − 3h·ma = sa + 3ma − 2h·ma = sa + (3 − 2h)·ma. If the hit ratio = 98%: Effective Memory Access Time = 20 + (3 − 2(0.98))·100 = 20 + 1.04·(100) = 124 nsecs, or a 24% slowdown.
Segmentation Memory-management scheme that supports the user view of memory. A logical address space (user program) is a collection of segments. A segment is a logical unit such as: a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, or array.
Logical View of Segmentation [Figure: segments 1–4 of the user's logical space mapped to noncontiguous regions of physical memory]
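Segmentation translates a two-part logical address (segment#, offset) through a segment table whose entries each hold a base and a limit. A minimal sketch with invented table values (the hardware details, such as the STBR/STLR registers, are omitted):

```python
# Segment-table translation: each entry is (base, limit). An offset
# beyond the segment's limit is an illegal access and traps.

segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000 (hypothetical)
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4353: byte 53 of segment 2
```

Unlike paging, the blocks here are variable-sized and meaningful to the programmer, which is why segmentation matches the user's view of memory.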