Chapter 9 Memory Management



  1. Chapter 9 Memory Management

  2. Memory Management • Background • Logical versus Physical Address Space • Swapping • Contiguous Allocation • Paging • Segmentation • Segmentation with Paging

  3. Background § 9.1 • A program must be brought into memory and placed within a process for it to be executed. • Input queue – the collection of processes on disk that are waiting to be brought into memory for execution. • User programs go through several steps before being executed.

  4. Multistep Processing of a User Program • Each binding is a mapping from one address space to another. • Addresses in the source program are generally symbolic (such as count). • The compiler or assembler binds symbolic addresses to relocatable addresses (such as "14 bytes from the beginning"). • The linkage editor or loader binds relocatable addresses to absolute addresses. • [Figure: source program → compiler or assembler (compile time) → object module, combined with other object modules by the linkage editor → load module, combined with the system library by the loader (load time) → in-memory binary memory image; dynamically loaded system libraries are bound at execution time (run time).]

  5. Binding of Instructions and Data to Memory § 9.1.1 Address binding of instructions and data to memory addresses can happen at three different stages. • Compile time: if it is known at compile time where the process will reside in memory, absolute code can be generated; the code must be recompiled if the starting location changes (MS-DOS .com format). • Load time: if the memory location is not known at compile time, the compiler must generate relocatable code. The final binding is delayed until load time. • Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers).

  6. Binding of Instructions and Data to Memory § 9.1.1 True-False Question: ( ) It is necessary to determine where the program will reside in memory before loading; otherwise the program cannot execute because the system does not know where to locate it. Answer: × (execution-time binding delays the decision until run time.)

  7. Logical vs. Physical Address Space § 9.1.2 • Logical address (virtual address) – generated by the CPU • Physical address – address seen by the memory unit • The logical and physical addresses are the same in compile-time and load-time address binding but different in execution-time address binding.

  8. Logical vs. Physical Address Space § 9.1.2 True-False Question: ( ) The logical address can be used to learn where the program will be located in memory before execution. Answer: ×

  9. Memory Management Unit (MMU) • The run-time mapping from virtual to physical addresses is done by the memory-management unit (MMU), which is a hardware device. • The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. • The user program never sees the real physical addresses. • [Figure: the CPU generates logical address 346; the MMU adds the relocation register value 14000, producing physical address 14346, which is sent to memory.]
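The relocation mapping on this slide is simple enough to sketch directly; a minimal Python illustration using the slide's example values (relocation register 14000, logical address 346):

```python
# Sketch of MMU relocation: the relocation register's value is added
# to every logical address a user process generates.
RELOCATION_REGISTER = 14000  # base value from the slide's example

def mmu_translate(logical_address):
    """Map a logical (virtual) address to a physical address."""
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))  # logical 346 -> physical 14346
```

The user program only ever works with logical addresses; the addition happens in hardware on every memory reference.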

  10. Dynamic Loading § 9.1.3 • A routine is not loaded until it is called. • Use dynamic loading to obtain better memory-space utilization; an unused routine is never loaded. • Useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines. • Although the total program size may be large, the portion that is actually used (loaded) may be much smaller. • No special support from the operating system is required; dynamic loading is implemented through program design.

  11. Dynamic Linking § 9.1.4 • In static linking, system language libraries are treated like any other object module and are combined by the loader into the program's binary image, wasting disk space and main memory. • In dynamic linking, linking is postponed until execution time. A stub (a small piece of code) is included in the image for each library-routine reference. • The stub indicates how to locate the memory-resident library routine, or how to load the library if the routine is not already present. • The stub replaces itself with the address of the routine and executes the routine. • All processes that use a language library execute only one copy of the library code.

  12. Versions • With dynamic linking, programs automatically reference the new version of the library when the library is updated. • Since more than one version of a library may be loaded into memory, version information is included in both the program and the library; only programs compiled with the new library are affected by it. This scheme is known as shared libraries.

  13. Overlays § 9.1.5 • Keep in memory only those instructions and data that are needed at any given time. • Needed when a process is larger than the amount of memory allocated to it. • When instructions not in memory are needed, they are loaded into space previously occupied by instructions that are no longer needed. • Example: a two-pass assembler needs pass 1 (70K), pass 2 (80K), the symbol table (20K), and common routines (30K), 200K in total, but only 150K is available; overlay A (pass 1) and overlay B (pass 2) can occupy the same memory in turn.

  14. Overlays • [Figure: the symbol table (20K), common routines (30K), and a 10K overlay driver stay resident; pass 1 (70K) and pass 2 (80K) overlay each other in the remaining space.] Fig. 9.3 Overlays for a two-pass assembler.

  15. Swapping § 9.2 • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. • Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. • Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. • If binding is done at assembly or load time, the process will be swapped back to the same location. If execution-time binding is used, then it is possible to swap into a different memory space.

  16. Swapping § 9.2 True-False Question: ( ) If the binding is performed at execution time, the program will be swapped back to the same location where it was executed last time. Answer: × (with execution-time binding, a process may be swapped back into a different memory space.)

  17. Schematic View of Swapping

  18. Swapping • The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. • It is useful to know exactly how much memory a user process is using, then swap only what is actually used, reducing swap time. For this to work, processes need to inform the system of any changes in their memory requirements. • A process must be completely idle before swapping, particularly with pending I/O. We may (1) never swap a process with pending I/O, or (2) execute I/O only into OS buffers. • Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft Windows.

  19. Contiguous Allocation § 9.3 • Main memory is usually divided into two partitions: • the resident operating system, usually held in low memory with the interrupt vector; • user processes, held in high memory. • A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. • The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register. • The relocation-register scheme provides an effective way to allow the OS size to change dynamically.

  20. Contiguous Allocation • [Figure: the CPU's logical address is first compared against the limit register; if it is smaller, the relocation register is added to form the physical address sent to memory; otherwise the hardware traps with an addressing error.] Fig. 9.5 Hardware support for relocation and limit registers.

  21. Contiguous Allocation • Simplest memory allocation: divide memory into a number of fixed-sized partitions. • Each partition may contain exactly one process. • The degree of multiprogramming is bound by the number of partitions. • Used by IBM OS/360.

  22. Contiguous Allocation True-False Question: ( ) Multiprogramming is impossible with fixed-sized partitions because each process may require a different amount of memory. Answer: × (the degree of multiprogramming is merely bound by the number of partitions.)

  23. Contiguous Allocation (Cont.) • Hole – a block of available memory; holes of various sizes are scattered throughout memory. • When a process arrives, it is allocated memory from a hole large enough to accommodate it. • The operating system maintains information about: a) allocated partitions, b) free partitions (holes). • [Figure: successive memory snapshots showing the OS plus processes 5, 8, and 2; process 8 is swapped out, leaving a hole that is allocated to process 9, and process 10 later fills part of the remaining space.]

  24. Contiguous Allocation (Cont.) • If the hole is too large for an arriving process, it is split into two: one part is allocated to the arriving process; the other is returned to the set of holes. • When a process terminates, it releases its block of memory, which is placed back in the set of holes. • If the new hole is adjacent to other holes, they are merged to form one larger hole, and the system checks whether it can satisfy any process waiting for memory.

  25. Dynamic Storage-Allocation Problem • How to satisfy a request of size n from a list of free holes? • First-fit: allocate the first hole that is big enough. • Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole. • Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole. • First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
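The three placement strategies above can be sketched in a few lines of Python; the hole sizes below are hypothetical, chosen so each strategy picks a different hole for the same request:

```python
def allocate(holes, n, strategy):
    """Return the index of the chosen hole for a request of size n, or None."""
    # Keep (size, index) pairs for every hole large enough for the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]          # first adequate hole in address order
    if strategy == "best":
        return min(candidates)[1]        # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]        # largest hole
    raise ValueError(f"unknown strategy: {strategy}")

holes = [100, 500, 200, 300, 600]        # hypothetical free-hole sizes, in KB
print(allocate(holes, 212, "first"))     # index 1 (the 500K hole)
print(allocate(holes, 212, "best"))      # index 3 (the 300K hole)
print(allocate(holes, 212, "worst"))     # index 4 (the 600K hole)
```

Note how best-fit and worst-fit both examine every candidate, while first-fit can stop at the first adequate hole.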

  26. Dynamic Storage-Allocation Problem Multiple-Choice Question: ( ) Which of the following strategies does not need to search the entire list of free holes before applying it? (a) first fit (b) best fit (c) worst fit (d) better fit Answer: a

  27. Fragmentation • External fragmentation – free memory space is broken into little pieces; although enough total memory space exists to satisfy a request, it is not contiguous. • 50-percent rule: given N allocated blocks, another 0.5N blocks will be lost to fragmentation. • Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

  28. Fragmentation • External fragmentation can be reduced by compaction: • shuffle memory contents to place all free memory together in one large block. • Compaction is possible only if relocation is dynamic and is done at execution time.

  29. Paging § 9.4 • The logical address space of a process can be noncontiguous. • Divide physical memory into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8192 bytes). • Divide logical memory into blocks of the same size called pages. • Keep track of all free frames. • To run a program of size n pages, find n free frames and load the program. • Set up a page table to translate logical to physical addresses. • Paging still suffers from internal fragmentation.

  30. Address Translation Scheme • The address generated by the CPU is divided into: • Page number (p) – used as an index into a page table which contains the base address of each page in physical memory. • Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit. • The logical address is laid out as | page number p (m − n bits) | page offset d (n bits) |: if the size of the logical address space is 2^m and the page size is 2^n, the high-order m − n bits designate the page number and the n low-order bits designate the page offset.
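The page-number/offset split is just a shift and a mask; a small Python sketch, assuming for illustration m = 16 and a 1024-byte page (n = 10):

```python
# Split an m-bit logical address into page number p and offset d.
M, N = 16, 10            # hypothetical: 16-bit addresses, 2**10-byte pages
PAGE_SIZE = 1 << N       # 1024 bytes

def split(addr):
    """Return (page number, page offset) for a logical address."""
    p = addr >> N                  # high-order m - n bits
    d = addr & (PAGE_SIZE - 1)     # low-order n bits
    return p, d

p, d = split(0x2ABC)               # arbitrary sample address (10940 decimal)
print(p, d)                        # page 10, offset 700
```

Reassembling `p * PAGE_SIZE + d` recovers the original address, which is why the two fields together fully identify a byte.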

  31. Address Translation Architecture

  32. Paging Example

  33. Paging Example • [Figure: a 16-byte logical memory holding characters a–p, with a page size of 4 bytes; the page table maps pages 0–3 to frames 5, 6, 1, and 2 of a 32-byte physical memory, so, for example, logical address 0 (byte 'a') maps to physical address 20.]

  34. Page Sizes • When we use paging, there is no external fragmentation: any free frame can be allocated to a process that needs it. • But there may be internal fragmentation: if a process's memory requirements do not fall on page boundaries, the last frame allocated may not be completely full. • Average internal fragmentation: one-half page per process, which suggests small page sizes. • However, overhead is involved in each page-table entry, which suggests larger page sizes. • Generally, page sizes have been getting larger as processes, data sets, and main memory have become larger. • Today, pages are typically between 2 and 8 KB.

  35. Page Sizes True-False Question: ( ) Although the paging scheme cannot generate any external fragmentation, internal fragmentation is still possible. Answer: ○

  36. User's View and Physical View • The user program views memory as one single contiguous space; in fact, the user program is scattered throughout physical memory. • The OS maintains a frame table, recording the allocation details of physical memory. • Each entry of the frame table indicates whether a frame is free or allocated and, if allocated, to which page.

  37. Physical View vs. User's View • [Figure: (a) before allocation, the free-frame list holds frames 14, 13, 18, 20, and 15; (b) after allocation, a new four-page process occupies frames 14, 13, 18, and 20, as recorded in its page table, leaving frame 15 on the free-frame list.] Fig. 9.9 Free frames. (a) Before allocation. (b) After allocation.

  38. Structure of the Page Table § 9.4.2 • The page table can be implemented as a set of registers with very high-speed logic. • For a large page table, it is more feasible to keep it in main memory and use a page-table base register (PTBR) that points to the page table and a page-table length register (PTLR) that indicates the size of the page table. • In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. • This problem can be solved by the use of a special fast-lookup hardware cache called associative registers or translation look-aside buffers (TLBs).

  39. Structure of the Page Table § 9.4.2 Short-Answer Question: Explain why every data/instruction access requires two memory accesses when the page table is kept in main memory. Answer: one access fetches the page-table entry; a second accesses the data or instruction itself.

  40. Associative Registers • Associative registers – when presented with an item, it is compared with all keys simultaneously. Each register holds a (page #, frame #) pair. • If the page number is found in the associative registers, its frame number is immediately available and used to access memory. • If the page number is not in the associative registers, a memory reference to the page table must be made. • The associative registers are then updated, so the next reference to that page hits.
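The lookup order described above (associative registers first, page table only on a miss, then update) can be sketched in Python; the page-table contents here are hypothetical:

```python
# Sketch of a TLB in front of an in-memory page table.
page_table = {0: 14, 1: 13, 2: 18, 3: 20}   # hypothetical page -> frame map
tlb = {0: 14, 3: 20}                        # associative registers (a subset)

def lookup(page):
    """Return (frame number, 'tlb hit' or 'tlb miss') for a page number."""
    if page in tlb:               # in hardware, all keys are compared at once
        return tlb[page], "tlb hit"
    frame = page_table[page]      # miss: extra memory reference to the page table
    tlb[page] = frame             # update the associative registers
    return frame, "tlb miss"

print(lookup(3))   # hit: frame 20 without touching the page table
print(lookup(2))   # miss: page table consulted, TLB updated
print(lookup(2))   # hit now, thanks to the update
```

A real TLB has a fixed capacity and an eviction policy; this sketch omits both to keep the hit/miss logic visible.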

  41. • [Figure: a new process with six logical pages; its in-memory page table maps page 0 → frame 14 and page 3 → frame 20, while pages 1, 2, 4, and 5 hold disk addresses; the associative registers cache the (page 0, frame 14) and (page 3, frame 20) pairs.]

  42. True-False Question: ( ) When referencing memory, the page table should be searched first and the associative registers next. Answer: ×

  43. True-False Question: ( ) With powerful and effective associative registers, the page table has no use and can be removed from the system architecture. Answer: ×

  44. Effective Access Time • Associative lookup = ε time units • Assume the memory cycle time is 1 microsecond • Hit ratio – the percentage of times that a page number is found in the associative registers • Hit ratio = α • Effective Access Time (EAT): EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α • The hit ratio is related to the number of associative registers. • Motorola 68030 (for Mac): 22-entry TLB. Intel 80486: 32-entry TLB with a 98% hit ratio. UltraSPARC I & II: two separate TLBs for instructions and data, each 64 entries long.
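The EAT formula can be checked numerically; a small Python sketch using the 98% hit ratio quoted for the Intel 80486 and a hypothetical lookup time of ε = 0.02 microseconds:

```python
# Effective access time with a TLB, assuming a 1-microsecond memory cycle.
def eat(epsilon, alpha):
    """EAT = (1 + eps) * alpha + (2 + eps) * (1 - alpha)."""
    hit = (1 + epsilon) * alpha           # one memory access on a TLB hit
    miss = (2 + epsilon) * (1 - alpha)    # page-table access + data access on a miss
    return hit + miss                     # algebraically: 2 + epsilon - alpha

# Hypothetical epsilon = 0.02 us; alpha = 0.98 as cited for the 80486.
print(round(eat(0.02, 0.98), 4))          # about 1.04 microseconds
```

Raising the hit ratio α pulls EAT down toward the single-access time of 1 + ε, which is why larger TLBs help.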

  45. Memory Protection • Memory protection is implemented by associating a protection bit with each frame. • One bit can define a page to be read-write or read-only; this can be expanded to provide a finer level of protection. • A valid-invalid bit is attached to each entry in the page table: • "valid" indicates that the associated page is in the process's logical address space and is thus a legal page. • "invalid" indicates that the page is not in the process's logical address space. • Illegal addresses are trapped.

  46. Memory Protection • [Figure: a page table with frame numbers and valid (v)/invalid (i) bits; the program's addresses run up to 10468, yet page 5 extends to address 12287, so the tail of page 5 is internal fragmentation, while references beyond page 5 are marked invalid.] Fig. 9.11 Valid (v) or invalid (i) bit in a page table.

  47. Multilevel Paging § 9.4.3 • Modern computers support large address spaces, and the page table becomes large. • A 32-bit address space with 4 KB pages requires a 4-megabyte page table for each process. • Simple solution: divide the page table into smaller pieces. • Two-level paging algorithm: the page table itself is also paged.
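The 4-megabyte figure follows directly from the parameters; a quick Python check, assuming (as is common) 4-byte page-table entries:

```python
# Page-table size for a flat table: one entry per page of the address space.
ADDRESS_BITS = 32       # 32-bit logical address space
PAGE_BITS = 12          # 4 KB pages
ENTRY_BYTES = 4         # hypothetical entry size of 4 bytes

entries = 2 ** (ADDRESS_BITS - PAGE_BITS)   # 2**20 = 1,048,576 pages
table_bytes = entries * ENTRY_BYTES
print(table_bytes // 2**20, "MB")           # 4 MB per process
```

Since this table must be contiguous per process, paging the page table itself (two-level paging) becomes attractive.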

  48. Two-level Paging

  49. Two-Level Paging Example • A logical address (on a 32-bit machine with a 4 KB page size) is divided into: • a page number consisting of 20 bits; • a page offset consisting of 12 bits. • Since the page table is paged, the page number is further divided into: • a 10-bit page number; • a 10-bit page offset. • Thus, a logical address is laid out as | p1 (10 bits) | p2 (10 bits) | d (12 bits) |, where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
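The p1/p2/d decomposition above is shifts and masks; a minimal Python sketch (the sample address is arbitrary):

```python
# Decompose a 32-bit logical address into p1 (10 bits), p2 (10 bits),
# and offset d (12 bits), matching the slide's layout.
def split_two_level(addr):
    d = addr & 0xFFF             # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into a page of the page table
    p1 = addr >> 22              # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_two_level(0x12345678))   # arbitrary sample address
```

Recombining `(p1 << 22) | (p2 << 12) | d` reproduces the original address, so no information is lost in the split.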

  50. Address-Translation Scheme • Address-translation scheme for a two-level 32-bit paging architecture. • [Figure: the logical address (p1, p2, d) is resolved in two steps: p1 indexes the outer page table, whose entry locates a page of the page table; p2 indexes that page to find the frame; d is the offset within the frame.]
