  1. Styresystemer og Multiprogrammering, Block 3, 2005. Memory Management: Main Memory. Robert Glück

  2. Today’s Plan • Binding programs to physical memory • Logical / physical addressing • Allocation: 1 process = 1 block • Contiguous: blocks of variable size • Allocation: 1 process = n blocks • Paging: blocks of fixed size • Segmentation: blocks of variable size • Segmentation with paging: combination

  3. Problem of Memory Management. [Figure: the logical address spaces of processes 1, 2, and 3 are mapped into the physical address space of memory; the same logical address (e.g., 17) appears in each process.] Issues: • performance • protection • limited size • fragmentation • …

  4. Logical / Physical Address. [Figure: a logical address (e.g., 17) in the logical address space is mapped to a physical address (e.g., 14385) in the physical address space.] • Logical address – generated by the CPU • Physical address – seen by the memory unit • This mapping is the basis for memory management in most of today’s OSs; it requires HW support.

  5. Hardware Support: Dynamic Relocation. [Figure: the relocation register is loaded at each context switch; each process has a different value.]

  6. System Architecture. [Figure: the CPU with its MMU and cache sits on the system bus together with memory and an I/O bridge; the I/O bus connects I/O controllers for the disk, the network card, and the screen.]

  7. Memory Management Unit (MMU) • Hardware device: maps logical addresses to physical addresses. • Example: relocation register, limit register • Historical Notes: • Motorola 68000 family had an MMU from the 68030 onward (1986 - …) • Intel x86 family introduced an MMU for paging with the 80386 (1985 - …) • Remark: • Embedded systems often use a CPU w/o MMU.

  8. Binding Times? Symbolic locations in programs (x, y, … ) can be bound to physical locations in memory at three stages: • Compile Time: if memory location is known, absolute code can be generated by compiler; must recompile code if starting location changes. • Load Time: compiler must generate relocatable code; loader transforms code to use absolute addresses. • Execution Time: binding delayed until run time; need hardware support for mapping between logical addresses and physical addresses.

  9. Physical Memory is Limited • Dynamic loading – Routine not loaded into memory until it is called (e.g., an error routine) • Dynamic linking – Routine not linked with the program until it is called (shared library) • Swapping – Process swapped out of memory and brought back in for execution (more processes than fit into physical memory)

  10. Contiguous-Memory Allocation • Characteristics: • 1 process = 1 block • blocks of variable size • Dynamic Storage-Allocation • strategies: first-fit, best-fit, worst-fit • Fragmentation Problem • external fragmentation • compaction of fragments • Memory Protection • limit and relocation register

  11. Dynamic Storage Allocation. [Figure: a sequence of memory snapshots with the resident OS at the bottom; processes 5, 8, 9, 10, and 2 are allocated and freed over time, leaving holes into which an arriving process 11 must fit.] • When a process arrives, it is allocated memory from a block large enough to accommodate it. • The OS maintains information about: a) allocated blocks, b) free blocks. • Blocks of various sizes are scattered in memory.

  12. Dynamic Storage Allocation Strategies: • First-fit: Allocate the first block that is big enough; the search can stop as soon as such a block is found. • Best-fit: Allocate the smallest block that is big enough; must search the entire list unless the list is ordered by size. Produces the smallest leftover block. • Worst-fit: Allocate the largest block; must also search the entire list. Produces the largest leftover block. • Performance: • First-fit is generally fastest. • First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
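A minimal C sketch (added for illustration, not from the original slides) of how first-fit and best-fit could scan a free-block list; the struct and field names are assumptions:

```c
#include <stddef.h>

/* Hypothetical free-list node: the OS keeps one per free block. */
struct free_block {
    size_t start;             /* physical start address */
    size_t size;              /* block size in bytes    */
    struct free_block *next;
};

/* First-fit: return the first block that is large enough. */
struct free_block *first_fit(struct free_block *free_list, size_t request) {
    for (struct free_block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request)
            return b;         /* stop at the first match */
    return NULL;              /* no block big enough */
}

/* Best-fit: scan the whole list, return the smallest block that fits. */
struct free_block *best_fit(struct free_block *free_list, size_t request) {
    struct free_block *best = NULL;
    for (struct free_block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;         /* smallest fitting block so far */
    return best;
}
```

First-fit can return as soon as a fit is found, which is why it is generally the fastest of the three strategies.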

  13. External Fragmentation • External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous. • Compaction – shuffle memory contents to place all free memory together in one large block. • possible only if relocation is dynamic and is done at execution time. • I/O problem: devices often use physical addresses. • Statistical analysis of first-fit: 1/3 of memory may be unusable due to external fragmentation.

  14. Internal Fragmentation • Internal fragmentation: for reasons of efficiency, memory blocks are often allocated in fixed units (4, 8, 12, ... KB). • If a process requests 7 KB and is given an 8 KB unit, we are left with 1 KB of internal fragmentation! • These fragments cannot be reclaimed by compaction.

  15. Contiguous-Memory Allocation (cont’d) • Location of System and User Processes: • Resident operating system: usually held in low memory with the interrupt vector. • User processes: usually held in high memory. • Memory Protection: • Important: protect user processes from each other, and from changing operating-system code and data. • limit register: range of legal logical addresses. • relocation register: value of the physical start address. • each context switch reloads these registers

  16. Memory Protection. [Figure: the relocation and limit registers are loaded at each context switch.]
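The check these two registers implement fits in a few lines of C; this sketch is illustrative only, reusing the example numbers from slide 4 (logical 17, physical 14385), while the limit value is an assumption:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical hardware registers, reloaded at every context switch. */
static uint32_t limit_reg;       /* size of the process's logical address space */
static uint32_t relocation_reg;  /* physical start address of its block */

/* What the MMU does for each memory reference under contiguous allocation:
 * compare against the limit, then add the relocation value. */
uint32_t translate(uint32_t logical) {
    if (logical >= limit_reg) {
        fprintf(stderr, "addressing error: trap to OS\n");
        exit(EXIT_FAILURE);
    }
    return logical + relocation_reg;
}

int main(void) {
    limit_reg      = 16384;          /* assumed: process occupies 16 KB */
    relocation_reg = 14368;          /* so logical 17 maps to 14385 */
    printf("%u\n", translate(17));   /* prints 14385 */
    return 0;
}
```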

  17. Paging. [Figure: process 5 is contiguous in logical memory as pages 5:1, 5:2, 5:3, but its pages are placed into non-contiguous frames of physical memory alongside the OS.]

  18. Paging • Characteristics: • 1 process = n blocks • blocks of fixed size • Page: block in logical memory • Frame: block in physical memory (typically 512 bytes to 16 KB) • Page table: translates logical to physical addresses • Free-frame list: keeps track of free frames.

  19. Paging: Address Translation • Logical address generated by the CPU: | page number | offset | • Page number: index into a page table, which contains the base address of each page in physical memory. • Page offset: added to the base address; together they define the physical memory address sent to the memory unit. • Physical address: found by translating the page number into a frame number and keeping the offset: | frame number | offset |
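As a sketch of the translation (illustrative, using a 4 KB page size and the page table 5, 3, 6, 2 from the page-allocation example below):

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12
#define PAGE_SIZE   (1u << OFFSET_BITS)        /* 4 KB pages */

/* Page table from the example: page p is stored in frame page_table[p]. */
static uint32_t page_table[4] = {5, 3, 6, 2};

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;      /* page number */
    uint32_t offset = logical & (PAGE_SIZE - 1);   /* page offset */
    uint32_t frame  = page_table[page];            /* page-table lookup */
    return (frame << OFFSET_BITS) | offset;        /* physical address */
}

int main(void) {
    /* Page 1, offset 100 -> frame 3, offset 100 = 12388. */
    printf("%u\n", translate(1 * PAGE_SIZE + 100));
    return 0;
}
```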

  20. Paging Architecture

  21. Example: Page Allocation. [Figure: logical memory holds pages 0–3; the page table (5, 3, 6, 2) maps page 0 to frame 5, page 1 to frame 3, page 2 to frame 6, and page 3 to frame 2; the remaining frames of physical memory are on the free-frame list.]

  22. Where is the Page Table? • The page table is too large to keep in CPU registers; thus it is kept in main memory. • The CPU keeps track of its location for each process: • Page-table base register: points to the page table. • Page-table length register: size of the page table. • Every data/instruction access then requires two memory accesses: 1. access the page table for the lookup 2. access the frame to get the data/instruction • Hardware support: a fast-lookup cache, the Translation Look-aside Buffer (TLB)

  23. Paging Architecture with TLB. [Figure: the TLB is an associative, high-speed memory with 64 - 1024 entries; the page table in memory may have up to 1 M entries.]

  24. Access Time with TLB? Example: • TLB lookup: 20 nanosec • Memory access: 100 nanosec • Hit ratio: 98% (typical) • Effective Access Time = (20 + 100)*0.98 + (20 + 100 + 100)*0.02 = 122 nanosec (22% more than a single memory access)
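The same calculation written out in C, using the values from the slide:

```c
#include <stdio.h>

int main(void) {
    double tlb = 20.0, mem = 100.0, hit = 0.98;
    /* TLB hit:  TLB lookup + one memory access
     * TLB miss: TLB lookup + page-table access + memory access */
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + mem + mem);
    printf("Effective access time = %.0f ns\n", eat);   /* 122 ns */
    return 0;
}
```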

  25. Memory Protection • Memory protection implemented by associating protection bits with each frame in the page table. • Valid-invalid bit: • valid: page in the process’ logical address space. • invalid: page not in the process’ logical address space. • Permission bits: • page is read-only • page is execute-only • page is read/write
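One possible encoding of these bits in a page-table entry, sketched in C; the flag values and struct layout are assumptions, not a real MMU format:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative protection bits kept with each page-table entry. */
#define PTE_VALID 0x1   /* page is in the process's logical address space */
#define PTE_READ  0x2
#define PTE_WRITE 0x4
#define PTE_EXEC  0x8

struct pte {
    uint32_t frame;     /* frame number */
    uint32_t flags;     /* protection bits */
};

/* Check done on every access: refuse if the page is invalid or the
 * requested kind of access is not permitted. */
bool access_allowed(const struct pte *e, uint32_t requested) {
    if (!(e->flags & PTE_VALID))
        return false;                      /* invalid page -> trap to OS */
    return (e->flags & requested) == requested;
}
```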

  26. Paging with Status Bit. [Figure: a page table with frame-number and valid–invalid columns; pages 0–4 are mapped to frames 5, 3, 6, 2, and 7 and marked valid (v), while the remaining entries are marked invalid (i).]

  27. Size of Page Table? Example: • Logical addressing: 32 bits • Logical memory: 4 GB (2^32) • Page size: 4 KB (2^12) • Number of pages: 1 M (2^32 / 2^12) • Page table size: 4 MB (each entry 4 bytes) Each process may need a table of up to 4 MB of contiguous physical memory. ⇒ Divide the page table into smaller pieces.
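The slide's numbers can be reproduced directly (the 4-byte entry size is the slide's assumption):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t logical_space = 1ULL << 32;                  /* 4 GB  */
    uint64_t page_size     = 1ULL << 12;                  /* 4 KB  */
    uint64_t entries       = logical_space / page_size;   /* 1 M pages */
    uint64_t table_bytes   = entries * 4;                 /* 4 MB per process */
    printf("%llu entries, %llu MB page table\n",
           (unsigned long long)entries,
           (unsigned long long)(table_bytes >> 20));
    return 0;
}
```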

  28. Page Table Structures • Hierarchical Paging • paging the page table • 2-level paging support (Pentium II) • 3-level paging support (SPARC) • 4-level paging support (Motorola 68030) • inappropriate for 64-bit architectures • Hashed Page Tables • when address space larger than 32 bits • Inverted Page Tables • 64-bit UltraSPARC, PowerPC

  29. Two-Level Paging: Address Translation. Logical address (32 bits): | p1 (10 bits) | p2 (10 bits) | offset (12 bits) | [Figure: p1 indexes the level-1 page table, p2 indexes the level-2 page table, which yields the frame f in physical memory.]
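A sketch of the 10/10/12 lookup (illustrative; the in-memory table representation is an assumption):

```c
#include <stdint.h>

/* outer_table[p1] points to a level-2 table; inner[p2] holds a frame number. */
uint32_t translate2(uint32_t logical, uint32_t **outer_table) {
    uint32_t p1     = (logical >> 22) & 0x3FF;   /* top 10 bits  */
    uint32_t p2     = (logical >> 12) & 0x3FF;   /* next 10 bits */
    uint32_t offset =  logical        & 0xFFF;   /* low 12 bits  */

    uint32_t *inner = outer_table[p1];           /* level-1 lookup */
    uint32_t frame  = inner[p2];                 /* level-2 lookup */
    return (frame << 12) | offset;               /* frame + offset */
}
```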

  30. Two-Level Page Table (Pentium II)

  31. Hashed Page Table • In architectures with a 64-bit address space, hierarchical page tables become impractical. • e.g., with 8 KB pages, 5 levels of paging • Instead: hashed page table • The page number is hashed into a page table. • Each entry holds a chain of elements that hash to the same location. • Page numbers in the chain are compared until a match is found; the corresponding frame number is then extracted.
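A minimal chained-hash lookup along those lines (the data-structure names and bucket count are illustrative assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKETS 1024          /* assumed number of hash buckets */

/* One chain element per mapped page hashing to this bucket. */
struct hpte {
    uint64_t page;            /* virtual page number   */
    uint64_t frame;           /* physical frame number */
    struct hpte *next;
};

static struct hpte *hash_table[BUCKETS];

/* Hash the page number, then walk the chain looking for a match. */
int lookup(uint64_t page, uint64_t *frame_out) {
    for (struct hpte *e = hash_table[page % BUCKETS]; e != NULL; e = e->next) {
        if (e->page == page) {
            *frame_out = e->frame;   /* match: extract frame number */
            return 1;
        }
    }
    return 0;                        /* no mapping found */
}
```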

  32. Hashed Page Table. [Figure: each hash-table entry points to a linked list of (page, frame) elements.]

  33. Inverted Page Table • Another solution to the size problem: one entry for each frame of physical memory. • Each entry consists of the logical address of the page stored in that frame, with information about the process that owns that page. • Page access: • Linear search (slow), or • Hash table over frames to limit the search to one – or at most a few – page-table entries. • Decreases memory use: only one page table for all processes; but increases access time: the inverted page table must be searched.
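A sketch of the linear-search variant (one entry per frame; the index of the matching entry is the frame number; names and sizes are illustrative):

```c
#include <stdint.h>

#define NUM_FRAMES 4096       /* assumed size of physical memory in frames */

/* One entry per physical frame: which process and page occupy it. */
struct ipte {
    int      pid;             /* owning process */
    uint64_t page;            /* virtual page stored in this frame */
};

static struct ipte inverted[NUM_FRAMES];

long lookup_frame(int pid, uint64_t page) {
    for (long frame = 0; frame < NUM_FRAMES; frame++)
        if (inverted[frame].pid == pid && inverted[frame].page == page)
            return frame;     /* found: the index is the frame number */
    return -1;                /* page not resident */
}
```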

  34. Inverted Page Table. [Figure: lookup by linear search or via a hash table.]

  35. Shared Pages • Shared code: • 1 copy of read-only code shared among processes (e.g., editors, browsers, compilers). • Shared code must appear in the same location in the logical address space of all sharing processes. • Private code & data: • Each process keeps a separate copy of private code and data. • The pages for private code and data can appear anywhere in the logical address space.

  36. Shared Pages Example. [Figure: processes 1, 2, and 3 each map the shared, read-only library pages lib 1, lib 2, and lib 3 to the same frames and at the same page numbers, while their private data and application pages occupy separate frames.] Same page number for shared pages, read-only access.

  37. Example of Segmentation. [Figure: the user’s view of a program as segments (main program, findmin, array, stack) and their placement in physical memory.]

  38. Segmentation • Characteristics: • 1 process = n blocks • blocks of variable size • Segment: a logical unit in a program - main program, - procedure, function, object, - common block, - array, stack, ... • Normally, the compiler arranges the segments. • Dynamic storage allocation; fragmentation.

  39. Segmentation: Address Translation • Logical address: | segment number | offset | • Segment table: maps the logical address to a physical address; each table entry has • base: physical start address of the segment • limit: length of the segment • status bits: validity, access permissions
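The corresponding check-and-add, sketched in C (the struct layout is an illustrative assumption):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative segment-table entry. */
struct seg_entry {
    uint32_t base;    /* physical start address of the segment */
    uint32_t limit;   /* length of the segment */
    int      valid;   /* validity status bit */
};

/* Translate (segment, offset): trap if the segment is invalid or the
 * offset exceeds the limit; otherwise add the segment base. */
uint32_t translate_seg(const struct seg_entry *table,
                       uint32_t segment, uint32_t offset) {
    const struct seg_entry *e = &table[segment];
    if (!e->valid || offset >= e->limit) {
        fprintf(stderr, "segmentation fault: trap to OS\n");
        exit(EXIT_FAILURE);
    }
    return e->base + offset;
}
```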

  40. Segmentation Architecture

  41. Memory Protection • Each entry in the segment table has: • validation bit = 0 ⇒ illegal segment • access privileges: read / write / execute • Example: a segment contains • code: execute-only • constants: read-only • array: read & write permission • Example: each array in its own segment • automatic check that array indices are legal

  42. Sharing of Segments. [Figure: two processes share a segment; shared segments need the same segment number in all sharing processes, e.g., the shared code contains jump [0,47] in both.]

  43. Segmentation with Paging • Segment table: contains the base address of a page table for each segment, not the base address of the segment itself. • Paging each segment: reduces the problem of external fragmentation.

  44. i386 Segmentation w/ Two-Level Paging • Process: 16 K segments (2^14), each segment max. 4 GB (2^32) • Each segment is paged with a two-level page table (10 + 10 + 12 bit split of the 32-bit offset) • Number of entries per page table: 1024 (2^10) • Page table size: 4 KB (2^10 * 4 bytes) • Frame size: 4 KB (2^12)

  45. Considerations for Memory Management Strategies (1/2) • Hardware Support / Performance • base and limit registers • cache for page-table entries • associative memory, … • these strategies cannot be implemented efficiently in SW alone • Fragmentation / Utilization of Memory: • internal: fixed block size (with paging) • external: variable block size (with segmentation) • Relocation: • requires that logical addresses be relocatable at run time • compaction: shuffle programs in memory • pack more processes into the available memory

  46. Considerations for Memory Management Strategies (2/2) • Swapping: • dictated by CPU scheduling: allows more processes to run than fit into physical memory • Sharing: • share code & data among different users • requires paging or segmentation, dynamic linking • Protection: • guard against programming errors / attacks • necessary when sharing code and data • requires that sections of a user program be execute-only, read-only, or read-write

  47. Summary • Contiguous Memory (1 block, variable size) • Paging (n blocks, fixed size) • Translation Look-aside Buffer (TLB) • Hierarchical page tables • Hash-based page tables • Inverted page tables • Segmentation (n blocks, variable size) • Segmentation with Paging (i386, Pentium)

  48. Source • These slides are based on SGG04 and the slides provided by the authors.
