
Memory Management



Presentation Transcript


  1. Memory Management

  2. Three levels • hardware • caches to speed up concurrent access from threads, cores, multiple CPUs • programming (language dependent) • malloc, free • new…, garbage collection • OS • number of programs in memory • swap to/from disk • size of programs
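
At the programming level mentioned above, C exposes memory management directly through malloc and free. A minimal sketch (the array size and contents are purely illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Programming-level memory management in C: the program asks the
 * runtime for a block with malloc and returns it with free.
 * The OS level (partitions, paging, swapping) stays invisible here. */
int main(void) {
    size_t n = 1000;
    int *table = malloc(n * sizeof *table);   /* request room for n ints */
    if (table == NULL) {                      /* allocation can fail     */
        perror("malloc");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        table[i] = (int)i;
    printf("last element: %d\n", table[n - 1]);
    free(table);                              /* no garbage collector in C */
    return 0;
}
```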

  3. OS Memory Management (figure: references to data, e.g. a linked list) • many programs and the OS (partly) in memory, ready to run when needed or possible • efficient use of the CPU and the available memory

  4. Requirements • Relocation: adjustment of references to memory when a program is (re)located in memory • Protection: against reading or writing of memory locations by other processes during execution • Sharing: of data and code (libraries), communication between processes • Logical organization: programs are written in modules, compiled independently; different degrees of protection • Physical organization: the memory available for a program plus its data may be insufficient; movements to and from disk

  5. Loading of programs • Relocatable loading: addresses are relative to a fixed point (the beginning of the program) in the load module • The load module contains a list of these addresses (the relocation dictionary) • The loader adds (load address – fixed point) to each of those addresses • Swapping is then only possible if the program returns to the same position
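
A minimal sketch of relocatable loading under the description above; the struct load_module layout, its field names, and the toy one-entry relocation dictionary are hypothetical, but the patching step is exactly (load address – fixed point) added to each listed address:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical load module: an image plus a relocation dictionary that
 * lists the offsets of all address words that are relative to the
 * start of the program (the "fixed point"). */
struct load_module {
    uint8_t *image;        /* program image copied into memory       */
    size_t   image_size;
    size_t  *reloc_dict;   /* offsets of address words in the image  */
    size_t   reloc_count;
};

/* Relocatable loading: every address listed in the relocation
 * dictionary is adjusted by (load_address - fixed_point). */
void relocate(struct load_module *m, uintptr_t load_address, uintptr_t fixed_point)
{
    uintptr_t delta = load_address - fixed_point;
    for (size_t i = 0; i < m->reloc_count; i++) {
        uintptr_t *slot = (uintptr_t *)(m->image + m->reloc_dict[i]);
        *slot += delta;    /* patch the stored address */
    }
}

int main(void) {
    uintptr_t word = 0x100;                 /* address word, relative to fixed point 0 */
    size_t dict[] = { 0 };                  /* its offset in the "image"               */
    struct load_module m = { (uint8_t *)&word, sizeof word, dict, 1 };
    relocate(&m, 0x40000, 0);               /* program loaded at 0x40000               */
    printf("patched address: 0x%lx\n", (unsigned long)word);
    return 0;
}
```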

  6. Dynamic run-time loading • Done in the hardware • Can also provide protection • A relative address is an example of a “logical” address, which is independent of the place of the program in physical memory
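
A sketch of what the hardware might do with a base register and a bound register; the register names and the trap-by-exit behaviour are simplifications of a real MMU, not a specific design:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Dynamic run-time translation: the hardware adds the base register to
 * each logical (relative) address and checks it against a bound
 * register, which also provides protection. */
typedef struct {
    uint32_t base;   /* where the program was placed in physical memory */
    uint32_t bound;  /* size of the program's region                    */
} relocation_regs;

uint32_t translate(relocation_regs r, uint32_t logical)
{
    if (logical >= r.bound) {              /* protection check          */
        fprintf(stderr, "addressing error: 0x%X out of bounds\n", logical);
        exit(EXIT_FAILURE);                /* real HW would trap to the OS */
    }
    return r.base + logical;               /* physical address          */
}

int main(void) {
    relocation_regs r = { .base = 0x40000, .bound = 0x10000 };
    printf("logical 0x1234 -> physical 0x%X\n", translate(r, 0x1234));
    return 0;
}
```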

  7. Fixed partitioning • The part of the memory not used by the operating system is divided into partitions of equal or different length • Disadvantages of partitions of equal size: • if a program is too big for the chosen size, the programmer must work with “overlays” • small programs still use a full partition: “internal fragmentation” of memory • Advantage: • placing a new program in memory is easy: take any free partition
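
Placement with equal-size fixed partitions reduces to finding any free slot; a small sketch with an assumed table of in-use flags (the partition count is arbitrary):

```c
#include <stdbool.h>
#include <stdio.h>

#define NPART 8                       /* assumed number of equal partitions */

static bool in_use[NPART];            /* one flag per fixed partition */

/* "Take any free partition": return a partition index, or -1 if memory
 * is full and the program must wait. */
int place_program(void)
{
    for (int i = 0; i < NPART; i++)
        if (!in_use[i]) {
            in_use[i] = true;
            return i;
        }
    return -1;
}

int main(void) {
    printf("loaded into partition %d\n", place_program());
    printf("loaded into partition %d\n", place_program());
    return 0;
}
```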

  8. Variable sized partitions • The number and sizes of the partitions are fixed during system generation • There may be day/night variation: more small partitions during the day for testing, more large partitions at night for production • Two choices for loading new programs: • maximize the number of loaded programs • minimize internal fragmentation

  9. Dynamic partitioning • Each process gets exactly the memory it needs (rounded up to a multiple of 1, 2 or 4 KB) • The number and sizes of the partitions are variable • This gives “external fragmentation” • If that gets too big, use “compaction”: • stop all processes • move them in memory so that all free space is at the end • An algorithm is needed for the placement of processes (a first-fit sketch follows below)
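
One possible placement algorithm is first fit over a free list of holes; the struct hole layout is assumed, but the sketch shows how the leftover of each hole becomes external fragmentation:

```c
#include <stddef.h>
#include <stdint.h>

/* Free-list node for dynamic partitioning: each hole has a start and a size. */
struct hole {
    uintptr_t    start;
    size_t       size;
    struct hole *next;
};

/* First fit: scan the list of holes and carve the request out of the
 * first hole that is large enough. Returns the start address, or 0
 * when no hole fits (compaction or swapping would then be needed). */
uintptr_t first_fit(struct hole *free_list, size_t request)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {
            uintptr_t placed = h->start;
            h->start += request;      /* shrink the hole; the leftover   */
            h->size  -= request;      /* is external fragmentation       */
            return placed;
        }
    }
    return 0;
}
```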

  10. Paging • Partition memory into small equal-size chunks (frames) and divide each process into chunks of the same size (pages) • The operating system maintains a page table for each process: • it contains the frame location for each page of the process • a logical memory address consists of a page number and an offset • A special hardware register points, during execution, to the page table of the executing process • An extra read access to memory is thus needed; caching (in the CPU) can be used to speed it up • Contiguous frames in memory are not necessary; as long as there are at least as many free frames as the process has pages, the process can be loaded and executed • No external fragmentation • Little internal fragmentation (only in the last page of a process)
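
A sketch of the page-number/offset translation described above, assuming 1 KB pages (a 10-bit offset) and a per-process page table indexed by page number; real hardware performs this lookup itself, with the extra memory access or a cache hit:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE    1024u              /* 1 KB pages, as in the example slides */
#define OFFSET_BITS  10u                /* log2(1024) */

/* Per-process page table: entry i holds the frame occupied by page i. */
static uint32_t page_table[64];

/* Split the logical address into page number and offset, replace the
 * page number by the frame number from the page table, keep the offset. */
uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical >> OFFSET_BITS;
    uint32_t offset = logical & (PAGE_SIZE - 1);
    uint32_t frame  = page_table[page];
    return (frame << OFFSET_BITS) | offset;
}

int main(void) {
    page_table[3] = 17;                 /* say page 3 sits in frame 17 */
    uint32_t logical = 3u * PAGE_SIZE + 5u;
    printf("logical 0x%X -> physical 0x%X\n", logical, translate(logical));
    return 0;
}
```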

  11. Process page table • Example: a process has maximal 64 pages of 1 KB each; physical memory has maximal 64 frames of 1 KB • Add 2 bits to the physical address and to the page-table entries: • there are now 256 frames of 1 KB • a process can still have maximal 64 pages, i.e. be 64 KB long • there can be more processes in memory (worked numbers below)
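
The worked numbers for this example, under the assumption of 1 KB pages (a 10-bit offset), can be checked with a few lines of C:

```c
#include <stdio.h>

/* Worked numbers for the slide's example. */
int main(void) {
    unsigned offset_bits = 10;              /* 1 KB pages and frames           */
    unsigned page_bits   = 6;               /* 64 pages per process            */
    unsigned frame_bits  = 6 + 2;           /* 64 frames plus the 2 extra bits */

    printf("logical address : %u bits -> max process size %u KB\n",
           page_bits + offset_bits, 1u << page_bits);
    printf("physical address: %u bits -> %u frames = %u KB of memory\n",
           frame_bits + offset_bits, 1u << frame_bits, 1u << frame_bits);
    return 0;
}
```

This prints a 16-bit logical address (a process of at most 64 KB) and an 18-bit physical address (256 frames, i.e. 256 KB of memory), so more processes fit in memory at once.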

  12. Segmentation • A process is divided into a number of segments, which can be of different sizes; there is a maximal size • For each process there is a table with the starting address of each segment; segments can be non-contiguous in memory • Segmentation is often visible to the programmer, who can place functions and data blocks into certain segments • No internal fragmentation, only external • A placement algorithm is needed • The base address can be longer, to use more physical memory • Tables are larger than with paging; more hardware support is needed
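
A sketch of segment-based translation; the segment-table entry layout is assumed, but it shows the two steps implied above: a length check (which also gives protection) and the addition of the segment's base address:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One segment-table entry: where the segment starts and how long it is. */
struct seg_entry {
    uint32_t base;     /* starting address in physical memory */
    uint32_t length;   /* current size of the segment         */
};

/* Logical address = (segment number, offset). The offset is checked
 * against the segment length, then added to the base address. */
uint32_t translate(const struct seg_entry *table, uint32_t seg, uint32_t offset)
{
    if (offset >= table[seg].length) {           /* protection / bounds check */
        fprintf(stderr, "segment violation: offset 0x%X too large\n", offset);
        exit(EXIT_FAILURE);
    }
    return table[seg].base + offset;
}

int main(void) {
    struct seg_entry table[] = { { 0x8000, 0x3000 }, { 0x20000, 0x1000 } };
    printf("seg 1, offset 0x200 -> physical 0x%X\n", translate(table, 1, 0x200));
    return 0;
}
```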

  13. Virtual memory basis • Two properties of simple paging and segmentation: • all memory references in a process are logical addresses, which during execution are dynamically converted by hardware to physical addresses • a process can be divided into parts (pages or segments) which do not have to be in contiguous physical memory during execution • These are the basis for a fundamental breakthrough: • not all the pages or segments of a process have to be in memory during execution • as long as the next instruction and the data items it needs are in physical memory, execution can proceed • if that is not the case (a page or segment fault), those pages (or segments) must be loaded before execution can proceed • pages or segments that are not used can be swapped to disk

  14. VM • Implications: • more processes can be in memory: better usage of the CPU, better response times for interactive users • a process can be larger than the available physical memory • This gives the name “virtual memory”: available on the swap disk, not limited by the “real memory” • VM can be based on paging, segmentation, or both • Needed for virtual memory systems: • hardware (address translation, usage bits, caches for speed) • management software (tables, disk I/O, algorithms) • VM is now used on mainframes, workstations, PCs, etc. • Not used for some “real-time” systems, as the execution time of processes becomes less predictable

  15. Thrashing, locality principle • Thrashing • Swapping out a piece of a process just before that piece is needed • The processor spends most of its time swapping pieces rather than executing user instructions • Principle of locality • Program and data references within a process tend to cluster • Only a few pieces of a process will be needed over a short period of time • Possible to make intelligent guesses about which pieces will be needed in the future • This suggests that virtual memory may work efficiently • provided programmer and compiler care about locality
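
The locality principle can be made concrete with a C array traversal: row-major order keeps references clustered on a few pages, while column-major order (with the usual C layout) scatters them, which is exactly the behaviour that invites thrashing when memory is tight. A small sketch:

```c
#include <stdio.h>

#define N 1024

static double a[N][N];   /* about 8 MB: spread over many pages */

/* Good locality: consecutive references stay on the same page,
 * so only a few pages are needed over any short period.        */
double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Poor locality: each reference jumps N * 8 bytes, touching a
 * different page almost every time.                            */
double sum_column_major(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_column_major());
    return 0;
}
```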

  16. Page tables in VM • Control bits: • P(resent): page in memory or on disk • M(odified): page modified or not • a time indication of last use • Two-level, hierarchical page table • Part of it can be on disk
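
A sketch of what such a page-table entry and a two-level lookup might look like; the field widths and the split of the page number into directory and table indices are assumptions, not a specific architecture:

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed layout of one page-table entry with the control bits named
 * on the slide: present, modified, and a time stamp of the last use. */
struct pte {
    unsigned frame    : 20;  /* frame number when present            */
    unsigned present  : 1;   /* P: page in memory (1) or on disk (0) */
    unsigned modified : 1;   /* M: must be written back before reuse */
    unsigned last_use : 10;  /* coarse time indication of last use   */
};

/* Two-level lookup: the root (a page directory) points to second-level
 * tables, parts of which may themselves be paged out to disk.         */
struct pte *lookup(struct pte **root, uint32_t page)
{
    uint32_t dir_index  = page >> 10;    /* upper bits: which table    */
    uint32_t page_index = page & 0x3FF;  /* lower bits: entry in it    */
    struct pte *table = root[dir_index];
    if (table == NULL)
        return NULL;                     /* second-level table not in memory */
    return &table[page_index];
}
```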

  17. Translation Lookaside Buffer • Contains the page-table entries that have been most recently used • Functions the same way as a memory cache • Given a virtual address, the processor examines the TLB • If the page-table entry is present (a hit), the frame number is retrieved and the real address is formed • If the page-table entry is not found in the TLB (a miss), the page number is used to index the process page table • The OS first checks whether the page is already in main memory; if not, a page fault is issued • The TLB is updated to include the new page entry
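
The flow described above as a sketch; the direct-mapped TLB, its size, and the stub page table and fault handler are assumptions made to keep the example self-contained:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 64
struct tlb_entry { uint32_t page, frame; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stand-ins for the rest of the system: a tiny page table (0 = not
 * present) and a fault handler that pretends to load the page.      */
static uint32_t page_table[256];
static uint32_t handle_page_fault(uint32_t page) {
    page_table[page] = page + 100;        /* "loaded" into some frame */
    return page_table[page];
}

/* TLB hit -> form the real address; TLB miss -> index the page table;
 * page not in memory -> page fault; then the TLB is updated.          */
uint32_t translate(uint32_t page, uint32_t offset)
{
    struct tlb_entry *e = &tlb[page % TLB_ENTRIES];
    uint32_t frame;

    if (e->valid && e->page == page) {                /* TLB hit    */
        frame = e->frame;
    } else {                                          /* TLB miss   */
        frame = page_table[page] ? page_table[page]
                                 : handle_page_fault(page);
        *e = (struct tlb_entry){ page, frame, true }; /* update TLB */
    }
    return (frame << 10) | offset;                    /* 1 KB pages assumed */
}

int main(void) {
    printf("0x%X\n", translate(7, 0x10));  /* miss plus page fault */
    printf("0x%X\n", translate(7, 0x20));  /* TLB hit              */
    return 0;
}
```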

  18. Use of TLB

  19. Combined segmentation and paging • Paging is transparent to the programmer • Paging eliminates external fragmentation • Segmentation is visible to the programmer (and compiler) • allows for growing data structures, modularity, and support for sharing and protection • embedded systems: program in ROM, data in RAM
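
In a combined scheme, one way to organize the tables (assumed here, the slide does not prescribe it) is to let each segment-table entry point to that segment's own page table:

```c
#include <stdint.h>

/* Combined segmentation and paging: the segment-table entry holds the
 * segment length and a pointer to that segment's page table; the
 * offset inside the segment is then translated by paging as usual.   */
struct seg_entry {
    uint32_t  length;      /* for the bounds/protection check         */
    uint32_t *page_table;  /* frame number per page of this segment   */
};

#define OFFSET_BITS 10u    /* 1 KB pages assumed, as elsewhere         */

/* Logical address = (segment number, offset within the segment).     */
uint32_t translate(const struct seg_entry *segs, uint32_t seg, uint32_t offset)
{
    if (offset >= segs[seg].length)
        return (uint32_t)-1;                       /* segment violation */
    uint32_t page        = offset >> OFFSET_BITS;
    uint32_t page_offset = offset & ((1u << OFFSET_BITS) - 1);
    uint32_t frame       = segs[seg].page_table[page];
    return (frame << OFFSET_BITS) | page_offset;
}
```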

  20. OS policies • Fetch: when should a page be brought into memory? • demand paging: only when needed • pre-paging: bring in more, trying to reduce page faults • Replacement: which page is replaced? • frame locking, to increase efficiency: the kernel of the operating system, control structures, I/O buffers • Resident set size: how many pages per process? • fixed allocation • variable allocation, with global or local scope • Cleaning: when to re-use a frame for another page • on demand or pre-cleaning • Load control: number of processes resident in main memory • Process suspension: scheduling policy
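
The slides leave the replacement policy open; as one concrete instance, a least-recently-used choice over the resident set could use the last-use time field from slide 16, while skipping locked frames:

```c
#include <stdint.h>
#include <stdbool.h>

struct frame_info {
    uint32_t page;       /* which page currently occupies the frame  */
    uint32_t last_use;   /* time indication kept by the page table   */
    bool     locked;     /* frame locking: kernel, I/O buffers, ...  */
};

/* One possible replacement policy (not prescribed by the slides):
 * pick the unlocked frame whose page was used least recently.       */
int choose_victim(const struct frame_info *frames, int nframes)
{
    int victim = -1;
    for (int i = 0; i < nframes; i++) {
        if (frames[i].locked)
            continue;                     /* locked frames are never replaced */
        if (victim < 0 || frames[i].last_use < frames[victim].last_use)
            victim = i;
    }
    return victim;                        /* -1 if every frame is locked */
}
```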
