
Memory Management



Presentation Transcript


  1. Memory Management CSE451 Andrew Whitaker

  2. Big Picture • The OS is responsible for resource management • Up till now, we’ve focused on how multiple programs share the CPU • Now: how do multiple programs share memory? [Figure: Firefox, Word, and javac competing for memory]

  3. Goals of Memory Management • Provide a convenient foundation for programmers, compilers, linkers • Hide memory sharing • Via virtual addressing • Hide memory scarcity • Via virtual memory • Provide isolation between processes

  4. Problem: Memory Relocation • A program cannot know in advance where in physical memory it will be loaded [Figure: Firefox, Word, javac, and PacMan packed into physical memory between 0 MB and 3 MB; reloading PacMan must use a different address]

  5. Virtual Addressing • Programmer sees a virtual address space, which always starts at zero • Hardware/OS map virtual addresses to physical addresses [Figure: Firefox, Word, and PacMan each see virtual addresses 0 MB–1 MB, mapped to different 1 MB regions of physical memory]

  6. Memory Management: Factors to Consider • Workload characteristics • Batch systems, Internet systems, interactive systems • Level of hardware support [Figure: spectrum from no HW support, to base & limit registers, to paging, segmentation, and TLBs]

  7. Strategy #1: Swapping • Only one program occupies memory at once • On a context switch: • Save old program’s memory to disk • Load next program’s memory from disk • Either eagerly or lazily • Advantage: very simple • Disadvantage: disk is exceedingly slow!

  8. Why Use Swapping? • No Hardware Support • MIT’s CTSS operating system (1961) was built this way • Memory was so small that only one job would fit! • Long-lived batch jobs • Job switching costs become insignificant as the quantum grows large

  9. How Slow is Swapping?

  10. Strategy #2: Fixed partitions • Physical memory is broken up into fixed partitions • All partitions are the same size • Partitions never change • Advantages • Simple

  11. Disadvantages of Fixed Partitions • Internal fragmentation: memory in a partition not used by its owning process isn’t available to other processes • Partition size problem: no one size is appropriate for all processes • Tradeoff between fragmentation and accommodating large programs

  12. Implementing Fixed Partitions • Requires hardware-supported base and limit registers [Figure: physical memory split into 2K partitions 0–5 at 0K, 2K, 4K, 6K, 8K, 10K; the virtual address (an offset) is checked against the limit register (2K) and, if it passes, added to the base register (e.g., P3’s base = 6K) to form the physical address; otherwise a protection fault is raised]

  13. Strategy #3: Variable Partitions • Obvious next step: physical memory is broken up into variable-sized partitions • Advantage: no internal fragmentation • Simply allocate partition size to be just big enough for process • Problems • We must know in advance the program size • External fragmentation • As we load and unload jobs, holes are left scattered throughout physical memory

  14. Mechanics of Variable Partitions [Figure: as with fixed partitions, the virtual address is checked against the limit register (now P3’s size) and, if it passes, added to the base register (P3’s base) to form the physical address; otherwise a protection fault is raised. Partitions 0–4 vary in size]

  15. Dealing with External Fragmentation • Swap a program out • Re-load it, adjacent to another • Adjust its base register • “Lather, rinse, repeat” • Ugh [Figure: before and after compaction — partitions 0–4 packed together, closing the holes between them]

  16. Strategy #4: Paging • Use fixed-size units of memory (called pages) for both virtual and physical memory [Figure: virtual address space of pages 0..X mapped onto physical address space of frames 0..Y]

  17. Paging Advantages • Paging reduces internal fragmentation • Paging eliminates external fragmentation • Disadvantages?

  18. Address translation • A virtual address has two parts: virtual page number and offset • Address translation applies only to the virtual page number • The virtual page number (VPN) is an index into a page table • The page table entry contains a page frame number (PFN) • Physical address is PFN::offset [Figure: virtual address = virtual page # | offset]

  19. Page Tables • Managed by the OS • Map a virtual page number (VPN) to a page frame number (PFN) • VPN is simply an index into the page table • One page table entry (PTE) per page in virtual address space • i.e., one PTE per VPN

  20. Mechanics of address translation [Figure: the virtual page # of the virtual address indexes the page table to fetch a page frame #; the physical address is that page frame # concatenated with the unchanged offset, selecting one of page frames 0..Y in physical memory]

  21. Example of address translation • Assume 32-bit addresses • Assume page size is 4KB (4096 bytes, or 2^12 bytes) • VPN is 20 bits long (2^20 VPNs), offset is 12 bits long • Let’s translate virtual address 0x13325328 • VPN is 0x13325, and offset is 0x328 • Assume page table entry 0x13325 contains value 0x03004 • Page frame number is 0x03004 • VPN 0x13325 maps to PFN 0x03004 • Physical address = PFN::offset = 0x03004328

  22. Page Table Entries (PTEs) [PTE layout: V (1 bit) | R (1 bit) | M (1 bit) | prot (2 bits) | page frame number (20 bits)] • PTEs control address mappings • The page frame number indicates the physical page • The valid bit says whether or not the PTE can be used • i.e., whether or not the virtual address is valid • The referenced bit says whether the page has been accessed recently • The modified bit says whether or not the page is dirty • It is set when a write to the page has occurred • The protection bits control which operations are allowed • read, write, execute

  23. Paging Issues (Stay Tuned…) • How to make it fast? • Accessing the page table on each memory reference is not workable • How to deal with memory scarcity? • Virtual memory • How do we control the memory overhead of page tables? • Need one PTE per page in virtual address space • 32-bit address space with 4KB pages = 2^20 PTEs = 1,048,576 PTEs • 4 bytes/PTE = 4MB per page table • OSes typically have separate page tables per process • 25 processes = 100MB of page tables

  24. Strategy #5: Segmentation • Instead of a flat address space, programmers see a collection of segments • Virtual address = <segment #, offset> [Figure: segments for main(), heap, stack, and a shared library; the virtual address splits into segment # and offset]

  25. Why Segments? • Facilitates sharing and re-use • Unlike a page, a segment is a logical entity • Segments can have arbitrary size

  26. Hardware support • Segmentation is a generalization of variable-sized partitions • Variable sized partitions: 1 segment per process • Segmentation: N segments per process • So, we need multiple base/limit pairs • These are stored in a segment table • Segments named by segment #, used as index into table • A virtual address is <segment #, offset> • Offset of virtual address added to base address of segment to yield physical address

  27. Segment lookups [Figure: the segment # of the virtual address indexes the segment table to fetch a base/limit pair; the offset is checked against the limit and, if it passes, added to the base to form the physical address in one of segments 0–4; otherwise a protection fault is raised]

  28. Pros and cons • Yes, it’s “logical” and it facilitates sharing and reuse • But it has all the horror of a variable partition system • e.g., external fragmentation

  29. Intel x86: Combining segmentation and paging • Use segments to manage logical units • Use pages to partition segments into fixed-size chunks • Each segment has its own page table • There is a page table per segment, rather than per user address space • Memory allocation becomes easy once again • No contiguous allocation, no external fragmentation [Figure: virtual address = segment # | page # | offset within page, where page # and offset together form the offset within the segment]

  30. Who Uses This? • Nobody :-) • Only supported by x86, so portability is a problem • Linux primarily uses: • 1 kernel code segment, 1 kernel data segment • 1 user code segment, 1 user data segment
