
Demand Paged Virtual Memory


Presentation Transcript


  1. Demand Paged Virtual Memory • Motivating demand-paged virtual memory • How demand-paged virtual memory works • Issues in demand-paged virtual memory • Page replacement algorithms

  2. Up to this point… • We have assumed that a process must load its entire address space before running • e.g., 0x0 to 0xffffffff • What do those translation schemes do? • The memory abstraction is not perfect: logical memory must fit within physical memory. • Observation: programs spend 90% of their time in 10% of their code • Loading the whole program up front is a waste. • Demand-paged virtual memory resolves this problem.

  3. Demand Paging • Demand paging allows only actively referenced pages to reside in memory • Remaining pages stay on disk (in swap space) • Advantages: • Truly decouples physical memory from logical memory • Provides the illusion of infinite physical memory • Each program takes less physical memory • Less I/O when swapping processes • Faster fork()

  4. Demand Paging: How Does It Work? • The process address space (image) is always kept in swap space (on disk). • A page table entry (in memory) can be in one of three states: • Valid, holding the page's physical address • Invalid, holding an address in swap space (a valid page that is not currently in memory) • Invalid (a truly invalid page) • When a memory access is attempted: • Valid with a physical address: normal access • Invalid with a swap-space address: page fault interrupt, which brings the page in • Truly invalid: error
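A minimal sketch of those three states and the resulting access outcomes, in Python. The names (PTE, PTEState, translate) and the exception-based faults are illustrative assumptions, not part of the slides.

from dataclasses import dataclass
from enum import Enum, auto

PAGE_SIZE = 4096

class PageFault(Exception): pass   # handled by the OS: bring the page in from swap
class SegFault(Exception): pass    # access to a truly invalid page

class PTEState(Enum):
    IN_MEMORY = auto()   # valid: entry holds a physical frame number
    ON_DISK = auto()     # valid page, but currently only in swap space
    INVALID = auto()     # truly invalid: not part of the address space

@dataclass
class PTE:
    state: PTEState
    frame: int = 0       # physical frame number, meaningful if IN_MEMORY
    swap_slot: int = 0   # swap-space slot, meaningful if ON_DISK
    dirty: bool = False  # set when the page has been modified in memory

def translate(pte: PTE, offset: int) -> int:
    if pte.state is PTEState.IN_MEMORY:
        return pte.frame * PAGE_SIZE + offset   # normal access
    if pte.state is PTEState.ON_DISK:
        raise PageFault(pte.swap_slot)          # page-fault interrupt
    raise SegFault()                            # error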

  5. [Diagram: page table entries that point to physical frames, hold an address on swap space, or are truly invalid]

  6. Page Fault • Hardware trap • The OS performs the following steps (other processes can run in the meantime): • Choose a page (frame) to be replaced • If that page has been modified, write its contents back to disk • Change the corresponding page table entry and TLB entry • Load the new page into memory from disk • Update the page table entry • Restart the faulting instruction
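Those steps in rough pseudocode, building on the PTE sketch above; choose_victim, write_to_swap, read_from_swap, and tlb.invalidate are hypothetical helpers, not names from the slides.

def handle_page_fault(faulting_pte: PTE, page_table, tlb):
    # 1. Choose a page (and its frame) to be replaced, per the replacement policy.
    victim_pte, frame = choose_victim(page_table)

    # 2. If the victim page has been modified, write its contents back to swap.
    if victim_pte.dirty:
        write_to_swap(frame, victim_pte.swap_slot)

    # 3. Change the victim's page table entry and invalidate its TLB entry.
    victim_pte.state = PTEState.ON_DISK
    tlb.invalidate(victim_pte)

    # 4. Load the new page into the freed frame from disk.
    read_from_swap(faulting_pte.swap_slot, frame)

    # 5. Update the faulting page's table entry.
    faulting_pte.state = PTEState.IN_MEMORY
    faulting_pte.frame = frame
    # 6. On return from the trap, the faulting instruction is restarted.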

  7. Challenge: Transparent Page Faults • Transparency • A process should not have to do anything extra to handle page faults (the OS and the hardware should do everything). • Why is it hard? • A page fault interrupt is different from a typical interrupt: • a page fault can happen in the middle of an instruction.

  8. More on Transparent Page Faults • An instruction may have side effects • Hardware needs to either unwind or finish off those side effects • Example: ld r1, x // page fault

  9. More on Transparent Page Faults • Hardware designers need to understand virtual memory • Unwinding instructions is not always possible • Example: a block transfer instruction [diagram: block transfer copies from source begin…source end to dest begin…dest end]

  10. Challenge: Performance • Let p be the probability of a page fault: • Average access time = (1 - p) * memory time + p * page fault time • Memory time: 10 ns to 200 ns • Page fault time: disk access, context switching, etc., measured in milliseconds • Assuming memory time = 200 ns, page fault time = 8 ms, p = 0.1%: • Average time = 99.9% * 200 ns + 0.1% * 8,000,000 ns ≈ 8,200 ns • Performance with demand paging is 41 times worse than without it!!! • Is it still worth doing? Under what condition?
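One way to answer the "under what condition?" question is to solve the same formula for p. The 10% slowdown target below is an assumption for illustration, not a number from the slides.

memory_time = 200            # ns
page_fault_time = 8_000_000  # ns (8 ms)
max_slowdown = 0.10          # assume we tolerate at most a 10% slowdown

# Require: (1 - p) * memory_time + p * page_fault_time <= (1 + max_slowdown) * memory_time
p_max = (max_slowdown * memory_time) / (page_fault_time - memory_time)
print(p_max)   # ~2.5e-06: fewer than one fault per ~400,000 memory accesses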

  11. Page Replacement Policies (algorithms) • Random replacement: replace a random page + Easy to implement in hardware (e.g., TLB) - May toss out useful pages • First in, first out (FIFO): toss out the oldest page + Fair for all pages - May toss out pages that are heavily used

  12. More Page Replacement Policies • Optimal (MIN): replaces the page that will not be used for the longest time + Optimal - Does not know the future • Least-recently used (LRU): replaces the page that has not been used for the longest time + Good if past use predicts future use - Tricky to implement efficiently

  13. More Page Replacement Policies • Least frequently used (LFU): replaces the page that is used least often • Tracks a usage count for each page + Good if past use predicts future use - Pages that built up high counts in the past are hard to evict, even if they are no longer used

  14. Example • A process makes references to 4 pages: A, B, E, and R • Reference stream: BEERBAREBEAR • Physical memory size: 3 pages

  15-32. FIFO [table trace of the reference stream BEERBAREBEAR with 3 frames, built up one reference per slide] • 7 page faults • 4 compulsory cache misses
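A small sketch (not from the slides) that replays the BEERBAREBEAR trace under FIFO with 3 frames and reproduces the counts above.

from collections import deque

def fifo_faults(refs, num_frames):
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest resident page
            frames.append(page)
    return faults

refs = "BEERBAREBEAR"
print(fifo_faults(refs, 3))   # 7 page faults
print(len(set(refs)))         # 4 compulsory misses (first touch of A, B, E, R)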

  33-44. MIN [table trace of the same reference stream under the MIN (optimal) policy, built up one reference per slide] • 6 page faults
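A sketch (not from the slides) of the MIN/optimal policy on the same trace: on a fault, evict the resident page whose next use lies farthest in the future.

def min_faults(refs, num_frames):
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            # Position of each resident page's next reference (infinite if never used again).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(min_faults("BEERBAREBEAR", 3))   # 6 page faults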

  45-50. LRU [table trace of the same reference stream under LRU, built up one reference per slide]
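A sketch (not from the slides) of LRU on the same trace, using an OrderedDict as a recency list: move a page to the back on every use, evict from the front. The fault count is computed by the sketch, since the transcript above does not give an LRU total.

from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames = OrderedDict()    # least recently used page at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)      # evict the least recently used page
        frames[page] = True
    return faults

print(lru_faults("BEERBAREBEAR", 3))   # 8 page faults on this trace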
