
Virtual Memory Optimization Techniques Overview

Explore fetch, placement, and replacement policies in virtual memory management for optimal system performance. Understand algorithms such as LRU, FIFO, and NRU, and compare their accuracy and implementation challenges.


Presentation Transcript


  1. Chapter 3.3 : OS Policies for Virtual Memory • Fetch policy • Placement policy • Replacement policy • Resident set management • Cleaning policy • Load control From: Operating Systems by W. Stallings, Prentice-Hall, 1995

  2. Fetch Policy • Demand Paging: Pages are fetched when needed (i.e., when a page fault occurs) • A process starts with a flurry of page faults; eventually locality takes over • Prepaging (anticipatory): Pages other than the one needed are brought in • Prepaging makes use of disk storage characteristics: if pages are stored contiguously, it may be more efficient to fetch them in one go • The policy is ineffective if the extra pages are never referenced

  3. Placement Policy • The placement policy determines where in real memory a process piece is to reside • With anything other than pure segmentation this is not an issue, since any free frame will do; with pure segmentation, strategies such as best-fit and first-fit apply

  4. Replacement Policy • All page frames are in use and a page fault has occurred, so the new page must go into a frame • Which one do you take out?

  5. Replacement Algorithm Objectives • The page being replaced should be the page least likely to be referenced in the near future • There is a link between past history and the future because of locality • Thus most algorithms base their decision on past history

  6. Scope of Replacements • The set of frames from which these algorithms choose is based on the scope • Local Scope: Only frames belonging to the faulting process can be replaced • Global Scope: All frames can be replaced • Some frames will be locked (e.g., kernel pages, system buffers, etc.)

  7. Replacement Algorithms • Optimal • Not-recently-used (NRU) • First-in, first-out (FIFO) • Least recently used (LRU) • Not frequently used (NFU) • Modified NFU (~LRU)

  8. Optimal Replacement Algorithm • Replace the page which is least likely to be referenced, i.e., the page whose next reference lies farthest in the future • This algorithm is impossible to implement because the OS would need perfect knowledge of future events • It is used as a yardstick against which other algorithms are compared
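Because OPT needs the full future reference stream, it can only be run offline over a recorded trace. A minimal Python sketch of the victim selection (the function name is mine, not from the slides):

```python
def opt_victim(frames, future_refs):
    """Pick the resident page whose next reference is farthest in the
    future (or which is never referenced again)."""
    def next_use(page):
        try:
            return future_refs.index(page)
        except ValueError:
            return float("inf")  # never used again: the ideal victim
    return max(frames, key=next_use)

# First fault in the slides' example: frames hold {2, 3, 1}, the
# remaining stream after the faulting reference is 2 4 5 3 2 5 2,
# and page 1 never appears again, so it is evicted.
print(opt_victim([2, 3, 1], [2, 4, 5, 3, 2, 5, 2]))  # -> 1
```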

  9. Optimal Replacement Example (reference stream with 3 page frames; R marks a page fault after memory is full)

  Stream:   2  3  2  1  5  2  4  5  3  2  5  2
  Frame 1:  2  2  2  2  2  2  4  4  4  2  2  2
  Frame 2:     3  3  3  3  3  3  3  3  3  3  3
  Frame 3:           1  5  5  5  5  5  5  5  5
  Fault:                R     R        R

  10. Optimal Replacement Example (Cont.) • 1st page fault: page 1 is replaced by page 5 because page 1 does not appear in the page address stream again • 2nd page fault: page 2 is replaced by page 4 because page 2 will be referenced only after two other references (pages 5 and 3) • 3rd page fault: page 4 is replaced by page 2 because page 4 is not in the stream any more

  11. Not-Recently-Used (NRU) • Replace the page which is not used recently • Use the referenced (R) & modified (M) bits in the page table entry • R bit is set when the page is referenced (read or written) • M bit is set when the page is modified (contents changed - written)

  12. NRU Implementation • Hardware • The R & M bits are set by hardware on every memory reference • Software • The R & M bits in the page table entry are maintained by the OS and updated at page faults

  13. NRU Algorithm • When a process starts, both R & M bits are cleared • The R bit is cleared on every clock interrupt • At a page fault, a page from the lowest numbered non-empty class is removed: • Class 0 : not referenced, not modified • Class 1 : not referenced, modified • Class 2 : referenced, not modified • Class 3 : referenced, modified • Class 1 appears when a clock interrupt clears the R bits of class 3 pages
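The class numbering above is just the two bits read as a binary number (class = 2·R + M), so victim selection reduces to a minimum over that value. A small sketch, with a hypothetical page-table snapshot:

```python
def nru_victim(pages):
    """pages maps page number -> (R, M) bits.
    Class = 2*R + M, so taking the minimum picks a page from the
    lowest numbered non-empty class (0 < 1 < 2 < 3)."""
    return min(pages, key=lambda p: 2 * pages[p][0] + pages[p][1])

# Hypothetical snapshot at a page fault: page -> (R, M)
pages = {1: (1, 1), 2: (0, 1), 3: (1, 0)}
print(nru_victim(pages))  # -> 2 (class 1: not referenced, but modified)
```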

  14. First-in First-out (FIFO) • Replace the page which has been in memory for the longest time • Simple to implement (use a circular buffer) • There is a chance that the oldest page may be used heavily (thrashing - page moves back & forth between memory & disk) • Inspect R & M bits (ie., classes) to skip over heavily used pages to decrease thrashing
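The circular-buffer implementation the slide mentions can be sketched in a few lines of Python; the class name is mine, and the reference stream below is the one reconstructed from the slides' fault explanations:

```python
class FifoReplacer:
    """FIFO page replacement using a circular buffer of frames."""

    def __init__(self, n_frames):
        self.frames = [None] * n_frames
        self.hand = 0  # index of the oldest frame, i.e. the next victim

    def access(self, page):
        """Return True on a page fault, False on a hit."""
        if page in self.frames:
            return False
        self.frames[self.hand] = page                  # evict the oldest page
        self.hand = (self.hand + 1) % len(self.frames)
        return True

# Stream from the slides' examples, 3 frames:
r = FifoReplacer(3)
faults = sum(r.access(p) for p in [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2])
print(faults)  # -> 9 (3 compulsory loads + 6 replacements)
```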

  15. FIFO Example (same reference stream, 3 frames; R marks a page fault after memory is full)

  Stream:   2  3  2  1  5  2  4  5  3  2  5  2
  Frame 1:  2  2  2  2  5  5  5  5  3  3  3  3
  Frame 2:     3  3  3  3  2  2  2  2  2  5  5
  Frame 3:           1  1  1  4  4  4  4  4  2
  Fault:                R  R  R     R     R  R

  16. Least Recently Used (LRU) • Replace the page which has been unused for the longest time • Performs almost as well as optimal • Implementation imposes overhead • One implementation keeps a time stamp for each page (time of last reference)
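A short simulation sketch: instead of per-page timestamps, an ordered map kept in recency order is an equivalent way to track "unused for the longest time" (the function name and the use of `OrderedDict` are mine, not from the slides):

```python
from collections import OrderedDict

def lru_faults(stream, n_frames):
    """Count LRU page faults; the OrderedDict holds resident pages
    ordered from least to most recently used, standing in for
    per-page timestamps."""
    frames = OrderedDict()
    faults = 0
    for page in stream:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

# Stream from the slides' examples, 3 frames:
# 7 faults = 3 compulsory loads + 4 replacements
print(lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # -> 7
```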

  17. LRU Example (same reference stream, 3 frames; R marks a page fault after memory is full)

  Stream:   2  3  2  1  5  2  4  5  3  2  5  2
  Frame 1:  2  2  2  2  2  2  2  2  3  3  3  3
  Frame 2:     3  3  3  5  5  5  5  5  5  5  5
  Frame 3:           1  1  1  4  4  4  2  2  2
  Fault:                R     R     R  R

  18. LRU Example (Cont.) • 1st page fault: page 5 replaces page 3 because page 3 hasn't been referenced in the last two references • 2nd page fault: page 4 replaces page 1 because page 1 hasn't been referenced in the last two references • 3rd page fault: page 3 replaces page 2 because page 2 hasn't been referenced in the last two references • 4th page fault: page 2 replaces page 4 because page 4 hasn't been referenced in the last two references

  19. Hardware Implementation of LRU • Maintain a linked list of all pages ordered from most recently used to least recently used. Updating the list on every instruction is very expensive and time-consuming • An approximate solution • A hardware counter is incremented after each instruction • Each page table entry has a field that stores the counter value at the time of the page's last reference • At a page fault, remove the page with the smallest stored counter value (the oldest last reference)

  20. Software Implementation of LRU : Not Frequently Used (NFU) • At every clock interrupt, each page's R bit value is added to a counter kept for that page in the page table • At a page fault, replace the page with the minimum counter value • Since the counters are updated only at clock interrupts rather than on every reference, the algorithm is an approximation

  21. NFU Example

  R bits recorded at each clock tick (pages 0-5): 101011, 110010, 110101, 100010, 011000

  Counter values after each tick:

  Tick:    1  2  3  4  5
  Page 0:  1  2  3  4  4
  Page 1:  0  1  2  2  3
  Page 2:  1  1  1  1  2
  Page 3:  0  0  1  1  1
  Page 4:  1  2  2  3  3
  Page 5:  1  1  2  2  2

  At a page fault after tick 5, page 3 (the minimum counter) would be replaced.
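The counter update and victim selection can be sketched directly; replaying the per-tick R-bit history from the slides' example (pages 0-5) reproduces the final counters. Function names are mine:

```python
def nfu_tick(counters, r_bits):
    """At each clock interrupt, add every page's R bit to its counter."""
    for page, r in r_bits.items():
        counters[page] += r

def nfu_victim(counters):
    """At a page fault, replace the page with the smallest counter."""
    return min(counters, key=counters.get)

# R-bit snapshots for pages 0-5 at ticks 1..5, from the slides' example:
counters = {p: 0 for p in range(6)}
for tick in ["101011", "110010", "110101", "100010", "011000"]:
    nfu_tick(counters, {p: int(b) for p, b in enumerate(tick)})
print(nfu_victim(counters))  # -> 3 (counter value 1, the minimum)
```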

  22. Modified NFU : ~LRU • Aging of pages • Algorithm • Shift the counter values (in the page table) right by 1 • Insert the R bit value (0 = not referenced, 1 = referenced) as the new leftmost bit at every clock tick • At a page fault, replace the page with the lowest counter value (this is the least recently used page)

  23. Modified NFU : ~LRU (Cont.) • Example Suppose counter is 1 0 1 1 0 & R bit = 0 New counter becomes 0 1 0 1 1 • Leading zeros represent the number of clock ticks that the page has not been referenced
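The shift-and-insert step is one line of bit manipulation. A sketch using the slide's 5-bit counter (real systems typically use 8 bits; the constant and function name are mine):

```python
COUNTER_BITS = 5  # counter width; 5 matches the slide's example

def age(counter, r_bit):
    """One clock tick of aging: shift right by 1 and insert the R bit
    as the new leftmost bit. Leading zeros then count consecutive
    ticks in which the page was not referenced."""
    return (counter >> 1) | (r_bit << (COUNTER_BITS - 1))

# The slide's example: counter 10110 with R = 0 becomes 01011
print(f"{age(0b10110, 0):05b}")  # -> 01011
```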

  24. Differences Between NFU & ~LRU • NFU counts the number of times that a page is accessed in a given period • ~LRU incorporates a time factor by shifting right (aging) the counter value

  25. Resident Set Management • Resident Set: Set of a process' pages which are in main memory • The OS must manage the size and allocation policies that affect the resident set

  26. Factors Include • A smaller resident set per process means more processes in memory, so the OS is more likely to find at least one ready process (otherwise swapping must occur) • If a process' resident set is small, its page fault frequency will be higher • Increasing the resident set size beyond some limit does not affect the page fault frequency

  27. Resident Set Management Policies • Fixed allocation policy Each process has a fixed number of pages • Variable allocation policy The number of pages per process can change

  28. Allocation vs Scope

  29. Fixed Allocation, Local Scope • The number of frames per process is decided beforehand and can't be changed • Too Small: high page fault rate, poor performance • Too Large: few processes in memory, high processor idle time and/or swapping

  30. Variable Allocation, Global Scope • Easiest to implement • Processes with high page fault rates will tend to grow • However, a replacement problem remains: the victim page may belong to any process, so a process can lose a frame it still needs

  31. Variable Allocation, Local Scope • Allocate new process a resident set size • Prepaging or demand to fill up allocation • Select replacement from within faulting process • Re-evaluate allocation occasionally

  32. Working Set (Denning's) • Page layout of the example program: page 1 = routine 1; pages 2-3 = routine 2; pages 4-7 = main loop; page 8 = main program • Program Execution • Start with the main program (page 8) • The main program loads the main loop (pages 4, 5, 6, 7) • The program executes in the main loop for 20 seconds • Then routine 1 is called (page 1), which executes for 2 seconds • Finally routine 2 (pages 2, 3) is called and executes for 3 seconds

  33. Working Set (Cont.) • In the previous example, the process needs pages 4, 5, 6, 7 for most of the time (20 seconds of a total 25-second execution) • If these pages are kept in memory, the number of page faults will decrease; otherwise, thrashing may occur • The set of pages that a process is currently using is called its working set • Rule: do not run a process if its working set cannot be kept in memory

  34. The Working Set Page Replacement Algorithm • The working set is the set of pages used by the k most recent memory references • w(k, t) is the size of the working set at time t

  35. Working Set Implementation • Use aging as in ~LRU • Pages with 1 (referenced) in “n” clock ticks are assumed to be a member of the working set • “n” is experimentally determined
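Under this aging-based approximation, the working-set test reduces to checking whether any of the top n bits of a page's aging counter is set. A sketch under those assumptions (the function name and the 8-bit counter width are mine):

```python
def in_working_set(aging_counter, n, counter_bits=8):
    """A page is in the working set if its R bit was 1 in any of the
    last n clock ticks, i.e. if any of the top n bits of its aging
    counter is set. counter_bits is an assumed counter width; n is
    determined experimentally, as the slide notes."""
    mask = ((1 << n) - 1) << (counter_bits - n)
    return bool(aging_counter & mask)

# Counter 00110000: last referenced 3 ticks ago.
print(in_working_set(0b00110000, 2))  # -> False: no reference in last 2 ticks
print(in_working_set(0b00110000, 4))  # -> True: referenced within last 4 ticks
```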

  36. Cleaning Policy • Cleaning Policy: Deciding when a modified page should be written out to secondary memory • Demand Cleaning: A page is written out only when it has been selected for replacement • This means a page fault may imply two I/O operations (writing out the old page and reading in the new one), which severely affects performance

  37. Cleaning Policy (Cont.) • Precleaning: Pages written before frames are needed (so they can be written out in batches) • No sense writing out batches of pages and then finding them changed again • Page Buffering is a nice compromise

  38. Load Control • Load Control: Controlling the number of processes that are resident in memory • Too few processes implies a lack of ready processes, which implies swapping • Too many processes implies a high page fault frequency, which leads to thrashing
