W4118 Operating Systems



  1. W4118 Operating Systems Instructor: Junfeng Yang

  2. Logistics • Homework 5 will be out this evening, due 3:09pm 4/14 • You will implement a new page replacement algorithm: Most Recently Used (MRU) • Details will be clear at the end of this lecture

  3. Last lecture • Segmentation: divide address space into logical segments; each has separate base and limit • Advantages • Disadvantages • Combine paging and segmentation • X86 example • Intro to Virtual Memory • Motivation: want to run many processes or a process with large address space using limited physical memory • Observation: locality of reference • Process only needs a small number of pages at any moment • Virtual memory: OS + hardware work together to provide processes the illusion of having fast memory as large as disk

  4. Today • Virtual memory implementation • What are the operations virtual memory must provide? • When to bring a page from disk into memory? • What page in memory should be replaced? • Page replacement algorithms • How to approximate LRU efficiently?

  5. Virtual Address Space • A virtual address maps to one of three locations • Physical memory: small, fast, expensive • Disk: large, slow, cheap • Nothing

  6. Virtual Memory Operation • What happens when a process references a page in the backing store? • Recognize the location of the page • Choose a free page • Bring the page from disk into memory • Above steps need hardware and software cooperation • How to detect if a page is in memory? • Extend page table entries with present bits • Page fault: if the present bit is cleared, referencing the page results in a trap into the OS
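As a rough sketch of the present-bit check (illustrative C only, not real x86 or Linux code; the one-bit PTE layout and the names PTE_PRESENT and translate are assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical PTE layout: bit 0 is the present bit, the remaining
     * bits hold the physical frame number. */
    #define PTE_PRESENT 0x1u

    typedef uint32_t pte_t;

    /* Returns true and fills *frame if the page is resident; returns false
     * in the case where real hardware would raise a page fault trap. */
    bool translate(const pte_t *page_table, unsigned vpn, unsigned *frame)
    {
        pte_t pte = page_table[vpn];
        if (!(pte & PTE_PRESENT))
            return false;        /* present bit cleared: trap into the OS  */
        *frame = pte >> 1;       /* strip the present bit to get the frame */
        return true;
    }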

  7. Handling a Page Fault • OS selects a free page • OS brings faulting page from disk into memory • Page table is updated, present bit is set • Process continues execution
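The four steps above can be sketched as a fault handler. This is only an outline of the slide's steps, not a real kernel path; every type and helper below (struct mm, allocate_frame, choose_victim, swap_out, swap_in, set_pte, clear_pte) is a hypothetical placeholder:

    /* Hypothetical helpers -- placeholders, not a real kernel API. */
    struct mm;                                   /* per-process address space */
    struct victim { int frame; int vpn; int dirty; };

    int  allocate_frame(void);                   /* returns <0 if none free   */
    struct victim choose_victim(struct mm *mm);  /* page replacement policy   */
    void swap_out(int frame, int vpn);
    void swap_in(int frame, int vpn);
    void set_pte(struct mm *mm, int vpn, int frame, int present);
    void clear_pte(struct mm *mm, int vpn);

    void handle_page_fault(struct mm *mm, int vpn)
    {
        int frame = allocate_frame();            /* 1. look for a free frame   */
        if (frame < 0) {                         /*    none: evict a victim    */
            struct victim v = choose_victim(mm);
            if (v.dirty)
                swap_out(v.frame, v.vpn);        /*    write dirty page back   */
            clear_pte(mm, v.vpn);                /*    victim no longer present */
            frame = v.frame;
        }
        swap_in(frame, vpn);                     /* 2. read faulting page in   */
        set_pte(mm, vpn, frame, 1);              /* 3. update PTE, set present */
    }                                            /* 4. process resumes at the
                                                  *    faulting instruction    */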

  8. Steps in Handling a Page Fault

  9. Continuing Process • Continuing the process is tricky • Page fault may have occurred in the middle of an instruction • Want page fault to be transparent to user processes • Options • Skip faulting instruction? • Restart instruction from beginning? • What about an instruction like: mov ++(sp), p • Requires hardware support to restart instructions

  10. OS Decisions • Page selection: when to bring pages from disk to memory? • Page replacement: when no free pages available, must select victim page in memory and throw it out to disk

  11. Page Selection Algorithms • Demand paging: load page on page fault • Start up process with no pages loaded • Wait until a page absolutely must be in memory • Request paging: user specifies which pages are needed • Requires users to manage memory by hand • Users do not always know best • OS trusts users (e.g., one user can use up all memory) • Prepaging: load page before it is referenced • When one page is referenced, bring in the next one • Does not work well for all workloads • Difficult to predict the future

  12. Page Replacement Algorithms • Optimal: throw out page that won't be used for longest time in future • Best algorithm if we can predict the future • Good for comparison, but not practical • Random: throw out a random page • Easy to implement • Works surprisingly well. Why? Avoids the worst case • FIFO: throw out page that was loaded in first • Easy to implement • Fair: all pages receive equal residency • LRU: throw out page that hasn't been used for the longest time • Past predicts future • With locality: approximates Optimal

  13. Page Replacement Algorithms Goal: want lowest page-fault rate Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

  14. Optimal Algorithm • Replace page that will not be used for longest period of time • 4 frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 → 6 page faults • Problem: how do you know the future? • Not practical, but can be used for measuring how well your algorithm performs
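A small user-space simulation of the optimal policy (textbook model: one victim per fault, no prefetching; all names here are illustrative) reproduces the 6 faults on this reference string with 4 frames:

    #include <stdio.h>

    /* Optimal (Belady) replacement: on a fault with all frames full, evict
     * the resident page whose next use lies furthest in the future. */
    int opt_faults(const int *ref, int n, int nframes)
    {
        int frames[16];                  /* resident pages (nframes <= 16) */
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == ref[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {        /* a free frame is available */
                frames[used++] = ref[i];
                continue;
            }
            int victim = 0, best = -1;
            for (int j = 0; j < used; j++) {
                int next = n;            /* n means "never used again" */
                for (int k = i + 1; k < n; k++)
                    if (ref[k] == frames[j]) { next = k; break; }
                if (next > best) { best = next; victim = j; }
            }
            frames[victim] = ref[i];
        }
        return faults;
    }

    int main(void)
    {
        int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof(ref) / sizeof(ref[0]);
        printf("optimal, 4 frames: %d faults\n", opt_faults(ref, n, 4));
        return 0;
    }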

  15. First-In-First-Out (FIFO) Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • 3 frames (3 pages can be in memory at a time per process) • 4 frames: 10 page faults • Problem: ignores access patterns • Belady's anomaly: more frames → more page faults!

  16. First-In-First-Out (FIFO) Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • 3 frames (3 pages can be in memory at a time per process): 9 page faults • 4 frames: 10 page faults • Belady's Anomaly: more frames → more page faults
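The same kind of user-space sketch for FIFO (illustrative only) shows Belady's anomaly directly, printing 9 faults with 3 frames and 10 with 4:

    #include <stdio.h>

    /* FIFO replacement: evict the page that has been resident the longest.
     * Because frames fill in index order, a circular index acts as the queue. */
    int fifo_faults(const int *ref, int n, int nframes)
    {
        int frames[16];
        int used = 0, oldest = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == ref[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = ref[i];         /* fill a free frame     */
            } else {
                frames[oldest] = ref[i];         /* evict oldest arrival  */
                oldest = (oldest + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void)
    {
        int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof(ref) / sizeof(ref[0]);
        printf("FIFO, 3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* 9  */
        printf("FIFO, 4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* 10 */
        return 0;
    }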

  17. Graph of Page Faults Versus The Number of Frames

  18. FIFO Illustrating Belady’s Anomaly • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

  19. Least Recently Used (LRU) Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • 4 frames: 8 page faults (5 replaces 3, then 3 replaces 4, 4 replaces 5, 5 replaces 1) • With locality, LRU approximates Optimal algorithm

  20. Implementing LRU: hardware • A counter for each page • Every time page is referenced, save system clock into the counter of the page • Page replacement: scan through pages to find the one with the oldest clock • Problem: have to search all pages/counters!
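A sketch of that counter scheme (illustrative only; in reality the hardware, not C code, would update the counter as a side effect of each memory access, and NPAGES, last_used, on_reference, pick_victim are made-up names):

    #include <stdint.h>

    #define NPAGES 64

    uint64_t last_used[NPAGES];   /* one counter per resident page       */
    uint64_t now;                 /* system clock, e.g. a monotonic tick */

    void on_reference(int page)
    {
        last_used[page] = ++now;  /* done by hardware on every access    */
    }

    int pick_victim(void)
    {
        int victim = 0;
        for (int p = 1; p < NPAGES; p++)   /* linear scan: the costly part */
            if (last_used[p] < last_used[victim])
                victim = p;
        return victim;
    }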

  21. Implementing LRU: software • A doubly linked list of pages • Every time a page is referenced, move it to the front of the list • Page replacement: remove the page from the back of the list • Avoids scanning all pages • Problem: too expensive • Requires 6 pointer updates for each page reference • High contention on multiprocessors

  22. Example software LRU implementation
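A minimal sketch of such a list-based LRU in C (assuming a fixed pool of page descriptors; the struct and helper names are illustrative): every reference moves the page to the front, and eviction pops the tail without scanning.

    #include <stddef.h>

    struct page {
        int pfn;                          /* physical frame number */
        struct page *prev, *next;
    };

    struct lru {
        struct page *head;                /* most recently used    */
        struct page *tail;                /* least recently used   */
    };

    static void unlink_page(struct lru *l, struct page *p)
    {
        if (p->prev) p->prev->next = p->next; else l->head = p->next;
        if (p->next) p->next->prev = p->prev; else l->tail = p->prev;
    }

    static void push_front(struct lru *l, struct page *p)
    {
        p->prev = NULL;
        p->next = l->head;
        if (l->head) l->head->prev = p; else l->tail = p;
        l->head = p;
    }

    /* The pointer updates here are the per-reference cost the slide warns about. */
    void on_reference(struct lru *l, struct page *p)
    {
        if (l->head == p)
            return;                       /* already most recently used */
        unlink_page(l, p);
        push_front(l, p);
    }

    struct page *evict(struct lru *l)
    {
        struct page *victim = l->tail;    /* least recently used page   */
        if (victim)
            unlink_page(l, victim);
        return victim;
    }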

  23. LRU: Concept vs. Reality • LRU is considered to be a reasonably good algorithm • Problem is in implementing it efficiently • Hardware implementation: counter per page, copied per memory reference, have to search pages on page replacement to find the oldest • Software implementation: no search, but pointer swaps on each memory reference, high contention • In practice, settle for efficient approximate LRU • Find an old page, but not necessarily the oldest • LRU is an approximation anyway, so approximate more

  24. Clock (second-chance) Algorithm • Goal: remove a page that has not been referenced recently • good LRU-approximate algorithm • Idea • A reference bit per page • Memory reference: hardware sets bit to 1 • Page replacement: OS finds a page with reference bit cleared • OS traverses all pages, clearing bits over time

  25. Clock Algorithm Implementation • OS circulates through pages, clearing reference bits and finding a page with reference bit set to 0 • Keep pages in a circular list = clock • Pointer to next victim = clock hand

  26. Second-Chance (clock) Page-Replacement Algorithm
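The victim-selection loop can be sketched as follows (illustrative only; frame_ref stands for the hardware-maintained reference bits and hand for the clock hand):

    #define NFRAMES 8

    int frame_ref[NFRAMES];   /* reference bits, set by hardware on access */
    int hand;                 /* next frame the clock hand will examine    */

    int clock_pick_victim(void)
    {
        for (;;) {
            if (frame_ref[hand] == 0) {          /* not referenced recently */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frame_ref[hand] = 0;                 /* give it a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }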

  27. Clock Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • (Figure: frame contents and reference bits as the clock hand sweeps, clearing bits until it finds a page with reference bit 0 to evict)

  28. Clock Algorithm Extension • Problem of clock algorithm: does not differentiate dirty vs. clean pages • Dirty pages: pages that have been modified and need to be written back to disk • More expensive to replace dirty pages than clean pages • One extra disk write (~5 ms)

  29. Clock Algorithm Extension • Use dirty bit to give dirty pages preferential treatment (evict clean pages first) • On page reference • Read: hardware sets reference bit • Write: hardware sets dirty bit • Page replacement • reference = 0, dirty = 0 → victim page • reference = 0, dirty = 1 → skip (don't change) • reference = 1, dirty = 0 → reference = 0, dirty = 0 • reference = 1, dirty = 1 → reference = 0, dirty = 1 • advance hand, repeat • If no victim page found, run swap daemon to flush unreferenced dirty pages to the disk, repeat
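The extended scan can be sketched the same way (illustrative names only; flush_dirty stands in for the swap daemon, and in reality the hardware is what sets the bits):

    #define NFRAMES 8

    int ref_bit[NFRAMES];     /* set by hardware on any access */
    int dirty_bit[NFRAMES];   /* set by hardware on a write    */
    int hand;

    /* Stand-in for the swap daemon: pretend unreferenced dirty pages
     * have been written back, so they become clean. */
    void flush_dirty(void)
    {
        for (int i = 0; i < NFRAMES; i++)
            if (ref_bit[i] == 0)
                dirty_bit[i] = 0;
    }

    int pick_victim(void)
    {
        for (;;) {
            for (int scanned = 0; scanned < NFRAMES; scanned++) {
                if (ref_bit[hand] == 0 && dirty_bit[hand] == 0) {
                    int victim = hand;           /* cheapest page to evict */
                    hand = (hand + 1) % NFRAMES;
                    return victim;
                }
                if (ref_bit[hand] == 1)
                    ref_bit[hand] = 0;           /* clear ref, keep dirty  */
                /* ref = 0, dirty = 1: skip without changing anything */
                hand = (hand + 1) % NFRAMES;
            }
            flush_dirty();    /* no clean victim in a full sweep: write back */
        }
    }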

  30. Problem with LRU-based Algorithms • LRU ignores frequency • Intuition: a frequently accessed page is more likely to be accessed in the future than a page accessed just once • Problematic workload: scanning large data set • 1 2 3 1 2 3 1 2 3 1 2 3 … (pages frequently used) • 4 5 6 7 8 9 10 11 12 … (pages used just once) • Solution: track access frequency • Least Frequently Used (LFU) algorithm • Expensive • Approximate LFU: LRU-2Q

  31. Problem with LRU-based Algorithms (cont.) • LRU does not handle repeated scans well when the data set is bigger than memory • 5-frame memory with 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 • Solution: Most Recently Used (MRU) algorithm • Replace the most recently used page • Best for repeated scans
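Since Homework 5 asks for MRU, what follows is only a user-space illustration of the policy itself, not kernel code: on a fault with all frames full, evict the page with the newest last-use time. The sketch scans 6 pages through 5 frames so the data set exceeds memory; it faults 8 times, where LRU would fault on all 18 references.

    #include <stdio.h>

    /* MRU replacement: track each resident page's last-use time and, on a
     * fault with no free frame, evict the one touched most recently. */
    int mru_faults(const int *ref, int n, int nframes)
    {
        int frames[16], stamp[16];
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int j, hit = -1;
            for (j = 0; j < used; j++)
                if (frames[j] == ref[i]) { hit = j; break; }
            if (hit >= 0) {
                stamp[hit] = i;               /* refresh last-use time    */
                continue;
            }
            faults++;
            if (used < nframes) {
                j = used++;                   /* free frame available     */
            } else {
                j = 0;                        /* evict most recently used */
                for (int k = 1; k < used; k++)
                    if (stamp[k] > stamp[j])
                        j = k;
            }
            frames[j] = ref[i];
            stamp[j] = i;
        }
        return faults;
    }

    int main(void)
    {
        int ref[] = {1, 2, 3, 4, 5, 6,  1, 2, 3, 4, 5, 6,  1, 2, 3, 4, 5, 6};
        int n = sizeof(ref) / sizeof(ref[0]);
        printf("MRU, 5 frames: %d faults\n", mru_faults(ref, n, 5));
        return 0;
    }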

  32. Memory Management Summary • Different memory management techniques • Contiguous allocation • Paging • Segmentation • Paging + segmentation • In practice, hierarchical paging is most widely used; segmentation is losing popularity • Some RISC architectures do not even support segmentation • Virtual memory • OS and hardware exploit locality to provide the illusion of fast memory as large as disk • Similar technique used throughout the entire memory hierarchy • Page replacement algorithms • LRU: past predicts future • No silver bullet: choose algorithm based on workload

  33. Current Trends • Less critical now • Personal computers vs. time-sharing machines • Memory is cheap → larger physical memory • Larger page sizes (even multiple page sizes) • Better TLB coverage • Smaller page tables, fewer pages to manage • Internal fragmentation • Larger virtual address space • 64-bit address space • Sparse address spaces • File I/O using the virtual memory system • Memory-mapped I/O: mmap()
