Lecture 10, 19 November 2013
OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, Third Edition
ANDREW S. TANENBAUM, ALBERT S. WOODHULL
Chap. 4 Memory Management
4.4 Page Replacement Algorithms
4.5 Design Issues
4.6 Segmentation
4.4 Page Replacement Algorithms
• Page fault forces a choice
  • which page must be removed
  • to make room for the incoming page
• A modified page must first be saved
  • an unmodified one is just overwritten
• Better not to choose an often-used page
  • it will probably need to be brought back in soon
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
4.4.1 Optimal Page Replacement Algorithm
• Replace the page needed at the farthest point in the future
• Optimal but unrealizable
• Estimate by logging page use on previous runs of the process
  • although this is impractical
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
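As a rough illustration (not from the slides), here is a minimal Python sketch of the optimal policy, assuming the entire reference string is known in advance; the reference string and frame count below are made up.

```python
# Sketch of the optimal (Belady's) algorithm: on a fault, evict the resident
# page whose next use lies farthest in the future (or never comes). It needs
# the full future reference string, which is why it is unrealizable in practice.

def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)           # free frame available
            continue
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float('inf')
        victim = max(frames, key=next_use)          # farthest next use
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([0, 1, 2, 3, 2, 1, 0, 3, 2, 3], 3))   # -> 5 faults
```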
4.4.2 Not Recently Used Page Replacement Algorithm
• Each page has a Referenced (R) bit and a Modified (M) bit
  • the bits are set when the page is referenced or modified
  • R bits are cleared periodically
• Pages are classified:
  • Class 0: not referenced, not modified
  • Class 1: not referenced, modified
  • Class 2: referenced, not modified
  • Class 3: referenced, modified
• NRU removes a page at random from the lowest-numbered non-empty class
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
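A small illustrative sketch of NRU victim selection (not from the book); the page numbers and bit values are invented, and the R/M bits are assumed to be maintained elsewhere by the hardware and OS.

```python
import random

# Sketch of NRU: classify resident pages by their (R, M) bits into classes
# 0..3 and evict a random page from the lowest-numbered non-empty class.

def nru_victim(pages):
    """pages: dict mapping page number -> (R, M) bits."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for page, (r, m) in pages.items():
        classes[2 * r + m].append(page)    # class = 2*R + M
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])   # random page from lowest class
    raise ValueError("no resident pages")

print(nru_victim({3: (1, 1), 7: (0, 1), 9: (1, 0), 12: (0, 0)}))   # -> 12
```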
4.4.3 FIFO Page Replacement Algorithm
• Maintain a linked list of all pages
  • in the order they came into memory
• The page at the beginning of the list is replaced
• Disadvantage
  • the page in memory the longest may be often used, and should not be replaced!
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
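A minimal sketch of FIFO replacement (not from the slides), using the same made-up reference string as above so the fault counts can be compared.

```python
from collections import deque

# Sketch of FIFO replacement: pages enter a queue in arrival order and the
# oldest resident page is evicted on a fault, regardless of how heavily used.

def fifo_faults(refs, num_frames):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                         # hit
        faults += 1
        if len(queue) == num_frames:
            resident.discard(queue.popleft())   # evict the page loaded first
        queue.append(page)
        resident.add(page)
    return faults

print(fifo_faults([0, 1, 2, 3, 2, 1, 0, 3, 2, 3], 3))   # -> 6 faults
```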
4.4.4 Second Chance Page Replacement Algorithm
Fig. 4.14 Operation of second chance
• pages are kept in FIFO order (the numbers above the pages are their loading times)
• if a page fault occurs at time 20 and page A has its R bit set, A's R bit is cleared and A is moved to the end of the list as if it had just arrived
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
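A minimal sketch of the second chance scan (not from the book); the page names and R-bit values are illustrative.

```python
# Sketch of second chance: FIFO order, but a page at the head with R = 1 gets
# its R bit cleared and is moved to the tail (as if newly loaded) instead of
# being evicted. Only a page found with R = 0 is actually replaced.

def second_chance_victim(fifo_list):
    """fifo_list: list of [page, R] entries, oldest first. Returns the victim,
    mutating the list to reflect pages that were given a second chance."""
    while True:
        page, r = fifo_list[0]
        if r == 0:
            fifo_list.pop(0)
            return page                  # oldest page, not recently referenced
        fifo_list.pop(0)
        fifo_list.append([page, 0])      # clear R, move to the end of the list

pages = [['A', 1], ['B', 0], ['C', 1]]   # A is oldest but has its R bit set
print(second_chance_victim(pages))        # -> 'B'; A was moved to the tail
```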
4.4.5 The Clock Page Replacement Algorithm Fig. 4.15 The clock page replacement algorithm Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
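The clock algorithm applies the same policy as second chance, but keeps the pages in a circular list with a "hand" that advances, instead of moving list entries around. A small illustrative Python sketch (not from the book), with invented page names:

```python
# Sketch of the clock algorithm: frames form a circle; on a fault the hand
# advances, clearing R bits, until it finds a page with R = 0 to replace.

class Clock:
    def __init__(self, frames):
        self.frames = frames          # list of [page, R] entries (the circle)
        self.hand = 0

    def evict_and_load(self, new_page):
        while True:
            page, r = self.frames[self.hand]
            if r == 0:
                self.frames[self.hand] = [new_page, 1]       # replace victim
                self.hand = (self.hand + 1) % len(self.frames)
                return page
            self.frames[self.hand][1] = 0                    # second chance
            self.hand = (self.hand + 1) % len(self.frames)

clock = Clock([['A', 1], ['B', 0], ['C', 1]])
print(clock.evict_and_load('D'))   # -> 'B' (A's R bit is cleared, hand moves on)
```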
4.4.6 Least Recently Used (LRU)
• Assume pages used recently will be used again soon
  • throw out the page that has been unused for the longest time
• Solution 1: keep a linked list of pages
  • most recently used at the front, least recently used at the rear
  • update this list on every memory reference!!
• Solution 2: 'timestamp' the page table entry of the page just referenced with a counter
  • choose the page with the lowest counter value
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
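A minimal sketch of solution 2 (not from the slides): a global counter is copied into the page's entry on every reference, and the page with the smallest stored value is evicted. The class and field names are illustrative.

```python
# Sketch of exact LRU via timestamps: every reference stores the current
# value of a global counter; on a fault, evict the page with the smallest
# stored counter (i.e. the least recently referenced page).

class LRU:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.last_use = {}       # page -> counter value at its last reference
        self.clock = 0           # global reference counter

    def reference(self, page):
        self.clock += 1
        if page not in self.last_use and len(self.last_use) == self.num_frames:
            victim = min(self.last_use, key=self.last_use.get)
            del self.last_use[victim]        # evict least recently used page
        self.last_use[page] = self.clock     # timestamp this reference

lru = LRU(3)
for p in [0, 1, 2, 3, 2, 1, 0, 3, 2, 3]:
    lru.reference(p)
print(sorted(lru.last_use))   # pages still resident at the end -> [0, 2, 3]
```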
4.4.6 Least Recently Used (LRU): solution 3
• Data structure: use a matrix of n×n bits, where n is the number of page table entries
• Algorithm: when page x is referenced, set all bits of row x to 1, then set all bits of column x to 0
• Property: the row whose binary value is lowest belongs to the least recently used page
Fig. 4.16 LRU using a matrix – pages are referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
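A small sketch of the matrix trick in software (not from the book), run on the reference order of Fig. 4.16; the matrix is kept as Python lists rather than hardware bits.

```python
# Sketch of n x n bit-matrix LRU: on a reference to page x, set row x to all
# 1s, then clear column x; the row with the smallest binary value is the LRU page.

def reference(matrix, x):
    n = len(matrix)
    for j in range(n):
        matrix[x][j] = 1        # set all bits of row x
    for i in range(n):
        matrix[i][x] = 0        # clear all bits of column x

def lru_page(matrix):
    # Interpret each row as a binary number; the smallest row is LRU.
    values = [int(''.join(map(str, row)), 2) for row in matrix]
    return values.index(min(values))

n = 4
matrix = [[0] * n for _ in range(n)]
for page in [0, 1, 2, 3, 2, 1, 0, 3, 2, 3]:   # reference order from Fig. 4.16
    reference(matrix, page)
print(lru_page(matrix))   # -> 1, the least recently used page
```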
4.4.7 Simulating LRU in Software
• Idea
  • a counter and an R bit are associated with each page table entry
  • the counter roughly keeps track of how often each page has been referenced in the recent past:
  • at every clock tick, each counter is shifted right by 1 bit and the page's R bit is added in as the new leftmost bit
Fig. 4.17 The aging algorithm simulates LRU in software. Shown are 6 pages for 5 clock ticks, (a) – (e)
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
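A minimal sketch of the aging algorithm (not from the book): six pages and five clock ticks, as in the spirit of Fig. 4.17, but the R-bit patterns below are invented rather than copied from the figure.

```python
# Sketch of aging: each page has an 8-bit counter; on every clock tick each
# counter is shifted right one bit and the page's R bit becomes the new
# leftmost bit, after which the R bits are cleared. The page with the
# smallest counter is the eviction candidate.

COUNTER_BITS = 8

def tick(counters, r_bits):
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits[page] << (COUNTER_BITS - 1))
        r_bits[page] = 0                 # R bits are cleared after each tick

counters = {p: 0 for p in range(6)}      # six pages, counters start at zero
ticks = [                                # one row of R bits per clock tick
    [1, 0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 0],
]
for r in ticks:
    tick(counters, dict(enumerate(r)))
print(min(counters, key=counters.get))   # page with the lowest counter -> evict
```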
4.5.1 The Working Set Model
Definition. The working set is the set of pages used by the k most recent memory references, and w(k, t) is the size of the working set at time t.
Property. w(k, t) is a monotonically nondecreasing function of k (hint: a larger k means looking further into the past).
Fig. 4.18
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
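A tiny sketch of the definition (not from the book): w(k, t) is the number of distinct pages among the k most recent references up to time t. The reference string and parameter values are made up, but they show the nondecreasing property.

```python
# Sketch of w(k, t): count the distinct pages in the window of the k most
# recent references ending at time t. Growing k can only add pages, so
# w(k, t) is monotonically nondecreasing in k.

def working_set_size(refs, k, t):
    window = refs[max(0, t - k + 1): t + 1]   # the k most recent references
    return len(set(window))

refs = [1, 2, 3, 2, 4, 1, 2, 5, 2, 3]
for k in [1, 2, 4, 8]:
    print(k, working_set_size(refs, k, t=9))   # sizes: 1, 2, 3, 5
```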
The Working Set Page Replacement Algorithm
Fig. 3.20 The working set algorithm (the "current virtual time" in the figure is the current process time)
• the R bit is cleared at every 'clock tick'
• tau is assumed to span several 'clock ticks'
Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall
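A simplified sketch of the scan performed on a page fault (not the book's code): pages whose time of last use lies more than tau virtual-time units in the past are outside the working set and are evicted first. The entry fields, tau value, and timestamps are illustrative, and the handling of dirty pages (scheduling a disk write for M = 1 pages) is omitted.

```python
# Sketch of the working set algorithm: on a fault, scan the page table.
# R = 1 pages get their time-of-last-use refreshed; an R = 0 page whose age
# exceeds tau is outside the working set and becomes the victim. If no such
# page exists, fall back to the oldest R = 0 page seen (or any page at all).

TAU = 50   # working set window, in virtual-time units (assumed value)

def ws_choose_victim(entries, current_time):
    """entries: list of dicts with 'page', 'R', 'last_use'."""
    oldest = None
    for e in entries:
        if e['R'] == 1:
            e['last_use'] = current_time   # referenced during this tick
            e['R'] = 0
            continue
        age = current_time - e['last_use']
        if age > TAU:
            return e['page']               # outside the working set: evict it
        if oldest is None or e['last_use'] < oldest['last_use']:
            oldest = e                     # remember the oldest candidate
    return (oldest or entries[0])['page']

entries = [
    {'page': 'A', 'R': 1, 'last_use': 2010},
    {'page': 'B', 'R': 0, 'last_use': 1960},   # age 60 > TAU -> victim
    {'page': 'C', 'R': 0, 'last_use': 2000},
]
print(ws_choose_victim(entries, current_time=2020))   # -> 'B'
```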
The WSClock Page Replacement Algorithm
Fig. 3.21 Operation of the WSClock algorithm
Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall
Review of Page Replacement Algorithms Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall
4.6 Segmentation
• Fact. A compiler has many tables that are built up as compilation proceeds, possibly including:
  • the source text being saved for the printed listing (on batch systems)
  • the symbol table – the names and attributes of variables
  • the table containing the integer and floating-point constants used
  • the parse tree – the syntactic analysis of the program
  • the stack used for procedure calls within the compiler
Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
4.6 Segmentation: bumping problem Fig. 4.21 One-dimensional address space with growing tables, one table may bump into another Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
4.6 Segmentation: segmented memory Fig. 4.22 A segmented memory allows each table to grow or shrink, independently of the other tables Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
4.6 Segmentation: segmentation vs paging Fig. 4.23 Comparison of paging and segmentation Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall