Tutorial 8 Virtual Memory: PAGING presented by: Antonio Maiorano Paul Di Marco
Paging • Primary Memory (RAM) is divided into fixed-size partitions we call frames • Processes are also divided into partitions of the same size, which we call pages • Result: no external fragmentation (only a little internal fragmentation in a process’ last page)
Process Location • Most of a process resides on Secondary Memory (HD) • Some of the process’ pages are loaded into frames in RAM • In RAM, the frames allocated to a process need not be contiguous • The OS uses a page table to map the process’ pages to frames
What’s the point? • Many more processes can run at the same time • Works well because: • Programs exhibit locality of reference • It is well supported by hardware
How does it work? • For each memory reference, the paging system must translate the virtual address into a physical address at run time • Address = &lt;page/frame num, offset&gt; • Ex: 32 bits = &lt;24 bits, 8 bits&gt; = &lt;16777216 pages, 256 locations per page&gt;
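The 24-bit/8-bit split above is just bit manipulation. A minimal sketch of the translation step (function names and the dict-based page table are illustrative, not part of the original slides):

```python
# Split a virtual address into <page number, offset> using the
# 8-bit-offset example from the slide, then map the page number
# through a page table. A dict stands in for the hardware MAP here;
# a missing entry models a page fault.

PAGE_BITS = 8                 # 8-bit offset -> 256 locations per page
PAGE_SIZE = 1 << PAGE_BITS

def split(vaddr):
    """Return (page number, offset) for a virtual address."""
    return vaddr >> PAGE_BITS, vaddr & (PAGE_SIZE - 1)

def translate(vaddr, page_table):
    """Translate a virtual address to a physical address.

    Raises KeyError when the page is not resident (a page fault)."""
    page, offset = split(vaddr)
    if page not in page_table:
        raise KeyError("page fault on page %d" % page)
    return (page_table[page] << PAGE_BITS) | offset
```

For example, with page 0x12 mapped to frame 5, virtual address 0x1234 (page 0x12, offset 0x34) translates to physical address 0x534.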
In the CPU • Virtual Address = &lt;Page #, Offset&gt; • The MAP (page table) translates the Page # into a Frame #; the offset is unchanged • Physical Address = &lt;Frame #, Offset&gt; • If the page is missing from the MAP, a page fault occurs
Page Reference Stream • For a given process, the page reference stream is the list of virtual page numbers, ordered by when they are referenced • Example: a process with 5 pages: 1, 2, 3, 1, 2, 3, 4, 5
Static Paging Algorithms • When a process starts, it is allocated a fixed number of frames in RAM • Paging policies define how pages are loaded into, and unloaded from, the frames allocated to the process
3 Basic Paging Policies • Fetch Policy: determines when a page should be loaded into RAM • Replacement Policy: when all allocated frames are full, determines which page should be replaced • Placement Policy: determines where a fetched page should be placed in RAM
Demand Paging • Since the page reference stream is not known ahead of time, we can’t “pre-fetch” pages • We only learn which page to load at run time, so we load pages on “demand” • Result: we concentrate on the replacement policy
Demand Paging Algorithms Random, Optimal, LRU, LFU, FIFO
Random Replacement • Choose a page at random to be the victim page • Simple to implement, but the worst algorithm: • No knowledge of the reference stream is taken into account! • Each resident page thus has the same probability of being the victim
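As a sketch (function name and seeding are mine, for reproducibility), random replacement only needs a random victim index when the frames are full:

```python
import random

def random_faults(stream, n_frames, seed=0):
    """Count page faults when the victim frame is chosen at random.

    Every resident page is equally likely to be evicted; no knowledge
    of the reference stream is used."""
    rng = random.Random(seed)   # seeded so runs are reproducible
    frames, faults = [], 0
    for page in stream:
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # free frame available
        else:
            frames[rng.randrange(n_frames)] = page   # random victim
    return faults
```

On any stream, random replacement can do no better than the optimal algorithm and no worse than faulting on every reference.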
Optimal Algorithm • The best algorithm; however, it requires knowledge of future page references (!) • Always results in the fewest possible page faults • Replace the page whose next reference is farthest in the future
Optimal Algorithm Example • The process’ reference stream is: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7 • The process is allocated 3 frames

ref: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr0: 0 0 0 0 0 0 0 0 0 1 1 1 4 4 4 7
fr1: - 1 1 1 1 1 2 2 2 2 2 2 2 5 5 5
fr2: - - 2 3 3 3 3 3 3 3 3 3 3 3 6 6
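The optimal trace can be reproduced with a short sketch (function name is mine). The victim is the resident page whose next use lies farthest ahead; a page never referenced again counts as infinitely far:

```python
def optimal_faults(stream, n_frames):
    """Belady's optimal policy: evict the resident page whose next
    reference is farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(stream):
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # free frame available
        else:
            future = stream[i + 1:]
            def next_use(p):
                # distance to next reference; never-again = infinity
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults
```

On the example stream with 3 frames this gives the 10 page faults reported in the comparison slide.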
Least Recently Used (LRU) • Replace the page whose last reference is the longest time ago

ref: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr0: 0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
fr1: - 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
fr2: - - 2 2 2 1 1 1 0 0 0 3 3 3 6 6
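A minimal LRU sketch (function name is mine) keeps the resident pages in recency order, oldest first, so the victim is always at the front:

```python
def lru_faults(stream, n_frames):
    """LRU: evict the resident page referenced least recently."""
    frames, faults = [], 0    # frames kept in recency order, oldest first
    for page in stream:
        if page in frames:
            frames.remove(page)           # hit: move to most-recent end
            frames.append(page)
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.pop(0)                 # evict least recently used
        frames.append(page)
    return faults
```

The example stream cycles through 4 pages with only 3 frames, so LRU evicts each page just before it is needed again: every one of the 16 references faults.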
Least Frequently Used (LFU) • Replace the page which was least frequently referenced (since the beginning of the reference stream) • Ties broken here by evicting the highest-numbered least-frequent page

ref: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
fr1: - 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
fr2: - - 2 3 3 3 2 3 3 3 2 3 4 5 6 7
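LFU's fault count depends on how ties between equally-frequent pages are broken; the slides do not state a rule, so this sketch (function name and tie-break are my assumptions) evicts the highest-numbered page among the least-frequent ones, which yields the 12 faults reported in the comparison slide:

```python
def lfu_faults(stream, n_frames):
    """LFU: evict the resident page referenced least often since the
    start of the stream. Tie-break (an assumption, not fixed by LFU
    itself): evict the highest-numbered least-frequent page."""
    frames, count, faults = [], {}, 0
    for page in stream:
        count[page] = count.get(page, 0) + 1   # counts persist for life
        if page in frames:
            continue                            # hit
        faults += 1
        if len(frames) == n_frames:
            # min by (frequency, -page): lowest count first,
            # then highest page number among ties
            victim = min(frames, key=lambda p: (count[p], -p))
            frames.remove(victim)
        frames.append(page)
    return faults
```

With a different tie-break (for example, evicting the oldest least-frequent page), the same stream can fault on all 16 references.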
First In First Out (FIFO) • Replace the page that has been in memory longest

ref: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
fr0: 0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
fr1: - 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
fr2: - - 2 2 2 1 1 1 0 0 0 3 3 3 6 6
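FIFO is the simplest of the four to implement: a queue of resident pages, with no bookkeeping on hits. A sketch (function name is mine):

```python
from collections import deque

def fifo_faults(stream, n_frames):
    """FIFO: evict the page resident longest. Unlike LRU, a hit does
    not change the queue order."""
    queue, faults = deque(), 0
    for page in stream:
        if page in queue:
            continue                      # hit: queue order unchanged
        faults += 1
        if len(queue) == n_frames:
            queue.popleft()               # evict oldest arrival
        queue.append(page)
    return faults
```

On the example stream FIFO happens to produce exactly the same trace as LRU (16 faults), because the cyclic reference pattern makes the oldest arrival and the least recently used page coincide.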
Comparison of the samples • For the same reference stream and number of available frames: • Optimal caused 10 page faults • LRU caused 16 page faults • LFU caused 12 page faults • FIFO caused 16 page faults