Introduction to Systems Programming Lecture 8 Paging Design
Virtual→Physical mapping • CPU accesses virtual address 100000 • MMU looks in the page table to find the physical address • But the page table is in memory too, so every memory access would need an extra memory access just for translation • Unreasonable overhead!
TLB: Translation Lookaside Buffer • Idea: Keep the most frequently used parts of the page table in a cache, inside the MMU chip. • TLB holds a small number of page table entries: usually 8–64 • TLB hit rate is very high because accesses are local; e.g., instructions are fetched sequentially.
A TLB to speed up paging • Example: • Code loops through pages 19,20,21 • Uses data array in pages 129,130,140 • Stack variables in pages 860,861
Valid TLB Entries • TLB miss: • Do regular page lookup • Evict a TLB entry and store the new TLB entry • Miniature paging system, done in hardware • When OS does context switch to a new process, all TLB entries become invalid: • Early instructions of new process will cause TLB misses.
TLB placement/eviction • Done by hardware • Placement rule: • TLBIndex = VirtualPageNum modulo TLBSize • TLBSize is always a power of two, 2^k → TLBIndex = the k least-significant bits • Keep a “tag” (the remaining bits) to fully identify the virtual address • A virtual address can be in only one TLB index • No explicit “eviction”: simply overwrite whatever is in TLB[TLBIndex]
TLB + Page table lookup • Virtual address → In TLB? • Yes → physical address • No → In page table? • Yes → update TLB, then physical address • No → page fault: copy page from disk to memory, then retry
TLB – cont. • If address is in TLB → page is in physical memory • OS invalidates the TLB entry when evicting a page • So a page fault is not possible if we have a TLB hit • “Page fault rate” is computed only on TLB misses
Example: Average memory access time • Assume the page table is in memory • TLB lookup: 4ns, physical memory access: 10ns, disk access: 10ms • TLB miss rate: 1%, page fault rate: 0.1% • TLB hit: p = 0.99, time = 4ns + 10ns • TLB miss, page hit: p = 0.01 × 0.999, time = 4ns + 10ns + 10ns • TLB miss, page fault: p = 0.01 × 0.001, time = 4ns + 10ns + 10ms + 10ns • Average memory access: 114.1ns (1.141×10⁻⁷ s)
Local versus Global Allocation Policies: Physical Memory • Original configuration: ‘A’ causes a page fault • Local page replacement: evict one of A’s own pages • Global page replacement: may evict a page belonging to another process
Local or Global? • Local → number of frames per process is fixed • If working set grows → thrashing • If working set shrinks → waste • Global is usually better • Some algorithms can only be local (working set, WSClock).
How many frames to give a process? • Fixed number • Proportional to its size (before load) • Zero, let it issue page faults for all its pages. • This is called pure demand paging. • Monitor page-fault-frequency (PFF), give more pages if PFF high.
Page fault rate as a function of the number of page frames assigned
Load Control • Despite good designs, system may still thrash • When PFF algorithm indicates • some processes need more memory • but no processes need less • Solution: Reduce number of processes competing for memory • swap one or more to disk, divide up frames they held • reconsider degree of multiprogramming
Cleaning Policy • Need for a background process, the paging daemon • Periodically inspects the state of memory • When too few frames are free • selects pages to evict using a replacement algorithm • It can use the same circular list (clock) • as the regular page replacement algorithm, but with a different pointer
Windows XP Page Replacement • Processes are assigned a working set minimum and a working set maximum • The working set minimum is the minimum number of page frames the process is guaranteed to have in memory • A process may be assigned page frames up to its working set maximum • When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory • Working set trimming removes frames from processes that have more than their working set minimum