This article explores the objectives, policies, and implementation of virtual memory, including strategies such as paging and segmentation. It discusses address translation mapping, demand paging algorithms, and page management structures.
Objectives • Avoid copying/restoring the entire address space • Avoid unusable holes in memory • Allow program size to exceed physical memory limits • Allocation based on virtual memory policy • Freedom from user-imposed requirements • Extended abstraction for users • Strategies • Paging - fixed-size blocks called pages • Segmentation - variable-sized segments
Address Translation Mapping • Done at run time • Only the part of the program being used is loaded • In practice, a small initial set of more than one page is loaded • Page size determined by the O/S • Instruction execution proceeds until an "addressing" or "missing data" fault • O/S gets control • Loads the missing page • Restarts the instruction with the new data address
Mapping -2 • Formally: βt : virtual addresses → physical addresses ∪ {Ω} • βt is a time-varying map • t is the process's virtual time • The virtual memory manager implements the mapping; ANY mechanism is valid if it follows the definition. • βt(i) will be either: • The real address of virtual address i • Ω (not in memory)
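A minimal sketch of this mapping in C follows, assuming a 4 KB page size and a simple array-based page table (both illustrative choices, not taken from the slides); Ω is represented by a sentinel value returned when the page is not bound to a frame.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                 /* assumed page size 2^12 = 4 KB      */
#define NUM_PAGES  1024               /* assumed virtual pages per process  */
#define OMEGA      ((uint64_t)-1)     /* sentinel standing in for Ω         */

/* Hypothetical per-page state: is the page resident, and in which frame? */
typedef struct {
    bool     resident;
    uint64_t frame;                   /* page frame number if resident      */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* beta_t: map a virtual address to a real address, or return OMEGA
 * (a missing-page fault) when the page is not bound to a frame.          */
uint64_t beta_t(uint64_t vaddr)
{
    uint64_t page   = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    if (page >= NUM_PAGES || !page_table[page].resident)
        return OMEGA;                 /* virtual address not in memory      */

    return (page_table[page].frame << PAGE_SHIFT) | offset;
}
```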
Concepts • The entire virtual address space resides on disk • Only a small set of virtual addresses is bound to real addresses at any instant • The resident pages may be scattered across physical memory • Page size depends on hardware • Page size usually = page frame size • Counter-example: OS/VS2 (2K pages) running on VM (4K page frames) • # of page frames is computed from physical memory constraints
Computations • Page size = 2^h and page frame size = 2^h • Usually constrained by hardware protection • Number of system pages: n = 2^g • Number of process pages/frames: m = 2^j • For a process: • Number of virtual addresses: G = 2^g · 2^h = 2^(g+h) • Number of physical addresses: H = 2^(j+h) • FYI: • For Pentiums: g = 20, h = 12 (page size = 4K) • So G = 2^32 = 4 GB (max program size, including the O/S)
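The arithmetic can be checked with a few shifts. In the sketch below, g = 20 and h = 12 are the Pentium figures from the slide, while j = 18 is just an assumed frame-count exponent chosen for illustration.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned g = 20, h = 12, j = 18;      /* j is an assumed value, not from the slide */

    uint64_t page_size = 1ull << h;       /* 2^h = 4 KB                              */
    uint64_t G = 1ull << (g + h);         /* virtual address space size: 2^(g+h)     */
    uint64_t H = 1ull << (j + h);         /* physical address space size: 2^(j+h)    */

    printf("page size = %llu bytes\n", (unsigned long long)page_size);
    printf("G = %llu bytes (4 GB for g=20, h=12)\n", (unsigned long long)G);
    printf("H = %llu bytes\n", (unsigned long long)H);
    return 0;
}
```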
Processing Ω • If the mapping returns Ω: • Find the location of i on disk • Bring it into main memory • Re-translate i • Restart the instruction • Significant overhead: • O/S context switch • Table search • I/O for the missing address
Segmentation & Paging • Segmentation • Programs divided into segments • Location references are <seg#, offset> • I/O (Swap) whole segments (variable sized) • Programmer can control swapping • External fragmentation can occur • Paging • I/O (page) fixed-size blocks • Location references are linear • No programmer control of paging
Description of Translation • n = number of pages in the virtual space • m = number of allocated frames • i is a virtual address, 0 <= i < G, where G = 2^(g+h) • k is a physical memory address: k = U·2^h + V, with 0 <= V < 2^h • c = page size = 2^h • Page number = floor(i/c) • U is the page frame number • V = line number (offset in the page) = i mod c
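These formulas reduce to integer division and remainder. The sketch below assumes h = 12 (so c = 4096) and uses an arbitrary virtual address and frame number purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define H_BITS 12                         /* assumed: page size c = 2^h = 4 KB    */
#define C      (1ull << H_BITS)

int main(void)
{
    uint64_t i = 0x0001A2B4;              /* an arbitrary virtual address          */
    uint64_t U = 7;                       /* frame number, assumed already known   */

    uint64_t page = i / C;                /* page number  = floor(i / c)           */
    uint64_t V    = i % C;                /* line number  = i mod c                */
    uint64_t k    = U * C + V;            /* physical address k = U * 2^h + V      */

    printf("page=%llu offset=%llu k=0x%llx\n",
           (unsigned long long)page, (unsigned long long)V,
           (unsigned long long)k);
    return 0;
}
```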
Policies • Fetch - when to load a page • Replace - victim selection (when memory is full) • Position (placement) - where to put a page (when memory is not full) • The # of allocated frames is constant • Page reference stream - the page numbers a process references, in order of reference
Paging Algorithm • Page fault occurs • Process with missing page is interrupted • Memory manager locates missing page • Page frame is unloaded (replacement policy) • Page is loaded into vacated page frame • Page table is updated • Process is restarted
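A compact simulation of this sequence follows, assuming a FIFO replacement policy and stub routines for the disk I/O; both are placeholders for illustration, not the specific policy the slides prescribe.

```c
#define NUM_PAGES  64                  /* assumed process page count    */
#define NUM_FRAMES 8                   /* assumed allocated frame count */
#define NO_FRAME   (-1)

static int page_to_frame[NUM_PAGES];   /* page table: frame or NO_FRAME */
static int frame_to_page[NUM_FRAMES];  /* reverse map for victim lookup */
static int next_victim;                /* FIFO replacement hand         */

/* Stand-ins for the real work of the memory manager. */
static void write_back(int page)           { (void)page; }
static void load_page(int page, int frame) { (void)page; (void)frame; }

void init_tables(void)
{
    for (int p = 0; p < NUM_PAGES;  p++) page_to_frame[p] = NO_FRAME;
    for (int f = 0; f < NUM_FRAMES; f++) frame_to_page[f] = NO_FRAME;
}

/* Handle a missing-page fault: pick a victim frame, unload its current
 * page, load the missing page, update the page table, and return the
 * frame so the faulting process can be restarted.                       */
int page_fault(int page)
{
    int frame   = next_victim;                 /* replacement policy: FIFO */
    next_victim = (next_victim + 1) % NUM_FRAMES;

    int old = frame_to_page[frame];
    if (old != NO_FRAME) {                     /* frame is being vacated   */
        write_back(old);                       /* page out if needed       */
        page_to_frame[old] = NO_FRAME;
    }

    load_page(page, frame);                    /* I/O for the missing page */
    page_to_frame[page]  = frame;              /* page table is updated    */
    frame_to_page[frame] = page;
    return frame;
}
```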
Demand Paging Algorithms • Random - many 'missing page' faults • Belady (optimal) - replaces the page referenced furthest in the future; for comparisons only • Least Recently Used - a page used recently will be used again, so evict the LRU page • Least Frequently Used - evict the page with the fewest references - influenced by locality, slow to react • LRU and LFU are both stack algorithms
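As an illustration of LRU in particular, the following sketch counts page faults over a reference stream with three frames; the reference string and frame count are arbitrary examples, not values from the slides.

```c
#include <stdio.h>

#define NUM_FRAMES 3

/* Count page faults for a reference stream under LRU: on a fault with
 * all frames full, evict the page whose last use is furthest in the past. */
int lru_faults(const int *refs, int n)
{
    int frames[NUM_FRAMES], last_use[NUM_FRAMES];
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t]) { hit = f; break; }

        if (hit >= 0) {
            last_use[hit] = t;               /* recently used: keep it     */
            continue;
        }

        faults++;
        if (used < NUM_FRAMES) {             /* a free frame is available  */
            frames[used] = refs[t];
            last_use[used++] = t;
        } else {                             /* evict the LRU page         */
            int victim = 0;
            for (int f = 1; f < NUM_FRAMES; f++)
                if (last_use[f] < last_use[victim]) victim = f;
            frames[victim]   = refs[t];
            last_use[victim] = t;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("LRU faults: %d\n", lru_faults(refs, 12));
    return 0;
}
```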
Page Mgmt Structures • Diagram: a virtual address selects a page table entry (PTE) carrying control bits: A - Assigned, D - Dirty, K - Protection Key
Page Table Lookup • Diagram: the process page-table pointer (a register) locates the page table, which holds one entry per page and is indexed by page number (Pg#)
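One plausible C rendering of such a page table entry and lookup follows, with the A, D, and K fields packed as bitfields; the field widths and names are assumptions for illustration, not a real architecture's layout.

```c
#include <stdint.h>

/* Hypothetical page table entry matching the slide's fields: A (Assigned),
 * D (Dirty), K (Protection Key), plus the page frame number.              */
typedef struct {
    unsigned assigned : 1;    /* A: page is bound to a frame             */
    unsigned dirty    : 1;    /* D: page modified since it was loaded    */
    unsigned prot_key : 4;    /* K: protection key checked on access     */
    unsigned frame    : 20;   /* page frame number                       */
} pte_t;

/* Look up a virtual page number in a one-entry-per-page table reached
 * through the process's page-table pointer (a register in hardware).    */
static inline int pt_lookup(const pte_t *page_table_ptr, uint32_t vpn,
                            uint32_t *frame_out)
{
    pte_t e = page_table_ptr[vpn];     /* indexed directly by page #      */
    if (!e.assigned)
        return 0;                      /* missing-page (Ω) fault          */
    *frame_out = e.frame;
    return 1;
}
```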
Inverted Page Tables • Useful for locating pages that are currently in memory • Extract the virtual page # (VPN) from the address • Hash the VPN to an index • Search the table at this index • Each entry holds a VPN and frame # (PPN) • Space-efficient when physical memory is small • Collisions must be resolved • Note: uses the page #, not the full address
Inverted Page Tables -2 • Regular page table • Uses virtual page # directly • Entry per page • Inverted table • Uses virtual page # as hash input • Sparse lookup table • Finds frame if in memory • Followed by disk address lookup if needed • Entry per frame
Inverted Page Tables • Diagram: the virtual page number passes through a hashing function to index the inverted table, which holds one entry per frame
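A simplified sketch of the hashed lookup follows. It assumes the table index doubles as the frame number and uses linear probing to resolve collisions, standing in for the hash-anchor chaining that real implementations typically use; the hash function and table size are arbitrary.

```c
#include <stdint.h>

#define IPT_FRAMES 1024               /* one table entry per page frame     */
#define EMPTY      UINT32_MAX

/* Inverted page table entry: records which process page occupies a frame. */
typedef struct {
    uint32_t vpn;                     /* virtual page number, or EMPTY      */
    uint32_t pid;                     /* owning process id                  */
} ipt_entry_t;

static ipt_entry_t ipt[IPT_FRAMES];

void ipt_init(void)
{
    for (int f = 0; f < IPT_FRAMES; f++) ipt[f].vpn = EMPTY;
}

/* Simple multiplicative hash of (pid, vpn) onto a table index. */
static uint32_t hash_vpn(uint32_t pid, uint32_t vpn)
{
    return (vpn * 2654435761u ^ pid) % IPT_FRAMES;
}

/* Find the frame holding (pid, vpn); linear probing resolves collisions.
 * Returns the frame number (here, the table index), or -1 when the page
 * is not in memory and a disk-address lookup must follow.                */
int ipt_lookup(uint32_t pid, uint32_t vpn)
{
    uint32_t idx = hash_vpn(pid, vpn);
    for (int probes = 0; probes < IPT_FRAMES; probes++) {
        if (ipt[idx].vpn == EMPTY)
            return -1;                /* empty slot: page not resident      */
        if (ipt[idx].vpn == vpn && ipt[idx].pid == pid)
            return (int)idx;          /* hit: index doubles as frame #      */
        idx = (idx + 1) % IPT_FRAMES; /* collision: probe the next slot     */
    }
    return -1;
}
```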
Translation Lookaside Buffer • TLB - a hardware cache searched with a simultaneous (associative) lookup • If there is no TLB 'hit', the software lookup table is searched line by line • The resulting frame number is combined with the page offset to form the physical address
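A sketch of the hit/no-hit flow follows, with the associative search written as a loop (hardware does it in parallel) and a stub standing in for the line-by-line software table walk; the TLB size and refill policy are assumptions for illustration.

```c
#include <stdint.h>

#define TLB_ENTRIES 16                /* assumed TLB size                   */
#define PAGE_SHIFT  12                /* assumed 4 KB pages                 */

typedef struct {
    int      valid;
    uint32_t vpn;                     /* virtual page number (the tag)      */
    uint32_t frame;                   /* cached translation                 */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Placeholder for the line-by-line software table walk used on a miss;
 * returns the frame for vpn, or -1 to signal a missing-page fault.       */
static int page_table_lookup(uint32_t vpn)
{
    (void)vpn;
    return -1;                        /* real page-table walk goes here    */
}

/* Translate vaddr: check every TLB entry (hardware does this in parallel,
 * i.e. associatively); on a miss, fall back to the software lookup and
 * refill one TLB slot with the result.                                    */
int64_t translate(uint64_t vaddr)
{
    uint32_t vpn    = (uint32_t)(vaddr >> PAGE_SHIFT);
    uint32_t offset = (uint32_t)(vaddr & ((1u << PAGE_SHIFT) - 1));

    for (int e = 0; e < TLB_ENTRIES; e++)        /* TLB 'hit' path          */
        if (tlb[e].valid && tlb[e].vpn == vpn)
            return ((int64_t)tlb[e].frame << PAGE_SHIFT) | offset;

    int frame = page_table_lookup(vpn);          /* no hit: slow path       */
    if (frame < 0)
        return -1;                               /* missing-page fault      */

    int slot = vpn % TLB_ENTRIES;                /* trivial refill policy   */
    tlb[slot] = (tlb_entry_t){ 1, vpn, (uint32_t)frame };
    return ((int64_t)frame << PAGE_SHIFT) | offset;
}
```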