Paging
Memory Partitioning Troubles
• Fragmentation
• Need for compaction/swapping
• A process size is limited by the available physical memory
• Dynamic growth of a partition is troublesome
• No winning policy on allocation/deallocation
The Basic Problem
• A process needs a contiguous partition (or partitions)
But
• Contiguity is difficult to manage
• Contiguity mandates the use of physical memory addressing (so far)
The Basic Solution
• Give the illusion of a contiguous address space
• The actual allocation need not be contiguous
• Use a Memory Management Unit (MMU) to translate from the illusion to reality
A Solution: Virtual Addresses
• Use n bits to represent virtual (logical) addresses
• A process perceives an address space extending from address 0 to 2^n − 1
• The MMU translates virtual addresses to real ones
• Processes no longer see real (physical) addresses
Paged Memory
• Subdivide the address space (both virtual and physical) into “pages” of equal size
• Use the MMU to map virtual pages to physical ones
• Physical pages are called frames
Paging: Example
(figure: the virtual address spaces of Process 0 and Process 1, each starting at address 0, mapped page by page onto physical memory)
Key Facts
• Virtual address spaces of different processes are independent
  • Two or more may have the same address range
  • Yet the mappings differentiate between them
• A virtual page has no storage of its own
  • It must be backed by a physical frame (real page) that provides the actual storage
• A contiguous virtual space need not be physically contiguous
Key Facts
• Physical address space is independent of virtual address spaces
  • They can have different sizes
  • Allows process size to be independent of the available physical memory size
• Page size is always a power of 2, to simplify hardware addressing
Page Tables
(figure: a virtual address space, starting at address 0, mapped to physical frames through a per-process page table)
Page Table Structure
• Indexed by virtual page number
• Each entry contains a frame number (if any)
• Contains protection bits
• Contains a reference bit
Entry layout: Frame No. | v | w | r | x | f | m
v: valid bit, w: write bit, r: read bit, x: execute bit (rare), f: reference bit, m: modified bit
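As a rough sketch, an entry with the fields above could be modeled in C using bit fields; the 20-bit frame number and the exact ordering are assumptions for illustration, since real MMUs fix the layout in hardware.

```c
#include <stdint.h>

/* Sketch of a page-table entry with the fields listed above.
 * The 20-bit frame number is an assumption; real hardware
 * dictates the exact widths and bit positions. */
typedef struct {
    uint32_t frame      : 20; /* physical frame number */
    uint32_t valid      : 1;  /* v: mapping is present */
    uint32_t write      : 1;  /* w: page may be written */
    uint32_t read       : 1;  /* r: page may be read */
    uint32_t exec       : 1;  /* x: page may be executed (rare) */
    uint32_t referenced : 1;  /* f: set by hardware when the page is accessed */
    uint32_t modified   : 1;  /* m: set by hardware when the page is written */
} pte_t;
```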
Mapping Virtual to Real Addresses
• Virtual address (n bits) = virtual page number + offset (s bits), where s = log2(page size)
• The virtual page number indexes into the page table, which supplies a frame number
• Physical address (p bits) = frame number + offset (s bits)
Example: PDP-11
• Page size: 8K; 16-bit virtual addresses; up to 4M of memory
• Virtual address = 3-bit virtual page number + 13-bit offset
• 8-entry page table (in hardware) supplies the frame number
• Physical address = frame number + 13-bit offset (22 bits total)
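A minimal C sketch of this translation step, using the PDP-11-like parameters above (8K pages, 16-bit virtual addresses, 8-entry page table, 22-bit physical addresses); the page-table contents are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 13                      /* 8K pages -> 13-bit offset   */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 8                       /* 16 - 13 = 3 bits of VPN     */

/* Hypothetical 8-entry page table: virtual page number -> frame number */
static uint32_t page_table[NUM_PAGES] = {5, 2, 7, 0, 1, 3, 6, 4};

uint32_t translate(uint16_t vaddr)
{
    uint16_t vpn    = vaddr >> PAGE_BITS;         /* top 3 bits            */
    uint16_t offset = vaddr & (PAGE_SIZE - 1);    /* low 13 bits           */
    uint32_t frame  = page_table[vpn];            /* index the page table  */
    return (frame << PAGE_BITS) | offset;         /* 22-bit physical addr  */
}

int main(void)
{
    uint16_t v = 0x4123;                          /* VPN 2, offset 0x123   */
    printf("virtual 0x%04x -> physical 0x%06x\n", v, (unsigned)translate(v));
    return 0;
}
```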
Weird Stuff: Free Page Management
(figure: the virtual spaces of Process 0 and Process 1 mapped to physical memory, with the remaining frames forming free space)
Key fact: a memory frame cannot be accessed unless it is mapped
Fun Stuff: Sharing
(figure: Process 0 and Process 1 map some of their virtual pages to the same physical frames)
Sharing
• Processes can share pages by mapping their virtual pages to the same memory frame
• In UNIX and Windows, processes running the same program share the pages containing the executable code
  • Saves a lot of memory
  • Saves loading time for frequently run programs
• Fine tuning using protection bits (rwx)
Fork Revisited: Copy-on-Write
(figure: after fork, father and child map their virtual pages to the same physical frames, with writes disabled)
Copy-on-Write Fork
• Initially no memory copying
  • Efficient
  • If an exec follows, not much harm is done
• Page tables set the protection to disallow writes
• If either the father or the child attempts to write, a page fault occurs
Copy-on-Write
(figure: after one of them writes, the faulting page is copied to a new frame and father and child end up with separate, writable copies)
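A small C program that exercises the semantics described above: after fork() the kernel shares the frames and copies one only when the father or the child writes to it; the program itself only observes that the copies end up private (a POSIX-only sketch).

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One page-sized buffer; after fork() both processes map the same
     * frames write-protected, and the kernel copies a frame only when
     * one side writes to it (the copying itself is invisible here). */
    static char buf[8192];
    strcpy(buf, "original");

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                    /* child: the write triggers the copy */
        strcpy(buf, "child's copy");
        printf("child  sees: %s\n", buf);
        exit(0);
    }

    waitpid(pid, NULL, 0);             /* father: his frame was never changed */
    printf("father sees: %s\n", buf);
    return 0;
}
```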
Requirements for Sharing
• Page frames must have a reference count
  • A frame cannot be deallocated until every mapping to it is removed
  • Adds complexity to the memory manager
• Protection bits must be set properly
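A sketch of the reference-count bookkeeping implied by the first bullets; the function names and the free-list handling are assumptions, not a real allocator.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_FRAMES 1024

static uint32_t refcount[NUM_FRAMES];   /* one counter per physical frame */

/* Called whenever some page table maps this frame. */
void frame_map(uint32_t frame)
{
    refcount[frame]++;
}

/* Called when a mapping is removed; the frame may be freed
 * only when the last mapping disappears. */
void frame_unmap(uint32_t frame)
{
    assert(refcount[frame] > 0);
    if (--refcount[frame] == 0) {
        /* return the frame to the free list (not shown) */
    }
}
```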
Page Table Implementations
Key issues:
• Each instruction requires one or more memory accesses, so the mapping must be done very quickly
• Where do we store the page table? It is now part of the context, with implications for context switching
• Hardware must support the auxiliary bits
PDP-11 Example Revisited
• Page table is small (8 entries)
• Can be implemented in hardware
• Moderate effect on context-switching time
  • Each process needs an 8-entry array in its PCB to store the page table when it is not running
• Protection works fine
But: what if the address space is 32 bits?
Page Table Size Problems
Assume 8K page size and a 32-bit address space. Then:
• For each process, there are 2^19 virtual pages
• Page table size: 2^19 entries * 4 bytes/entry
• The page table requires 2 Mbytes
• It cannot be stored in hardware, slowing down the mapping from virtual to physical addresses
Solution 1: Multi-Level Page Tables
• Use two- or three-level page tables
• All entries in the topmost level must be present
• Entries in lower levels exist only if needed
• Store page tables in memory
• Have one CPU register contain the address of the topmost-level table
Example: SPARC
• 3-level page table, reached through a context table (up to 4K entries)
• Virtual address split: index 1 (8 bits), index 2 (6 bits), index 3 (6 bits), offset (12 bits)
• Each index selects an entry in the level-1, level-2, and level-3 tables respectively
SPARC: Cont’d
• Only the level-1 table need be present in its entirety
  • 256 entries * 4 bytes/entry = 1K per process
• Context switching is not affected
  • Just save and restore the context register per process
• Second- and third-level tables exist only if necessary
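A simplified C sketch of a three-level walk using the 8/6/6/12 split above; the in-memory table format is invented (real SPARC tables use hardware-defined descriptors), and a NULL pointer stands for a lower-level table that has not been allocated.

```c
#include <stdint.h>

#define PAGE_BITS 12

typedef struct { uint32_t frame; int valid; } pte_t;

typedef struct { pte_t     entries[64]; } level3_t;  /* 6-bit index */
typedef struct { level3_t *tables[64];  } level2_t;  /* 6-bit index */
typedef struct { level2_t *tables[256]; } level1_t;  /* 8-bit index */

/* Returns 1 and fills *paddr on success, 0 if any level is missing. */
int walk(level1_t *l1, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t i1  = (vaddr >> 24) & 0xFF;   /* bits 31..24 */
    uint32_t i2  = (vaddr >> 18) & 0x3F;   /* bits 23..18 */
    uint32_t i3  = (vaddr >> 12) & 0x3F;   /* bits 17..12 */
    uint32_t off =  vaddr        & 0xFFF;  /* bits 11..0  */

    level2_t *l2 = l1->tables[i1];
    if (!l2) return 0;                     /* no level-2 table here yet */
    level3_t *l3 = l2->tables[i2];
    if (!l3) return 0;                     /* no level-3 table here yet */
    pte_t *pte = &l3->entries[i3];
    if (!pte->valid) return 0;             /* page not mapped */

    *paddr = (pte->frame << PAGE_BITS) | off;
    return 1;
}
```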
Translation Lookaside Buffer
• A small associative memory in the processor
• Contains recent mapping results
• Typically 8 to 32 entries
• If access is localized, works very well
• Must be flushed on context switching
• If the TLB misses, the mapping must be resolved through the page tables in main memory (slow)
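A software model of the lookup path just described: probe the TLB first and fall back to the page tables on a miss. The 16-entry size, the replacement policy, and the page_table_lookup stub are assumptions for the sketch.

```c
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_BITS   12

struct tlb_entry { uint32_t vpn; uint32_t frame; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stub for the slow path: in a real system this walks the
 * page tables in main memory (here it just identity-maps). */
static uint32_t page_table_lookup(uint32_t vpn) { return vpn; }

/* Flush on context switch: the cached mappings belong to the old process. */
void tlb_flush(void)
{
    for (int i = 0; i < TLB_ENTRIES; i++) tlb[i].valid = 0;
}

uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_BITS;
    uint32_t off = vaddr & ((1u << PAGE_BITS) - 1);

    /* Hardware checks all entries in parallel; this loop imitates that. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].frame << PAGE_BITS) | off;    /* TLB hit  */

    uint32_t frame = page_table_lookup(vpn);             /* TLB miss */
    tlb[vpn % TLB_ENTRIES] = (struct tlb_entry){ vpn, frame, 1 };
    return (frame << PAGE_BITS) | off;
}
```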
Other Varieties
• 2-level page tables in VAX systems
• 4-level page tables in the 68030/68040
• Organize the cache memory by virtual addresses (instead of physical addresses)
  • Removes the TLB from the critical path
  • Combines cache misses with address translations
  • e.g. MIPS 3000/4000
Solution 2: Inverted Page Tables
Rationale:
• Conventional per-process page tables grow with the virtual memory size
• Virtual address spaces are getting larger (e.g. 64 bits)
• Physical memory is projected to remain smaller than virtual memory for the foreseeable future
Inverted Page Table
Main idea:
• A single global page table, indexed by frame number
• Each entry contains a virtual address and a pid
• Use the TLB to reduce the need to access the page table
• On a TLB miss:
  • Search the page table for the <virtual address, pid> pair
  • The physical address is obtained from the index of the matching entry (the frame number)
Inverted Page Table Structure
• Indexed by frame number
• Each entry contains the virtual address and pid currently using the frame (if any)
• Contains protection and reference bits
Entry layout: pid | Virtual addr | v | w | r | x | f | m
v: valid bit, w: write bit, r: read bit, x: execute bit (rare), f: reference bit, m: modified bit
Mapping Virtual to Real Addresses
• Virtual address (n bits) = virtual page number + offset (s bits)
• <virtual page number, pid> is the search key into the inverted page table
• The index of the matching entry is the frame number
• Physical address (p bits) = frame number + offset (s bits)
Properties & Problems
• Table size is independent of virtual address size; it is a function of physical memory size only
• TLB misses are expensive
  • We don’t know where to look
  • May require searching the entire table (very bad)
• Virtual memory becomes more expensive
• Sharing becomes very difficult
A Solution
• Organize the inverted page table as a hash table
• Search key: <pid, vaddr>
• Search in hardware or software
• Examples: IBM System/38, RS/6000, HP PA-RISC systems
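A sketch of the hashed lookup: the search key <pid, vpn> is hashed to a slot, and the index of the matching entry is the frame number. Real systems such as the ones named above chain collisions through a hash-anchor table; this toy version uses linear probing and an invented hash function, and assumes pages were placed by probing the same sequence.

```c
#include <stdint.h>

#define NUM_FRAMES 4096          /* one table entry per physical frame */
#define PAGE_BITS  12

/* Entry i describes frame i: which <pid, vpn> currently occupies it. */
struct ipte { uint32_t pid; uint32_t vpn; int valid; };
static struct ipte ipt[NUM_FRAMES];

static uint32_t hash_key(uint32_t pid, uint32_t vpn)
{
    return ((pid * 2654435761u) ^ vpn) % NUM_FRAMES;    /* toy hash */
}

/* Returns the frame number for <pid, vaddr>, or -1 if not resident. */
int ipt_lookup(uint32_t pid, uint32_t vaddr)
{
    uint32_t vpn   = vaddr >> PAGE_BITS;
    uint32_t start = hash_key(pid, vpn);

    for (uint32_t probe = 0; probe < NUM_FRAMES; probe++) {
        uint32_t i = (start + probe) % NUM_FRAMES;
        if (!ipt[i].valid)
            return -1;                      /* empty slot: page not mapped */
        if (ipt[i].pid == pid && ipt[i].vpn == vpn)
            return (int)i;                  /* entry index = frame number  */
    }
    return -1;                              /* table full, no match        */
}
```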
Sharing & Inverted Page Tables
• With sharing, the table would need more than one entry per frame, losing its size advantage, so no system implements it this way
• Conceivably possible with a general hashing function
• Requires an additional frame-number field in each page table entry:
Entry layout: Frame no. | pid | Virtual addr | v | w | r | x | f | m