Practical, Transparent Operating System Support for Superpages J. Navarro, Rice University and Universidad Católica de Chile; S. Iyer, P. Druschel, A. Cox, Rice University
Paper Highlights • Presents a general, efficient mechanism to manage pages of different sizes in a VM system • Superpages • Objective is to address the limitations of extant translation lookaside buffers (TLBs).
The translation lookaside buffer (I) • Small high-speed memory • Contains a fixed number of page table entries • Content-addressable memory • Entries include the page number, the page frame number and control bits
The translation lookaside buffer (II) • Usually fully associative • Not always true (see Intel Nehalem) • Considerably fewer entries than an L1 cache • Speed considerations
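The lookup the hardware performs on every memory reference can be pictured with a tiny software model. The C sketch below is purely illustrative (entry count, page size and field layout are assumptions, not those of any particular processor); a real TLB compares all entries in parallel in content-addressable memory rather than with a loop.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 64
#define PAGE_SHIFT  12                /* 4 KB base pages (assumed) */

struct tlb_entry {
    bool     valid;
    uint64_t vpn;                     /* virtual page number */
    uint64_t pfn;                     /* page frame number   */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Fully associative lookup: the VPN is compared against every entry;
 * on a hit the cached frame number is returned, on a miss the page
 * table must be walked. */
static bool tlb_lookup(uint64_t vaddr, uint64_t *pfn)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;              /* TLB hit */
        }
    }
    return false;                     /* TLB miss */
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ .valid = true, .vpn = 0x12345, .pfn = 0x678 };
    uint64_t pfn;
    if (tlb_lookup(0x12345000ULL, &pfn))
        printf("hit: frame %#llx\n", (unsigned long long)pfn);
    return 0;
}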
Realizations (I) Do not even attempt to memorize this! • TLB of UltraSPARC III • 64-bit addresses • Maximum program size is 2^44 bytes, that is, 16 TB • Supported page sizes are 4 KB, 16 KB, 64 KB, 4 MB ("superpages") • External L2 cache had a maximum capacity of 8 MB.
Realizations (II) Do not even attempt to memorize this! • TLB of UltraSPARC III • Dual direct-mapped TLB • 64 entries for code pages • 64 entries for data pages • Each entry occupies 64 bits • Page number and page frame number • Context • Valid bit, dirty bit, …
Realizations (III) Do not even attempt to memorize this! • Intel Nehalem Architecture: • Two-level TLB: • First level: • Two parts • Data TLB has 64 entries for 4 KB pages or 32 for big pages (2 MB/4 MB) • Instruction TLB has 128 entries for 4 KB pages and 7 for big pages.
Realizations (IV) Do not even attempt to memorize this! • Second level: • Unified cache • Can store up to 512 entries • Operates only with 4 KB pages
The main problem • TLB sizes have not grown with sizes of main memories • Define TLB coverage as the amount of main memory that can be accessed without incurring TLB misses • Typically a few megabytes or less (see the examples that follow) • Relative TLB coverage is TLB coverage expressed as a fraction of main memory size
Back to our examples Do not even attempt to memorize this! • UltraSPARC III • with 4 KB pages: • (64 + 64)×4 KB = 512 KB • with 16 KB pages: • (64 + 64)×16 KB = 2 MB
Back to our examples • Intel Nehalem • with 4 KB pages: • Level 1: • (64 + 128)×4 KB = 768 KB • Level 2: • 512×4 KB = 2 MB
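The coverage figures above are simply entry count × page size. A throwaway calculation, using only the entry counts quoted on these slides, reproduces them:

#include <stdio.h>

int main(void)
{
    /* TLB coverage = number of entries x page size. */
    printf("UltraSPARC III, 4 KB pages : %d KB\n", (64 + 64) * 4);   /* 512 KB */
    printf("UltraSPARC III, 16 KB pages: %d KB\n", (64 + 64) * 16);  /* 2 MB   */
    printf("Nehalem L1,     4 KB pages : %d KB\n", (64 + 128) * 4);  /* 768 KB */
    printf("Nehalem L2,     4 KB pages : %d KB\n", 512 * 4);         /* 2 MB   */
    return 0;
}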
Consequences • Processes with very large working sets incur too many TLB misses • "Significant performance penalty" • Some machines have L2 caches bigger than their TLB coverage • Can have TLB misses for data already in L2 cache
Solutions (I) • Increase TLB size: • Would increase TLB access time • Would slow down memory accesses • Increase page sizes: • Would cause increased memory fragmentation and poor utilization of main memory
Solutions (II) • Use multiple page sizes: • Keep a relatively small "base" page size • Say 4 KB • Let it coexist with much larger page sizes • Superpages • Intel Nehalem solution
Hardware limitations (I) • Superpage sizes must be supported by hardware: • 4 KB, 16 KB, 64 KB, 4 MB for UltraSPARC III • 4 KB, 2 MB and 4 MB for Intel Nehalem • Ten possible page sizes from 4 KB to 256 MB for Intel Itanium
Hardware limitations (II) • Superpages must be contiguous and properly aligned in both virtual and physical address spaces • Single TLB entry for each superpage • All its base pages must have • Same protection attributes • Same clean/dirty status • Will cause problems
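The alignment constraint is easy to state in code. A minimal check, assuming power-of-two superpage sizes (which all the hardware page sizes above are):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A superpage of `size` bytes can be mapped only if it starts on a
 * size-aligned boundary in BOTH the virtual and the physical address
 * space (size is assumed to be a power of two). */
static bool superpage_mappable(uint64_t vaddr, uint64_t paddr, uint64_t size)
{
    return (vaddr & (size - 1)) == 0 &&
           (paddr & (size - 1)) == 0;
}

int main(void)
{
    printf("%d\n", superpage_mappable(0x40000, 0x80000, 0x10000)); /* 1: both 64 KB aligned      */
    printf("%d\n", superpage_mappable(0x41000, 0x80000, 0x10000)); /* 0: virtual start misaligned */
    return 0;
}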
Allocation • When we bring a page into main memory, we can • Put it anywhere in RAM • Will need to relocate it to a suitable place when we merge it into a superpage • Put it in a location that would let us "grow" a superpage around it: reservation-based allocation • Must pick a maximum size for the superpage
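A minimal sketch of what reservation-based allocation has to compute at fault time: the size-aligned virtual range to set aside and the one base page that must be populated immediately. The struct and function names are invented for illustration; finding an equally aligned contiguous physical extent is left to the buddy allocator discussed later.

#include <stdint.h>
#include <stdio.h>

#define BASE_PAGE 4096ULL

struct reservation {
    uint64_t vstart;     /* first byte of the candidate superpage */
    uint64_t npages;     /* number of base pages it spans         */
    uint64_t fault_idx;  /* base page that triggered the fault    */
};

/* Round the faulting address down to a superpage boundary and record
 * which base page inside the reservation is needed right now. */
static struct reservation make_reservation(uint64_t fault_vaddr, uint64_t sp_size)
{
    struct reservation r;
    r.vstart    = fault_vaddr & ~(sp_size - 1);
    r.npages    = sp_size / BASE_PAGE;
    r.fault_idx = (fault_vaddr - r.vstart) / BASE_PAGE;
    return r;
}

int main(void)
{
    struct reservation r = make_reservation(0x7f0000012345ULL, 64 * 1024);
    printf("reserve %llu base pages at %#llx; populate page %llu now\n",
           (unsigned long long)r.npages,
           (unsigned long long)r.vstart,
           (unsigned long long)r.fault_idx);
    return 0;
}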
Fragmentation control • The OS must keep contiguous chunks of memory available at all times • OS will break previous reservation commitments if the superpage is unlikely to materialize • Must "treat contiguity as a potentially contended resource"
Promotion • Once a sufficient number of base pages within a potential superpage have been allocated, the OS may elect to promote them into a superpage. This requires • Updating the PTEs for all base pages in the new superpage • Bringing the missing base pages into main memory
Promotion • Promotion can be incremental • Progressively larger and larger superpages • [Figure: a run of base pages marked "in use" or "free" being merged, step by step, into a superpage]
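Incremental promotion boils down to a population test at each supported size: a region is promoted to the next-larger superpage only when every base page inside it is present. The sketch below uses a plain boolean array and made-up sizes (4 and 16 base pages) just to show the check; the paper's actual bookkeeping lives in the population maps discussed later.

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 16                      /* base pages in the reservation (assumed) */

/* A span can be promoted only once every base page inside it is resident. */
static bool fully_populated(const bool populated[NPAGES], int start, int span)
{
    for (int i = start; i < start + span; i++)
        if (!populated[i])
            return false;
    return true;
}

int main(void)
{
    bool populated[NPAGES] = { false };

    /* Touch the first 4 base pages, then test each promotion size. */
    for (int i = 0; i < 4; i++)
        populated[i] = true;

    int spans[] = { 4, 16 };
    for (int s = 0; s < 2; s++)
        printf("promote first %2d pages? %s\n", spans[s],
               fully_populated(populated, 0, spans[s]) ? "yes" : "no");
    return 0;
}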
Demotion • OS should disband or reduce the size of a superpage whenever some portions of it fall into disuse • Main problem is that the OS can only track accesses at the level of the whole superpage
Eviction • Not that different from expelling individual base pages • Must flush out all base pages of any superpage marked dirty • Because the OS cannot ascertain which of its base pages remain clean
Related approaches • Many OS kernels use superpages • Focus here is on application memory
Reservations • Talluri and Hill: • propose a reservation-based scheme • reservations can be preempted • emphasis is on partial subblocks • HP-UX and IRIX • Create superpages at page fault time • User must specify a preferred per-segment page size
Page relocation • Relocation-based schemes • Let base pages reside any place in main memory • Migrate these pages to a contiguous region of main memory when the OS finds out that superpages are "likely to be beneficial" • Disadvantage: cost of copying base pages • Advantage: "more robust to fragmentation"
Hardware support Skipped • Two proposals • Having multiple valid bits in each TLB entry • Would allow small superpages to contain missing base pages • Partial subblocking (Talluri and Hill) • Adding additional level of address translation in memory controller • Would "eliminate the contiguity requirement for superpages" (Fang et al.)
Allocation • Uses • a reservation-based scheme for superpages, which assumes a preferred superpage size for a given range of addresses • a buddy system to manage main memory • Think of the scheme used to manage block fragments in Unix FFS
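The reason a buddy allocator fits here: freeing and coalescing naturally rebuild large, aligned extents. The only non-obvious piece is locating a block's buddy, which is a single XOR; the sketch below shows just that computation (offsets and sizes are illustrative, and the full free-list management is omitted).

#include <stdint.h>
#include <stdio.h>

/* A free block of size `sz` bytes (power of two) at offset `off` can be
 * coalesced with its "buddy": the equally sized block whose offset
 * differs only in the `sz` bit. Repeated coalescing rebuilds the large,
 * aligned extents needed for superpage reservations. */
static uint64_t buddy_of(uint64_t off, uint64_t sz)
{
    return off ^ sz;
}

int main(void)
{
    uint64_t sz = 4096;   /* 4 KB frames */
    printf("buddy of frame at 0x1000: %#llx\n", (unsigned long long)buddy_of(0x1000, sz));
    printf("buddy of frame at 0x2000: %#llx\n", (unsigned long long)buddy_of(0x2000, sz));
    /* If a block and its buddy are both free they merge into an aligned
     * 8 KB block, and so on up to superpage-sized extents. */
    return 0;
}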
Preferred superpage size (I) • For fixed-size memory objects, pick the largest aligned superpage that • Contains the faulting base page • Does not overlap with other superpages or tentative superpages • Does not extend over the boundaries of the object
Preferred superpage size (II) • For dynamically sized memory objects, pick the largest aligned superpage that • Contains the faulting base page • Does not overlap with other superpages or tentative superpages • Does not exceed the current size of the object
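The policy on these two slides amounts to scanning the supported sizes from largest to smallest and keeping the first one whose aligned superpage around the faulting address stays inside the object. The sketch below does exactly that; the size list is the UltraSPARC III one quoted earlier, and the overlap check against existing reservations is omitted to keep it short.

#include <stdint.h>
#include <stdio.h>

/* Supported superpage sizes, largest first (UltraSPARC III values). */
static const uint64_t sp_sizes[] = { 4u << 20, 64u << 10, 16u << 10, 4u << 10 };

/* Largest aligned superpage that contains `fault` and fits entirely
 * inside the object [obj_start, obj_end). */
static uint64_t preferred_size(uint64_t fault, uint64_t obj_start, uint64_t obj_end)
{
    for (int i = 0; i < 4; i++) {
        uint64_t sz    = sp_sizes[i];
        uint64_t start = fault & ~(sz - 1);
        if (start >= obj_start && start + sz <= obj_end)
            return sz;
    }
    return 4096;   /* fall back to a base page */
}

int main(void)
{
    /* A 1 MB object starting at a 4 MB-aligned address: a 4 MB superpage
     * would overshoot the object, so 64 KB is chosen. */
    uint64_t base = 4u << 20;
    uint64_t sz = preferred_size(base + 8192, base, base + (1u << 20));
    printf("preferred size: %llu KB\n", (unsigned long long)(sz >> 10));
    return 0;
}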
Fragmentation control • Mostly managed by buddy allocator • Helped by page replacement daemon • Modified BSD daemon is made"contiguity-aware"
Promotion • Use incremental promotion • Wait until superpage is fully populated • Conservative approach
Demotion (I) • Incremental demotion • Required when • A base page of a superpage is expelled from main memory • Protection attributes of some base pages are changed
Demotion (II) • Speculative demotion • Could be done each time a superpage's referenced bit is reset • When memory becomes scarce • Lets the system find out which parts of a superpage are still in use
Handling dirty superpages (I) • Demote a superpage as soon as one of its base pages is modified • Otherwise the whole superpage would have to be flushed out when it is expelled from main memory • Because there is one single dirty bit per superpage
Handling dirty superpages (II) • A base page of the superpage has been modified • The whole superpage is marked dirty • We break up the superpage • Only the modified base page is dirty; all other base pages remain clean
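The effect of demote-on-first-write can be shown with a toy model: one flag standing in for the superpage mapping with its single dirty bit, and a per-base-page dirty array for the state after demotion. Sizes and names are made up for illustration.

#include <stdbool.h>
#include <stdio.h>

#define SP_PAGES 8                     /* base pages per superpage (assumed) */

struct superpage {
    bool mapped_as_superpage;          /* one TLB entry, hence one dirty bit */
    bool dirty[SP_PAGES];              /* per-base-page state after demotion */
};

/* On the first write, break the superpage into base pages and mark only
 * the written page dirty; the others stay clean and never need to be
 * flushed to disk when the memory is reclaimed. */
static void write_fault(struct superpage *sp, int page)
{
    if (sp->mapped_as_superpage)
        sp->mapped_as_superpage = false;   /* demotion */
    sp->dirty[page] = true;
}

int main(void)
{
    struct superpage sp = { .mapped_as_superpage = true };
    write_fault(&sp, 3);
    for (int i = 0; i < SP_PAGES; i++)
        printf("base page %d: %s\n", i, sp.dirty[i] ? "dirty" : "clean");
    return 0;
}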
Multi-list reservation scheme • Maintains a separate list for each superpage size supported by the hardware, except the largest one • Each list contains reserved frames that could still accommodate a superpage of that size • Sorted by time of their most recent page frame allocation • Oldest entries are preempted first
Example • A reserved area contains 8 page frames set aside for a possible superpage • Three frames are allocated, five are free • Breaking the reservation will free space for • A superpage with four base pages, or • Two superpages with two base pages each
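Preemption in the multi-list scheme picks, within the relevant size class, the reservation whose most recent base-page allocation is oldest. The sketch below scans a small array instead of keeping the list sorted, which is a simplification; field names and values are invented.

#include <stdint.h>
#include <stdio.h>

#define NRESV 4

struct resv {
    uint64_t frame_start;      /* first frame of the reserved extent       */
    uint64_t last_alloc_time;  /* time of the most recent frame allocation */
};

/* The reservation that has gone longest without a new allocation is the
 * least likely to become a superpage soon, so it is preempted first. */
static int pick_victim(const struct resv list[], int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (list[i].last_alloc_time < list[victim].last_alloc_time)
            victim = i;
    return victim;
}

int main(void)
{
    struct resv list[NRESV] = {
        { 0x10000, 42 }, { 0x20000, 7 }, { 0x30000, 99 }, { 0x40000, 15 },
    };
    printf("preempt reservation starting at frame %#llx\n",
           (unsigned long long)list[pick_victim(list, NRESV)].frame_start);
    return 0;
}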
Population maps • One per memory object • Keep track of allocated pages within each object
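A population map answers questions such as "how many base pages of this candidate superpage are resident?" and "is it full enough to promote?". The paper keeps one such map per memory object, as the slide says; the flat counter below is a deliberately simplified stand-in that answers the same questions for a single candidate superpage.

#include <stdio.h>

#define SP_PAGES 16     /* base pages per candidate superpage (assumed) */

struct popmap {
    int populated[SP_PAGES];   /* 0 = missing, 1 = resident     */
    int count;                 /* number of resident base pages */
};

static void popmap_set(struct popmap *pm, int idx)
{
    if (!pm->populated[idx]) {
        pm->populated[idx] = 1;
        pm->count++;
    }
}

static int popmap_promotable(const struct popmap *pm)
{
    return pm->count == SP_PAGES;   /* conservative: fully populated only */
}

int main(void)
{
    struct popmap pm = { { 0 }, 0 };
    for (int i = 0; i < SP_PAGES; i++)
        popmap_set(&pm, i);
    printf("promotable: %s\n", popmap_promotable(&pm) ? "yes" : "no");
    return 0;
}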
Benchmarks • Thirty-five representative programs running on an Alpha processor • Four page sizes: 8 KB, 64 KB, 512 KB and 4 MB • Fully associative TLB with 128 entries for code and 128 for data • 512 MB of RAM • Separate 64 KB code and 64 KB data L1 caches • 4 MB unified L2 cache
Results (I) • Eighteen out of 35 benchmarks showed improvements over 5 percent • Ten out of 35 showed improvements over 25 percent • A single application showed a degradation of 1.5 percent • The allocator does not distinguish zeroed-out pages from other free pages
Results (II) • Different applications benefit most from different superpage sizes • Should let system choose among multiple page sizes • Contiguity-aware page replacement daemon can maintain enough contiguous regions • Huge penalty for not demoting dirty superpages • Overheads are small
CONCLUSION • It works and does not require any changes to existing hardware