Energy Efficient Prefetching and Caching • Athanasios E. Papathanasiou and Michael L. Scott, University of Rochester • Proceedings of the 2004 USENIX Annual Technical Conference • Presenter: Ningfang Mi, July 01, 2005
Outline • Motivation • New Energy-Aware Prefetching Algorithm • Basic idea • Key challenges • Implementation in the Linux kernel • Evaluation results • Conclusion
Motivation • Prefetching and caching in a modern OS • A smooth access pattern improves performance • Increases throughput • Decreases latency • What about energy efficiency? • A smooth access pattern results in relatively short idle intervals • Idle intervals are too short to save energy • Spin-up time is not free
New Design Goal • Maximize energy efficiency • Create a bursty access pattern • Maximize idle interval length • Maximize disk utilization while the disk is active • Do not degrade performance • Focus on hard disks
Background (1) -- Fetch-on-Demand • Stream: A B C D E F G … • Access: 10 time units per block • Fetch: 1 time unit per block • Timeline (figure): 66 time units in total, with 6 idle intervals of 10 time units each; the disk idles during every access and fetches the next block only when it is demanded
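As a quick sanity check of the figure's numbers, here is a toy back-of-the-envelope calculation in Python (it assumes the six blocks A-F shown in the timeline; the block count and unit costs are taken from the slide, everything else is illustrative):

```python
# Fetch-on-demand: fetch one block (1 unit), then the disk idles for the
# whole access (10 units) before the next demand arrives.
blocks, fetch, access = 6, 1, 10           # values from the slide
total_time = blocks * (fetch + access)     # 6 * 11 = 66 time units
idle_intervals = [access] * blocks         # 6 idle intervals of 10 units each
print(total_time, idle_intervals)          # 66 [10, 10, 10, 10, 10, 10]
```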
Background (2) -- Traditional Prefetching (Cao'95) • Aim -- minimize execution time • Four rules • Optimal Prefetching: prefetch the next referenced block that is not in the cache • Optimal Replacement: discard the block whose next reference is farthest in the future • Do No Harm: never replace block A with block B when A will be referenced before B • First Opportunity: never perform a prefetch-and-replace later than the first opportunity to do so • Key questions: What to prefetch or discard? When to prefetch?
Background (2) -- Traditional Prefetching (Cao'95), timeline • Stream: A B C D E F G … • Access: 10 time units per block • Fetch: 1 time unit per block • Timeline (figure): 61 time units in total, with 5 idle intervals of 9 time units each and 1 idle interval of 8 time units; execution time shrinks, but the idle intervals remain short
Background (3) -- Energy-conscious Prefetching • Replace "First Opportunity" with • Maximize Disk Utilization: always initiate a prefetch when there are blocks available for replacement • Respect Idle Time: never interrupt a period of inactivity with a prefetch operation unless prefetching is urgent
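A minimal sketch of these two rules as a single decision function (the function name, arguments, and urgency flag are illustrative, not the paper's interface):

```python
def should_prefetch(disk_active: bool, replaceable_blocks: int,
                    prefetch_urgent: bool) -> bool:
    """Illustrative combination of the two energy-conscious rules."""
    if disk_active:
        # Maximize Disk Utilization: while the disk is already spinning,
        # keep prefetching as long as some cached block can be replaced.
        return replaceable_blocks > 0
    # Respect Idle Time: never wake an idle disk for a prefetch
    # unless the prefetch has become urgent.
    return prefetch_urgent
```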
Background (3) -- Energy-conscious Prefetching, timeline • Stream: A B C D E F G … • Access: 10 time units per block • Fetch: 1 time unit per block • Timeline (figure): 61 time units in total, with one idle interval of 27 time units and one of 28 time units; batching the fetches concentrates the idle time into intervals long enough to justify spinning the disk down
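The intuition can be captured with an idealized model (a rough sketch that will not reproduce the figure exactly; `cache_blocks`, the number of blocks prefetched per burst, is a made-up parameter):

```python
def idle_profile(blocks: int, cache_blocks: int, access: float = 10, fetch: float = 1):
    """Roughly how batched prefetching reshapes idle time: the same total
    idle time is concentrated into a few long gaps between refill bursts."""
    bursts = -(-blocks // cache_blocks)                      # ceil(blocks / cache_blocks)
    busy_per_burst = cache_blocks * fetch                    # time spent refilling the cache
    idle_per_burst = cache_blocks * access - busy_per_burst  # gap until the next refill
    return bursts, busy_per_burst, idle_per_burst

# Fetch-on-demand gives 6 gaps of 10 units (previous figure); prefetching
# 3 blocks per burst gives 2 gaps of about 27 units each, in the same
# ballpark as the 27- and 28-unit intervals shown above.
print(idle_profile(6, 3))   # -> (2, 3, 27)
```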
Energy-Aware Prefetching -- Basic Idea • Design guideline • Fetch as many blocks as possible while the disk is active • Delay prefetching until the latest possible opportunity while the disk is idle • Epoch-Based Extensions to the Linux Memory Management System • Divide time into epochs • Each epoch: an active phase and an idle phase
Key Challenges • When to prefetch? • What to prefetch? • How much to prefetch?
Key Challenges (1) -- When to Prefetch? In an epoch: • Predict future accesses • Prefetch during the active phase: estimate the memory size needed for prefetching, free the required amount of memory, and prefetch the new data • Predict the idle period and, if possible, put the disk to sleep • Wake up for a demand miss, for further prefetching, or when memory runs low
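A self-contained sketch of the per-epoch decisions described above (all names, fields, and the exact arithmetic are assumptions made for illustration, not the kernel's data structures):

```python
from dataclasses import dataclass

@dataclass
class EpochPlan:
    pages_to_free: int      # pages to evict so the predicted accesses fit in memory
    pages_to_prefetch: int  # hinted pages to read in one burst while the disk is active
    spin_down: bool         # whether the predicted idle time justifies standby

def plan_epoch(hinted_pages: int, free_pages: int,
               predicted_idle_s: float, breakeven_s: float) -> EpochPlan:
    """One epoch: free enough memory, prefetch everything predicted,
    then sleep if the predicted idle period is long enough to pay off."""
    return EpochPlan(
        pages_to_free=max(0, hinted_pages - free_pages),
        pages_to_prefetch=hinted_pages,
        spin_down=predicted_idle_s > breakeven_s,
    )

# e.g. 5000 hinted pages, 2000 free pages, 90 s of predicted idleness
print(plan_epoch(5000, 2000, 90.0, 16.0))
```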
Key Challenges (2) -- What to Prefetch? • Prediction is based on hints • Hint interface: File Specifier × Pattern Specifier + Time Information • New applications submit hints to the OS using new system calls • Monitor Daemon: provides hints automatically on behalf of applications by tracking file activity, analyzing accesses, and generating hints
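A sketch of what a hint might look like as a data structure (field names, enum values, and the example are assumptions; the paper defines the interface only as a file specifier, a pattern specifier, and timing information, submitted through new system calls or by the monitor daemon):

```python
from dataclasses import dataclass
from enum import Enum

class Pattern(Enum):            # pattern specifier (illustrative values)
    SEQUENTIAL = 1
    LOOP = 2
    RANDOM = 3

@dataclass
class Hint:
    path: str                   # file specifier
    pattern: Pattern            # expected access pattern
    first_access_s: float       # estimated time until the first access
    rate_bytes_per_s: float     # expected consumption rate (time information)

# An application -- or the monitor daemon acting on its behalf -- would
# submit hints like this one to the kernel.
example = Hint("/home/user/video.mpg", Pattern.SEQUENTIAL, 0.5, 500_000.0)
print(example)
```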
Key Challenges (3) -- How Much to Prefetch? • Decide the number of pages to be freed during the active phase • The reserved memory must be large enough to contain all predicted data accesses • Prefetching must not cause the eviction of pages that will be accessed sooner than the prefetched data • First miss during the idle phase: • Compulsory Miss: a miss on a page without prior information • Prefetch Miss: a miss on a page with a prediction (hint) • Eviction Miss: a miss on a page that was evicted to make room for prefetching
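The three miss types can be summarized as a small classifier (a sketch; the boolean inputs stand in for the kernel's actual per-page bookkeeping):

```python
def classify_miss(page_was_hinted: bool, page_evicted_for_prefetch: bool) -> str:
    """First miss observed during the idle phase, used to retune the next epoch."""
    if page_evicted_for_prefetch:
        return "eviction miss"    # prefetching pushed out a page that was still needed
    if page_was_hinted:
        return "prefetch miss"    # a hint existed, but the page was not prefetched in time
    return "compulsory miss"      # no prior information about this page
```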
Implementation • In the Linux kernel 2.4.20 • Hinted files • Prefetch thread • Prefetch cache • Eviction Cache • Handling write activity • Power management policy
Hinted Files • Disclosed by: • The monitor daemon or applications • The kernel itself, for long sequential file accesses • Maintained in a doubly linked list • Sorted by estimated first access time
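A brief sketch of the ordering (the real implementation uses a doubly linked kernel list; a sorted Python list stands in for it here, and the file names are made up):

```python
import bisect

hinted_files = []   # (estimated_first_access_s, path), ordered by first access time

def disclose(path: str, estimated_first_access_s: float) -> None:
    """Insert a newly hinted file, keeping the list sorted by access time."""
    bisect.insort(hinted_files, (estimated_first_access_s, path))

disclose("/var/log/app.log", 12.0)
disclose("/home/user/track.mp3", 3.5)
print(hinted_files)   # [(3.5, '/home/user/track.mp3'), (12.0, '/var/log/app.log')]
```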
Prefetch Thread • Coordinates prefetching across applications • A lack of coordination limits idle interval length • Goal: issue reads and writes from concurrently running applications during the same small window of time • Writes: the update daemon • Page-outs: the swap daemon • Prefetches/reads: the prefetch daemon • Generates prefetch requests for all running applications • Coordinates the I/O activity of the three daemons
Prefetch Cache & Eviction Cache • Extend the LRU list with a Prefetch Cache • Contains pages requested by the prefetch daemon • Timestamp: when the page is expected to be accessed • When a page is referenced or its timestamp is exceeded, it moves to the standard LRU list • Eviction Cache: stores eviction history • Metadata of recently evicted pages • Eviction number: the count of pages evicted so far • When an eviction miss occurs: the page's eviction number minus the epoch's starting eviction number => the number of pages that were evicted without causing an eviction miss => an estimate of the prefetch cache size for the next epoch
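The eviction-number arithmetic on this slide boils down to one subtraction; a sketch (variable names are illustrative):

```python
def safe_evictions(miss_page_eviction_number: int,
                   epoch_start_eviction_number: int) -> int:
    """On the first eviction miss of an epoch, this difference is the number
    of pages that were evicted without causing a miss; it bounds how much
    memory the prefetch cache can safely claim in the next epoch."""
    return miss_page_eviction_number - epoch_start_eviction_number

# If the epoch started at eviction number 3000 and the missed page was
# evicted as number 4200, then 1200 evictions were harmless.
print(safe_evictions(4200, 3000))   # -> 1200
```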
Handle Write Activity • In the original kernel, the update daemon runs every 5 seconds and flushes dirty buffers older than 30 seconds • => idle intervals are limited to at most 5 seconds • Now, a modified update daemon flushes dirty buffers once per minute • A flag in the extended open system call indicates that flushing dirty buffers can be delayed until • the corresponding file is closed, or • the process that opened the file exits • The monitor daemon provides the guideline ("flush-on-close" or "flush-on-exit") to the OS
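As an illustration of the delayed-write decision (the flag names and conditions here are assumptions, not the extended system call's actual interface):

```python
from enum import Enum, auto

class FlushPolicy(Enum):
    PERIODIC = auto()    # default: flushed periodically by the update daemon
    ON_CLOSE = auto()    # dirty buffers may wait until the file is closed
    ON_EXIT = auto()     # dirty buffers may wait until the process exits

def must_flush_now(policy: FlushPolicy, file_closed: bool, process_exited: bool,
                   seconds_since_flush: float, period_s: float = 60.0) -> bool:
    """Decide whether dirty buffers for a file must be written back now."""
    if policy is FlushPolicy.ON_CLOSE:
        return file_closed
    if policy is FlushPolicy.ON_EXIT:
        return process_exited
    return seconds_since_flush >= period_s   # once-per-minute update daemon
```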
Power Management Policy • Based on the prediction of the next idle interval's length • Set the disk to Standby within 1 second after it goes idle if the predicted length > the Standby breakeven time • Handling mispredictions (actual idle time < Standby breakeven time): • Return to a dynamic-threshold spin-down policy • Ignore predictions until their accuracy increases • Avoids harmful spin-down operations
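A condensed sketch of the policy (the accuracy threshold is invented for illustration; the slide only says predictions are ignored until their accuracy improves):

```python
def choose_power_action(predicted_idle_s: float, breakeven_s: float,
                        prediction_accuracy: float,
                        accuracy_threshold: float = 0.8) -> str:
    """Spin-down decision driven by the predicted idle length."""
    if prediction_accuracy < accuracy_threshold:
        # Mispredictions are costly: fall back to a dynamic-threshold
        # spin-down policy and ignore predictions for now.
        return "dynamic-threshold spin-down"
    if predicted_idle_s > breakeven_s:
        return "standby within 1 second"      # idle period long enough to pay off
    return "remain in idle mode"
```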
Evaluation • Hitachi hard disk with three low-power modes • Workloads: • MPEG playback (MPEG) • MP3 encoding and MPEG playback (Concurrent) • Kernel compilation (Make) • Speech recognition system (SPHINX) • Metrics: • Length of idle periods: make them longer • Energy savings • Slowdown: minimize performance penalties
Results (1) -- Idle Time Intervals make MPGE 80% >200 s concurrent SPHINX Standard kernel, 100% idle time less than 1 second, independent of memory size Bursty system, larger memory sizes lead to longer idle interval lengths
Results (2) -- Energy Savings • Linux kernel (base case, 64 MB): energy consumption is independent of memory size • Bursty system: savings depend on memory size • Significant energy savings when memory is large (figure annotations: 62.5%, 66.6%, 77.4%, and 78.5% across the workloads)
Results (3) -- Execution Time • Slowdowns are small (figure annotations: <1.6%, <2.8%, 4.8%, <5%, and 15%) • The system successfully avoids delays caused by disk spin-up operations • An increased cache hit ratio improves performance and can even shorten execution time • Larger slowdowns stem from increased paging and disk congestion
Conclusion • Energy-conscious prefetching algorithm • Maximizes idle interval length • Maximizes energy efficiency • Minimizes performance penalties • Experimental results • Increases the length of idle intervals • Saves 60-80% of disk energy • USENIX '04 Best Paper Award • http://www.cs.rochester.edu/u/papathan/research/BurstyFS