Adaptive Self-Tuning Memory in DB2
Adam Storm, Christian Garcia-Arellano, Sam Lightstone – IBM Toronto Lab
Yixin Diao, M. Surendra – T.J. Watson Research Center
VLDB 2006 | Seoul, Korea | September 13, 2006
The Memory Tuning Problem
• Tuning the memory for an industrial RDBMS can be costly
  • Time consuming even for advanced users due to trial-and-error methods
  • Skill in memory tuning is difficult to find
• Given a workload, determining memory requirements is difficult
  • Educated "trial and error" is the state-of-the-art tuning method
• Any static configuration may be sub-optimal for dynamic workloads
• Effect on performance can be huge
  • Orders of magnitude
Previous Approaches to Memory Tuning
• Wealth of previous approaches in the literature
  • Can be broadly divided into "academic" and "industrial" approaches
• Academic approaches
  • Theoretically sound, but can be difficult to implement
    • Hit rate estimations, assumptions about the hit rate curve
    • In some cases, adding memory may create steps in the hit rate curve
    • Response time goals
  • Focused on one type of memory in isolation
    • Difficult to integrate several solutions into one comprehensive tuner
• Industrial approaches
  • Difficult to discern inner workings
  • Oracle
    • Unable to automatically determine the value for total database memory usage
    • Can't tune buffer pools that store pages larger than 4 KB, or trade memory between buffer pools and sort
  • Microsoft
    • Able to automatically determine the value for total database memory usage
    • Doesn't appear to have a sophisticated memory distribution algorithm
DB2's Self-Tuning Memory Manager (STMM)
• Innovative cost-benefit analysis
  • Simulation technique vs. modeling
• Tunes memory distribution and total memory usage
• Simple greedy memory tuner
• Control algorithms to avoid oscillations
• Performs very well in experiments
  • For both OLTP and DSS
Cost-Benefit Analysis
• How can the memory requirements of one consumer be compared against another's?
  • Memory consumers can operate in drastically different ways
  • Buffer pools spend time doing I/O; sort uses I/O and CPU
• Need a common metric
  • Metric chosen: (simulated seconds saved) / (memory required, in 4 KB pages) — see the sketch below
  • Allows for comparison between different memory consumers
  • Calculated differently for each consumer using simulation techniques
  • The simulation technique ensures accuracy of the data
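To make the metric concrete, here is a minimal sketch (the helper name is hypothetical, not from the paper) of the common benefit unit, seconds saved per 4 KB page:

```python
def benefit_per_page(simulated_seconds_saved: float, pages: int) -> float:
    """Common cost-benefit metric: simulated seconds saved per 4 KB page.

    Example: if simulation shows 12.0 s of work would have been avoided by
    growing a consumer by 2,000 pages, its benefit is 0.006 s/page.
    """
    return simulated_seconds_saved / pages if pages > 0 else 0.0
```

Because every consumer reports its benefit in the same unit, the tuner can compare a buffer pool directly against the sort heap or the compiled SQL cache.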
Buffer Pool Benefit Analysis
• Technique simulates adding pages to the buffer pool
• As pages are evicted from the buffer pool, their descriptors are placed in the Simulated Buffer Pool eXtension (SBPX)
  • The SBPX requires only the page descriptor, not the page data, so its memory requirements are small
• When a miss occurs on a page read, the SBPX is consulted
  • If the page is found in the SBPX, the physical read would have been avoided had the SBPX contained real pages
• For each miss found in the SBPX, the cost of the physical I/O is timed
  • Allows for detection of asymmetrical read times
• (Cumulative time saved) / (size of SBPX) is the benefit of adding pages to the buffer pool — see the sketch below
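A rough sketch of the simulation idea, assuming a simple LRU list of page descriptors (the class and method names are illustrative, not DB2 internals):

```python
import time
from collections import OrderedDict

class SimulatedBufferPoolExtension:
    """Tracks descriptors of recently evicted pages (no page data is kept)."""

    def __init__(self, capacity_pages: int):
        self.capacity = capacity_pages
        self.descriptors = OrderedDict()   # page_id -> None, in LRU order
        self.time_saved = 0.0              # cumulative seconds of avoidable I/O

    def record_eviction(self, page_id):
        # A page victimized from the real buffer pool "moves" into the SBPX.
        self.descriptors[page_id] = None
        self.descriptors.move_to_end(page_id)
        if len(self.descriptors) > self.capacity:
            self.descriptors.popitem(last=False)   # drop the oldest descriptor

    def on_miss(self, page_id, read_from_disk):
        # The real buffer pool missed; check whether a larger pool would have hit.
        would_have_hit = page_id in self.descriptors
        start = time.monotonic()
        page = read_from_disk(page_id)             # the physical I/O still happens
        elapsed = time.monotonic() - start
        if would_have_hit:
            # Timing each avoided read captures asymmetric read times.
            self.time_saved += elapsed
            del self.descriptors[page_id]
        return page

    def benefit(self) -> float:
        """Seconds saved per simulated page of extra buffer pool memory."""
        return self.time_saved / self.capacity if self.capacity else 0.0
```

Because only descriptors are stored, simulating thousands of extra pages costs a few bytes per page rather than 4 KB per page.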
SBPX Operation
Diagram of the interaction between the buffer pool, the SBPX, and disk, with numbered steps:
1. Victimize page (move to SBPX)
2. Load new page from disk
3. Page request
4. Check buffer pool
5. Check SBPX
6. Start timer
7. Victimize BP page (send to SBPX)
8. Load page from disk
9. Stop timer
Compiled SQL Cache Benefit
• Similar to the buffer pool approach, but the Simulated SQL Cache eXtension (SSCX) stores compressed compiled SQL packages
• When a package is not found in the SQL cache, the SSCX is consulted (sketch below)
  • If the package is found in the SSCX, the cost of query compilation is timed
• (Cumulative time saved) / (size of SSCX) is the benefit of adding pages to the SQL cache
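The compiled SQL cache could follow the same pattern as the buffer pool, timing the recompilation instead of the physical read. A hedged sketch; the `sscx` object mirrors the SBPX sketch above and its attributes are purely illustrative:

```python
import time

def on_sql_cache_miss(statement_text, sscx, compile_statement):
    """Hook for a miss in the real SQL cache: check the simulated extension
    and, if the package would have been cached there, credit the timed
    compilation as time that extra cache memory could have saved."""
    key = hash(statement_text)                   # illustrative package key
    would_have_hit = key in sscx.descriptors
    start = time.monotonic()
    package = compile_statement(statement_text)  # the compilation still happens
    elapsed = time.monotonic() - start
    if would_have_hit:
        sscx.time_saved += elapsed
        del sscx.descriptors[key]
    return package
```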
What about cost?
• So far we have only discussed benefit calculations; how can cost be calculated?
• For the caches (buffer pools, SQL cache) it is possible to simulate cost, but the simulation method is cost prohibitive
  • The extra computation on a miss is dwarfed by the read time; not so with hits
• In these cases we approximate cost as the inverse of benefit (sketch below)
  • If growth by 10 pages saves 5 seconds, shrinking by 10 pages will incur an additional 5 seconds of computation time
  • Can be a crude approximation (e.g., when benefit is 0) but works well in practice
• In some other cases (e.g., sort) we can inexpensively determine an accurate cost
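A small sketch of that approximation, under an assumed consumer interface (`has_exact_cost`, `exact_cost_per_page`, and `benefit_per_page` are illustrative names, not DB2 APIs):

```python
def cost_per_page(consumer) -> float:
    """Estimated seconds lost per 4 KB page taken away from a consumer."""
    if consumer.has_exact_cost:
        # e.g. sort, where an accurate cost can be computed cheaply
        return consumer.exact_cost_per_page()
    # Caches: assume shrinking by N pages costs roughly what growing by
    # N pages would have saved (crude when the measured benefit is 0).
    return consumer.benefit_per_page()
```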
Greedy Memory Tuner
• Simply takes memory from consumers with low cost and gives it to consumers with high benefit (see the sketch below)
• Is able to determine how much memory to take from the OS based on free memory usage statistics
  • Tries to maintain some free physical memory at all times
  • Uses more memory (or frees up memory) based on current free physical memory and the database's free memory target
  • See paper for more detail
• Has sleep/wake periods
  • Automatically adapted at run-time
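A minimal sketch of one greedy tuning interval, assuming each consumer exposes the hypothetical `cost_per_page()`, `benefit_per_page()`, and `resize()` methods from the sketches above:

```python
def greedy_tuning_interval(consumers, pages_to_move: int):
    """Take memory from the cheapest consumer and give it to the one that
    would benefit the most, if the trade is actually worthwhile."""
    donor = min(consumers, key=lambda c: c.cost_per_page())
    recipient = max(consumers, key=lambda c: c.benefit_per_page())
    if donor is recipient:
        return  # nothing to trade this interval
    if recipient.benefit_per_page() > donor.cost_per_page():
        donor.resize(-pages_to_move)
        recipient.resize(+pages_to_move)
```

The real tuner also grows or shrinks total database memory against the OS free-memory target, which this sketch omits.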
Control Algorithms
• Help reduce tuning oscillations
• Used for two purposes
  • To control the amount of memory moved in each interval
  • To determine how much time to sleep between tuning intervals
• Controlling the amount of memory moved
  • Two different algorithms: MIMO and oscillation dampening
  • MIMO (Multiple Input Multiple Output)
    • Fits historical cost-benefit data to a curve
    • Uses the curve to estimate the distance from the optimal memory configuration
    • Sets the resize amount so the optimal configuration is reached in ~20 intervals
  • Oscillation dampening (illustrated below)
    • Used before the MIMO model can be generated
    • Uses resize patterns to detect the presence of oscillations
• Determining sleep time
  • Much more detail in the paper
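A simplified illustration of the oscillation-dampening idea only; the flip-counting heuristic below is an assumption for illustration, not the published detector, and the MIMO controller is not shown:

```python
def dampened_resize(previous_deltas, proposed_delta, shrink_factor=0.5):
    """If recent resize directions keep alternating between grow and shrink,
    the tuner is likely oscillating around the optimum, so scale the move down.

    previous_deltas: list of earlier resize amounts (in pages) for this consumer.
    """
    recent = previous_deltas[-4:]
    sign_flips = sum(1 for a, b in zip(recent, recent[1:]) if a * b < 0)
    if sign_flips >= 2:                  # alternating grow/shrink pattern
        proposed_delta *= shrink_factor
    previous_deltas.append(proposed_delta)
    return proposed_delta
```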
Experimental results – workload shift
[Chart: time in seconds (y-axis) vs. order of execution, runs 1–34 (x-axis); annotations show phase averages of 6,206 s, 2,285 s, and 959 s, a 63% reduction, and the point where some indexes were dropped.]