
Query Result Caching


Presentation Transcript


  1. Query Result Caching Prasan Roy, Krithi Ramamritham, S. Seshadri, S. Sudarshan Pradeep Shenoy, Jinesh Vora

  2. Model • Predictive Caching - use history • Query results / intermediate results • Single user stream - very similar queries • Global sequence of queries - long term patterns • Leverages off MQO (P. Roy, et al.)
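A minimal Python sketch of the idea on this slide (not the authors' implementation): results are cached under a normalized query key, and a per-query frequency history stands in for the predictive component. The capacity limit and the frequency-based eviction are assumptions for illustration only.

```python
# Minimal sketch (not the paper's system): cache query results keyed by a
# normalized query string, and keep a per-query history to guide eviction.

class PredictiveResultCache:
    def __init__(self, capacity):
        self.capacity = capacity      # max number of cached results (assumed policy)
        self.results = {}             # normalized query -> cached result
        self.history = {}             # normalized query -> observed frequency

    def lookup(self, query):
        key = query.strip().lower()   # crude normalization, for illustration
        self.history[key] = self.history.get(key, 0) + 1
        return self.results.get(key)

    def store(self, query, result):
        key = query.strip().lower()
        if key not in self.results and len(self.results) >= self.capacity:
            # Evict the entry whose history predicts the least future reuse.
            victim = min(self.results, key=lambda k: self.history.get(k, 0))
            del self.results[victim]
        self.results[key] = result
```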

  3. Issues • Matching and reuse of cached items • Choice of items to cache

  4. Matching • Integrated into optimization • Hash-based storage of DAGs and plans • New plans unified with old identical plans • Cache items chosen in cost-based manner
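The hash-based unification can be pictured as hash-consing of plan/DAG nodes: a node is keyed by its operator and its already-unified children, so an incoming plan that repeats a subexpression resolves to the node already in the store. The sketch below is an assumed illustration, not the system's actual data structures.

```python
# Sketch of hash-based DAG/plan storage: identical subexpressions map to the
# same node object, so a new plan is "unified" with previously seen plans.

class DagNode:
    def __init__(self, op, children):
        self.op = op
        self.children = children

class DagStore:
    def __init__(self):
        self._table = {}              # (op, child identities) -> canonical node

    def make_node(self, op, children=()):
        # Children are assumed to be canonical already, so object identity works.
        key = (op, tuple(id(c) for c in children))
        node = self._table.get(key)
        if node is None:
            node = DagNode(op, tuple(children))
            self._table[key] = node   # first occurrence of this subexpression
        return node                   # reused if an identical plan node exists

# Usage: two queries sharing the same scan unify on a single node.
store = DagStore()
scan = store.make_node("scan(orders)")
q1 = store.make_node("select[price>10]", [scan])
q2 = store.make_node("groupby[cust]", [scan])
assert store.make_node("scan(orders)") is scan
```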

  5. Review of MQO • Basic idea • Sharable nodes considered for caching • Benefit of all subsets computed, choose best set • Greedy heuristic: take highest benefit node at each step • Several optimizations included
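The greedy step can be sketched as follows, with `benefit`, `workload_cost`, and `materialization_cost` standing in for the optimizer's cost-based estimates (assumed interfaces, not the paper's API): repeatedly materialize the sharable node with the highest positive benefit until no node helps.

```python
# Hedged sketch of the greedy MQO heuristic: at each step pick the sharable
# node whose materialization gives the largest benefit to the workload.

def benefit(node, materialized, workload_cost):
    # Assumed interface: workload cost with and without `node` materialized,
    # minus the (assumed) cost of materializing it.
    return (workload_cost(materialized)
            - workload_cost(materialized | {node})
            - node.materialization_cost)

def greedy_select(sharable_nodes, workload_cost):
    materialized = set()
    while True:
        best, best_gain = None, 0
        for node in set(sharable_nodes) - materialized:
            gain = benefit(node, materialized, workload_cost)
            if gain > best_gain:
                best, best_gain = node, gain
        if best is None:
            return materialized       # no remaining node improves the plan
        materialized.add(best)
```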

  6. Adaptation • Characterizing the query workload • Weighted set of queries - frequency based • Candidates for caching are varied
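One assumed way to build such a frequency-weighted representative workload from the recent history, with queries represented here as plain strings:

```python
# Sketch: each distinct query in the window appears once, weighted by its
# relative frequency; caching benefits are then computed against this set.

from collections import Counter

def representative_set(query_history):
    counts = Counter(query_history)
    total = sum(counts.values())
    return [(q, n / total) for q, n in counts.items()]

workload = ["Q1", "Q2", "Q1", "Q3", "Q1"]
print(representative_set(workload))   # [('Q1', 0.6), ('Q2', 0.2), ('Q3', 0.2)]
```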

  7. Local Commonality • Use small window • Candidate set: current cache contents + new execution plan • Make greedy choice on this set • Re-check if old nodes are relevant (cleanup) • Any nodes in current plan worth caching? (scavenge) • Metric: benefit to representative set.
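A hedged sketch of this per-query update, with `benefit`, `node.size`, and the capacity accounting as assumed interfaces: a greedy pass over cache-plus-new-plan yields the set worth keeping, from which the cleanup (evictions) and scavenge (additions) steps follow.

```python
# Sketch of the local decision: candidate set = current cache contents plus
# the nodes of the newly optimized plan; choose greedily by benefit.

def local_cache_update(cache, new_plan_nodes, benefit, capacity_left):
    candidates = set(cache) | set(new_plan_nodes)
    keep = set()
    # Greedy: take the highest-benefit candidate while space remains.
    for node in sorted(candidates, key=benefit, reverse=True):
        if benefit(node) <= 0:
            break                     # remaining candidates are not worth caching
        if node.size <= capacity_left:
            keep.add(node)
            capacity_left -= node.size
    evicted = set(cache) - keep       # cleanup: cached nodes no longer relevant
    added = keep - set(cache)         # scavenge: useful nodes from the new plan
    return keep, added, evicted
```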

  8. Disadvantage • DAG is small - no long term patterns • Candidate set is small - only local minimum • Similar to the "quick-and-dirty" Volcano-RU method

  9. Global Commonality • Dynamic "view selection" • Large DAG, full-scale MQO • Candidate set includes all sharable nodes • Extended-predictive: no immediate caching • compute and materialize during slack time • cache on first use
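A sketch of the extended-predictive policy under assumed interfaces (`select_views` standing in for full-scale MQO over the large global DAG): chosen views stay pending rather than being materialized immediately, and get computed either during idle (slack) time or on their first use.

```python
# Sketch of extended-predictive caching: defer materialization of the views
# selected from the global DAG to slack time or to the first actual use.

class ExtendedPredictiveCache:
    def __init__(self, select_views):
        self.select_views = select_views   # assumed: MQO-style view selection
        self.pending = set()               # chosen but not yet materialized
        self.materialized = {}             # view -> cached result

    def replan(self, global_dag):
        # Dynamic "view selection" over the long-term (global) query pattern.
        self.pending = set(self.select_views(global_dag)) - set(self.materialized)

    def on_idle(self, compute):
        # Slack time: materialize one pending view per idle tick.
        if self.pending:
            view = self.pending.pop()
            self.materialized[view] = compute(view)

    def on_use(self, view, compute):
        # First use: materialize on demand and keep the result for later queries.
        if view in self.pending:
            self.pending.discard(view)
            self.materialized[view] = compute(view)
        return self.materialized.get(view)
```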

  10. Status • In Progress!
