
XJoin : Getting Fast Answers From Slow and Bursty Networks


Presentation Transcript


  1. XJoin: Getting Fast Answers From Slow and Bursty Networks. T. Urhan, M. J. Franklin, IACS, CSD, University of Maryland. Presented by: Abdelmounaam Rezgui. CS-TR-3994

  2. The Problem: How to improve the interactive performance of queries over widely distributed data sources?

  3. The Problem. [Figure: relations R and S receiving tuples from remote Source A and Source B.]

  4. Why is the response time unpredictable? Remote sources, intermediate sites, and communication links are vulnerable to overloading, congestion, and failures. The result: significant and unpredictable delays, and unresponsive, unusable systems.

  5. Different classes of delays: • Initial delay: a longer-than-expected wait to receive the first tuple. • Slow delivery: data arrive at a fairly constant but slower-than-expected rate. • Bursty arrival: bursts of data followed by long periods of no arrivals.

  6. Some Join variants: • Nested Loops Join • Block Nested Loops Join • Index Nested Loops Join • Sort-Merge Join • Classic Hash Join • Simple Hash Join • Grace Hash Join • Hybrid Hash Join (HHJ) • TID Hash Join • Symmetric Hash Join (SHJ) • XJoin

  7. Query Scrambling reacts to data delivery problems by on-the-fly rescheduling of query operators and restructuring of the query execution plan. • improves the response time for the entire query • may slow down the return of some initial results. To be presented on November 22, 1999.

  8. Traditional query processing techniques: • reduce the memory requirements • reduce disk I/O • deliver the entire query result (while on-line users would like to receive initial results as soon as possible). • Slow and bursty delivery of data from remote sources can stall query execution.

  9. XJoin: Fundamental principles. • improves the interactive performance by producing results incrementally (as they become available) • allows progress to be made even when one or more sources experience delays (delays are exploited to produce more tuples earlier)

  10. XJoin: The key idea. When inputs are delayed, run background processing on the previously received tuples.

  11. XJoin: The challenges. • Managing the flow of tuples between memory and secondary storage. • Controlling the background processing. • Full answer (all the tuples are produced). • No duplicate tuples are generated.

  12. SHJoin (Symmetric Hash Join). [Figure: tuples from Source 1 and Source 2 are inserted into Hash table 1 and Hash table 2 and matched against each other.]

  13. SHJoin requires the hash tables for both of its inputs to be memory-resident, which is unacceptable for complex queries.
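The insert-and-probe pipeline behind SHJ can be sketched as follows; this is an illustrative rendering, not the paper's implementation, and the event-stream interface is an assumption made here for brevity:

```python
from collections import defaultdict

def symmetric_hash_join(events, key_a, key_b):
    """Minimal sketch of the Symmetric Hash Join: one in-memory hash
    table per input; every arriving tuple is inserted into its own
    table and immediately probes the other, so results stream out as
    soon as matches exist. `events` yields ('A', tuple) or ('B', tuple)
    pairs in arrival order (a hypothetical interface)."""
    table_a = defaultdict(list)   # hash table over input A
    table_b = defaultdict(list)   # hash table over input B
    for side, tup in events:
        if side == 'A':
            k = key_a(tup)
            table_a[k].append(tup)            # insert into own table
            for other in table_b.get(k, ()):  # probe the other table
                yield (tup, other)
        else:
            k = key_b(tup)
            table_b[k].append(tup)
            for other in table_a.get(k, ()):
                yield (other, tup)
```

Because both tables live entirely in memory, the sketch also makes SHJ's weakness visible: memory grows with both inputs, which motivates XJoin's partitioning.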

  14. XJoin. Partitioning: • each input is partitioned into a number of partitions based on a hash function. • each partition i of source A, PiA, is split into a memory-resident portion MPiA and a disk-resident portion DPiA: PiA = MPiA ∪ DPiA, with MPiA ∩ DPiA = ∅.
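The partitioning invariant above (each partition = disjoint memory and disk portions, with flushes when memory fills) can be sketched like this; the class, its fields, and the flush-the-largest-partition policy are illustrative assumptions, with disk simulated by plain lists:

```python
class PartitionedInput:
    """Sketch of XJoin's per-source partitioning: partition i is split
    into a memory-resident part MP_i and a disk-resident part DP_i.
    When the memory budget is exceeded, one memory partition is
    flushed to its disk portion (here: the largest one)."""
    def __init__(self, n_partitions, mem_budget):
        self.mp = [[] for _ in range(n_partitions)]  # MP_i portions
        self.dp = [[] for _ in range(n_partitions)]  # DP_i portions
        self.n = n_partitions
        self.mem_budget = mem_budget                 # tuples, for simplicity

    def insert(self, tup, key):
        i = hash(key(tup)) % self.n
        self.mp[i].append(tup)
        if sum(len(p) for p in self.mp) > self.mem_budget:
            # flush the largest memory partition to disk
            j = max(range(self.n), key=lambda k: len(self.mp[k]))
            self.dp[j].extend(self.mp[j])
            self.mp[j].clear()
        return i
```

Every tuple stays in exactly one of MP_i or DP_i, which preserves the disjointness invariant the later stages rely on.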

  15. [Figure: tuples from SOURCE-A and SOURCE-B are hashed (e.g., hash(Tuple A) = 1, hash(Tuple B) = n) into memory-resident partitions 1..n of each source; when memory fills, partitions are flushed to their disk-resident counterparts.]

  16. Stage 1: Memory-to-memory joins. [Figure: an arriving tuple A with hash(A) = i is inserted into memory partition i of source A and probes memory partition i of source B; symmetrically, a tuple B with hash(B) = j is inserted into partition j of source B and probes partition j of source A. Matches go to the output.]

  17. Stage 2: Disk-to-memory joins. [Figure: the disk-resident portion DPiA of partition i of source A is read from disk and joined against the memory-resident portion MPiB of the corresponding partition of source B.]
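One second-stage pass, as described above, can be sketched as a simple scan of a disk portion against the matching memory portion; this is illustrative only, and deliberately omits the duplicate filtering and activation-threshold control that the real operator applies:

```python
def stage2_pass(dp_a_i, mp_b_i, key_a, key_b):
    """Sketch of one XJoin second-stage pass: while both inputs are
    stalled, stream the disk-resident portion DP_i^A of one partition
    through memory and probe the matching memory-resident portion
    MP_i^B of the other input."""
    out = []
    for tup_a in dp_a_i:                 # read DP_i^A from disk
        out.extend((tup_a, tup_b)        # emit every match found in memory
                   for tup_b in mp_b_i
                   if key_a(tup_a) == key_b(tup_b))
    return out
```

The pass touches only one partition pair at a time, which is what lets stage 2 run opportunistically during network delays without a large memory footprint.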

  18. Stage 3: Clean-up. • Stage 1 fails to join tuples that were not in memory at the same time. • Stage 2 fails to join two tuples if one of them is not in memory when the other is brought from disk. • Stage 3 joins all the partitions (memory-resident and disk-resident portions) of the two sources.

  19. Handling duplicates. Two options: • a counter • timestamps. With timestamps, each tuple X carries an arrival timestamp (ATS) and a departure timestamp (DTS) delimiting its memory-resident interval. [Figure: example tuple X with ATS = 99, DTS = 235.]

  20. Detecting tuples joined in the 1st stage. Each tuple's [ATS, DTS] interval records when it was memory-resident. Tuples whose intervals overlap were joined in the first stage; tuples with non-overlapping intervals were not. [Figure: Tuple A with ATS = 102, DTS = 234, overlapping one source-B tuple and not another.]
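The first-stage duplicate test above reduces to an interval-overlap check; a minimal sketch, with tuples represented as bare (ATS, DTS) pairs for illustration:

```python
def joined_in_stage1(a, b):
    """Sketch of XJoin's first-stage duplicate test: two tuples can
    only have been joined by stage 1 if their memory-resident windows
    [ATS, DTS] overlap. `a` and `b` are (ATS, DTS) pairs."""
    ats_a, dts_a = a
    ats_b, dts_b = b
    # Intervals overlap iff each starts before the other ends.
    return ats_a <= dts_b and ats_b <= dts_a
```

During stages 2 and 3, a candidate pair that passes this test is discarded, since stage 1 must already have produced it.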

  21. Detecting tuples joined in the 2nd stage. Each partition keeps a history list of its stage-2 runs, recording DTSlast and ProbeTS values. A pair was already joined in the second stage if some history entry overlaps the tuples' [ATS, DTS] intervals. [Figure: Tuple A with ATS = 100, DTS = 200 and Tuple B with ATS = 500, DTS = 600 checked against the history list for the corresponding partitions.]
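A hedged sketch of that second-stage test, under the following assumptions made here: each history entry is a (DTS_last, ProbeTS) pair, where DTS_last bounds which disk tuples a past run processed and ProbeTS records when it probed memory; field names and the exact comparison are illustrative, not lifted verbatim from the paper:

```python
def joined_in_stage2(tuple_a, tuple_b, history):
    """Sketch of XJoin's second-stage duplicate test. A disk tuple `a`
    was already joined with memory tuple `b` in an earlier stage-2 run
    if some run covered `a` (a.DTS <= DTS_last) while `b` was
    memory-resident (b.ATS <= ProbeTS <= b.DTS). Tuples are
    (ATS, DTS) pairs; `history` is a list of (DTS_last, ProbeTS)."""
    for dts_last, probe_ts in history:
        if tuple_a[1] <= dts_last and tuple_b[0] <= probe_ts <= tuple_b[1]:
            return True
    return False
```

Keeping one small history list per partition is what lets XJoin run stage 2 repeatedly without ever emitting a result tuple twice.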

  22. Optimization 1: Adding a cache. • Stage 2 joins DPiA and MPiB. • Tuples of DPiA are discarded after use. The idea: retain (cache) some tuples of DPiA; they could be used by a subsequent run of stage 2 joining DPiB and MPiA.

  23. [Figure: first and second runs of stage 2. In the first run, disk partition i of source A probes memory partition i of source B and some of its tuples are retained in a cache; in the second run, disk partition i of source B probes both memory partition i of source A and the cached tuples.]

  24. Optimization 2: Controlling Stage 2. • The overhead incurred by Stage 2 is hidden only when both inputs experience delays. • Reduce the aggressiveness of Stage 2. • Use a dynamic activation threshold (e.g., 0.01 → 0.02).

  25. Experiment Environment: PREDATOR, an object-relational DBMS. • XJoin operator added. • Query optimizer extended to: account for XJoin; provide some of the statistics and calculations required by XJoin.

  26. Arrival Patterns. Two have been chosen: Fig. 1: Bursty arrival, avg. rate 23.5 KB/s. Fig. 2: Fast arrival, avg. rate 129.6 KB/s.

  27. 100,000-tuple Wisconsin benchmark relations. • Each tuple: 288 bytes. • Unique unclustered integer join attribute. • Result cardinality: 100,000. • Sun Ultra 5 workstation: Solaris 2.6, 128 MB of real memory, approx. 4 GB of disk space, 8 KB disk and memory pages, 800 KB storage manager buffer.

  28. Results. Experiment 1: Basic performance of XJoin. • Memory space allocated to the join operators: 3 MB. • Input relations: 28.8 MB each. • Activation threshold (of stage 2): 0.01. • 4 delay scenarios.

  29. [Figure-only slide.]

  30. Case 1: Slow Network (both sources are slow). • XJoin improves the delivery time of initial answers. • The reactive background processing is an effective way to exploit delays. • The use of the cache can further improve performance.

  31. Case 2: Mixed Network (slow build/fast probe, fast build/slow probe). • XJoin variants perform better. • Compared with Case 1, the XJoin variants that use the 2nd stage perform better.

  32. Case 3: Fast Network (both sources are fast). • XJoin variants deliver initial results earlier. • HHJ delivers the 2nd half of the result faster than XJoin-NoCache and XJoin. • XJoin-No2nd delivers the last 60% of the result faster than the other XJoin variants.

  33. Experiment 2: Controlling the 2nd stage. Fig. 7: Slow relations. Fig. 8: Fast relations. • Stage 2 improves interactive performance with slow and bursty data sources. • It degrades the overall response time in the case of fast/reliable sources.

  34. • Stage 2 should be employed less aggressively (less often). • A dynamic activation threshold.

  35. XJoin-Dyn • is aggressive in the early stages of the query. • becomes less aggressive as more of the result is produced. • starts with a low activation threshold (0.01) and then linearly increases it to 0.02.
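The linear schedule described for XJoin-Dyn can be sketched as a one-line interpolation; the function name and the "fraction of expected result produced" driver are assumptions made here, and the paper's exact schedule may differ:

```python
def dynamic_threshold(produced, expected, low=0.01, high=0.02):
    """Sketch of XJoin-Dyn's activation threshold: start aggressive
    (low threshold) and grow linearly toward `high` as the fraction
    of the expected result already produced increases."""
    frac = min(produced / expected, 1.0) if expected else 1.0
    return low + (high - low) * frac
```

Early in the query the threshold is low, so stage 2 fires readily and initial results arrive fast; later it rises, curbing stage-2 overhead once interactive benefit has diminished.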

  36. Experiment 3: the effect of memory size. • Recall: the prime motivation for designing XJoin was the huge memory requirement of the symmetric hash join. • XJoin reduces the memory requirements but adds overhead (disk I/O and duplicate detection).

  37. Size of the input relations: 8.6 MB. Three different memory allocations: • 3 MB (neither input fits in memory) • 10 MB (one input fits in memory) • 20 MB (both inputs fit in memory). Fig. 9: Slow network, varying memory. Fig. 10: Fast network, varying memory.

  38. XJoin performs better in both interactive performance and completion time.

  39. Experiment 4: impact of query complexity. • 2 to 6 relations (1 to 5 joins). • 3 MB to each join operator. Fig. 11: Tuple production rates of XJoin and HHJ (secs), slow network.

  40. Experiment 4 (cont.). Fig. 12: Tuple production rates of XJoin and HHJ (secs), fast network. XJoin delivers the initial results faster.

  41. Conclusions. XJoin: an effective query processing technique for providing fast query responses to users in the presence of slow and bursty remote sources.

  42. XJoin • lowers the memory requirements (partitioning) • improves the interactive performance • reacts to delays and takes advantage of silent periods to produce more tuples faster.

  43. Perspectives. What do you think about PJoin: A Multithreaded Parallel XJoin Using the Cilk Language?
