
Column-Stores vs. Row-Stores: How Different Are They Really?


Presentation Transcript


  1. Column-Stores vs. Row-Stores: How Different Are They Really? Daniel J. Abadi, Samuel Madden, and Nabil Hachem, SIGMOD 2008 Presented by 傅志仁 (0256818)

  2. Contents • Row-store vs. Column-store introduction • Column-store data model : C-Store • Star Schema Benchmark : SSBM • Emulation of Column-store in Row-store • Column-store optimization • Conclusion

  3. Row vs. Column Data Storage • Row Storage - The data sequence consists of the data fields in one table row. • Column Storage - The data sequence consists of the entries in one table column.
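The contrast can be sketched in a few lines of Python; the EMP-style table and its values below are illustrative, not taken from the paper:

```python
# The same table serialized two ways. Each row is (name, age, salary, dept).
rows = [
    ("alice", 30, 50000, "sales"),
    ("bob",   41, 60000, "eng"),
]

# Row storage: the data sequence is the fields of one row, then the next.
row_layout = [field for row in rows for field in row]

# Column storage: the data sequence is the entries of one column,
# then the next column.
col_layout = [row[i] for i in range(4) for row in rows]

# The first 4 entries of row_layout are one complete row;
# the first 2 entries of col_layout are the whole name column.
```

Reading one full record touches a contiguous stretch of `row_layout`, while scanning a single attribute touches a contiguous stretch of `col_layout`, which previews the OLTP/OLAP split on the next slides.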

  4. Row storage is most useful for traditional OLTP queries: for these it is advantageous to store all attributes of a row side by side in row tables. • Column storage is most useful for OLAP queries, because these queries read just a few attributes from every data entry.

  5. OLTP vs. OLAP

  6. OLTP (On-line Transaction Processing) is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is very fast query processing, maintaining data integrity in multi-access environments, and effectiveness measured by the number of transactions per second. An OLTP database holds detailed, current data, and the schema used to store transactional data is the entity model (usually 3NF). • OLAP (On-line Analytical Processing) is characterized by a relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems, response time is the effectiveness measure. OLAP applications are widely used by data mining techniques. An OLAP database holds aggregated, historical data, stored in multi-dimensional schemas (usually a star schema).

  7. Applications for column-stores • Data Warehousing • High end (clustering) • Mid end/Mass Market • Personal Analytics • Data Mining • E.g. Proximity • Google BigTable • RDF • Semantic web data management • Information retrieval • Terabyte TREC • Scientific datasets • SciDB initiative • Sloan Digital Sky Survey on MonetDB

  8. Column-store data model : C-Store • Standard relational logical data model • EMP(name, age, salary, dept) • DEPT(dname, floor) • Table – collection of projections • Projection – set of columns • Horizontally partitioned into segments with segment identifier

  9. To answer queries, projections are joined using storage keys and join indexes • Storage Keys: • Within a segment, every data value of every column is associated with a unique storage key (Skey) • Values from different columns with matching Skeys belong to the same logical row

  10. Join Indexes • T1 and T2 are projections on T • M segments in T1 and N segments in T2 • A join index from T1 to T2 is a table of the form: • (s: segment ID in T2, k: storage key in segment s) • Each row in the join index matches the corresponding row in T1 • Join indexes are built such that T can be efficiently reconstructed from T1 and T2

  11. Construct EMP(name, age, salary) from EMP1 and EMP3 using join index on EMP3
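A minimal sketch of this reconstruction in Python, assuming a single segment; the projections EMP1(name, age) and EMP3(salary) and their values are hypothetical, not taken from the paper:

```python
# Hedged sketch of C-Store join-index reconstruction (single segment,
# hypothetical data). EMP1 stores (name, age); EMP3 stores (salary).
emp1 = {"name": ["alice", "bob"], "age": [30, 41]}   # skeys 0, 1
emp3 = {"salary": [60000, 50000]}                    # skeys 0, 1

# Join index from EMP1 to EMP3: for each row of EMP1 (in skey order),
# the (segment id, storage key) of the matching row in EMP3.
join_index = [(0, 1), (0, 0)]   # alice -> skey 1, bob -> skey 0

# Reconstruct EMP(name, age, salary) by following the join index.
emp = [
    (emp1["name"][i], emp1["age"][i], emp3["salary"][skey])
    for i, (seg, skey) in enumerate(join_index)
]
# emp == [("alice", 30, 50000), ("bob", 41, 60000)]
```

In a real C-Store instance the join index also carries the target segment ID, so the lookup spans multiple segments; the single-segment case above keeps only the storage-key indirection.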

  12. Why C-Store? • C-Store is a more recent column-oriented DBMS. It includes many of the same features as MonetDB/X100, as well as optimizations for direct operation on compressed data. • Like the other two systems, it shows that a column-store can dramatically outperform a row-store on warehouse workloads, but doesn’t carefully explore the design space of feasible row-store physical designs.

  13. Star Schema Benchmark • SSBM is a data warehousing benchmark derived from TPC-H • It consists of a single fact table, LINEORDER • There are four dimension tables: • CUSTOMER • PART • SUPPLIER • DATE • The LINEORDER table consists of 60,000,000 tuples • SSBM consists of thirteen queries divided into four categories

  14. Queries: The SSBM consists of thirteen queries divided into four categories, or “flights”: • Flight 1 contains 3 queries. Queries have a restriction on 1 dimension attribute, as well as on the DISCOUNT and QUANTITY columns of the LINEORDER table. Queries measure the gain in revenue (the product of EXTENDEDPRICE and DISCOUNT) that would be achieved if various levels of discount were eliminated for various order quantities in a given year. The LINEORDER selectivities for the three queries are 1.9×10^-2, 6.5×10^-4, and 7.5×10^-5, respectively. • Flight 2 contains 3 queries. Queries have a restriction on 2 dimension attributes and compute the revenue for particular product classes in particular regions, grouped by product class and year. The LINEORDER selectivities for the three queries are 8.0×10^-3, 1.6×10^-3, and 2.0×10^-4, respectively.

  15. Flight 3 consists of 4 queries, with a restriction on 3 dimensions. Queries compute the revenue in a particular region over a time period, grouped by customer nation, supplier nation, and year. The LINEORDER selectivities for the four queries are 3.4×10^-2, 1.4×10^-3, 5.5×10^-5, and 7.6×10^-7, respectively. • Flight 4 consists of 3 queries. Queries restrict on three dimension columns, and compute profit (REVENUE - SUPPLYCOST) grouped by year, nation, and category for query 1; and for queries 2 and 3, by region and category. The LINEORDER selectivities for the three queries are 1.6×10^-2, 4.5×10^-3, and 9.1×10^-5, respectively.

  16. Simulate a Column-Store inside a Row-Store

  17. Simulate a Column-Store inside a Row-Store

  18. Experiments • Star Schema Benchmark (SSBM) • Implemented by professional DBA • Original row-store plus 2 column-store simulations on same row-store product

  19. What’s Going On? Vertical Partitions • Vertical partitions in row-stores: • Work well when the workload is known • …and queries access disjoint sets of columns • See automated physical design • Do not work well as full columns • TupleID overhead is significant • Excessive joins
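The TupleID overhead above can be illustrated with a toy vertical partitioning in Python (hypothetical data; in the paper's emulation each partition is a separate physical table of (tuple_id, value) pairs):

```python
# Sketch of the "vertical partitioning" emulation: each column becomes
# a two-column (tuple_id, value) table, and queries must join the
# partitions back together on tuple_id. Data is hypothetical.
name_part = [(1, "alice"), (2, "bob"), (3, "carol")]
age_part  = [(1, 30), (2, 41), (3, 35)]

# SELECT name, age: requires a join on tuple_id (the overhead the
# slide points out), and the tuple_id column is stored once per
# partition, inflating the total size on disk.
ages = dict(age_part)
result = [(name, ages[tid]) for tid, name in name_part]
# result == [("alice", 30), ("bob", 41), ("carol", 35)]
```

Every additional selected column adds one more such join, which is why the approach degrades as full-column scans touch many attributes.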

  20. What’s Going On? All Indexes Case • Tuple construction • Common type of query: • The result of the lower part of the query plan is a set of TIDs that passed all predicates • Need to extract SELECT attributes at these TIDs • BUT: an index maps value to TID; you really want to map TID to value (i.e., a vertical partition) • Tuple construction is SLOW

  21. Third Approach: Materialized Views • Process: • Create an ‘optimal’ set of MVs for a given query workload • Objective: • Provide just the required data • Avoid overheads • Perform better • Expected to perform better than the other two approaches • Problems: • Practical only in limited situations • Requires knowledge of query workloads in advance

  22. Optimization for column oriented database • Compression • Late Materialization • Block Iteration • Invisible Join

  23. Compression • Low information entropy (high data value locality) leads to a high compression ratio • Advantages • Disk space is saved • Less I/O • CPU cost decreases if we can perform operations without decompressing • Lightweight compression schemes do better

  24. If data is sorted on one column, that column will be super-compressible in a row store • e.g., run-length encoding
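A minimal run-length encoding sketch in Python, using a hypothetical sorted column of years:

```python
# Run-length encoding of a sorted column: each run of equal values
# collapses to a single (value, run_length) pair, so a sorted column
# with few distinct values compresses extremely well.
def rle_encode(column):
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

sorted_col = [1992, 1992, 1992, 1993, 1993, 1997]
encoded = rle_encode(sorted_col)
# encoded == [(1992, 3), (1993, 2), (1997, 1)]
```

Operators can sum or filter directly over the (value, run_length) pairs without decompressing, which is the CPU saving mentioned on the previous slide.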

  25. Late Materialization • Most query results are entity-at-a-time, not column-at-a-time • So at some point multiple columns must be combined • One simple approach is to join the columns relevant to a particular query • But performance can be further improved using late materialization

  26. Delay Tuple Construction • Might avoid constructing tuples altogether • Intermediate position lists might need to be constructed • E.g.: SELECT R.a FROM R WHERE R.c = 5 AND R.b = 10 • The output of each predicate is a bit string • Perform a bitwise AND • Use the final position list to extract R.a
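The position-list steps above can be sketched as follows (the contents of R's columns are hypothetical):

```python
# Late materialization for: SELECT R.a FROM R WHERE R.c = 5 AND R.b = 10
# Each predicate yields a bit string over positions; ANDing them gives
# the final position list used to extract R.a. Data is made up.
a = [10, 20, 30, 40]
b = [10, 99, 10, 10]
c = [5, 5, 7, 5]

bits_c = [int(v == 5) for v in c]    # bit string for R.c = 5
bits_b = [int(v == 10) for v in b]   # bit string for R.b = 10

# Bitwise AND of the two bit strings.
final = [x & y for x, y in zip(bits_c, bits_b)]

# Only now materialize: extract R.a at the surviving positions.
positions = [i for i, bit in enumerate(final) if bit]
result = [a[i] for i in positions]   # == [10, 40]
```

Note that no (a, b, c) tuples are ever built; only positions and the single projected column flow through the plan.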

  27. Advantages • Unnecessary tuple construction is avoided • Direct operation on compressed data • Cache performance is improved (PAX)

  28. Block Iteration • Operators operate on blocks of tuples at once • Iterate over blocks rather than tuples • Like batch processing • If a column is fixed-width, it can be operated on as an array • Minimizes per-tuple overhead • Exploits potential for parallelism • Can be applied even in row stores – IBM DB2 implements it
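A rough illustration in Python, treating a fixed-width column as an array processed one block at a time (the block size of 256 is an arbitrary choice here):

```python
# Block iteration sketch: operate on blocks of column values instead
# of calling a per-tuple interface once per value.
from array import array

col = array("i", range(1000))   # fixed-width ints, addressable as an array
BLOCK = 256

total = 0
for start in range(0, len(col), BLOCK):
    block = col[start:start + BLOCK]   # one block of values
    # Tight loop over the block: per-tuple function-call overhead is
    # amortized across the whole block, and the values can be
    # processed in bulk (or in parallel by a real engine).
    total += sum(block)
# total == sum(range(1000)) == 499500
```

In a real column engine the inner loop is a compiled primitive over a contiguous buffer; the point of the sketch is only the block-at-a-time control flow.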

  29. Invisible Join • Queries over a data warehouse (particularly one modeled with a star schema) often have the following structure • Restrict the set of tuples in the fact table using selection predicates on dimension tables • Perform some aggregation on the restricted fact table • Often grouping by other dimension table attributes • For each selection predicate and for each aggregate grouping, a join between the fact table and a dimension table is required

  30. This query finds the total revenue from customers who live in Asia and who purchase a product supplied by an Asian supplier between the years 1992 and 1997 grouped by each unique combination of the nation of the customer, the nation of the supplier, and the year of the transaction.

  31. The traditional plan for this type of query is to pipeline joins in order of predicate selectivity • An alternative plan is the late materialized join technique • But both have disadvantages • The traditional plan lacks all the previously described advantages of late materialization • In the late materialized join technique, group-by columns need to be extracted in out-of-position order

  32. The invisible join is a late materialized join, but it minimizes the values that need to be extracted out of order • Invisible join • Rewrites joins into predicates on the foreign key columns in the fact table • These predicates are evaluated either by hash lookup • Or by between-predicate rewriting

  33. Between-Predicate Rewriting • Uses range predicates instead of hash lookups in phase 1 • Useful if a contiguous set of keys is valid after applying a predicate • Dictionary encoding for key reassignment if not contiguous • The query optimizer is not altered; the predicate is rewritten at runtime
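A hedged sketch of the invisible-join phases on toy data; table and column names loosely follow SSBM, but all values are made up:

```python
# Phase 1: apply the predicate to the dimension table and collect the
# qualifying keys (here, customers in Asia).
customer = {"custkey": [1, 2, 3], "region": ["ASIA", "EUROPE", "ASIA"]}
asia_keys = {k for k, r in zip(customer["custkey"], customer["region"])
             if r == "ASIA"}                       # {1, 3}

# Phase 2: the join is rewritten as a predicate on the fact table's
# foreign-key column, evaluated by hash lookup.
lo_custkey = [1, 2, 3, 1, 2]
bits = [int(k in asia_keys) for k in lo_custkey]   # bit string over positions
positions = [i for i, b in enumerate(bits) if b]

# Between-predicate rewriting: when the qualifying keys are contiguous
# (or made contiguous by dictionary re-encoding), the hash lookup can
# become a cheap range test such as lo <= k <= hi instead.

# Phase 3: extract other fact-table columns only at these positions.
lo_revenue = [100, 200, 300, 400, 500]
total = sum(lo_revenue[i] for i in positions)      # 100 + 300 + 400 = 800
```

Dimension values needed for grouping are fetched the same way in phase 3, which is why so few values are ever read out of position order.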

  34. Experiments • Compare the row-oriented approaches to the performance of C-Store on the SSBM, with the goal of answering four key questions: • How do the different attempts to emulate a column store in a row-store compare to the baseline performance of C-Store? • Is it possible for an unmodified row-store to obtain the benefits of column-oriented design? • Of the specific optimizations proposed for column-stores (compression, late materialization, and block processing), which are the most significant? • How does the cost of performing star schema joins in column-stores using the invisible join technique compare with executing queries on a denormalized fact table where the join has been pre-executed?

  35. Column-store performance - Results • Block processing improves performance by 5% to 50%, depending on the compression. • Compression improves performance by almost a factor of 2. • Late materialization improves performance by almost a factor of 3 because of the selective predicates. • The invisible join improves performance by 50-75%. • The most significant optimizations are compression and late materialization. • After all the optimizations are removed, the column store performs just like a row store. • Column-stores are faster than simulated column-stores.

  36. Effect on C-Store performance

  37. Conclusion • To emulate a column store in a row store, techniques like • Vertical partitioning • Index-only plans do not yield good performance • High per-tuple overheads and high tuple reconstruction costs are the reasons • Whereas in a column store • Late materialization • Compression • Block iteration • Invisible joins are the reasons for good performance

  38. Successful emulation of a column-store in a row-store requires some important system improvements • Virtual record-ids • Reduced tuple overhead • Fast merge joins • Run-length encoding across multiple tuples • Operating directly on compressed data • Block processing • Invisible joins • Late materialization

  39. Can row-stores simulate column-stores? • Might be possible, BUT: • Need better support for vertical partitioning at the storage layer • Need support for column-specific optimizations at the executor level • Full integration: buffer pool, transaction manager, …

  40. References [1] D. Abadi, et al., Column-Stores vs. Row-Stores: How Different Are They Really?, SIGMOD, 2008 [2] Wikipedia, http://en.wikipedia.org/wiki/Column-oriented_DBMS [3] https://cookbook.experiencesaphana.com/crm/what-is-crm-on-hana/technology-innovation/row-vs-column-based/ [4] http://datawarehouse4u.info/OLTP-vs-OLAP.html [5] http://nms.csail.mit.edu/~stavros/pubs/tutorial2009-column_stores.pdf [6] http://db.lcs.mit.edu/projects/cstore/abadisigmod06.pdf [7] Mike Stonebraker, et al., C-Store: A Column-oriented DBMS, VLDB, pages 553-564, 2005. [8] http://www.cse.iitb.ac.in/dbms/Data/Courses/CS632/Talks/Column-vs-Row.ppt [9] https://www.cs.helsinki.fi/webfm_send/1017/Sezin_presentation.pptx
