
Carnegie Mellon Univ. Dept. of Computer Science 15-415/615 – DB Applications

This presentation introduces data warehousing and data mining, focusing on data cubes, OLAP, and decision trees. It covers collecting and organizing data in a data warehouse and the use of decision trees for supervised learning, and closes with unsupervised learning and association rules.



  1. Carnegie Mellon Univ. Dept. of Computer Science 15-415/615 – DB Applications. Data Warehousing / Data Mining (R&G, chs. 25 and 26)

  2. Data mining - detailed outline • Problem • Getting the data: Data Warehouses, DataCubes, OLAP • Supervised learning: decision trees • Unsupervised learning • association rules • (clustering)

  3. Problem. Given: multiple data sources, e.g., sales(p-id, c-id, date, $price) and customers(c-id, age, income, ...), collected at sites like PGH, NY, SF. Find: patterns (classifiers, rules, clusters, outliers, ...)

  4. Data Warehousing. First step: collect the data in a single place (= Data Warehouse). How? How often? How about discrepancies / non-homogeneities?

  5. Data Warehousing. First step: collect the data in a single place (= Data Warehouse). How? A: Triggers / materialized views. How often? A: [Art!] How about discrepancies / non-homogeneities? A: Wrappers / mediators.

  6. Data Warehousing. Step 2: collect counts (DataCubes / OLAP). E.g.: [figure: an example table of counts]

  7. OLAP. Problem: “is it true that shirts in large sizes sell better in dark colors?” [figure: the sales table]

  8. DataCubes. ‘color’, ‘size’: DIMENSIONS; ‘count’: MEASURE. [figure: the counts f arranged along the ‘size’ and ‘color’ dimensions]

  13. DataCubes. ‘color’, ‘size’: DIMENSIONS; ‘count’: MEASURE. [figure: the full DataCube: counts by (color, size), by color alone, by size alone, and the grand total]

  14. DataCubes. SQL queries to generate the DataCube, naively (and painfully): one GROUP BY query per subset of the dimensions, i.e., 2^d queries for d dimensions:

  select size, color, count(*)
  from sales
  where p-id = ‘shirt’
  group by size, color

  select size, count(*)
  from sales
  where p-id = ‘shirt’
  group by size

  ...

  15. DataCubes. SQL query to generate the DataCube with the ‘cube by’ keyword (proposed by Gray et al.; SQL:1999 and most modern systems spell it GROUP BY CUBE):

  select size, color, count(*)
  from sales
  where p-id = ‘shirt’
  group by cube (size, color)
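
For a quick experiment outside the DBMS, a cube-like cross-tabulation can be computed with pandas; the following is a minimal sketch (not from the slides), assuming a toy sales table with size and color columns:

  import pandas as pd

  # toy sales records (made-up data)
  sales = pd.DataFrame({
      "size":  ["S", "S", "L", "L", "L", "M"],
      "color": ["dark", "light", "dark", "dark", "light", "dark"],
  })

  # cross-tab of counts; the "All" row/column play the role of the
  # cube's marginal aggregates (counts by size, by color, grand total)
  print(pd.crosstab(sales["size"], sales["color"], margins=True))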

  16. DataCubes. DataCube issues: Q1: How to store them (and/or materialize portions on demand)? Q2: Which operations to allow?

  17. DataCubes. DataCube issues: Q1: How to store them (and/or materialize portions on demand)? A: ROLAP/MOLAP. Q2: Which operations to allow? A: roll-up, drill-down, slice, dice. [More details: book by Han+Kamber]

  18. DataCubes. Q1: How to store a DataCube?

  19. DataCubes. Q1: How to store a DataCube? A1: Relational (R-OLAP)

  20. DataCubes. Q1: How to store a DataCube? A2: Multi-dimensional (M-OLAP). A3: Hybrid (H-OLAP)

  21. DataCubes. Pros/Cons: ROLAP strong points (DSS, Metacube):

  22. DataCubes. Pros/Cons: ROLAP strong points (DSS, Metacube): • use existing RDBMS technology • scale up better with dimensionality

  23. DataCubes. Pros/Cons: MOLAP strong points (EssBase / hyperion.com): • faster indexing (careful with: high dimensionality; sparseness). HOLAP (MS SQL Server OLAP services): • detail data in ROLAP; summaries in MOLAP

  24. DataCubes. Q1: How to store a DataCube. Q2: What operations should we support?

  25. DataCubes. Q2: What operations should we support? [figure: the DataCube over the ‘size’ and ‘color’ dimensions]

  26. DataCubes. Q2: What operations should we support? Roll-up. [figure: roll-up on the size/color cube]

  27. DataCubes. Q2: What operations should we support? Drill-down. [figure: drill-down on the size/color cube]

  28. DataCubes. Q2: What operations should we support? Slice. [figure: a slice of the size/color cube]

  29. DataCubes. Q2: What operations should we support? Dice. [figure: a dice of the size/color cube]

  30. DataCubes. Q2: What operations should we support? • Roll-up • Drill-down • Slice • Dice • (Pivot/rotate; drill-across; drill-through; top N; moving averages; etc.) (a code sketch of these follows below)
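
On a small table of per-(size, color) counts, these operations map onto familiar dataframe idioms; a minimal pandas sketch (my own, with hypothetical data and names, not from the slides):

  import pandas as pd

  # the base cuboid: one count per (size, color) cell (made-up numbers)
  cube = pd.DataFrame({
      "size":  ["S", "S", "L", "L"],
      "color": ["dark", "light", "dark", "light"],
      "cnt":   [10, 25, 40, 7],
  })

  rollup = cube.groupby("color")["cnt"].sum()    # roll-up: aggregate 'size' away
  slc = cube[cube["color"] == "dark"]            # slice: fix one dimension
  dice = cube[(cube["color"] == "dark") & (cube["size"] == "L")]  # dice: restrict several
  # drill-down would need finer-grained base data (e.g., per-date counts)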

  31. D/W - OLAP - Conclusions • D/W: copy (summarized) data + analyze • OLAP concepts: • DataCube • R/M/H-OLAP servers • ‘dimensions’; ‘measures’

  32. Outline • Problem • Getting the data: Data Warehouses, DataCubes, OLAP • Supervised learning: decision trees • Unsupervised learning • association rules • (clustering)

  33. Decision trees - Problem. [figure: a table of labeled records, plus a new record whose ‘??’ label we want to predict]

  34. Decision trees • Pictorially, we have: [figure: a scatter plot of ‘+’/‘-’ points over two numeric attributes, e.g., attr#1 = ‘age’, attr#2 = ‘chol-level’]

  35. Decision trees • and we want to label ‘?’: [figure: the same scatter plot with a new, unlabeled point ‘?’]

  36. Decision trees • so we build a decision tree: [figure: the scatter plot partitioned by splits at age = 50 and chol-level = 40]

  37. Decision trees • so we build a decision tree: [figure: the tree itself, testing age < 50 at the root and chol. < 40 below, with ‘+’/‘-’ leaves]
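
A tree like this can be grown with an off-the-shelf learner; the following is a minimal sketch (not from the slides) using scikit-learn, with made-up (age, chol-level) points:

  import numpy as np
  from sklearn.tree import DecisionTreeClassifier, export_text

  # made-up training data: columns are (age, chol-level)
  X = np.array([[30, 60], [35, 80], [45, 70], [55, 30], [60, 35], [65, 20]])
  y = np.array(["+", "+", "+", "-", "-", "-"])

  clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
  print(export_text(clf, feature_names=["age", "chol"]))  # the learned splits
  print(clf.predict([[50, 50]]))  # label a new '?' point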

  38. Outline • Problem • Getting the data: Data Warehouses, DataCubes, OLAP • Supervised learning: decision trees • problem • approach • scalability enhancements • Unsupervised learning • association rules • (clustering)

  39. Decision trees • Typically, two steps: • tree building • tree pruning (to counter over-training / over-fitting)

  40. Tree building • How? [figure: the ‘+’/‘-’ scatter plot over age and chol-level again]

  41. Tree building • How? • A: Partition, recursively - pseudocode:

  Partition(Dataset S):
      if all points in S have the same label, return
      evaluate splits along each attribute A
      pick the best split, dividing S into S1 and S2
      Partition(S1); Partition(S2)
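
To make the pseudocode concrete, here is a small runnable version (an illustrative sketch with my own naming, not the slides' code): it handles numeric attributes with binary splits and picks the split with the lowest weighted child entropy:

  from collections import Counter
  import math

  def entropy(labels):
      # H = -sum p_i * log2(p_i) over the class proportions
      n = len(labels)
      return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

  def partition(S, depth=0):
      # S: list of (point, label); each point is a tuple of numeric attributes
      labels = [lab for _, lab in S]
      if len(set(labels)) == 1:  # pure node: stop
          print("  " * depth + "leaf:", labels[0])
          return
      best = None  # (weighted child entropy, attribute, threshold, S1, S2)
      for a in range(len(S[0][0])):  # evaluate splits along each attribute
          for t in sorted({p[a] for p, _ in S})[:-1]:  # candidate thresholds
              S1 = [(p, l) for p, l in S if p[a] <= t]
              S2 = [(p, l) for p, l in S if p[a] > t]
              score = (len(S1) * entropy([l for _, l in S1])
                       + len(S2) * entropy([l for _, l in S2])) / len(S)
              if best is None or score < best[0]:
                  best = (score, a, t, S1, S2)
      if best is None:  # identical points with mixed labels: majority leaf
          print("  " * depth + "leaf:", Counter(labels).most_common(1)[0][0])
          return
      _, a, t, S1, S2 = best
      print("  " * depth + f"split: attr#{a} <= {t}")
      partition(S1, depth + 1)  # recurse on both halves
      partition(S2, depth + 1)

  # tiny example: (age, chol-level) points
  partition([((30, 60), "+"), ((35, 80), "+"), ((55, 30), "-"), ((60, 35), "-")])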

  42. (Not In Exam = N.I.E.) Tree building • Q1: how to introduce splits along attribute Ai? • Q2: how to evaluate a split?

  43. N.I.E. Tree building • Q1: how to introduce splits along attribute Ai? • A1: • for numeric attributes: binary split (see the threshold sketch below), or multiple split • for categorical attributes: compute all subsets (expensive!), or use a greedy algorithm
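
For numeric attributes, one common way to enumerate binary splits is to try thresholds between consecutive distinct values; a minimal sketch (my own naming, not from the slides):

  def candidate_splits(values):
      # binary-split thresholds for a numeric attribute:
      # midpoints between consecutive distinct sorted values
      v = sorted(set(values))
      return [(a + b) / 2 for a, b in zip(v, v[1:])]

  print(candidate_splits([30, 35, 45, 55, 60]))  # [32.5, 40.0, 50.0, 57.5]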

  45. N.I.E. Tree building • Q1: how to introduce splits along attribute Ai? • Q2: how to evaluate a split? • A: by how close to uniform each subset is, i.e., we need a measure of uniformity:

  46. N.I.E. Tree building • entropy: H(p+, p−) = −p+ log2(p+) − p− log2(p−). [figure: H versus p+, from 0 at p+ = 0 or 1 up to 1 at p+ = 0.5] Any other measure?

  47. N.I.E. Tree building • entropy: H(p+, p−) • ‘gini’ index: 1 − p+² − p−². [figure: both curves versus p+; the gini index peaks at 0.5 for p+ = 0.5]

  48. N.I.E. Tree building • entropy: H(p+, p−) • ‘gini’ index: 1 − p+² − p−² • (How about multiple labels? Both generalize: H = −Σi pi log2 pi; gini = 1 − Σi pi².)

  49. N.I.E. Tree building Intuition: • entropy: #bits to encode the class label • gini: expected classification error, if we randomly guess ‘+’ with prob. p+
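
Both measures take only a few lines of code; a minimal sketch (mine, not from the slides) that also handles more than two labels:

  from collections import Counter
  import math

  def entropy(labels):
      # "#bits to encode the class label": -sum p_i * log2(p_i)
      n = len(labels)
      return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

  def gini(labels):
      # expected error of guessing class i with prob. p_i: 1 - sum p_i^2
      n = len(labels)
      return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

  print(entropy(["+", "-"]), gini(["+", "-"]))  # 1.0 and 0.5: the 50/50 worst case
  print(entropy(["+", "+"]), gini(["+", "+"]))  # 0 and 0: a pure node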

  50. N.I.E. Tree building Thus, we choose the split that reduces entropy / classification error the most. E.g.: [figure: the ‘+’/‘-’ scatter plot over age and chol-level, comparing candidate splits]
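
Concretely, ‘reduces entropy the most’ means maximizing the information gain of the split; a sketch with made-up labels (entropy as on slide 46):

  from collections import Counter
  import math

  def entropy(labels):
      n = len(labels)
      return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

  def information_gain(parent, left, right):
      # parent entropy minus the size-weighted entropy of the two children
      n = len(parent)
      return (entropy(parent)
              - (len(left) / n) * entropy(left)
              - (len(right) / n) * entropy(right))

  parent = ["+", "+", "+", "-", "-", "-"]
  print(information_gain(parent, ["+", "+", "+"], ["-", "-", "-"]))  # 1.0: a perfect split
  print(information_gain(parent, ["+", "-", "+"], ["-", "+", "-"]))  # ~0.08: a poor split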
