Part I: Introductory Materials
Introduction to Data Mining
Dr. Nagiza F. Samatova
Department of Computer Science, North Carolina State University
and Computer Science and Mathematics Division, Oak Ridge National Laboratory
Who are the data producers? What data? (Application → Data)
• Application Category: Finance
  • Producer: Wall Street
  • Data: stocks, stock prices, stock purchases, …
• Application Category: Academia
  • Producer: NCSU
  • Data: student admission data (name, DOB, GRE scores, transcripts, GPA, university/school attended, recommendation letters, personal statement, etc.)
Application Categories
• Finance (e.g., banks)
• Entertainment (e.g., games)
• Science (e.g., weather forecasting)
• Medicine (e.g., disease diagnostics)
• Cybersecurity (e.g., terrorists, identity theft)
• Commerce (e.g., e-Commerce)
• …
What questions to ask about the data? (Data → Questions)
• Academia : NCSU : Admission data
• Is there any correlation between students' GRE scores and their successful completion of a PhD program?
• What are the groups of students that share common academic performance?
• Are there any admitted students who stand out as an anomaly? What type of anomaly is it?
• If a student majors in Physics, what other major is he/she likely to double-major in?
Questions by Type
• Correlation, similarity, comparison, …
• Association, causality, co-occurrence, …
• Grouping, clustering, …
• Categorization, classification, …
• Frequency or rarity of occurrence, …
• Anomalous or normal objects, events, behaviors, …
• Forecasting: future classes, future activity, …
• …
What information do we need to answer the questions? (Questions → Data Objects and Object Features)
• Academia : NCSU : Admission data
• Objects: Students
• Object Features = Variables = Attributes = Dimensions, and their Types:
  • Name: String (e.g., Name = Neil Shah)
  • GPA: Numeric (e.g., GPA = 5.0)
  • Recommendation: Text (e.g., "… the top 2% in my career …")
  • Etc.
How to compare two objects? (Data Object → Object Pairs)
• Academia : NCSU : Admission data
• Objects: Students
• Based on a single feature:
  • Similar GPA
  • The same first letter in the last name
• Based on a set of features:
  • Similar academic records (GPA, GRE, etc.)
  • Similar demographic records
• Can you compute a numerical value for the similarity measure used for comparison? Why or why not?
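A minimal sketch of how such comparisons could be turned into numbers; the student records, field names, and values below are hypothetical, not actual admission data:

```python
# Hypothetical student records; names and scores are illustrative only.
s1 = {"name": "John Smith", "gpa": 3.8, "gre": 320}
s2 = {"name": "Jane Doe", "gpa": 3.5, "gre": 310}

# Single numeric feature: absolute difference in GPA (smaller means more similar).
gpa_diff = abs(s1["gpa"] - s2["gpa"])

# Single nominal feature: same first letter of the last name (only a yes/no answer).
same_letter = s1["name"].split()[-1][0] == s2["name"].split()[-1][0]

# A set of numeric features (academic record): sum of absolute differences.
academic_gap = sum(abs(s1[f] - s2[f]) for f in ("gpa", "gre"))

print(gpa_diff, same_letter, academic_gap)
```

Note that the nominal feature yields only a yes/no match, which is one reason a single numerical similarity value is not always available.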
How to represent data mathematically? (Data Object & its Features → Data Model)
• What mathematical objects have you studied?
  • Scalars
  • Points
  • Vectors
  • Vector spaces
  • Matrices
  • Sets
  • Graphs, networks (maybe)
  • Tensors (maybe)
  • Time series (maybe)
  • Topological manifolds (maybe)
  • …
Data object as a vector with components…
• Vector components: Features, or Attributes, or Dimensions
• City = (Latitude, Longitude): a 2-dimensional object
  • Raleigh = (35.46, 78.39)
  • Boston = (42.21, 71.5)
• Proximity(Raleigh, Boston) = ?
  • Geodesic distance
  • Euclidean distance
  • Length of the interstate route
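As a rough illustration, the Euclidean and geodesic options can be computed directly from the slide's coordinate pairs; the haversine helper below is one assumed way to approximate the geodesic distance on a spherical Earth, not the course's prescribed method:

```python
import math

# The two cities as 2-dimensional vectors (latitude, longitude in degrees), as on the slide.
raleigh = (35.46, 78.39)
boston = (42.21, 71.5)

# Euclidean distance in raw coordinate space (in "degrees", not a physical unit).
euclidean = math.dist(raleigh, boston)

# Geodesic (great-circle) distance on a spherical Earth via the haversine formula, in km.
def haversine_km(p, q, radius_km=6371.0):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

print(euclidean, haversine_km(raleigh, boston))
```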
A set of data objects as a vector space
[Figure: a 3-dimensional vector space with axes Latitude, Longitude, and Altitude; cities such as Moscow and Raleigh appear as points]
Mining such data ~ studying vector spaces
Multi-dimensional vectors…
• Vector components: Features, or Attributes, or Dimensions
• Student = (Name, GPA, Weight, Height, Income in K, …): a multi-dimensional object
  • S1 = (John Smith, 5.0, 180, 6.0, 200)
  • S2 = (Jane Doe, 3.0, 140, 5.4, 70)
• Proximity(S1, S2) = ?
• How to compare when vector components are of heterogeneous types or different scales?
• How to show the results of the comparison?
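One possible way to handle the heterogeneous scales, sketched under assumed feature ranges (the `ranges` values below are made up for illustration), is to drop the non-numeric component and rescale the remaining features to [0, 1] before measuring distance:

```python
# Student vectors from the slide: (Name, GPA, Weight, Height, Income in K).
s1 = ("John Smith", 5.0, 180, 6.0, 200)
s2 = ("Jane Doe", 3.0, 140, 5.4, 70)

# Assumed (min, max) range per numeric feature; illustrative, not from the slide.
ranges = [(0.0, 5.0), (100, 250), (4.5, 7.0), (0, 500)]

def scaled(student):
    """Drop the non-numeric Name and rescale each numeric feature to [0, 1]."""
    return [(x - lo) / (hi - lo) for x, (lo, hi) in zip(student[1:], ranges)]

a, b = scaled(s1), scaled(s2)
print(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5)   # Euclidean distance after scaling
```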
… as matrices
• Example: a collection of text documents on the Web
[Figure: original documents are parsed, then assembled into a t-d (term-document) matrix; Terms = Features = Dimensions]
Mining such data ~ studying matrices
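A small sketch of how such a term-document matrix might be built; the three documents and their wording are toy stand-ins, not the actual web collection from the figure:

```python
from collections import Counter

# A toy document collection standing in for the parsed web documents.
docs = {
    "D1": "the president won the election",
    "D2": "the party held the election in the district",
    "D3": "the climate and economy of the province",
}

counts = {d: Counter(text.split()) for d, text in docs.items()}
terms = sorted({t for c in counts.values() for t in c})   # Terms = Features = Dimensions

# Term-document matrix: one row per term, one column per document.
td_matrix = [[counts[d][t] for d in docs] for t in terms]
for term, row in zip(terms, td_matrix):
    print(f"{term:10s} {row}")
```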
… or as trees
[Figure: the terms of the t-d term-document matrix arranged as a tree; one branch holds terms such as president, government, party, election, elected, district, independence, vice, minister; another holds population, area, climate, city, province, land, topography, season, economy, products, growth, exports, rice, fish]
• Is D2 similar to D3?
• What if there are 10,000 terms?
Mining such data ~ studying trees
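For the "Is D2 similar to D3?" question, one common measure (not necessarily the one intended by the slide) is cosine similarity over the term-count vectors; it scales to 10,000 terms just as well as to a handful. A sketch with toy documents:

```python
import math
from collections import Counter

# Toy term-count vectors for two documents; a 10,000-term vocabulary works the same way.
d2 = Counter("the party held the election in the district".split())
d3 = Counter("the climate and economy of the province".split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors (1 = identical direction)."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine(d2, d3))
```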
… or as networks, or graphs with nodes & links
• Nodes = Documents
• Links = Document similarity (e.g., one document references another)
[Figure: a graph of documents; one document's terms include president, government, party, election, minister; another's include population, area, climate, economy, exports]
Mining such data ~ studying graphs, or graph mining
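A minimal sketch of building such a document-similarity graph; the term sets, the Jaccard similarity, and the 0.2 threshold are illustrative assumptions rather than the figure's actual construction:

```python
from itertools import combinations

# Toy term sets per document; nodes are documents and links mean "similar".
docs = {
    "D1": {"president", "government", "party", "election"},
    "D2": {"party", "election", "district", "minister"},
    "D3": {"population", "area", "climate", "economy"},
}

def jaccard(a, b):
    """Overlap of two term sets, between 0 (disjoint) and 1 (identical)."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.2                       # hypothetical cutoff for drawing a link
graph = {d: set() for d in docs}      # adjacency sets: node -> linked nodes
for u, v in combinations(docs, 2):
    if jaccard(docs[u], docs[v]) > THRESHOLD:
        graph[u].add(v)
        graph[v].add(u)
print(graph)
```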
What applications naturally deal with graphs?
• Semantic Web
• Social networks
• World Wide Web
• Drug design, chemical compounds
• Computer networks
• Sensor networks
Credit: images are from Google Images via keyword search
What questions to ask about graph data? (Graph Data → Graph Mining Questions)
• Academia : NCSU : Admission data
  • Nodes = students; links = similar academics/demographics
• How many distinct academically performing groups of students were admitted to NCSU?
• Which academic group is the largest?
• Given a new student applicant, can we predict which academic group the student will likely belong to?
• Do groups of students with similar demographics usually share similar academic performance?
• Over the last decade, has the diversity in demographics of accepted student groups increased or decreased?
• …
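Several of these questions reduce to finding groups in the similarity graph. As one hedged illustration (the tiny graph below is invented, and real group detection would likely use richer community-detection methods), connected components already answer "how many groups" and "which is largest":

```python
from collections import deque

# Hypothetical student-similarity graph: nodes are students, links mean similar academics.
graph = {
    "s1": {"s2"}, "s2": {"s1", "s3"}, "s3": {"s2"},   # one academic group
    "s4": {"s5"}, "s5": {"s4"},                       # a second group
    "s6": set(),                                      # isolated student, a possible anomaly
}

def components(g):
    """Connected components via breadth-first search."""
    seen, comps = set(), []
    for start in g:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node not in comp:
                comp.add(node)
                queue.extend(g[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

groups = components(graph)
print(len(groups), "groups; largest:", max(groups, key=len))
```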
Recap: Data Mining and Graph Mining
Application Data → Data Objects + Features → Questions → Mathematical Data Representation (Data Model)
• Possible data models: Vectors, Matrices, Sets, Graphs, Tensors, Time series, Manifolds
• No single model fits all
• More than one model may be needed
• The models are related
How much data?
• My laptop: 60 GB; 1 GB (GigaByte) = 10^9 bytes
• 1 TB (TeraByte) = 10^12 bytes
• 1 PB (PetaByte) = 10^15 bytes
• Example domains: ecology, biology, cosmology, astrophysics, climate, and the Web, with volumes ranging from 30 TB/day and 20-40 TB/simulation to 850 TB and 1 PB/year
It is not just the Size (petabytes of data), but the Complexity
• High-dimensional
• Noisy
• Non-linear correlations
• '+' and '−' feedbacks
Data Describes Complex Patterns/Phenomena
How to untangle the riddles of this complexity?
• A single gene is under complex regulation: out of ~30K genes, some 50 trans elements control the expression of a single gene
• Analytical tools that find the "dots" in the data significantly reduce the data
• Challenge: how to "connect the dots" to answer important science/business questions?
Finding the Dots → Connecting the Dots → Understanding the Dots
• Finding the Dots: sheer volume of data
  • Climate: now 20-40 Terabytes/year; in 5 years, 5-10 Petabytes/year
  • Fusion: now 100 Megabytes/15 min; in 5 years, 1000 Megabytes/2 min
• Connecting the Dots: advanced math + algorithms
  • Huge dimensional space
  • Combinatorial challenge
  • Complicated by noisy data
  • Requires high-performance computers
• Understanding the Dots: providing predictive understanding
  • Produce bioenergy
  • Stabilize CO2
  • Clean toxic waste
Why Would Data Mining Matter? It enables solving many large-scale data problems
Finding the Dots → Connecting the Dots → Understanding the Dots
• Science questions:
  • How to effectively produce bioenergy?
  • How to stabilize carbon dioxide?
  • How to convert toxic into non-toxic waste?
  • …
How to Move and Access the Data? Technology trends are a rate-limiting factor
• Data doubles every 9 months; CPU capability doubles every 18 months
• Most of these data will NEVER be touched!
• Naturally distributed, but effectively immovable
• Streaming/dynamic, but not re-computable
[Figure: storage performance (retrieval rate in MBytes/s vs. log10 of object size in bytes) for memory, disk, and tape, plus CPU/disk/network cost-performance trends; doubling times: CPU (MIPS/$M) every 1.2 years, disk (GB/$M) every 1.4 years, WAN (kB/s) every 0.7 years. Src: Richard Mount, SLAC]
J. W. Toigo, Avoiding a Data Crunch, Scientific American, May 2000
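A quick arithmetic check of the widening gap implied by those doubling times, taking the slide's 9-month and 18-month figures at face value:

```python
# Doubling times from the slide: data every 9 months, CPU capability every 18 months.
def growth_factor(years, doubling_months):
    return 2 ** (12 * years / doubling_months)

for years in (1, 3, 5):
    data = growth_factor(years, 9)
    cpu = growth_factor(years, 18)
    print(f"after {years} yr: data x{data:.1f}, CPU x{cpu:.1f}, gap x{data / cpu:.1f}")
```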
How to Make Sense of Data? Know Your Limits & Be Smart
• Human bandwidth overload: more data (megabytes → gigabytes → terabytes → petabytes) demands more analysis and scalability of analysis in full context
• It is not humanly possible to browse a petabyte of data: to see 1 percent of a petabyte at 10 megabytes per second takes 35 eight-hour days!
• Analysis must reduce data to quantities of interest
• Physical experiments: must be smart about probe placement!
• Ultrascale computations: must be smart about which probe combinations to see!
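The "35 eight-hour days" figure can be verified with a few lines of arithmetic:

```python
# The slide's estimate: scanning 1% of a petabyte at 10 megabytes per second.
petabyte = 10 ** 15                      # bytes
rate = 10 * 10 ** 6                      # bytes per second (10 MB/s)
seconds = 0.01 * petabyte / rate
print(seconds / 3600, "hours, i.e. about", round(seconds / (3600 * 8)), "eight-hour days")
```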
What Analysis Algorithms to Use? Even a simple big-O analysis can be illuminating
• If n = 10 GB, what is O(n) or O(n^2) on a teraflop computer? (1 GB = 10^9 bytes, 1 Tflop = 10^12 op/sec)
• Algorithmic complexity:
  • Calculate means: O(n)
  • Calculate FFT: O(n log n)
  • Calculate SVD: O(r · c)
  • Clustering algorithms: O(n^2)
• Running time vs. data size n (for illustration, the chart assumes 10^-12 sec of calculation time per data point, ~1 Tflop/sec):

  Data size n   O(n)        O(n log n)   O(n^2)
  100 B         10^-10 s    10^-10 s     10^-8 s
  10 KB         10^-8 s     10^-8 s      10^-4 s
  1 MB          10^-6 s     10^-5 s      1 s
  100 MB        10^-4 s     10^-3 s      3 hrs
  10 GB         10^-2 s     0.1 s        3 yrs

• Analysis algorithms fail for a few gigabytes of data.
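The chart's entries can be regenerated with a few lines, under the same assumption of 10^-12 seconds per data point; the O(n log n) column is order-of-magnitude, so small discrepancies with the slide are expected:

```python
import math

# Reproduce the slide's order-of-magnitude estimates: ~10^-12 sec per data point (1 Tflop/s).
T = 1e-12
sizes = {"100 B": 1e2, "10 KB": 1e4, "1 MB": 1e6, "100 MB": 1e8, "10 GB": 1e10}

for label, n in sizes.items():
    print(f"{label:>7}: O(n) = {n * T:.0e} s, "
          f"O(n log n) = {n * math.log2(n) * T:.0e} s, "
          f"O(n^2) = {n * n * T:.0e} s")
```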