Data Mining

Presentation Transcript


  1. Data Mining Peter Fox Data Science – ITEC/CSCI/ERTH-4750/6750 Week 9, October 29, 2013

  2. 2013 Project Teams (final) • Laura K, Quan W, Junbin S, Varghese A, Han L, Yudong Z • Ankita K, Dan B, Boilang Z, Xiaoman P, Mike M, Jianguo Z • Dan F, Yao D, Gouravjeet S, Mengyu Y, Altan G, Miaomiao Z • Anusha A, Cheng J, Hao L, Nikhil S, Jon D, Jun X • Chengcong D, Sisi L, Ben P, Jianing Z, Mayur R, Brendan A • Matt F, Akshay K, Lu D, Tej S, Ledong Z, Shiyao X • Wenli L, Matthew K, Krishna A, Halley C, Oskari R, Harsha V • Melissa H, Amar K, Fang W, Waqas B, Tao C, Jun Z • Caroline H, Qi L, Liuxun Z, Haohua W, Rui Y • Thom H, Hanyang G, Michael O’K, Zhenzheng Z, Lakshmi C, Anirudh P, Xin Q

  3. Contents • Data Mining what it is, is not, types • Distributed applications – modern data mining • Science example • A specific toolkit and two examples • Classifier • Image analysis – clouds • Week 9 reading

  4. Types of data

  5. Data Mining – What it is • Extracting knowledge from large amounts of data • Motivation • Our ability to collect data has expanded rapidly • It is impossible to analyze all of the data manually • Data contains valuable information that can aid in decision making • Uses techniques from: • Pattern Recognition • Machine Learning • Statistics • High Performance Database Systems • OLAP • Plus techniques unique to data mining (Association rules) • Data mining methods must be efficient and scalable

  6. Data Mining – What it isn’t • Small Scale • Data mining methods are designed for large data sets • Scale is one of the characteristics that distinguishes data mining applications from traditional machine learning applications • Foolproof • Data mining techniques will discover patterns in any data • The patterns discovered may be meaningless • It is up to the user to determine how to interpret the results • “Make it foolproof and they’ll just invent a better fool” • Magic • Data mining techniques cannot generate information that is not present in the data • They can only find the patterns that are already there

  7. Data Mining – Types of Mining • Classification (Supervised Learning) • Classifiers are created using labeled training samples • Training samples created by ground truth / experts • Classifier later used to classify unknown samples • Clustering (Unsupervised Learning) • Grouping objects into classes so that similar objects are in the same class and dissimilar objects are in different classes • Discover overall distribution patterns and relationships between attributes • Association Rule Mining • Initially developed for market basket analysis • Goal is to discover relationships between attributes • Uses include decision support, classification and clustering • Other Types of Mining • Outlier Analysis • Concept / Class Description • Time Series Analysis
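As a rough illustration of the supervised/unsupervised distinction above (outside ADaM, assuming scikit-learn is available), a minimal sketch:

```python
# Minimal sketch (not ADaM): contrast supervised classification with
# unsupervised clustering using scikit-learn on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Classification (supervised): the model learns from labeled training samples.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("predicted class of first sample:", clf.predict(X[:1]))

# Clustering (unsupervised): the labels y are never shown to the model;
# groups are formed purely from similarity of the feature values.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignment of first sample:", km.labels_[0])
```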

  8. Data Mining in the ‘new’ Distributed Data/Services Paradigm

  9. Science Motivation • Study the impact of natural iron fertilization process (such as a dust storm) on plankton growth and subsequent dimethyl sulfide (DMS) production • Plankton plays an important role in the carbon cycle • Plankton growth is strongly influenced by nutrient availability (Fe/Ph) • Dust deposition is important source of Fe over ocean • Satellite data is an effective tool for monitoring the effects of dust fertilization

  10. Hypotheses • In remote ocean locations there is a positive correlation between the area averaged atmospheric aerosol loading and oceanic chlorophyll concentration • There is a time lag between oceanic dust deposition and the photosynthetic activity

  11. Primary sources of ocean nutrients [diagram labels]: ocean upwelling, wind-blown dust (Sahara), sediments from rivers

  12. Factors modulating the dust-ocean photosynthetic effect [diagram labels]: clouds, SST, dust, nutrients, chlorophyll (Sahara)

  13. Objectives • Use satellite data to determine whether atmospheric dust loading and phytoplankton photosynthetic activity are correlated • Determine the physical processes responsible for the observed relationship

  14. Data and Method • Data sets obtained from two instruments, SeaWiFS and MODIS, during 2000–2006 are employed • MODIS-derived AOT (Aerosol Optical Thickness) [figure labels: SeaWiFS, MODIS, AOT]

  15. The areas of study (Figure: annual SeaWiFS chlorophyll image for 2001) 1 - Tropical North Atlantic Ocean, 2 - West coast of Central Africa, 3 - Patagonia, 4 - South Atlantic Ocean, 5 - South coast of Australia, 6 - Middle East, 7 - Coast of China, 8 - Arctic Ocean

  16. Tropical North Atlantic Ocean – dust from the Sahara Desert [Figure: chlorophyll and AOT time series; correlation coefficients, all negative: -0.17504, -0.0902, -0.328, -0.4595, -0.14019, -0.7253, -0.1095; -0.68497, -0.15874, -0.85611, -0.4467, -0.75102, -0.66448, -0.72603]

  17. Arabian Sea – dust from the Middle East [Figure: chlorophyll and AOT time series; correlation coefficients, all positive: 0.59895, 0.66618, 0.37991, 0.45171, 0.52250, 0.36517, 0.5618; 0.65211, 0.76650, 0.69797, 0.4412, 0.75071, 0.708625, 0.8495]

  18. Summary … • Dust impacts ocean photosynthetic activity: positive correlations in some areas, negative correlations in others, especially in the Saharan basin • Hypothesis for explaining the observed negative correlation: in areas that are not nutrient-limited, dust reduces photosynthetic activity • But the effects of clouds and ocean currents also need to be considered, and the effect of dust needs to be isolated: the MODIS AOT product includes contributions from dust, DMS, biomass burning, etc.

  19. Data Mining – Types of Mining (recap of slide 7: Classification / Supervised Learning; Clustering / Unsupervised Learning; Association Rule Mining; other types of mining such as outlier analysis, concept/class description, and time series analysis)

  20. Models / types • Trade-off between accuracy and understandability • Models range from “easy to understand” to incomprehensible, becoming harder to interpret in roughly this order: decision trees, rule induction, regression models, neural networks

  21. Qualitative and Quantitative • Qualitative • Provide insight into the data you are working with • If city = New York and 30 < age < 35 … • Important age demographic was previously 20 to 25 • Change print campaign from Village Voice to New Yorker • Requires interaction capabilities and good visualization • Quantitative • Automated process • Score new gene chip datasets with error model every night at midnight • Bottom-line orientation

  22. Management • Creation of logical collections • Physical data handling • Interoperability support • Security support • Data ownership • Metadata collection, management and access. • Persistence • Knowledge and information discovery • Data dissemination and publication

  23. Provenance* • Origin or source from which something comes; intention for use; who or what it was generated for; manner of manufacture; history of subsequent owners; sense of place and time of manufacture, production or discovery; documented in detail sufficient to allow reproducibility

  24. ADaM – System Overview • Developed by the Information Technology and Systems Center at the University of Alabama in Huntsville • Consists of over 75 interoperable mining and image processing components • Each component is provided with a C++ application programming interface (API) and an executable that supports scripting tools (e.g. Perl, Python, Tcl, shell) • ADaM components are lightweight and autonomous, and have been used successfully in a grid environment • ADaM has several translation components that provide data-level interoperability with other mining systems (such as WEKA and Orange) and point tools (such as libSVM and svmLight) • Future versions will include Python wrappers and possibly web service interfaces

  25. ADaM 4.0 Components

  26. ADaM Classification - Process • Identify potential features which may characterize the phenomenon of interest • Generate a set of training instances where each instance consists of a set of feature values and the corresponding class label • Describe the instances using ARFF file format • Preprocess the data as necessary (normalize, sample etc.) • Split the data into training / test set(s) as appropriate • Train the classifier using the training set • Evaluate classifier performance using test set • K-Fold cross validation, leave one out or other more sophisticated methods may also be used for evaluating classifier performance
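The last bullet above mentions K-fold cross-validation; a minimal sketch of that evaluation strategy, using scikit-learn on a built-in toy dataset rather than ADaM:

```python
# Rough illustration of K-fold cross-validation (slide 26, last bullet),
# using scikit-learn rather than ADaM, on a small built-in dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# 5 folds: train on 4/5 of the instances, test on the held-out 1/5, rotate.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("per-fold accuracy:", scores)
print("mean accuracy: %.3f" % scores.mean())
```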

  27. ADaM Classification - Example • Starting with an ARFF file, the ADaM system will be used to create a Naïve Bayes classifier and evaluate it • The source data will be an ARFF version of the Wisconsin breast cancer data from the University of California Irvine (UCI) Machine Learning Database: http://www.ics.uci.edu/~mlearn/MLRepository.html • The Naïve Bayes classifier will be trained to distinguish malignant vs. benign tumors based on nine characteristics
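For orientation only, here is a scikit-learn sketch of the same train/evaluate flow. Note that scikit-learn's bundled breast cancer data is the 30-feature Wisconsin diagnostic set, not the original 9-attribute UCI file used in this demo, and the actual ADaM commands appear on the following slides.

```python
# Analogous sketch of the demo pipeline with scikit-learn (not ADaM).
# Note: sklearn's bundled data is the 30-feature Wisconsin *diagnostic*
# set, not the original 9-attribute UCI file referenced on this slide.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# 2/3 training, 1/3 test -- mirroring the ITSC_Sample split used later.
X_trn, X_tst, y_trn, y_tst = train_test_split(
    X, y, train_size=0.66, stratify=y, random_state=0)

clf = GaussianNB().fit(X_trn, y_trn)          # train
y_pred = clf.predict(X_tst)                   # apply to unseen samples
print("test accuracy: %.3f" % accuracy_score(y_tst, y_pred))
```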

  28. Naïve Bayes Classification • Classification problem with m classes C1, C2, … Cm • Given an unknown sample X, the goal is to choose the class that is most likely based on statistics from the training data • P(Ci | X) can be computed using Bayes’ Theorem: P(Ci | X) = P(X | Ci) P(Ci) / P(X) [1] Equations from J. Han and M. Kamber, “Data Mining: Concepts and Techniques”, Morgan Kaufmann, 2001.

  29. Naïve Bayes Classification • P(X) is constant for all classes, so finding the most likely class amounts to maximizing P(X | Ci) P(Ci) • P(Ci) is the prior probability of class i. If the priors are not known, equal probabilities can be assumed. • Assuming attributes are conditionally independent: P(X | Ci) = P(x1 | Ci) × P(x2 | Ci) × … × P(xn | Ci) • P(xk | Ci) is the probability density function for attribute k [1] Equation from J. Han and M. Kamber, “Data Mining: Concepts and Techniques”, Morgan Kaufmann, 2001.

  30. Naïve Bayes Classification • P(xk | Ci) is estimated from the training samples • Categorical Attributes (non-numeric attributes) • Estimate P(xk | Ci) as percentage of samples of class i with value xk • Training involves counting percentage of occurrence of each possible value for each class • Numeric attributes • Also use statistics of the sample data to estimate P(xk | Ci) • Actual form of density function is generally not known, so Gaussian density is often assumed • Training involves computation of mean and variance for each attribute for each class

  31. Naïve Bayes Classification • Gaussian distribution for numeric attributes: P(xk | Ci) = (1 / (√(2π) σik)) exp( −(xk − μik)² / (2 σik²) ) • where μik is the mean of attribute k observed in samples of class Ci • and σik is the standard deviation of attribute k observed in samples of class Ci [1] Equation from J. Han and M. Kamber, “Data Mining: Concepts and Techniques”, Morgan Kaufmann, 2001.
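To make the math on slides 28–31 concrete, here is a from-scratch NumPy sketch of Gaussian Naïve Bayes training and prediction (an illustration, not ADaM's implementation):

```python
# From-scratch sketch of the Gaussian Naive Bayes math on slides 28-31
# (NumPy only; this is an illustration, not ADaM's implementation).
import numpy as np

def train_gnb(X, y):
    """Estimate P(Ci) and per-class mean/variance for each attribute."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means  = np.array([X[y == c].mean(axis=0) for c in classes])
    varis  = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    return classes, priors, means, varis

def predict_gnb(X, classes, priors, means, varis):
    """Choose the class maximizing log P(Ci) + sum_k log P(xk | Ci)."""
    # Gaussian log-density, evaluated attribute by attribute, then summed.
    log_like = (-0.5 * np.log(2 * np.pi * varis[None, :, :])
                - (X[:, None, :] - means[None, :, :]) ** 2
                / (2 * varis[None, :, :])).sum(axis=2)
    log_post = np.log(priors)[None, :] + log_like
    return classes[np.argmax(log_post, axis=1)]

# Tiny usage example with two classes and two numeric attributes.
X = np.array([[1.0, 2.0], [1.2, 1.9], [8.0, 9.0], [7.8, 9.2]])
y = np.array([0, 0, 1, 1])
print(predict_gnb(np.array([[1.1, 2.1], [8.1, 9.1]]), *train_gnb(X, y)))
```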

  32. Sample Data Set – ARFF Format
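The ARFF listing from this slide is not reproduced in the transcript; the sketch below only illustrates the general shape such a file might take (attribute names follow the UCI documentation, and the data values shown are made up):

```python
# Illustrative ARFF structure for the nine-attribute breast cancer data.
# The demo's actual bcw.arff is not shown in the transcript; attribute
# names here follow the UCI description and the two data rows are made up.
example_arff = """\
@relation breast-cancer-wisconsin

@attribute clump_thickness numeric
@attribute cell_size_uniformity numeric
@attribute cell_shape_uniformity numeric
@attribute marginal_adhesion numeric
@attribute single_epi_cell_size numeric
@attribute bare_nuclei numeric
@attribute bland_chromatin numeric
@attribute normal_nucleoli numeric
@attribute mitoses numeric
@attribute class {benign, malignant}

@data
5,1,1,1,2,1,3,1,1,benign
8,10,10,8,7,10,9,7,1,malignant
"""

with open("bcw_example.arff", "w") as f:
    f.write(example_arff)
```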

  33. Data management • Metadata? • Data? • File naming? • Documentation?

  34. Splitting the Samples • ADaM has utilities for splitting data sets into disjoint groups for training and testing classifiers • The simplest is ITSC_Sample, which splits the source data set into two disjoint subsets

  35. Splitting the Samples • For this demo, we will split the breast cancer data set into two groups, one with 2/3 of the patterns and another with 1/3 of the patterns: ITSC_Sample -c class -i bcw.arff -o trn.arff -t tst.arff -p 0.66 • The -i argument specifies the input file name • The -o and -t arguments specify the names of the two output files (-o = output one, -t = output two) • The -p argument specifies the portion of data that goes into output one (trn.arff); the remainder goes to output two (tst.arff) • The -c argument tells the sample program which attribute is the class attribute
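A rough pure-Python stand-in for this step (illustration only, not the ADaM tool; it assumes bcw.arff is in the working directory and, unlike ITSC_Sample with -c class, it does not stratify by the class attribute):

```python
# Rough pure-Python stand-in for ITSC_Sample (illustration only, not the
# ADaM tool): split bcw.arff into trn.arff (~66%) and tst.arff (~34%).
import random

with open("bcw.arff") as f:
    lines = f.readlines()

# Everything up to and including the @data line is the ARFF header.
split_at = next(i for i, ln in enumerate(lines)
                if ln.strip().lower() == "@data") + 1
header, rows = lines[:split_at], [ln for ln in lines[split_at:] if ln.strip()]

random.seed(0)
random.shuffle(rows)
cut = int(0.66 * len(rows))

for name, part in (("trn.arff", rows[:cut]), ("tst.arff", rows[cut:])):
    with open(name, "w") as out:
        out.writelines(header + part)
```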

  36. Provenance? • For this demo, we will split the breast cancer data set into two groups, one with 2/3 of the patterns and another with 1/3 of the patterns: ITSC_Sample -c class -i bcw.arff -o trn.arff -t tst.arff -p 0.66 • What needs to be recorded and why? • What about intermediate files and why? • How are they logically organized?

  37. Training the Classifier • ADaM has several different types of classifiers • Each classifier has a training method and an application method • ADaM’s Naïve Bayes classifier has the following syntax:

  38. Training the Classifier • For this demo, we will train a Naïve Bayes classifier: ITSC_NaiveBayesTrain -c class -i trn.arff -b bayes.txt • The -i argument specifies the input file name • The -c argument specifies the name of the class attribute • The -b argument specifies the name of the classifier file:

  39. Applying the Classifier • Once trained, the Naïve Bayes classifier can be used to classify unknown instances • The syntax for ADaM’s Naïve Bayes classifier is as follows:

  40. Applying the Classifier • For this demo, the classifier is run as follows: ITSC_NaiveBayesApply -c class -i tst.arff -b bayes.txt -o res_tst.arff • The -i argument specifies the input file name • The -c argument specifies the name of the class attribute • The -b argument specifies the name of the classifier file • The -o argument specifies the name of the result file:

  41. Evaluating Classifier Performance • By applying the classifier to a test set where the correct class is known in advance, it is possible to compare the expected output to the actual output. • The ITSC_Accuracy utility performs this function:

  42. Confusion matrix • Gives a guide to accuracy, but the sampling (i.e. any bias in the samples) is important to take into account
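A small sketch (scikit-learn, not ADaM) showing how a confusion matrix is computed and read; rows are true classes, columns are predicted classes:

```python
# Sketch: building and reading a confusion matrix (scikit-learn, not ADaM).
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = ["benign", "benign", "malignant", "malignant", "benign", "malignant"]
y_pred = ["benign", "malignant", "malignant", "malignant", "benign", "benign"]

# Rows = true class, columns = predicted class (in the order given below).
cm = confusion_matrix(y_true, y_pred, labels=["benign", "malignant"])
print(cm)                              # [[2 1]
                                       #  [1 2]]
print(accuracy_score(y_true, y_pred))  # 4 correct out of 6
```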

  43. Evaluating Classifier Performance • For this demo, ITSC_Accuracy is run as follows: ITSC_Accuracy -c class -t res_tst.arff -v tst.arff -o acc_tst.txt

  44. Python Script for Classification
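The script on this slide is not reproduced in the transcript; below is a sketch of what such a driver script might look like, simply chaining the ITSC_* commands and flags shown on slides 35–43 via subprocess:

```python
# Sketch of a driver script for the demo (the original slide's script is
# not reproduced here).  It chains the ITSC_* commands shown on slides
# 35-43; file names and flags are taken from those slides.
import subprocess

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Split the data 2/3 training, 1/3 test.
run(["ITSC_Sample", "-c", "class", "-i", "bcw.arff",
     "-o", "trn.arff", "-t", "tst.arff", "-p", "0.66"])

# 2. Train the Naive Bayes classifier.
run(["ITSC_NaiveBayesTrain", "-c", "class", "-i", "trn.arff", "-b", "bayes.txt"])

# 3. Apply it to the held-out test set.
run(["ITSC_NaiveBayesApply", "-c", "class", "-i", "tst.arff",
     "-b", "bayes.txt", "-o", "res_tst.arff"])

# 4. Compare predicted vs. true labels.
run(["ITSC_Accuracy", "-c", "class", "-t", "res_tst.arff",
     "-v", "tst.arff", "-o", "acc_tst.txt"])
```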

  45. How would you modify this?

  46. What is the provenance?

  47. ADaM Image Classification • Classification of image data is a bit more involved, as there is an additional set of steps that must be performed to extract useful features from the images before classification can be performed. • In addition, it is also useful to transform the data back into image format for visualization purposes. • As an example problem, we will consider detection of cumulus cloud fields in GOES satellite images • GOES satellites produce a 5 channel image every 15 minutes • The classifier must label each pixel as either belonging to a cumulus cloud field or not based on the GOES data • Algorithms based on spectral properties often miss cumulus clouds because of the low resolution of the IR channels and the small size of clouds • Texture features computed from the GOES visible image provide a means to detect cumulus cloud fields.
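As a hedged illustration of the idea of a texture feature (not ADaM's actual features for this application), a sliding-window standard deviation computed with SciPy:

```python
# Illustration only: one simple texture measure (local standard deviation
# in a sliding window) computed with SciPy.  ADaM's actual texture
# features for cumulus detection are not shown in the transcript.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=9):
    """Per-pixel standard deviation over a size x size neighborhood."""
    mean = uniform_filter(image, size)
    mean_sq = uniform_filter(image * image, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# Synthetic stand-in for a visible-channel image patch.
img = np.random.rand(128, 128).astype(np.float32)
texture = local_std(img)
print(texture.shape, float(texture.mean()))
```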

  48. GOES Images - Preprocessing • Segmentation is based only on the high resolution (1km) visible channel. • In order to remove the effects of the light reflected from the Earth’s surface, a visible reference background image is constructed for each time of the day. • The reference image is subtracted from the visible image before it is segmented. • GOES image patches containing cumulus cloud regions, other cloud regions, and background were selected • Independent experts labeled each pixel of the selected image patches as cumulus cloud or not • The expert labels were combined to form a single “truth” image for each of the original image patches. In cases where the experts disagreed, the truth image was given a “don’t know” value
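A sketch of the subtraction step described above; how the reference background is actually built is not detailed here, so the per-pixel minimum over several same-time-of-day images below is purely a stand-in assumption:

```python
# Sketch of the preprocessing step on slide 48: subtract a reference
# background from the visible image.  How ADaM builds the reference is
# not detailed in the transcript; a per-pixel minimum over several days
# at the same time of day is used below purely as a stand-in assumption.
import numpy as np

# Stack of visible images taken at the same time of day on different days.
days = np.random.rand(10, 256, 256).astype(np.float32)   # synthetic data
reference = days.min(axis=0)          # assumed surface-reflectance proxy

visible = days[0]                     # image to be segmented
residual = np.clip(visible - reference, 0.0, None)  # keep the cloud signal
print(residual.min(), residual.max())
```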

  49. GOES Images - Example [Figure panels: GOES visible image; expert labels]
