Elective-I • Examination Scheme: In-semester Assessment: 30; End-semester Assessment: 70 • Data Mining Techniques and Applications • Text Books: Data Mining: Concepts and Techniques – Jiawei Han and Micheline Kamber; Introduction to Data Mining with Case Studies – G. K. Gupta • Reference Books: Mining the Web: Discovering Knowledge from Hypertext Data – Soumen Chakrabarti; Reinforcement and Systemic Machine Learning for Decision Making – Parag Kulkarni
Unit 1: Introduction, Knowledge of Data, Data Preprocessing • Data mining described • Need for data mining • Kinds of patterns and technologies • Issues in mining • KDD vs. Data Mining • Machine learning concepts • OLAP • Knowledge representation • Data preprocessing: cleaning, integration, reduction, transformation and discretization • Application with a mining aspect (weather prediction)
Some Definitions • Data: Data are any facts, numbers, or text that can be processed by a computer. • Operational or transactional data, such as sales, cost, inventory, payroll, and accounting • Non-operational data, such as industry sales, forecast data, and macroeconomic data • Metadata: data about the data itself, such as a logical database design or data dictionary definitions • Information: The patterns, associations, or relationships among all this data can provide information.
Definitions Continued.. • Knowledge: Information can be converted into knowledge about historical patterns and future trends. For example, summary information on retail supermarket sales can be analyzed in terms of promotional efforts to provide knowledge of consumer buying behavior. • Thus, a manufacturer or retailer could determine which items are most susceptible to promotional efforts. • Data Warehouses: Data warehousing is defined as a process of centralized data management and retrieval.
Why the need for Data Mining? • The explosive growth of data: from terabytes to petabytes • Data collection and data availability • Automated data collection tools, database systems, the Web, a computerized society • Major sources of abundant data • Business: Web, e-commerce, transactions, stocks, … • Science: remote sensing, bioinformatics, scientific simulation, … • Society and everyone: news, digital cameras, YouTube • We are drowning in data, but starving for knowledge! • “Necessity is the mother of invention”: data mining, the automated analysis of massive data sets
What Is Data Mining? • Data mining is the principle of sorting through large amounts of data and picking out relevant information. • In other words, data mining (knowledge discovery from data) is the extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) patterns or knowledge from huge amounts of data. • Other names: knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc.
What is Data Mining? Searching through large amounts of data for correlations, sequences, and trends. Current “driving applications” in sales (targeted marketing, inventory) and finance (stock picking)
Knowledge discovery from data KDD process includes • data cleaning (to remove noise and inconsistent data) • data integration (where multiple data sources may be combined) • data selection (where data relevant to the analysis task are retrieved from the database) • data transformation (where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations)
KDD continued… • data mining (an essential process where intelligent methods are applied in order to extract data patterns) • pattern evaluation (to identify the truly interesting patterns representing knowledge based on some interestingness measures) • knowledge presentation (where visualization and knowledge representation techniques are used to present the mined knowledge to the user) • Data mining is the core of the knowledge discovery process
Knowledge Discovery (KDD) Process • Data mining is the core of the knowledge discovery process • [Figure: Databases → Data Cleaning / Data Integration → Data Warehouse → Selection of Task-relevant Data → Data Mining → Pattern Evaluation → Knowledge]
Knowledge Process • Data cleaning – to remove noise and inconsistent data • Data integration – to combine multiple sources • Data selection – to retrieve relevant data for analysis • Data transformation – to transform data into an appropriate form for data mining • Data mining • Evaluation • Knowledge presentation
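The steps above can be sketched as a toy pipeline. This is a minimal illustration only: the record fields (`age`, `income`), the cleaning rule, and the "mining" step (a simple frequency count) are made-up assumptions, not part of any real system.

```python
# Toy sketch of the KDD steps: cleaning -> selection -> transformation -> mining.
def clean(records):
    # Data cleaning: drop records with missing or obviously noisy income values
    return [r for r in records if r.get("income") is not None and r["income"] >= 0]

def select(records, min_age):
    # Data selection: keep only the records relevant to the analysis task
    return [r for r in records if r["age"] >= min_age]

def transform(records):
    # Data transformation: aggregate raw income into coarse bands
    return [{**r, "band": "high" if r["income"] > 50_000 else "low"} for r in records]

def mine(records):
    # "Mining": count pattern frequencies (a stand-in for a real algorithm)
    counts = {}
    for r in records:
        counts[r["band"]] = counts.get(r["band"], 0) + 1
    return counts

data = [
    {"age": 25, "income": 30_000},
    {"age": 40, "income": 80_000},
    {"age": 17, "income": 5_000},   # removed by selection (under 18)
    {"age": 52, "income": None},    # missing value, removed by cleaning
]
patterns = mine(transform(select(clean(data), min_age=18)))
print(patterns)  # {'low': 1, 'high': 1}
```

The final counts would then go to the evaluation and presentation steps.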
Knowledge Process • Steps 1 to 4 are different forms of data preprocessing • Although data mining is only one step in the entire process, it is an essential one, since it uncovers hidden patterns for evaluation
Knowledge Process • Based on this view, the architecture of a typical data mining system may have the following major components: • Database, data warehouse, World Wide Web, or other information repository • Database or data warehouse server • Data mining engine • Pattern evaluation module • User interface
Data Mining – on what kind of data? • Relational Database
Data Mining – on what kind of data? • Data Warehouses
Data Mining – on what kind of data? • Transactional databases • Advanced data and information systems • Object-oriented databases • Temporal DBs, sequence DBs and time-series DBs • Spatial DBs • Text DBs and multimedia DBs • … and the WWW
Data Mining: Confluence of Multiple Disciplines • [Figure: data mining draws on database technology, statistics, visualization, machine learning, pattern recognition, algorithms, and other disciplines]
What kind of patterns can we mine? • In general, data mining tasks can be classified into two categories: descriptive and predictive • Descriptive mining tasks characterize the general properties of the data in the database • Predictive mining tasks perform inference on the current data in order to make predictions
Functionalities: • Class Description: Characterization and Discrimination • Mining Frequent Patterns, Associations and correlations • Classification and Prediction • Cluster Analysis • Outlier Analysis • Evolution Analysis
Concept/Class Description: Characterization and Discrimination • Data characterization: A data mining system should be able to produce a description summarizing the characteristics of customers. • Example: the characteristics of customers who spend more than $1000 a year at a store called AllElectronics. The result can be a general profile covering attributes such as age, employment status or credit rating.
Characterization and Discrimination continued… • Data discrimination: It is a comparison of the general features of target-class data objects with the general features of objects from one or a set of contrasting classes. The user can specify the target and contrasting classes. • Example: The user may like to compare the general features of software products whose sales increased by 10% in the last year with those whose sales decreased by about 30% in the same duration.
Mining Frequent Patterns, Associations and Correlations • Frequent patterns: as the name suggests, patterns that occur frequently in data. • Association analysis: from a marketing perspective, determining which items are frequently purchased together within the same transaction. • Example (mined from the AllElectronics transactional database): buys(X, “computer”) ⇒ buys(X, “software”) [support = 1%, confidence = 50%] • X represents a customer • Confidence = 50%: if a customer buys a computer, there is a 50% chance that he/she will buy software as well. • Support = 1%: 1% of all the transactions under analysis showed that a computer and software were purchased together.
Mining Frequent Patterns, Associations and Correlations • Another example, a multidimensional rule: age(X, “20…29”) ∧ income(X, “20K…29K”) ⇒ buys(X, “CD player”) [support = 2%, confidence = 60%] • For customers between 20 and 29 years of age with an income of $20,000–$29,000, there is a 60% chance they will purchase a CD player, and 2% of all the transactions under analysis showed customers of this age group and income range buying a CD player.
Classification and Prediction • Classification is the process of finding a model that describes and distinguishes data classes or concepts; this model is then used to predict the class of objects whose class label is unknown. • A classification model can be represented in various forms, such as: • IF-THEN rules • A decision tree • A neural network
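As a concrete illustration of the IF-THEN form, here is a minimal rule-based classifier. The rules and attribute names (`age`, `student`, `income`) are invented for the example; a real model would be learned from training data.

```python
# Minimal IF-THEN rule classifier: each rule tests attribute conditions,
# and the first matching rule assigns the class label.
def classify(customer):
    if customer["age"] < 30 and customer["student"]:
        return "buys_computer"
    if customer["income"] == "high":
        return "buys_computer"
    return "does_not_buy"  # default class when no rule fires

print(classify({"age": 22, "student": True,  "income": "low"}))     # buys_computer
print(classify({"age": 45, "student": False, "income": "medium"}))  # does_not_buy
```

A decision tree encodes the same kind of logic as a hierarchy of attribute tests rather than a flat rule list.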
Cluster Analysis • Clustering analyses data objects without consulting a known class label. • Example: Cluster analysis can be performed on AllElectronics customer data in order to identify homogeneous subpopulations of customers. These clusters may represent individual target groups for marketing.
Cluster Analysis The figure shows a 2-D plot of customers with respect to customer locations in a city.
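One common way to find such customer groups is k-means clustering. The sketch below is a bare-bones version for 2-D locations; the points, the number of clusters, and the initial centers are all illustrative assumptions (a real system would use a library implementation).

```python
# Bare-bones k-means for clustering 2-D customer locations.
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center
        groups = [[] for _ in centers]
        for x, y in points:
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centers]
            groups[d.index(min(d))].append((x, y))
        # Update step: move each center to the mean of its group
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            for g in groups if g
        ]
    return centers

# Two visually obvious clusters: one near (1, 1), one near (8, 8)
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
final = kmeans(pts, centers=[(0, 0), (10, 10)])
print(final)
```

With this data the centers settle at roughly (1.33, 1.33) and (8.33, 8.33), i.e. the means of the two homogeneous subpopulations.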
Outlier Analysis • Outlier analysis: A database may contain data objects that do not comply with the general behavior or model of the data. These data objects are outliers. • Example: detecting fraudulent usage of credit cards. Outlier analysis may uncover fraudulent usage of credit cards by detecting purchases of extremely large amounts for a given account number in comparison with the regular charges incurred by the same account. Outlier values may also be detected with respect to the location and type of purchase, or the purchase frequency.
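One simple statistical way to flag such unusual charges is a z-score test. The charge amounts and the threshold of 2 standard deviations below are illustrative assumptions, not a fraud-detection standard.

```python
# Flagging unusually large charges on one account with a z-score test.
import statistics

charges = [25, 40, 18, 33, 29, 41, 2500]  # one suspiciously large purchase

mean = statistics.mean(charges)
stdev = statistics.stdev(charges)
# Any charge more than 2 standard deviations from the mean is flagged
outliers = [c for c in charges if abs(c - mean) / stdev > 2]
print(outliers)  # [2500]
```

Real fraud systems combine many such signals (amount, location, type, frequency) rather than a single threshold.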
Technologies used… Data mining draws on many techniques from the domains below: • Statistics • Machine learning • Database systems and data warehouses • Information retrieval • Visualization • High-performance computing
Technologies continued… • Statistics: studies the collection, analysis, interpretation and presentation of data. • Statistical research develops tools for prediction and forecasting using data. • Statistical methods can also be used to verify data mining results.
Continued… • Information retrieval: the science of searching for documents, or for information within documents.
Continued… • Database systems and data warehouses: This research focuses on the creation, maintenance and use of databases for organizations and end users.
Continued.. • Machine Learning: It investigates how computers can learn or improve their performance based on data.
KDD vs Data Mining • KDD-(Knowledge Discovery in Databases) is a field of computer science, which includes the tools and theories to help humans in extracting useful and previously unknown information (i.e. knowledge) from large collections of digitized data. • KDD consists of several steps, and Data Mining is one of them.
Continued… • This process deals with the mapping of low-level data into other forms that are more compact, abstract and useful. This is achieved by creating short reports, modelling the process that generates the data, and developing predictive models that can predict future cases. • Data mining is the application of a specific algorithm in order to extract patterns from data.
What is the difference between KDD and Data Mining? • Although the two terms KDD and Data Mining are often used interchangeably, they refer to two related yet slightly different concepts. KDD is the overall process of extracting knowledge from data, while Data Mining is a step inside the KDD process which deals with identifying patterns in data. In other words, Data Mining is only the application of a specific algorithm based on the overall goal of the KDD process.
Data Preprocessing • Why preprocess the data? • Data cleaning • Data integration and transformation • Data reduction • Discretization • Summary
Why Data Preprocessing? • Data in the real world is dirty • incomplete: missing attribute values, lack of certain attributes of interest, or containing only aggregate data • e.g., occupation=“” • noisy: containing errors or outliers • e.g., Salary=“-10” • inconsistent: containing discrepancies in codes or names • e.g., Age=“42” Birthday=“03/07/1997” • e.g., Was rating “1,2,3”, now rating “A, B, C” • e.g., discrepancy between duplicate records
Why Is Data Preprocessing Important? • No quality data, no quality mining results! • Quality decisions must be based on quality data • e.g., duplicate or missing data may cause incorrect or even misleading statistics. • Data preparation, cleaning, and transformation comprises the majority of the work in a data mining application (around 90%).
Multi-Dimensional Measure of Data Quality • A well-accepted multi-dimensional view: • Accuracy • Completeness • Consistency • Timeliness • Believability • Value added • Accessibility
Major Tasks in Data Preprocessing • Data cleaning • Fill in missing values, smooth noisy data, identify or remove outliers and noisy data, and resolve inconsistencies • Data integration • Integration of multiple databases, or files • Data transformation • Normalization and aggregation • Data reduction • Obtains reduced representation in volume but produces the same or similar analytical results • Data discretization (for numerical data)
Data Cleaning… • Importance • “Data cleaning is the number one problem in data warehousing” • Data cleaning tasks • Fill in missing values • Identify outliers and smooth out noisy data • Correct inconsistent data • Resolve redundancy caused by data integration
Missing Data • Data are not always available • E.g., many tuples have no recorded values for several attributes, such as customer income in sales data • Missing data may be due to • equipment malfunction • data inconsistent with other recorded data and thus deleted • data not entered due to misunderstanding • certain data not being considered important at the time of entry • failure to register the history or changes of the data
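A common way to fill in a missing value is to replace it with the attribute mean. The sketch below does this for the customer-income example; the records and values are made up for illustration.

```python
# Filling a missing attribute value (customer income) with the attribute mean.
records = [
    {"name": "a", "income": 30_000},
    {"name": "b", "income": None},   # missing value
    {"name": "c", "income": 50_000},
]

# Compute the mean over the known values only
known = [r["income"] for r in records if r["income"] is not None]
mean_income = sum(known) / len(known)  # (30000 + 50000) / 2 = 40000.0

# Replace each missing value with the mean
for r in records:
    if r["income"] is None:
        r["income"] = mean_income

print([r["income"] for r in records])  # [30000, 40000.0, 50000]
```

More careful variants use the mean of the tuple's own class, or predict the missing value with a model, rather than the global mean.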
Noisy Data… • Noise: random error or variance in a measured variable. • Incorrect attribute values may be due to • faulty data collection instruments • data entry problems • data transmission problems • etc. • Other data problems which require data cleaning: • duplicate records, incomplete data, inconsistent data
How to Handle Noisy Data? • Binning method: • first sort data and partition into (equi-depth) bins • then one can smooth by bin means, smooth by bin median, smooth by bin boundaries, etc. • Clustering • detect and remove outliers • Combined computer and human inspection • detect suspicious values and check by human (e.g., deal with possible outliers)
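The binning method above can be sketched in a few lines: sort the data, partition it into equi-depth bins, and replace each value with its bin mean. The data values below are illustrative.

```python
# Binning: sort, partition into equi-depth bins, smooth by bin means.
data = [4, 8, 15, 21, 21, 24, 25, 28, 34]

depth = 3  # equi-depth: each bin holds the same number of values
bins = [sorted(data)[i:i + depth] for i in range(0, len(data), depth)]
# bins: [4, 8, 15], [21, 21, 24], [25, 28, 34]

smoothed = []
for b in bins:
    mean = sum(b) / len(b)           # smooth by bin mean
    smoothed.extend([mean] * len(b))
print(smoothed)  # [9.0, 9.0, 9.0, 22.0, 22.0, 22.0, 29.0, 29.0, 29.0]
```

Smoothing by bin medians or bin boundaries works the same way, just with a different replacement value per bin.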