Class 4 – More Classifiers Ramoza Ahsan, Yun Lu, Dongyun Zhang, Zhongfang Zhuang, Xiao Qin, Salah Uddin Ahmed
Lesson 4.1 Classification Boundaries
Classification Boundaries • Visualizing the data during the training stage of building a classifier can provide guidance in parameter selection • Weka visualization tool • 2-dimensional data set
Boundary Representation With OneR • The color diagram shows the decision boundaries together with the training data • Gives a spatial representation of the decision boundary produced by the OneR algorithm
Boundary Representation With IBk • Lazy classifier (instance-based learner) • Classifies by choosing the nearest training instance • Produces a piecewise linear boundary • Increasing k blurs (smooths) the boundaries
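As a rough illustration of the k parameter (not from the original slides), here is a minimal Weka sketch; the file name iris-2d.arff is a hypothetical stand-in for any 2-attribute numeric dataset, and the code assumes the standard Weka 3 Java API:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.lazy.IBk;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class IBkBoundaryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical 2-attribute dataset; any 2-D numeric ARFF file would do.
        Instances data = DataSource.read("iris-2d.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Larger k averages over more neighbours, which smooths (blurs) the boundary.
        for (int k : new int[] {1, 5, 20}) {
            IBk ibk = new IBk(k);               // instance-based (lazy) learner
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(ibk, data, 10, new Random(1));
            System.out.printf("k = %d: %.1f%% correct%n", k, eval.pctCorrect());
        }
    }
}
```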
Boundary Representation With Naïve Bayes • Naïve Bayes treats each of the two attributes as contributing equally and independently to the decision • Multiplying the contributions along the two dimensions produces a checkerboard pattern of probabilities
Boundary Representation With J48 • Increasing the minNumObj parameter results in a simpler tree
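A minimal sketch, assuming the standard Weka J48 API, of how minNumObj might be raised to obtain a simpler tree; the dataset file name is again a hypothetical stand-in:

```java
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48BoundaryDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris-2d.arff");   // hypothetical 2-D dataset
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();
        // Require more instances per leaf; larger values prune the tree harder,
        // giving a simpler tree and therefore coarser decision boundaries.
        tree.setMinNumObj(10);
        tree.buildClassifier(data);
        System.out.println(tree);   // prints the (simpler) pruned tree
    }
}
```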
Classification Boundaries • Different classifiers have different capabilities for carving up instance space (their “bias”) • Usefulness: • An important visualization tool • Provides insight into how the algorithm works on the data • Limitations: • Restricted to numeric attributes and 2-dimensional plots
Lesson 4.2 Linear Regression
What Is Linear Regression? • In statistics, linear regression is an approach to modeling the relationship between a dependent variable y and one or more explanatory variables denoted X • Straight-line regression analysis: one explanatory variable • Multiple linear regression: more than one explanatory variable • In data mining, we use this method to make predictions for numeric classes based on numeric attributes • The NominalToBinary filter converts nominal attributes into binary ones so that they can also be used in a regression
Why Linear Regression? • A regression models the past relationship between variables in order to predict their future behavior • Businesses use regression to predict such things as future sales, stock prices, currency exchange rates, and productivity gains resulting from a training program • Example: a person's salary is related to years of experience; the dependent variable here is salary and the explanatory variable (also called the independent variable) is years of experience
Mathematics Of Simple Linear Regression • The simplest form of the regression function is y = b + wx, where y is the dependent variable, x is the explanatory variable, and b and w are regression coefficients • Thinking of the regression coefficients as weights, we can write y = w0 + w1x, where the least-squares estimates are w1 = Σi (xi − x̄)(yi − ȳ) / Σi (xi − x̄)² and w0 = ȳ − w1x̄ (x̄ and ȳ are the means of the training values of x and y)
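A small plain-Java sketch of the least-squares formulas above; the (x, y) values are made up purely for illustration:

```java
/** Minimal least-squares fit for y = w0 + w1 * x (illustrative data only). */
public class SimpleLinearRegression {
    public static void main(String[] args) {
        // Hypothetical (x, y) pairs, e.g. years of experience vs. salary in $1000s.
        double[] x = {1, 3, 4, 6, 8};
        double[] y = {25, 32, 38, 44, 52};

        double xMean = mean(x), yMean = mean(y);
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) {
            num += (x[i] - xMean) * (y[i] - yMean);   // Σ (xi - x̄)(yi - ȳ)
            den += (x[i] - xMean) * (x[i] - xMean);   // Σ (xi - x̄)²
        }
        double w1 = num / den;           // slope
        double w0 = yMean - w1 * xMean;  // intercept
        System.out.printf("y = %.2f + %.2f x%n", w0, w1);
        System.out.printf("Prediction for x = 10: %.1f%n", w0 + w1 * 10);
    }

    static double mean(double[] v) {
        double s = 0;
        for (double d : v) s += d;
        return s / v.length;
    }
}
```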
Previous Example: Salary Dataset • From the given dataset (salary in thousands of dollars) we obtain w1 = 3.5 and w0 = 23.6 • Thus the fitted line is y = 23.6 + 3.5x • For a new instance, we predict that a person with 10 years of experience will earn 23.6 + 3.5 × 10 = 58.6, i.e. a salary of $58,600 per year
Multiple Linear Regression • Multiple linear regression is an extension of straight-line regression that involves more than one predictor variable. It models the dependent variable as a linear function of n predictor variables described by the tuple (x1, x2, …, xn): y = w0 + w1x1 + w2x2 + … + wnxn • The weights are adjusted to minimize the squared error between actual and predicted values on the training data • These equations are tedious to solve by hand, so we use a tool like Weka
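A hedged sketch of fitting a multiple linear regression in Weka from Java; it assumes the cpu.arff dataset bundled with Weka's data directory (any dataset with a numeric class would do):

```java
import weka.classifiers.functions.LinearRegression;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MultipleRegressionDemo {
    public static void main(String[] args) throws Exception {
        // cpu.arff ships with Weka and has a numeric class (machine performance).
        Instances data = DataSource.read("cpu.arff");
        data.setClassIndex(data.numAttributes() - 1);

        LinearRegression lr = new LinearRegression();
        lr.buildClassifier(data);          // solves for the weights w0, w1, ..., wn
        System.out.println(lr);            // prints the fitted linear equation

        // Predict the class value of the first training instance.
        double predicted = lr.classifyInstance(data.instance(0));
        System.out.println("Predicted value: " + predicted);
    }
}
```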
Non-linear Regression • Often there is no linear relationship between the dependent variable (class attribute) and the explanatory variables • Such data can often be approximated by a patchwork of several linear regression models • In Weka, the M5P method builds a “model tree”: a tree in which each leaf holds a linear regression model. Weka calculates the coefficients of each linear function, and predictions are made by routing an instance down the tree and applying the linear model at the leaf it reaches
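A minimal sketch, under the same assumptions as above, of building an M5P model tree in Weka:

```java
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ModelTreeDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("cpu.arff");    // numeric-class dataset bundled with Weka
        data.setClassIndex(data.numAttributes() - 1);

        M5P tree = new M5P();          // model tree: one linear regression model per leaf
        tree.buildClassifier(data);
        System.out.println(tree);      // prints the tree and each leaf's linear model

        double predicted = tree.classifyInstance(data.instance(0));
        System.out.println("Predicted value: " + predicted);
    }
}
```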
Lesson 4.3 Classification By Regression
Review: Linear Regression • Several numeric attributes: a1, a2, …, ak • A weight for each attribute, plus a constant: w0, w1, …, wk • Prediction is the weighted sum of the attributes: x = w0 + w1a1 + w2a2 + … + wkak • Choose the weights to minimize the squared error on the training data
Using Regression In Classification • Convert the class values to numeric values (usually binary: 0 and 1) • Decide the class according to the regression output • The output is NOT a probability; it can fall outside the range 0 to 1 • Set a threshold on the output to separate the classes
2-Class Problems • Assign binary values (0 and 1) to the two classes • Training: fit a linear regression model to the binary targets • Prediction: compare the regression output with the threshold and output the corresponding class
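A toy plain-Java sketch of this two-class scheme (illustrative data only): encode the classes as 0 and 1, fit a linear regression with the Lesson 4.2 formulas, and compare its output to a 0.5 threshold:

```java
/** Classification by regression on one attribute: fit 0/1 targets, threshold at 0.5. */
public class RegressionClassifier {
    public static void main(String[] args) {
        // Hypothetical 1-attribute training data with classes encoded as 0 and 1.
        double[] x = {1.0, 1.5, 2.0, 5.0, 5.5, 6.0};
        double[] y = {0, 0, 0, 1, 1, 1};           // 0 = class A, 1 = class B

        // Least-squares fit y = w0 + w1 * x (same formulas as Lesson 4.2).
        double xMean = 0, yMean = 0;
        for (int i = 0; i < x.length; i++) { xMean += x[i]; yMean += y[i]; }
        xMean /= x.length; yMean /= y.length;
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) {
            num += (x[i] - xMean) * (y[i] - yMean);
            den += (x[i] - xMean) * (x[i] - xMean);
        }
        double w1 = num / den, w0 = yMean - w1 * xMean;

        // The regression output is NOT a probability; we only compare it to a threshold.
        double query = 4.0;
        double output = w0 + w1 * query;
        String predicted = output >= 0.5 ? "class B (1)" : "class A (0)";
        System.out.printf("output = %.2f -> %s%n", output, predicted);
    }
}
```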
Multi-Class Problems • Multi-Response Linear Regression • Divide the task into n regression problems, one per class • Build a separate regression model for each problem (target 1 for instances of that class, 0 otherwise) • To classify a new instance, select the class whose model gives the largest output
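Weka's ClassificationViaRegression meta-classifier implements essentially this one-regression-model-per-class idea; a hedged sketch, assuming the iris.arff dataset bundled with Weka:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.ClassificationViaRegression;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MultiResponseDemo {
    public static void main(String[] args) throws Exception {
        // iris.arff ships with Weka and has three classes.
        Instances data = DataSource.read("iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Builds one regression model per class (class membership encoded as 0/1)
        // and predicts the class whose model produces the largest output.
        ClassificationViaRegression cvr = new ClassificationViaRegression();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(cvr, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```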
More Investigations • Cool stuff that leads to the foundation of Logistic Regression • Convert the class value to binary • Add the linear regression output as a new attribute • Find the split (threshold) on that attribute using OneR
Lesson 4.4 Logistic Regression
Logistic Regression • In linear regression, we calculate weights from the training data for the linear sum w0 + w1a1 + w2a2 + … + wkak • In logistic regression, we use the weights to estimate the class probability directly: Pr[1 | a1, a2, …, ak] = 1 / (1 + exp(−(w0 + w1a1 + w2a2 + … + wkak)))
Classification • Email: Spam / Not Spam? • Online Transactions: Fraudulent (Yes / No)? • Tumor: Malignant / Benign? 0: “negative class” (e.g., benign tumor) 1: “positive class” (e.g., malignant tumor) Coursera – Machine Learning – Prof. Andrew Ng from Stanford University
Malignant? (1 = Yes, 0 = No) vs. Tumor Size [plot: training examples, a fitted straight line, and the 0.5 level marked] Threshold the classifier output hθ(x) at 0.5: If hθ(x) ≥ 0.5, predict “y = 1” If hθ(x) < 0.5, predict “y = 0” Coursera – Machine Learning – Prof. Andrew Ng from Stanford University
Classification: y = 0 or 1. In linear regression, the output hθ(x) can be > 1 or < 0. Logistic regression guarantees 0 ≤ hθ(x) ≤ 1. Coursera – Machine Learning – Prof. Andrew Ng from Stanford University
Logistic Regression Model • We want 0 ≤ hθ(x) ≤ 1 • Use hθ(x) = g(θᵀx), where g(z) = 1 / (1 + e^(−z)) • g is called the sigmoid function (or logistic function) Coursera – Machine Learning – Prof. Andrew Ng from Stanford University
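A tiny plain-Java sketch of the sigmoid function and a one-feature hypothesis; the θ values below are made up purely for illustration:

```java
/** The logistic (sigmoid) function g(z) = 1 / (1 + e^(-z)) maps any real z into (0, 1). */
public class Sigmoid {
    static double g(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        // g(0) = 0.5; large positive z gives ~1; large negative z gives ~0.
        for (double z : new double[] {-6, -2, 0, 2, 6}) {
            System.out.printf("g(%+.0f) = %.3f%n", z, g(z));
        }
        // Hypothetical hypothesis h(x) = g(theta0 + theta1 * x) for a single feature.
        double theta0 = -4.0, theta1 = 1.5, x = 3.0;
        System.out.printf("h(x) = %.3f%n", g(theta0 + theta1 * x));
    }
}
```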
Interpretation Of Hypothesis Output • hθ(x) = estimated probability that y = 1 on input x • Example: if hθ(x) = 0.7, tell the patient there is a 70% chance of the tumor being malignant (y = 1) • hθ(x) = P(y = 1 | x; θ): “the probability that y = 1, given x, parameterized by θ” Coursera – Machine Learning – Prof. Andrew Ng from Stanford University
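For completeness, a hedged sketch of logistic regression in Weka (the Logistic classifier), assuming the diabetes.arff two-class dataset bundled with Weka:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LogisticDemo {
    public static void main(String[] args) throws Exception {
        // diabetes.arff ships with Weka and has a two-valued class.
        Instances data = DataSource.read("diabetes.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Logistic logistic = new Logistic();
        logistic.buildClassifier(data);

        // Unlike plain regression output, these are proper class probabilities.
        double[] probs = logistic.distributionForInstance(data.instance(0));
        System.out.printf("P(class 0) = %.3f, P(class 1) = %.3f%n", probs[0], probs[1]);

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new Logistic(), data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```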
Lesson 4.5 Support Vector Machine
Things About SVM • Works well on small and non-linearly-separable data • Maps the data from low dimensions to higher dimensions, where it may become linearly separable • Key concepts: support vectors and the Maximum Marginal Hyperplane (MMH)
Overview • The support vectors are the most difficult tuples to classify and give the most information regarding classification.
SVM searches for the hyperplane with the largest margin, that is, the Maximum Marginal Hyperplane (MMH).
SVM Demo • CMsoft SVM Demo Tool • Question(s)
More • Very resilient to overfitting: the boundary depends on only a few points (the support vectors) • Parameter setting (regularization) • Weka: functions > SMO (restricted to two classes, so multi-class problems are handled with multi-response linear regression … or pairwise linear regression) • Weka: functions > LibSVM (an external library for support vector machines; faster than SMO, with more sophisticated options)
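A minimal sketch of running Weka's SMO from Java, assuming the iris.arff dataset bundled with Weka; the complexity constant C is the regularization parameter mentioned above:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SvmDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");   // three classes: SMO handles
        data.setClassIndex(data.numAttributes() - 1);    // this with pairwise classifiers

        SMO svm = new SMO();     // Weka's sequential minimal optimization SVM
        svm.setC(1.0);           // complexity (regularization) parameter; 1.0 is the default

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(svm, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```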
Lesson 4.6 Ensemble Learning