Neural networks (NN) and Multivariate Adaptive Regression Splines (MARS)
• Different types of neural networks
• Considerations in neural network modelling
• Multivariate Adaptive Regression Splines
Feed-forward neural network
• Input layer
• Hidden layer(s)
• Output layer
[Figure: a feed-forward network with inputs x1, …, xp, hidden units z1, …, zM, and outputs f1, …, fK]
Terminology
• Feed-forward network: nodes in one layer are connected only to nodes in the next layer
• Recurrent network: nodes in one layer may also be connected to nodes in a previous layer or within the same layer
Multilayer perceptrons
• Any number of inputs
• Any number of outputs
• One or more hidden layers with any number of units
• Linear combinations of the outputs from one layer form the inputs to the following layer
• Sigmoid activation functions in the hidden layers
[Figure: a multilayer perceptron with inputs x1, …, xp, hidden units z1, …, zM, and outputs f1, …, fK]
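As a concrete illustration, here is a minimal numpy sketch of one forward pass through a multilayer perceptron with a single hidden layer. The names W1, b1, W2, b2 are illustrative, not notation from the course:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, b1, W2, b2):
    """Forward pass: linear combinations feed each layer, with a sigmoid
    activation in the hidden layer and the identity at the output."""
    z = sigmoid(W1 @ x + b1)   # hidden units z_1, ..., z_M
    return W2 @ z + b2         # outputs f_1, ..., f_K

rng = np.random.default_rng(0)
p, M, K = 4, 3, 2                       # inputs, hidden units, outputs
W1, b1 = rng.normal(size=(M, p)), np.zeros(M)
W2, b2 = rng.normal(size=(K, M)), np.zeros(K)
print(forward(rng.normal(size=p), W1, b1, W2, b2))
```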
Parameters in a multilayer perceptron
• $C_1$, $C_2$: combination functions in the hidden and output layers
• $\sigma$, $g$: activation functions in the hidden and output layers
• $\alpha_{0m}$, $\beta_{0k}$: biases of hidden unit $m$ and output unit $k$
• $\alpha_{im}$, $\beta_{jk}$: weights of the connections from input $i$ to hidden unit $m$ and from hidden unit $j$ to output unit $k$
Least squares fitting of neural networks
Consider a simple perceptron (no hidden layer).
Find the weights and biases minimizing the error function
$$R(\theta) = \sum_{k=1}^{K} \sum_{i=1}^{N} \left(y_{ik} - f_k(x_i)\right)^2$$
[Figure: a perceptron with inputs x1, …, xp connected directly to outputs f1, …, fK]
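A minimal sketch of least-squares fitting by batch gradient descent for this no-hidden-layer case with identity output activation (so the model is linear); the learning rate and epoch count are arbitrary illustrative choices:

```python
import numpy as np

def fit_perceptron_ls(X, Y, lr=0.01, epochs=500):
    """Minimize R(W, b) = sum_i ||y_i - (W x_i + b)||^2 by gradient descent."""
    n = X.shape[0]
    W = np.zeros((Y.shape[1], X.shape[1]))
    b = np.zeros(Y.shape[1])
    for _ in range(epochs):
        E = X @ W.T + b - Y           # n x K matrix of residuals
        W -= lr * 2.0 / n * E.T @ X   # gradient of the mean squared error
        b -= lr * 2.0 * E.mean(axis=0)
    return W, b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
Y = X @ np.array([[1.0], [2.0], [-1.0]]) + 0.1 * rng.normal(size=(200, 1))
W, b = fit_perceptron_ls(X, Y)
```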
Alternative measures of fit
• For regression we normally use the sum-of-squared errors as the measure of fit
• For classification we use either squared errors or cross-entropy (deviance), and the corresponding classifier is $\arg\max_k f_k(x)$
• The measure of fit can also be adapted to specific distributions, such as Poisson distributions
Combination and activation functions
• Combination function
  – Linear combination: $a_m = \alpha_{0m} + \sum_j \alpha_{jm} x_j$
  – Radial combination: $a_m = \sum_j (x_j - \mu_{jm})^2$ (squared distance to a centre)
• Activation function in the hidden layer
  – Identity
  – Sigmoid
• Activation function in the output layer
  – Softmax
  – Identity
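These functions are one-liners in code; a sketch, where the centre mu and width s of the radial combination are illustrative parameter names:

```python
import numpy as np

def linear_combination(x, w, b):
    return w @ x + b                         # w'x + b

def radial_combination(x, mu, s):
    return np.sum((x - mu) ** 2) / s ** 2    # squared distance to the centre mu

def identity(a):
    return a

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - np.max(a))                # shift for numerical stability
    return e / e.sum()
```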
Ordinary radial basis function networks (ORBF)
• Input and output layers and one hidden layer
• Hidden layer: combination function = radial; activation function = exponential or softmax
• Output layer: combination function = linear; activation function = any, normally the identity
[Figure: an RBF network with inputs x1, …, xp, hidden units z1, …, zM, and outputs f1, …, fK]
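Putting the pieces together, a minimal sketch of an ORBF forward pass, assuming one centre and one width per hidden unit (the argument names are illustrative):

```python
import numpy as np

def orbf_forward(x, centres, widths, W, b):
    """Hidden layer: radial combination + exponential activation.
    Output layer: linear combination + identity activation.
    centres: M x p, widths: length M, W: K x M, b: length K."""
    d2 = np.sum((centres - x) ** 2, axis=1)   # radial combination per hidden unit
    z = np.exp(-d2 / (2.0 * widths ** 2))     # exponential activation
    return W @ z + b                          # linear output, identity activation
```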
Issues in neural network modelling
• Preliminary training – learning with different initial weights (since multiple local minima are possible)
• Scaling of the inputs is important (standardization; see the sketch below)
• The number of nodes in the hidden layer(s)
• The choice of activation function in the output layer
  – Interval targets – identity
  – Nominal targets – softmax
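A sketch of the standardization step (subtract the column mean, divide by the column standard deviation):

```python
import numpy as np

def standardize(X):
    """Scale each input to mean 0 and standard deviation 1 before training,
    so that no single input dominates the initial weights."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```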
Overcoming over-fitting
• Early stopping
• Adding a penalty function:
  objective function = error function + penalty term
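With a weight-decay penalty, for example, the objective becomes the error function plus a multiple of the sum of squared weights; a sketch, where lam is the penalty factor (chosen e.g. on validation data):

```python
import numpy as np

def penalized_objective(residuals, weights, lam):
    """Objective = error function + penalty term (here: weight decay)."""
    error = np.sum(residuals ** 2)
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return error + penalty
```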
MARS: Multivariate Adaptive Regression Splines
An adaptive procedure for regression that can be regarded as a generalization of stepwise linear regression
Reflected pair of functions with a knot at the value x1:
$$(X - x_1)_+ \quad \text{and} \quad (x_1 - X)_+$$
Reflected pairs of functions with knots at the values x1 and x2
[Figure: the four hinge functions with knots at x1 and x2]
MARS with a single input X taking the values x1, …, xN
Form the collection of basis functions
$$C = \left\{ (X - t)_+, \; (t - X)_+ : t \in \{x_1, \ldots, x_N\} \right\}$$
Construct models of the form
$$f(X) = \beta_0 + \sum_{m=1}^{M} \beta_m h_m(X)$$
where each hm(X) is a function in C or a product of two or more such functions
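A sketch of the reflected (hinge) pairs and the collection C in numpy:

```python
import numpy as np

def reflected_pair(X, t):
    """The pair (X - t)_+ and (t - X)_+ with a knot at t."""
    return np.maximum(X - t, 0.0), np.maximum(t - X, 0.0)

# The collection C holds one reflected pair per observed value of X;
# each entry evaluates one basis function at all observed values.
X = np.array([0.2, 0.5, 0.9, 1.4])
C = [h for t in X for h in reflected_pair(X, t)]
```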
MARS model with a single input X taking the values x1, x2
[Figure: fitted MARS models built from hinge functions with knots at x1 and x2]
MARS: Multivariate Adaptive Regression Splines
At each stage we consider as a new basis-function pair all products of functions already in the model with one of the reflected pairs in the set C.
Although each basis function depends only on a single Xj, it is considered a function over the entire input space.
MARS: Multivariate Adaptive Regression Splines – model selection
• MARS models typically overfit the data, so a backward deletion procedure is applied
• The size of the model is determined by generalized cross-validation,
  $$\mathrm{GCV}(\lambda) = \frac{\sum_{i=1}^{N} \left(y_i - \hat{f}_\lambda(x_i)\right)^2}{\left(1 - M(\lambda)/N\right)^2}$$
  where $M(\lambda)$ is the effective number of parameters in the model
• An upper limit can be set on the order of interaction
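A sketch of the GCV criterion; the effective number of parameters (which charges the model for each selected knot as well as each basis function) is taken as given here:

```python
import numpy as np

def gcv(y, fitted, n_params):
    """Generalized cross-validation: average squared residual inflated by
    the effective model size n_params relative to the sample size N."""
    N = len(y)
    rss = np.sum((y - fitted) ** 2)
    return (rss / N) / (1.0 - n_params / N) ** 2
```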
The MARS model can be viewed as a generalization of classification and regression trees (CART)
Some characteristics of different learning methods
[Table comparing the learning methods not reproduced]
Separating hyperplane
Optimal separating hyperplane – the support vector classifier
Find the hyperplane that creates the biggest margin between the training points for class 1 and class -1.
[Figure: two point clouds separated by a hyperplane, with the margin marked]
Formulation of the optimization problem
Signed distance to the decision border: with $f(x) = \beta_0 + \beta^T x$, the signed distance of a point $x$ is $f(x)/\|\beta\|$.
Code the response as y = 1 for one of the groups and y = -1 for the other.
Two equivalent formulations of the optimization problem
$$\max_{\beta_0, \beta, \|\beta\|=1} M \quad \text{subject to } y_i(\beta_0 + \beta^T x_i) \ge M, \; i = 1, \ldots, N$$
$$\min_{\beta_0, \beta} \|\beta\| \quad \text{subject to } y_i(\beta_0 + \beta^T x_i) \ge 1, \; i = 1, \ldots, N$$
Characteristics of the support vector classifier
• Points well inside their class boundary do not play a big role in shaping the decision border (see the sketch below)
• Cf. linear discriminant analysis (LDA), for which the decision boundary is determined by the covariance matrix of the class distributions and their centroids
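A minimal illustration using scikit-learn's SVC: only the handful of points on or inside the margin end up as support vectors, and moving any of the remaining points slightly leaves the fitted border unchanged.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (50, 2)),   # class -1
               rng.normal(+1.5, 1.0, (50, 2))])  # class +1
y = np.array([-1] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(len(clf.support_vectors_), "support vectors out of", len(X))
```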
Support vector machines using basis expansions (polynomials, splines)
Fit a support vector classifier in an enlarged feature space: $f(x) = \beta_0 + \beta^T h(x)$, where $h(x) = (h_1(x), \ldots, h_M(x))$ is a set of basis functions.
Characteristics of support vector machines
• The dimension of the enlarged feature space can be very large
• Overfitting is prevented by a built-in shrinkage of the beta coefficients
• Irrelevant inputs can create serious problems
The SVM as a penalization method
Misclassification: f(x) < 0 when y = 1, or f(x) > 0 when y = -1
Loss function (hinge loss):
$$\sum_{i=1}^{N} \left[1 - y_i f(x_i)\right]_+$$
Loss function + penalty:
$$\sum_{i=1}^{N} \left[1 - y_i f(x_i)\right]_+ + \frac{\lambda}{2}\|\beta\|^2$$
The SVM as a penalization method
Minimizing the loss function + penalty is equivalent to fitting a support vector machine to the data.
The penalty factor λ is a function of the constant providing an upper bound on the total slack $\sum_i \xi_i$.
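A sketch of this penalized hinge-loss objective for a linear $f(x) = \beta_0 + \beta^T x$:

```python
import numpy as np

def svm_objective(X, y, beta, beta0, lam):
    """Hinge loss sum_i [1 - y_i f(x_i)]_+ plus the penalty (lam/2) ||beta||^2."""
    margins = y * (X @ beta + beta0)
    hinge = np.sum(np.maximum(0.0, 1.0 - margins))
    return hinge + 0.5 * lam * beta @ beta
```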