Artificial neural networks Ricardo Ñanculef Alegría Universidad Técnica Federico Santa María Campus Santiago
Learning from Natural Systems • Bio-inspired systems • Ants colony • Genetic algorithms • Artificial neural networks • The power of the brain • Examples: vision, text-processing • Other animals: dolphins, bats
Modeling the Human Brain: key functional characteristics • Learning and generalization ability • Continuous adaptation • Robustness and fault tolerance
Modeling the Human Brain: key structural characteristics • Massive parallelism • Distributed knowledge representation: memory • Basic organization: networks of neurons (slide diagram: receptors → neural net → effectors)
Human Brain in numbers • Cerebral cortex: ~10^11 neurons (more than the number of stars in the Milky Way) • Massive connectivity: 10^3 to 10^4 connections per neuron (about 10^15 connections in total) • Response time: ~10^-3 seconds; silicon chips: ~10^-9 seconds (a million times faster) • Yet humans are more efficient than computers at computationally complex tasks. Why?
Artificial Neural Networks “ A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in: • Knowledge is acquired by a learning process • Connection strengths between processing units are used to store the acquired knowledge. ” Simon Haykin, “Neural Networks, a comprehensive foundation”, 2nd Ed, Reprint 2005, Prentice Hall
Artificial Neural Networks “ From the perspective of pattern recognition, neural networks can be regarded as an extension of the many conventional techniques which have been developed over several decades (…) for example, discriminant functions, logit …” Christopher Bishop, “Neural Networks for Pattern Recognition”, Reprint, 2005, Oxford University Press
Artificial Neural Networks diverse applications • Pattern Classification • Clustering • Function Approximation: Regression • Time Series Forecasting • Optimization • Content-addressable Memory
The beginnings • McCulloch and Pitts, 1943: “A logical calculus of the ideas immanent in nervous activity” • First neuron model, based on simplifications of the brain's behavior: • binary incoming signals • Connection strengths: weights attached to each incoming signal • binary response: active or inactive • activation threshold or bias • Just a few years earlier: Boolean algebra
The beginnings • The model (slide diagram: inputs with connection weights and an activation threshold / bias)
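The model itself appears only as a diagram on the original slide; a minimal reconstruction, assuming the usual McCulloch-Pitts notation (binary inputs \(x_i\), connection weights \(w_i\), activation threshold \(\theta\)):
\[ y = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} w_i x_i \geq \theta \\ 0 & \text{otherwise.} \end{cases} \]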
The beginnings • These neurons can compute logical operations
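As an illustration (not from the slides), a small Python sketch of a McCulloch-Pitts unit computing AND and OR; the weights and thresholds below are one possible choice, not the only one:

def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fires (1) iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active, so threshold 2 with unit weights works
AND = lambda x1, x2: mcculloch_pitts([x1, x2], weights=[1, 1], threshold=2)
# OR: a single active input is enough, so threshold 1 works
OR = lambda x1, x2: mcculloch_pitts([x1, x2], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))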
The beginnings • Perceptron (1958): Rosenblatt proposes the use of “layers of neurons” as a computational tool • Proposes a training algorithm • Emphasis on the learning capabilities of NNs • McCulloch and Pitts' line of work drifted toward automata theory
The perceptron • Architecture …
The perceptron • Notation …
Perceptron Rule • We have a set of patterns with desired responses … • If used for classification … (slide labels: number of neurons in the perceptron, class)
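The symbols on this slide are not reproduced above; a plausible reconstruction of the setup in standard notation (the exact symbols are assumptions): the training data are pairs \((\mathbf{x}^{(i)}, \mathbf{d}^{(i)})\), \(i = 1, \dots, N\), with input \(\mathbf{x}^{(i)} \in \mathbb{R}^{n}\) and desired response \(\mathbf{d}^{(i)}\). If used for classification with \(K\) classes, the perceptron has \(K\) output neurons (one per class) and \(d_k^{(i)} = 1\) when pattern \(i\) belongs to class \(k\), \(0\) otherwise.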
Perceptron Rule • Initialize the weights and the thresholds • Present a pattern vector • Update the weights according to the rule below • If used for classification … (slide label: learning rate)
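The update formula itself is not shown above; the standard Rosenblatt rule, written under the notation assumed earlier, with learning rate \(\rho\):
\[ w_{kj} \leftarrow w_{kj} + \rho\,\big(d_k^{(i)} - y_k^{(i)}\big)\,x_j^{(i)}, \qquad \theta_k \leftarrow \theta_k - \rho\,\big(d_k^{(i)} - y_k^{(i)}\big), \]
where \(y_k^{(i)}\) is the output of neuron \(k\) for pattern \(i\). Correctly classified patterns leave the parameters unchanged.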
Separating hyperplanes • Hyperplane: set L of points satisfying • For any pair of points lying in L • Hence, the normal vector to L is
Separating hyperplanes • Signed distance of any point to L
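The formulas for these two slides are not reproduced above; a reconstruction in the standard notation for affine hyperplanes, which the following slides rely on: the hyperplane is
\[ L = \{\, x : f(x) = \beta_0 + \beta^{\top} x = 0 \,\}. \]
For any pair \(x_1, x_2 \in L\), \(\beta^{\top}(x_1 - x_2) = 0\), hence \(\beta^{*} = \beta / \lVert \beta \rVert\) is the unit normal to \(L\). The signed distance of any point \(x\) to \(L\) is
\[ \beta^{*\top}(x - x_0) = \frac{\beta^{\top} x + \beta_0}{\lVert \beta \rVert} = \frac{f(x)}{\lVert \beta \rVert}, \qquad \text{for any } x_0 \in L. \]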
Separating hyperplanes • Consider a two-class classification problem • One class coded as +1 and the other as -1 • An input is classified according to the sign of its distance to the hyperplane • How to train the classifier?
Separating hyperplanes • An idea: train to minimize the distance of the misclassified inputs to the hyperplane • Note this is very different from training with the quadratic loss over all the points
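The objective on this slide is not shown above; the usual perceptron criterion, with \(\mathcal{M}\) denoting the set of misclassified patterns (an assumption consistent with the set \(M\) mentioned a few slides below) and classes coded as \(y_i \in \{+1, -1\}\):
\[ D(\beta, \beta_0) = - \sum_{i \in \mathcal{M}} y_i \,\big(\beta^{\top} x_i + \beta_0\big) \;\geq\; 0. \]
Up to the factor \(1 / \lVert \beta \rVert\), this is the sum of the distances of the misclassified points to the hyperplane, unlike a quadratic loss taken over all points.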
Gradient Descent • Suppose we have to minimize on • Where is a vector • For example • Iterate:
Stochastic Gradient Descent • Suppose we have to minimize on • Where is a random variable • We have samples of • Iterate:
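The iteration formulas for these two slides are not reproduced above; a sketch of both, assuming an objective \(J(\theta)\), parameter vector \(\theta\), and learning rate \(\rho\):
\[ \text{Gradient descent: } \quad \theta^{(t+1)} = \theta^{(t)} - \rho \,\nabla_{\theta} J\big(\theta^{(t)}\big). \]
\[ \text{Stochastic gradient descent: } \quad J(\theta) = \mathbb{E}_{z}\big[\ell(\theta, z)\big], \qquad \theta^{(t+1)} = \theta^{(t)} - \rho \,\nabla_{\theta}\, \ell\big(\theta^{(t)}, z_t\big), \]
i.e. each correction uses one sample (or a small batch) of the random variable rather than the full expectation.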
Separating hyperplanes • If M is fixed • Stochastic gradient descent
Separating hyperplanes • For correctly classified inputs, no correction of the parameters is applied • Now, note that …
Separating hyperplanes • Perceptron rule
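Holding \(M\) fixed, the gradients of the criterion above are \(\partial D / \partial \beta = -\sum_{i \in M} y_i x_i\) and \(\partial D / \partial \beta_0 = -\sum_{i \in M} y_i\), so a stochastic gradient step on a single misclassified pattern reads (a reconstruction consistent with the preceding slides):
\[ \beta \leftarrow \beta + \rho \, y_i x_i, \qquad \beta_0 \leftarrow \beta_0 + \rho \, y_i, \]
which is exactly the two-class perceptron rule.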
Perceptron convergence theorem • Theorem: If there exists a set of connection weights and an activation threshold able to separate the two classes, the perceptron algorithm will converge to some solution in a finite number of steps, independently of the initialization of the weights and bias.
Perceptron • Conclusion: the perceptron rule, with two classes, is a stochastic gradient descent algorithm that aims to minimize the distances of the misclassified examples to the hyperplane. • With more than two classes, the perceptron uses one neuron to model each class against the others. • This is a modern perspective on the algorithm
Delta Rule • Widrow and Hoff (1960) • It considers general (differentiable) activation functions
Delta Rule • Update the weights according to …
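The update formula is not shown above; the Widrow-Hoff (delta) rule for a general differentiable activation \(f\), in the assumed notation (net input \(net_i = \sum_j w_j x_{ij} + b\), quadratic error over the training set):
\[ E(w) = \tfrac{1}{2} \sum_{i} \big(d_i - y_i\big)^2, \qquad y_i = f(net_i), \]
\[ \Delta w_j = -\rho \,\frac{\partial E}{\partial w_j} = \rho \sum_{i} \big(d_i - y_i\big)\, f'(net_i)\, x_{ij}. \]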
Delta Rule • Can the perceptron rule be obtained as a special case of this rule? • The step function is not differentiable • Note that with this algorithm all the patterns are observed before a correction is applied, while with Rosenblatt's algorithm each pattern induces a correction
Perceptrons • Perceptrons and logistic regression • With more than 1 neuron: each neuron has the form of a logistic model of one class against the others.
Neural Networks Death • Minsky and Papert, 1969: “Perceptrons” (slide diagram: a single-layer perceptron with inputs x1 … xn, bias b, and outputs y = +1 / y = -1)
Neural Networks Death • A perceptron cannot learn the XOR function
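A short argument for this slide (standard, not on the original): suppose a single unit \(y = \operatorname{sign}(w_1 x_1 + w_2 x_2 + b)\) computed XOR. Then
\[ b < 0, \qquad w_1 + w_2 + b < 0, \qquad w_1 + b \geq 0, \qquad w_2 + b \geq 0. \]
Adding the last two inequalities gives \(w_1 + w_2 + 2b \geq 0\), while adding the first two gives \(w_1 + w_2 + 2b < 0\): a contradiction, so XOR is not linearly separable.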
Neural Networks renaissance • Idea: map the data to a feature space where the solution is linear
Neural Networks renaissance • Problem: this transformation is problem dependent
Neural Networks renaissance • Solution: multilayer perceptrons (FANN) • More biologically plausible • Internal layers learn the map
Architecture: regression • Each output corresponds to a response
Architecture: classification • Each output corresponds to a class, such that … • Training data have to be coded by 0-1 response variables
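The coding condition on this slide is not shown above; the usual convention, stated here as an assumption, is one-hot (0-1) coding of the targets:
\[ t_k^{(i)} = \begin{cases} 1 & \text{if pattern } i \text{ belongs to class } k \\ 0 & \text{otherwise,} \end{cases} \qquad \text{so that } \sum_k t_k^{(i)} = 1, \]
and the \(k\)-th output of the network is read as a score (or posterior estimate) for class \(k\).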
Universal approximation theorem • Theorem: Let \(\sigma\) be an admissible activation function and let \(K\) be a compact subset of \(\mathbb{R}^d\). Then, for any continuous function \(f : K \to \mathbb{R}\) and for any \(\varepsilon > 0\), there exists a one-hidden-layer network \(g(x) = \sum_{j=1}^{N} \alpha_j\, \sigma(w_j^{\top} x + b_j)\) such that \(|f(x) - g(x)| < \varepsilon\) for all \(x \in K\).
Universal approximation theorem • Admissible activation functions …
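The slide's list of admissible functions is not reproduced above; typical admissible (sigmoidal) activations used in statements of the theorem, given here as an assumption, are bounded, non-constant, and monotonically increasing, e.g.
\[ \sigma(t) = \frac{1}{1 + e^{-t}}, \qquad \tanh(t) = \frac{e^{t} - e^{-t}}{e^{t} + e^{-t}}. \]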
Universal approximation theorem • Extensions: other output activation functions, other norms …
Fitting Neural Networks • The back-propagation algorithm: A generalization of the delta rule for multilayer perceptrons • It is a gradient descent algorithm for the quadratic loss function
Back-propagation • Gradient descent generates a sequence of approximations related as follows
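The recursion itself is not shown above; written out for the network weights \(w\), quadratic error \(E\), and learning rate \(\rho\):
\[ w^{(t+1)} = w^{(t)} - \rho \,\nabla_{w} E\big(w^{(t)}\big). \]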
Back-propagation equations • Why “back” propagation? …
Back-propagation Algorithm 1. Initialize the weights and the thresholds 2. For each example i compute … 3. Update the weights according to … 4. Iterate 2 and 3 until convergence
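As a concrete companion to the algorithm above, a minimal NumPy sketch of batch back-propagation for one hidden layer with sigmoid activations and quadratic loss; the names, layer sizes, and learning rate are illustrative choices, not the course's code:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, D, hidden=5, rho=0.5, epochs=10000):
    """Batch back-propagation for a one-hidden-layer network with quadratic loss."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], D.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden));  b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        # forward pass
        H = sigmoid(X @ W1 + b1)                 # hidden activations
        Y = sigmoid(H @ W2 + b2)                 # network outputs
        # backward pass: error signals for E = 1/2 * sum (D - Y)^2
        delta2 = (Y - D) * Y * (1 - Y)           # output layer
        delta1 = (delta2 @ W2.T) * H * (1 - H)   # propagated back to hidden layer
        # gradient-descent updates
        W2 -= rho * H.T @ delta2; b2 -= rho * delta2.sum(axis=0)
        W1 -= rho * X.T @ delta1; b1 -= rho * delta1.sum(axis=0)
    return W1, b1, W2, b2

# usage: learn XOR, the function a single perceptron cannot represent
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train(X, D)
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))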
Stochastic Back-propagation 1. Initialize the weights and the thresholds 2. For each example i, compute … and update the weights 3. Iterate 2 until convergence