Multi-Layer Perceptron Ranga Rodrigo February 8, 2014
Introduction • A perceptron can only be a linear classifier. • We can build a network of neurons (perceptron-like structures) with an input layer, one or more hidden layers, and an output layer. • Each layer consists of many neurons, and the outputs of one layer are fed as inputs to all neurons of the next layer.
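The layered structure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the slides' own code; the layer sizes, the random weights, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N0 = 3 inputs, one hidden layer of 4 neurons,
# an output layer of 2 neurons.
layer_sizes = [3, 4, 2]
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def sigmoid(s):
    # Each neuron may be a sigmoidal neuron, as the slide says.
    return 1.0 / (1.0 + np.exp(-s))

def forward(x):
    # The output of each layer is fed as input to all neurons of the next.
    for W in weights:
        x = sigmoid(W @ x)
    return x

y = forward(np.array([0.5, -1.0, 2.0]))
```

Because every unit is sigmoidal, each component of `y` lies strictly between 0 and 1.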
[Figure: layered network — Layer 1, Layer 2, ..., Layer k, ..., Layer L (output layer), with an N1 × N2 weight matrix between successive layers]
Description of the MLP • Each layer k = 1, ..., L contains N_k elements (neurons), denoted N_k^i. • Each neuron may be a sigmoidal neuron. • There are N_0 inputs, to which the signals x_1(t), ..., x_{N_0}(t) are applied, written as the vector x(t) = [x_1(t), ..., x_{N_0}(t)]^T. • The output signal of the i-th neuron in the k-th layer is denoted y_i^(k)(t), i = 1, ..., N_k, k = 1, ..., L.
Description of Parameters • Input vector for the k-th layer: x^(k)(t) = [x_0^(k)(t), x_1^(k)(t), ..., x_{N_{k-1}}^(k)(t)]^T. • The input for the k-th layer comes from the output of the (k-1)-th layer, x_i^(k)(t) = y_i^(k-1)(t) (except for k = 1, whose inputs are the external signals, and the bias input i = 0). • Weights of neuron i in layer k: w_{ij}^(k), j = 0, ..., N_{k-1}.
Forward Pass • Each neuron computes s_i^(k)(t) = sum_{j=0}^{N_{k-1}} w_{ij}^(k) x_j^(k)(t) and outputs y_i^(k)(t) = f(s_i^(k)(t)). • Output signals of the L-th layer: y_1^(L)(t), ..., y_{N_L}^(L)(t). • Desired output signals: d_1(t), ..., d_{N_L}(t).
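A forward pass that keeps every layer's output (needed later for backpropagation) can be sketched as follows. The layer sizes, input x, and desired signals d are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [3, 4, 2]
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward_all(x, weights):
    # Store each layer's output; outputs[-1] is y^(L), the network output.
    outputs = [x]
    for W in weights:
        outputs.append(sigmoid(W @ outputs[-1]))
    return outputs

x = np.array([0.5, -1.0, 2.0])
d = np.array([1.0, 0.0])                        # desired signals d_1(t), d_2(t)
outputs = forward_all(x, weights)
error = 0.5 * np.sum((d - outputs[-1]) ** 2)    # squared-error measure
```

The squared error between the L-th layer's outputs and the desired signals is what backpropagation will minimize.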
Backpropagation • Weight update: w_{ij}^(k)(t+1) = w_{ij}^(k)(t) + eta * delta_i^(k)(t) * x_j^(k)(t), where eta is the learning rate and delta_i^(k)(t) is the error term propagated back from the output layer.
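One full backpropagation step for a sigmoid MLP with squared-error loss can be sketched as below. The error terms use f'(s) = y(1 - y) for the sigmoid; the sizes, learning rate, and training pair are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
layer_sizes = [3, 4, 2]
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(x, d, weights, eta=0.2):
    # Forward pass: keep each layer's output for the backward pass.
    ys = [x]
    for W in weights:
        ys.append(sigmoid(W @ ys[-1]))
    # Output-layer error term: delta = (y - d) * f'(s), with f'(s) = y(1 - y).
    delta = (ys[-1] - d) * ys[-1] * (1.0 - ys[-1])
    for k in reversed(range(len(weights))):
        grad = np.outer(delta, ys[k])            # gradient of the error w.r.t. W_k
        if k > 0:                                # propagate delta to the layer below
            delta = (weights[k].T @ delta) * ys[k] * (1.0 - ys[k])
        weights[k] = weights[k] - eta * grad     # the weight update
    return weights

x = np.array([0.5, -1.0, 2.0])
d = np.array([1.0, 0.0])

def error(weights):
    y = x
    for W in weights:
        y = sigmoid(W @ y)
    return 0.5 * np.sum((d - y) ** 2)

e0 = error(weights)
for _ in range(100):
    weights = backprop_step(x, d, weights)
e1 = error(weights)
```

Note that each layer's delta is computed from the weights *before* they are updated, so the backward pass uses a consistent set of weights; repeated steps drive the squared error down.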