Explore the Multilayer Perceptron (MLP): its architecture, signal flow, and the backpropagation learning rule. Covers the neurons' sigmoidal activation functions, the training process, the MLP's role as a universal approximator, and Lab Project 2.
Artificial Neural Networks
ECE.09.454/ECE.09.560, Fall 2006
Lecture 4, October 9, 2006
Shreekanth Mandayam, ECE Department, Rowan University
http://engineering.rowan.edu/~shreek/fall06/ann/
Plan
• Recall: Multilayer Perceptron
  • Architecture
  • Signal Flow
  • Learning rule: Backpropagation
• Lab Project 2
Multilayer Perceptron (MLP): Architecture
[Figure: fully connected MLP with inputs x1, x2, x3 feeding two hidden layers of sigmoidal (φ) neurons and an output layer producing outputs y1, y2; fixed +1 bias inputs at each layer; connection weights labeled wji, wkj, and wlk between successive layers.]
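As a concrete rendering of this architecture, the sketch below defines one weight matrix per layer of connections for a network with the figure's three inputs and two outputs. This is an illustrative assumption, not from the lecture: the hidden-layer widths, the NumPy initialization, and the variable names are all mine.

```python
import numpy as np

# Illustrative sizes: 3 inputs and 2 outputs as in the figure;
# the hidden widths (4, 4) are arbitrary assumptions.
n_i, n_j, n_k, n_l = 3, 4, 4, 2

rng = np.random.default_rng(0)
# One weight matrix per layer of connections (biases handled separately).
w_ji = rng.normal(scale=0.1, size=(n_j, n_i))  # input layer -> hidden layer 1
w_kj = rng.normal(scale=0.1, size=(n_k, n_j))  # hidden layer 1 -> hidden layer 2
w_lk = rng.normal(scale=0.1, size=(n_l, n_k))  # hidden layer 2 -> output layer
```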
MLP: Characteristics
[Figure: logistic sigmoid φ(t), rising from 0 toward 1, with φ(0) = 0.5.]
• Neurons possess sigmoidal (logistic) activation functions (see the sketch below)
• Contains one or more “hidden layers”
• Trained using the “backpropagation” algorithm
• An MLP with one hidden layer is a “universal approximator”
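The logistic activation in the plot is straightforward to code. A minimal sketch, assuming NumPy; the function names are mine, not the lecture's:

```python
import numpy as np

def sigmoid(v):
    """Logistic (sigmoidal) activation: phi(v) = 1 / (1 + exp(-v))."""
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_prime(v):
    """Derivative phi'(v) = phi(v) * (1 - phi(v)); needed by backpropagation."""
    s = sigmoid(v)
    return s * (1.0 - s)
```

The convenient derivative form, expressible in terms of the output itself, is one reason the logistic function pairs so naturally with backpropagation.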
MLP: Signal Flow
[Figure: function signals propagate forward through the network of φ neurons; error signals propagate backward.]
• Computations at each node j (sketched below):
  • Neuron output, yj
  • Gradient vector, ∂E/∂wji
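A node-level sketch of those two computations, assuming NumPy and hypothetical names: the forward function signal yj, and the gradient ∂E/∂wji expressed through the local gradient δj that the backward pass delivers to node j.

```python
import numpy as np

def node_forward(w_j, y_prev):
    """Function signal at node j: y_j = phi(v_j), with induced local field
    v_j = sum_i w_ji * y_i (index 0 is a bias weight on a fixed +1 input)."""
    y_aug = np.concatenate(([1.0], y_prev))   # prepend the +1 bias input
    v_j = w_j @ y_aug                         # induced local field
    return 1.0 / (1.0 + np.exp(-v_j)), y_aug  # sigmoid output, augmented inputs

def node_gradient(delta_j, y_aug):
    """Error signal at node j: dE/dw_ji = -delta_j * y_i, where delta_j is the
    local gradient propagated backward to node j."""
    return -delta_j * y_aug
```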
MLP Training
[Figure: nodes i, j, k from left to right; the forward pass moves left to right, the backward pass right to left.]
• Forward Pass
  • Fix wji(n)
  • Compute yj(n)
• Backward Pass
  • Calculate the local gradients δj(n)
  • Update weights: wji(n) → wji(n+1) (see the sketch below)
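Putting the two passes together, here is a minimal single-hidden-layer training iteration. This is a sketch under stated assumptions, not the lecture's code: NumPy, sigmoid activations throughout, an illustrative learning rate eta, and biases stored in column 0 of each weight matrix.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_step(x, d, w_ji, w_kj, eta=0.1):
    """One backpropagation iteration for a 1-hidden-layer MLP.
    x: input, d: desired output; w_ji, w_kj: weight matrices, bias in column 0."""
    # Forward pass: fix the weights, compute the function signals.
    x_aug = np.concatenate(([1.0], x))
    y_j = sigmoid(w_ji @ x_aug)               # hidden outputs y_j(n)
    y_aug = np.concatenate(([1.0], y_j))
    y_k = sigmoid(w_kj @ y_aug)               # network outputs

    # Backward pass: propagate error signals, form the local gradients delta.
    e_k = d - y_k                             # output error
    delta_k = e_k * y_k * (1.0 - y_k)         # output-layer local gradients
    delta_j = y_j * (1.0 - y_j) * (w_kj[:, 1:].T @ delta_k)  # hidden-layer

    # Weight update: w(n+1) = w(n) + eta * delta * y.
    w_kj += eta * np.outer(delta_k, y_aug)
    w_ji += eta * np.outer(delta_j, x_aug)
    return w_ji, w_kj
```

Calling train_step repeatedly over a training set gives on-line (pattern-by-pattern) learning; batch variants accumulate the updates across patterns before applying them.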
Lab Project 2 • http://engineering.rowan.edu/~shreek/fall06/ann/lab2.html