
Multilayer Perceptron (MLP) Architecture and Learning Rule Overview

Explore Multilayer Perceptron (MLP) architecture with hidden layers, signal flow, and backpropagation learning rule. Learn about neurons' activation functions, training process, and how MLP serves as a universal approximator. Follow the signal flow and computations in MLP training and lab project examples.



Presentation Transcript


  1. Artificial Neural Networks ECE.09.454/ECE.09.560 Fall 2006 Lecture 4 October 9, 2006 Shreekanth Mandayam ECE Department Rowan University http://engineering.rowan.edu/~shreek/fall06/ann/

  2. Plan • Recall: Multilayer Perceptron • Architecture • Signal Flow • Learning rule - Backpropagation • Lab Project 2

  3. Multilayer Perceptron (MLP): Architecture • [Figure: feedforward network with inputs x1, x2, x3 in the input layer, one or more hidden layers of neurons φ, and an output layer producing y1, y2; each layer also receives a fixed bias input of 1; weights wji, wkj, wlk connect successive layers]
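The layered structure in the slide's figure can be sketched as a list of weight matrices, one per pair of adjacent layers. The hidden-layer widths below are illustrative assumptions, not values from the lecture; only the 3 inputs and 2 outputs come from the figure.

```python
import numpy as np

# Illustrative layer sizes loosely matching the slide's figure:
# 3 inputs (x1..x3), two hidden layers (widths assumed), 2 outputs (y1, y2).
rng = np.random.default_rng(0)
sizes = [3, 4, 4, 2]

# One weight matrix per connection between successive layers
# (row j, column i holds w_ji, the weight from node i into node j).
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]

for k, W in enumerate(weights):
    print(f"layer {k} -> layer {k + 1}: weight matrix shape {W.shape}")
```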

  4. MLP: Characteristics • [Figure: logistic activation φ(t), rising through φ(0) = 0.5 and saturating toward 0 and 1] • Neurons possess sigmoidal (logistic) activation functions • Contains one or more “hidden layers” • Trained using the “backpropagation” algorithm • MLP with 1 hidden layer is a “universal approximator”
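The logistic activation plotted on the slide, φ(t) = 1 / (1 + e^(-t)), is a one-liner; a minimal sketch:

```python
import numpy as np

def sigmoid(t):
    """Logistic activation phi(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(-t))

# Matches the slide's plot: phi(0) = 0.5, saturating toward 0 and 1.
print(sigmoid(0.0))                  # 0.5
print(sigmoid(-5.0), sigmoid(5.0))
```

Its derivative, φ'(t) = φ(t)(1 − φ(t)), is what makes backpropagation through these neurons cheap to compute.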

  5. MLP: Signal Flow • [Figure: function signals propagate forward through the network; error signals propagate backward] • Computations at each node, j • Neuron output, yj • Gradient vector, ∂E/∂wji
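The two per-node computations named on the slide can be sketched for a single output neuron j. The numbers below are made-up illustrations; the formulas are the standard ones for a logistic neuron with squared error E = ½e², where e_j = d_j − y_j.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Forward (function signal): neuron j combines outputs y_i of the layer before.
y_prev = np.array([0.2, -0.5, 0.9])   # illustrative inputs to node j
w_j    = np.array([0.1,  0.4, -0.3])  # weights w_ji into node j
v_j = w_j @ y_prev                    # induced local field, sum_i w_ji * y_i
y_j = sigmoid(v_j)                    # neuron output y_j

# Backward (error signal): for an output node, the local gradient is
# delta_j = e_j * phi'(v_j), and phi'(v_j) = y_j * (1 - y_j) for the logistic.
d_j = 1.0                             # desired response (assumed)
e_j = d_j - y_j
delta_j = e_j * y_j * (1.0 - y_j)

# Gradient w.r.t. each incoming weight: dE/dw_ji = -delta_j * y_i
grad = -delta_j * y_prev
print(y_j, grad)
```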

  6. MLP Training • [Figure: layers i → j → k, traversed left-to-right on the forward pass and right-to-left on the backward pass] • Forward Pass • Fix wji(n) • Compute yj(n) • Backward Pass • Calculate δj(n) • Update weights wji(n+1)
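The two-pass procedure on the slide can be sketched end-to-end for a small 2-4-1 network trained on XOR. XOR is a standard classroom example, not necessarily the Lab 2 task, and the learning rate, hidden width, and epoch count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# XOR data: a classic illustration of why a hidden layer is needed.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
D = np.array([[0.], [1.], [1.], [0.]])
Xb = np.hstack([X, np.ones((4, 1))])   # append the fixed "1" bias input

W1 = rng.standard_normal((3, 4))       # input (+bias) -> hidden weights
W2 = rng.standard_normal((5, 1))       # hidden (+bias) -> output weights
eta = 0.5                              # learning rate (assumed)

def forward(Xb):
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    return H, Hb, sigmoid(Hb @ W2)

_, _, Y0 = forward(Xb)
loss0 = np.mean((D - Y0) ** 2)

for epoch in range(10000):
    # Forward pass: fix w(n), compute y(n)
    H, Hb, Y = forward(Xb)
    # Backward pass: calculate local gradients delta(n)...
    dY = (D - Y) * Y * (1 - Y)              # output-layer delta
    dH = (dY @ W2[:-1].T) * H * (1 - H)     # hidden-layer delta (bias row dropped)
    # ...then update weights: w(n+1) = w(n) + eta * delta * y
    W2 += eta * Hb.T @ dY
    W1 += eta * Xb.T @ dH

_, _, Y = forward(Xb)
loss = np.mean((D - Y) ** 2)
print(f"MSE before training: {loss0:.3f}, after: {loss:.4f}")
```

This is batch gradient descent; the lecture's notation wji(n) → wji(n+1) describes the same update applied per presentation n of a training pattern.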

  7. Lab Project 2 • http://engineering.rowan.edu/~shreek/fall06/ann/lab2.html

  8. Summary
