  1. Multi-Layer Perceptrons Michael J. Watts http://mike.watts.net.nz

  2. Lecture Outline
  - Perceptron revision
  - Multi-Layer Perceptrons
  - Terminology
  - Advantages
  - Problems

  3. Perceptron Revision
  - Two neuron layer networks
  - Single layer of adjustable weights
  - Feed forward network
  - Cannot handle non-linearly-separable problems
  - Supervised learning algorithm
  - Mostly used for classification
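
For concreteness, here is a minimal sketch of the perceptron just described: a single layer of adjustable weights, a step activation, and the classic perceptron learning rule. The class name, learning rate, and the AND example are illustrative choices, not taken from the lecture.

```python
import numpy as np

# Minimal perceptron sketch: one layer of adjustable weights, feed-forward,
# step activation, trained with the classic perceptron learning rule.
class Perceptron:
    def __init__(self, n_inputs, learning_rate=0.1):
        self.w = np.zeros(n_inputs)
        self.b = 0.0
        self.lr = learning_rate

    def predict(self, x):
        # Step activation: output 1 if the weighted sum exceeds 0, else 0
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=20):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += self.lr * error * xi   # adjust the single weight layer
                self.b += self.lr * error

# Learns AND (linearly separable); the same model cannot learn XOR,
# which is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
p = Perceptron(2)
p.train(X, np.array([0, 0, 0, 1]))
print([p.predict(xi) for xi in X])   # [0, 0, 0, 1]
```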

  4. Multi-Layer Perceptrons
  - Otherwise known as MLPs
  - Adds an additional layer (or layers) of neurons to a perceptron
  - Additional layer called the hidden (or intermediate) layer
  - Additional layer of adjustable connections

  5. Multi-Layer Perceptrons
  - Proposed mid-1960s, but learning algorithms not available until mid-1980s
  - Continuous inputs
  - Continuous outputs
  - Continuous activation functions used, e.g. sigmoid, tanh
  - Feed forward networks
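
A rough sketch of the feed-forward pass through such a network, assuming one hidden layer and sigmoid activations; the layer sizes and random weights below are placeholders for illustration, not values from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feed-forward pass through an MLP with one hidden (intermediate) layer of
# sigmoid neurons. Layer sizes and weights are illustrative placeholders.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 4, 2

W1 = rng.normal(size=(n_hidden, n_in))   # input-to-hidden connections
W2 = rng.normal(size=(n_out, n_hidden))  # hidden-to-output connections

x = np.array([0.2, -0.5, 0.9])           # continuous inputs
h = sigmoid(W1 @ x)                      # hidden layer activations
y = sigmoid(W2 @ h)                      # continuous outputs
print(h, y)
```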

  6. Multi-Layer Perceptrons

  7. Multi-Layer Perceptrons

  8. Multi-Layer Perceptrons
  - MLPs can also have biases
  - A bias is an additional, external input
  - Input to the bias node is always one
  - Bias performs the task of a threshold
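
The "input is always one" idea can be shown directly: folding the bias in as an extra input fixed at 1 gives exactly the same output as an explicit bias term, with the bias weight acting as a threshold. This is a small illustrative sketch, not code from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A bias can be treated as one extra input that is always 1, so its weight
# plays the role of a threshold. The two forms below give identical outputs.
rng = np.random.default_rng(1)
w = rng.normal(size=3)        # weights for 3 real inputs
bias_weight = rng.normal()    # weight on the bias node

x = np.array([0.5, -1.0, 0.25])

# Explicit bias term
out_explicit = sigmoid(w @ x + bias_weight * 1.0)

# Bias folded in as an extra input fixed at 1
x_aug = np.append(x, 1.0)
w_aug = np.append(w, bias_weight)
out_folded = sigmoid(w_aug @ x_aug)

print(out_explicit, out_folded)   # same value
```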

  9. Multi-Layer Perceptrons

  10. Multi-Layer Perceptrons
  - Able to model non-linearly separable functions
  - Each hidden neuron is equivalent to a perceptron
  - Each hidden neuron adds a hyperplane to the problem space
  - Requires sufficient neurons to partition the input space appropriately
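
To illustrate how each hidden neuron adds a hyperplane, the sketch below hand-sets two hidden neurons (two hyperplanes) so that the network computes XOR, which no single perceptron can. The weights are chosen by hand for clarity rather than learned, and are not from the lecture.

```python
import numpy as np

def step(z):
    # Step activation: 1 if the weighted sum exceeds the threshold, else 0
    return 1 if z > 0 else 0

# Two hidden neurons carve the input space with two hyperplanes,
# making XOR separable for the output neuron.
def xor_mlp(x1, x2):
    x = np.array([x1, x2])
    h1 = step(np.dot([1, 1], x) - 0.5)   # hyperplane: x1 + x2 > 0.5  (OR)
    h2 = step(np.dot([1, 1], x) - 1.5)   # hyperplane: x1 + x2 > 1.5  (AND)
    return step(h1 - h2 - 0.5)           # output: OR and not AND = XOR

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```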

  11. Terminology
  - Some define MLPs according to the number of connection layers
  - Others define MLPs according to the number of neuron layers
  - Some don't count the input neuron layer, since it doesn't perform any processing
  - Some refer to the number of hidden neuron layers

  12. Terminology
  - Best to specify which, e.g. "three neuron layer MLP", "two connection layer MLP", or "three neuron layer, one hidden layer MLP"

  13. Advantages
  - An MLP with one hidden layer of sufficient size can approximate any continuous function to any desired accuracy (Kolmogorov theorem)
  - MLPs can learn conditional probabilities
  - MLPs are multivariate non-linear regression models

  14. Advantages
  - Learning models: several ways of training an MLP; backpropagation is the most common
  - Universal function approximators
  - Able to generalise to new data, i.e. can accurately identify / model previously unseen examples
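
As a hedged illustration of backpropagation, the most common training method mentioned above, here is a minimal batch backpropagation loop for a one-hidden-layer MLP learning XOR with a squared-error cost. The layer sizes, learning rate, epoch count, and random seed are arbitrary illustrative choices, and results can vary with initialisation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal batch backpropagation for a one-hidden-layer MLP (squared error).
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)          # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)           # delta at the hidden layer

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```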

  15. Problems with MLP
  - Choosing the number of hidden layers: how many are enough?
  - One should be sufficient, according to the Kolmogorov theorem
  - Two will always be sufficient

  16. Problems with MLP
  - Choosing the number of hidden nodes: how many are enough?
  - Usually, the number of connections in the network should be less than the number of training examples
  - As the number of connections approaches the number of training examples, generalisation decreases
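
The rule of thumb about connections versus training examples can be checked with a small helper; the helper name and the example figures below are made up for illustration only.

```python
# Rough rule of thumb from the slide: keep the number of adjustable
# connections (weights plus bias weights) below the number of training examples.
def count_connections(layer_sizes, biases=True):
    weights = sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    bias_terms = sum(layer_sizes[1:]) if biases else 0
    return weights + bias_terms

n_train = 150                      # hypothetical training set size
sizes = [4, 8, 3]                  # input, hidden, output neuron counts
n_params = count_connections(sizes)
print(n_params, "connections for", n_train, "training examples")
print("risk of poor generalisation" if n_params >= n_train else "within the rule of thumb")
```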

  17. Problems with MLP
  - Representational capacity
  - Catastrophic forgetting: occurs when a trained network is further trained on new data
  - The network forgets what it learned about the old data and only knows about the new data

  18. Problems with MLP
  - Initialisation of weights
  - Random initialisation can cause problems with training: the network may start in a bad spot
  - Can initialise using statistics of the training data, e.g. PCA or nearest neighbour
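
A loose sketch contrasting plain random initialisation with a statistics-based (PCA-style) initialisation of the input-to-hidden weights. The data, shapes, and the exact PCA recipe here are assumptions for illustration, not the method described in the lecture.

```python
import numpy as np

rng = np.random.default_rng(7)
X_train = rng.normal(size=(200, 5))          # placeholder training inputs

# 1. Plain random initialisation: small values, but the starting spot is blind
W_random = rng.normal(scale=0.1, size=(5, 3))

# 2. Statistics-based initialisation (loose sketch of the PCA idea): point the
#    input-to-hidden weights along the leading principal components of the data
X_centred = X_train - X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_centred, full_matrices=False)
W_pca = Vt[:3].T                             # first 3 principal directions as columns

print(W_random.shape, W_pca.shape)
```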

  19. Problems with MLP
  - Weight initialisation
  - Can initialise using an evolutionary algorithm
  - Initialisation from decision trees
  - Initialisation from rules

  20. Summary
  - MLPs overcome the linear separability problem of perceptrons
  - Add an additional layer of neurons
  - Additional layer of variable connections
  - No agreement on terminology to use, so be precise!

  21. Summary
  - Convergence is guaranteed over continuous functions
  - Problems exist, but solutions also exist
