Introduction to Neural Networks Freek Stulp
Overview • Biological Background • Artificial Neuron • Classes of Neural Networks • Perceptrons • Multi-Layered Feed-Forward Networks • Recurrent Networks • Conclusion
Biological Background • Neuron consists of: • Cell body • Dendrites • Axon • Synapses • Neural activation: • Through dendrites/axon • Synapses have different strengths
Artificial Neuron • Input links (dendrites), unit (cell body), output links (axon) • [Figure: incoming activations a_j arrive over weighted links W_ji; the unit emits activation a_i] • in_i = Σ_j W_ji a_j • a_i = g(in_i)
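A minimal Python sketch of these unit equations; the slide leaves the activation function g unspecified, so the sigmoid here is an assumed choice:

```python
import math

def unit_activation(inputs, weights):
    """in_i = sum_j W_ji * a_j, then a_i = g(in_i)."""
    in_i = sum(w_ji * a_j for w_ji, a_j in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-in_i))  # g = sigmoid (an assumed choice)

print(unit_activation([0.5, 1.0], [0.8, -0.3]))  # activation of one unit
```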
Class I: Perceptron • [Figure: inputs a1, a2 and a fixed input -1 feed a single unit over weights W1, W2, W0; the unit outputs a] • in = Σ_j W_j a_j • a = g(in), with the threshold function g(in) = 0 if in < 0, 1 if in > 0 • a = g(-W0 + W1 a1 + W2 a2)
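As an illustration, a perceptron with this step function and hand-picked (assumed) weights computes boolean OR:

```python
def perceptron(a1, a2, w0, w1, w2):
    """a = g(-W0 + W1*a1 + W2*a2), with g a step function."""
    in_ = -w0 + w1 * a1 + w2 * a2
    return 1 if in_ > 0 else 0

# Hand-picked weights realizing boolean OR (illustrative values):
for a1 in (0, 1):
    for a2 in (0, 1):
        print(a1, a2, "->", perceptron(a1, a2, 0.5, 1.0, 1.0))
```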
Learning in Perceptrons • Perceptrons can learn mappings from inputs I to outputs O by changing the weights W • Training set D: inputs I0, I1, ..., In and targets T0, T1, ..., Tn • Example: boolean OR • The output O of the network is not necessarily equal to the target T!
Learning in Perceptrons • Error often defined as: E(W) = ½ Σ_{d∈D} (t_d − o_d)² • Go towards the minimum error! • Update rules: • w_i ← w_i + Δw_i • Δw_i = −η ∂E/∂w_i • ∂E/∂w_i = ∂/∂w_i ½ Σ_{d∈D} (t_d − o_d)² = Σ_{d∈D} (t_d − o_d)(−i_{id}) • This is called gradient descent
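A sketch of this gradient-descent rule on the boolean OR training set; following the derivation, o_d is taken as the unthresholded weighted sum (the delta rule), and η = 0.1 and the epoch count are illustrative choices, not from the slides:

```python
D = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # (inputs, target)
eta = 0.1
w = [0.0, 0.0, 0.0]  # w[0] is the bias weight W0, with fixed input -1

for epoch in range(100):
    dw = [0.0, 0.0, 0.0]
    for (a1, a2), t in D:
        i = (-1.0, a1, a2)                     # prepend the bias input
        o = sum(wk * ik for wk, ik in zip(w, i))
        for k in range(3):
            dw[k] += eta * (t - o) * i[k]      # Dw_i = eta * sum_d (t_d - o_d) i_id
    w = [wk + dwk for wk, dwk in zip(w, dw)]

for (a1, a2), t in D:
    o = sum(wk * ik for wk, ik in zip(w, (-1.0, a1, a2)))
    print((a1, a2), round(o, 2), "target", t)  # near the targets in the least-squares sense
```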
Class II: Multi-layer Feed-forward Networks • Feed-forward: output links connect only to input links in the next layer • Multiple layers: one or more hidden layers • [Figure: input, hidden, and output layers] • Complex non-linear functions can be represented
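To illustrate, a forward pass through one hidden layer with hand-picked (assumed) weights represents XOR, a non-linear function no single perceptron can compute; the sigmoid g is also an assumed choice:

```python
import math

def g(x):
    return 1.0 / (1.0 + math.exp(-x))  # sigmoid activation (assumed)

def forward(a1, a2):
    h1 = g(20 * a1 + 20 * a2 - 10)    # hidden unit ~ OR
    h2 = g(20 * a1 + 20 * a2 - 30)    # hidden unit ~ AND
    return g(20 * h1 - 20 * h2 - 10)  # output ~ OR AND NOT AND = XOR

for a1 in (0, 1):
    for a2 in (0, 1):
        print(a1, a2, "->", round(forward(a1, a2)))
```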
Learning in MLFF Networks • For the output layer, weight updating is similar to perceptrons • Problem: what are the errors in the hidden layer? • Backpropagation algorithm: for each hidden layer (from output to input), for each unit in the layer, determine how much it contributed to the errors in the layer above, and adapt its weights according to this contribution • This is also gradient descent
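A minimal backpropagation sketch under assumed choices (sigmoid units, the squared error from the perceptron slide, a small XOR network, η = 0.5):

```python
import math, random

def g(x):
    return 1.0 / (1.0 + math.exp(-x))  # sigmoid, so g'(in) = o * (1 - o)

random.seed(0)
n_in, n_hid = 2, 3
W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]  # +1 for bias weights
eta = 0.5
D = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

for epoch in range(10000):
    for (a1, a2), t in D:
        x = (-1.0, a1, a2)  # fixed bias input -1, as on the perceptron slide
        h = [g(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        hb = [-1.0] + h
        o = g(sum(w * hi for w, hi in zip(W2, hb)))
        # Output layer: update similar to perceptrons.
        delta_o = (t - o) * o * (1 - o)
        # Hidden layer: each unit's error is the output error propagated
        # back through its outgoing weight (its contribution to delta_o).
        delta_h = [h[j] * (1 - h[j]) * W2[j + 1] * delta_o for j in range(n_hid)]
        for j in range(n_hid + 1):
            W2[j] += eta * delta_o * hb[j]
        for j in range(n_hid):
            for k in range(n_in + 1):
                W1[j][k] += eta * delta_h[j] * x[k]

for (a1, a2), t in D:
    x = (-1.0, a1, a2)
    h = [g(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = g(sum(w * hi for w, hi in zip(W2, [-1.0] + h)))
    print((a1, a2), round(o, 2), "target", t)  # outputs should approach the targets,
                                               # though this depends on the random init
```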
Class III: Recurrent Networks • No restrictions on connections • [Figure: input, hidden, and output units with cyclic connections] • Behaviour more difficult to predict/understand
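A tiny sketch of the recurrent idea with illustrative (assumed) weights: the hidden activation feeds back into itself, so the output depends on the input history, not just the current input:

```python
import math

# One input unit, one hidden unit with a self-connection, one output unit.
w_in, w_rec, w_out = 1.0, 0.9, 1.0   # illustrative weights
h = 0.0                              # hidden state, carried across time steps
for t, x in enumerate([1, 0, 0, 0, 1]):
    h = math.tanh(w_in * x + w_rec * h)  # past activation feeds back in
    print("t =", t, "output =", round(w_out * h, 3))
```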
Conclusion • Inspiration comes from biology, though artificial brains are still very far away • Perceptrons are too simple for most problems • MLFF networks are good function approximators; many of your articles use these networks! • Recurrent networks are complex but useful too
Literature • Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig • Machine Learning, Tom M. Mitchell