
Neural Networks



Presentation Transcript


  1. Neural Networks John Riebe and Adam Profitt

  2. What is a neuron?
   • P - Elements of the input vector (R of them)
   • W - Weights
   • ∑ - Summer
   • b - Bias
   • n - Sum of the weighted input elements and the bias
   • ƒ - Transfer function
   • a - Output
  Weights: weights are scalars that multiply each input element.
  Summer: the summer adds the weighted input elements together with the bias.
  Bias: a bias is a number that is added to the total from the summer.
  Transfer function: a transfer function is one of several specific functions used in neural networking; it maps the summed input to the neuron's output, so a = ƒ(Wp + b).
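A minimal numeric sketch of this computation in MATLAB (the values of p, W, and b are made up for illustration; tansig is a real toolbox transfer function):

    p = [0.5; -1.2; 2.0];   % input vector with R = 3 elements
    W = [0.4 -0.1 0.3];     % 1 x R row of weights
    b = 0.7;                % bias
    n = W*p + b;            % summer: weighted inputs plus the bias
    a = tansig(n)           % transfer function turns n into the output a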

  3. Layers of the Neural Network
   • There are only three different types of layers in a network:
     • The Input Layer
       • Passes the input vectors into each neuron of the first hidden layer
     • The Hidden Layers
       • Perform the bulk of the computation in most networks
       • Hidden layers are not always required
     • The Output Layer
       • Each neuron in the output layer outputs its own result
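A minimal sketch of data flowing through these layers in MATLAB (the layer sizes and random weights are assumptions for illustration):

    p  = [0.2; 0.8];                   % input layer: the input vector, passed along as-is
    W1 = rand(3, 2);  b1 = rand(3, 1); % hidden layer: 3 neurons do the bulk of the work
    a1 = tansig(W1*p + b1);
    W2 = rand(1, 3);  b2 = rand;       % output layer: each neuron emits its own result
    a2 = purelin(W2*a1 + b2)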

  4. Transfer Functions

  5. Types of Neural Networks
   • Perceptrons:
     • Used to classify data.
     • Apply the hard-limit transfer function.
     • Usually do not have any hidden layers.
   • Linear Filters:
     • Used to solve linearly separable problems.
     • Apply the linear transfer function.
   • Backpropagation networks:
     • Generally have only one hidden layer.
     • Can solve any reasonable problem.
     • Hidden layers use sigmoid transfer functions; outputs use the linear transfer function.
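As a sketch, each type has its own constructor in this era of the Neural Network Toolbox (newp and newlin are the toolbox's perceptron and linear-layer constructors; the input ranges here are made up):

    PR = [0 1; 0 1];          % min/max ranges for 2 inputs (example values)
    percep = newp(PR, 1);     % perceptron: one hard-limit neuron
    linfil = newlin(PR, 1);   % linear filter: one purelin neuron
    % backpropagation networks are built with newff (slide 8)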

  6. Training Neurons
Training a network sets the biases and weights in each neuron.
   • To train a network you need:
     • A network
     • An input
     • A target vector
   • There are many different types of training algorithms. To name a few:
     • Levenberg-Marquardt
     • BFGS quasi-Newton
     • Bayesian regularization
     • One-step secant
     • Random-order incremental
   • A training algorithm:
     • 1. Gives the network an input
     • 2. Receives the output
     • 3. Calculates the error between output and target
     • 4. Adjusts the weights and biases
     • 5. Goes back to step 1
Each pass the algorithm makes through these steps is called an epoch. Most networks go through many epochs. A minimal sketch of the loop follows below.
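This sketch uses the classic perceptron learning rule as the weight-update step (the rule choice and the AND-gate data are assumptions; the toolbox's train function, slide 9, normally runs this loop for you):

    P = [0 0 1 1; 0 1 0 1];   % four example input vectors (columns)
    T = [0 0 0 1];            % targets: the AND of each column
    W = zeros(1, 2);  b = 0;  % start with zero weights and bias
    for epoch = 1:20          % each full pass over the data is one epoch
        for k = 1:size(P, 2)
            a = hardlim(W*P(:,k) + b);  % steps 1-2: give an input, receive the output
            e = T(k) - a;               % step 3: error between output and target
            W = W + e*P(:,k)';          % step 4: adjust the weights...
            b = b + e;                  %         ...and the bias
        end                             % step 5: go back to step 1
    end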

  7. MATLAB Application

  8. The newff Function
   • Creates a feed-forward network
   • Syntax
     • net = newff
     • net = newff(PR,[S1 S2...Si],{TF1 TF2...TFi})
   • Description
     • net = newff creates a new network with a dialog box.
     • newff(PR,[S1 S2...Si],{TF1 TF2...TFi}) takes:
       • PR - R x 2 matrix of min and max values for R input elements.
       • Si - Size of the ith layer, for Nl layers.
       • TFi - Transfer function of the ith layer, default = 'tansig'.
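For example, using the old toolbox syntax shown above (the input ranges and layer sizes are made up):

    PR  = [0 1; -2 2];                             % R x 2 min/max matrix for R = 2 inputs
    net = newff(PR, [5 1], {'tansig' 'purelin'});  % 5 sigmoid hidden neurons, 1 linear output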

  9. The train Function
Trains a neural network.
   • Syntax
     • net = train(net,P,T)
   • Description
     • train trains a network. train(net,P,T) takes:
       • net - Neural network object.
       • P - Network inputs.
       • T - Network targets, default = zeros.
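Continuing the sketch from the newff example (the XOR data set is chosen here only as an example):

    P = [0 0 1 1; 0 1 0 1];   % network inputs, one vector per column
    T = [0 1 1 0];            % matching targets (XOR)
    net = train(net, P, T);   % repeatedly adjusts weights and biases, epoch by epoch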

  10. The sim Function
The sim function simulates a neural network: it feeds the network the input, P, and returns the results.
   • Syntax
     • a = sim(net,P)
   • Description
     • sim simulates neural networks. sim(net,P) takes:
       • net - Network.
       • P - Network inputs.
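Continuing the same sketch:

    a = sim(net, P)           % feed the inputs through the trained network
    % if training converged, a should be close to the targets T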

  11. Transfer Functions Revisited
   • Hard-Limit
     • a = hardlim(n): outputs either a 1 or a 0
   • Linear
     • a = purelin(n): outputs the scaled and summed input
   • Log-Sigmoid
     • a = logsig(n): squeezes the input to between 0 and 1
   • Tan-Sigmoid
     • a = tansig(n): squeezes the input to between -1 and 1
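A quick sketch comparing the four functions on the same inputs (all four are real toolbox functions; the sample points are arbitrary):

    n = -3:1:3;                                 % sample inputs
    [hardlim(n); purelin(n); logsig(n); tansig(n)]
    % row 1: 0 until n >= 0, then 1     row 2: n unchanged
    % row 3: squeezed into (0, 1)       row 4: squeezed into (-1, 1)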

  12. The Baum-Haussler Rule
The Baum-Haussler rule is one of the most useful rules of thumb for neural networks:

   Nhidden ≤ (Ntrain • Etolerance) / (Npts + Noutputs)

It gives an upper bound on the number of hidden neurons your network should need to function properly. This is NOT a law: it does not work in all situations, and sometimes you just have to use another method.
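A worked example of the rule with assumed values (all four quantities are made up for illustration):

    Ntrain     = 100;    % training samples (assumed)
    Etolerance = 0.1;    % acceptable error tolerance (assumed)
    Npts       = 4;      % input elements per sample (assumed)
    Noutputs   = 1;      % output neurons (assumed)
    Nhidden = floor((Ntrain * Etolerance) / (Npts + Noutputs))  % bound: at most 2 hidden neurons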

  13. Bibliography
Demuth, Howard, and Mark Beale. "Neural Network Toolbox User's Guide." The MathWorks, 1992-2003. URL: http://www.mathworks.com/access/helpdesk/help/toolbox/nnet/nnet.shtml
