
Neural Network Training Methods & Techniques

Understand how feed-forward and self-organizing neural networks are trained and used to solve estimation problems, including training through backpropagation and genetic learning. Explore the strengths and weaknesses of neural networks, along with considerations for input attributes, hidden layers, and termination conditions. Learn about sensitivity analysis and the detailed training process. Gain insight into network performance factors and the importance of weight adjustments for classification accuracy.


Presentation Transcript


  1. Chapter 10 Neural Network

  2. Chapter Objective • Understand how feed-forward networks are used to solve estimation problems. • Know how input and output data conversions are performed for neural networks. • Understand how feed-forward neural networks learn through backpropagation. • Know how genetic learning is applied to train feed-forward neural networks. • Know how self-organizing neural networks perform unsupervised clustering. • List the strengths and weaknesses of neural networks. Chapter 9

  3. Feed-Forward Neural Network

  4. Feed-Forward Neural Network

  5. Neural Network Training: A Conceptual View

  6. Neural Network Training: A Conceptual View

  7. Neural Network Training: A Conceptual View

  8. Neural Network Explanation Sensitivity analysis is a technique that has been successfully applied to gain insight into the effect individual attributes have on neural network output. The general process consists of the following steps: 1. Divide the data into a training set and a test set. 2. Train the network with the training data.

  9. Neural Network Explanation • 3. Use the test set data to create a new instance I. Each attribute value of I is the average of the corresponding attribute values within the test data. • 4. For each attribute: • a. Vary the attribute value within instance I and present the modified I to the network for classification. • b. Determine the effect the variations have on the output of the neural network. • c. The relative importance of each attribute is measured by the effect of attribute variations on network output.
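The sensitivity analysis steps above can be sketched in a few lines of code. This is a minimal illustration, not the book's implementation: `sensitivity_analysis`, the toy `predict` function, and the fixed perturbation `delta` are all assumptions made for the example.

```python
def sensitivity_analysis(predict, test_data, delta=0.1):
    """Rank attributes by how much varying each one (around the
    average test instance) changes the network output.
    `predict` maps an attribute list to a scalar output."""
    n = len(test_data[0])
    # Step 3: build instance I from per-attribute averages over the test set
    avg = [sum(row[i] for row in test_data) / len(test_data) for i in range(n)]
    baseline = predict(avg)
    importance = []
    for i in range(n):                      # Step 4: vary each attribute of I
        varied = list(avg)
        varied[i] += delta
        importance.append(abs(predict(varied) - baseline))
    # Rank attributes, most influential first
    return sorted(range(n), key=lambda i: importance[i], reverse=True)

# Toy "network": output depends strongly on attribute 0, weakly on attribute 1
test_data = [[0.2, 0.8, 0.5], [0.6, 0.4, 0.5], [0.4, 0.6, 0.5]]
ranking = sensitivity_analysis(lambda x: 5 * x[0] + 0.1 * x[1], test_data)
```

Here attribute 0 moves the output most, so it ranks first; attribute 2 never affects the output and ranks last.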

  10. General Considerations • The following is a partial list of choices that affect the performance of a neural network model: • What input attributes will be used to build the network? • How will the network output be represented? • How many hidden layers should the network contain? • How many nodes should there be in each hidden layer? • What condition will terminate network training?

  11. Neural Network Training: A Detailed View

  12. Neural Networks • Advantages • prediction accuracy is generally high • robust: works even when training examples contain errors • output may be discrete, real-valued, or a vector of several discrete or real-valued attributes • fast evaluation of the learned target function • Criticisms • long training time • difficult to understand the learned function (weights) • not easy to incorporate domain knowledge

  13. A Neuron [figure: inputs x0 … xn with weights w0 … wn feed a weighted sum, a bias term -mk is subtracted, and an activation function f produces the output y] • The n-dimensional input vector x is mapped into the variable y by means of the scalar product and a nonlinear function mapping: y = f(sum of wi*xi - mk)
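The single-neuron mapping just described can be written directly. A minimal sketch, assuming a sigmoid as the nonlinear function f; the function name `neuron` and the sample values are illustrative only.

```python
import math

def neuron(x, w, mu):
    """Single neuron: scalar product of inputs and weights,
    minus the threshold mu, passed through a sigmoid activation f."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - mu
    return 1.0 / (1.0 + math.exp(-net))   # f(net): sigmoid

# net = 0.5*0.4 + 0.9*0.6 - 0.2 = 0.54, so y = sigmoid(0.54)
y = neuron(x=[0.5, 0.9], w=[0.4, 0.6], mu=0.2)
```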

  14. Network Training • The ultimate objective of training • obtain a set of weights that makes almost all the tuples in the training data classified correctly • Steps • Initialize weights with random values • Feed the input tuples into the network one by one • For each unit • Compute the net input to the unit as a linear combination of all the inputs to the unit • Compute the output value using the activation function • Compute the error • Update the weights and the bias
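The training steps above can be sketched for a single sigmoid unit. This is an illustrative sketch, not the book's algorithm: the learning rate, epoch count, and the AND-function example are assumptions.

```python
import math, random

def train(data, targets, epochs=1000, lr=0.5):
    """Train one sigmoid unit: for each tuple, compute the net input,
    the activation, the error, then update the weights and the bias."""
    random.seed(0)
    n = len(data[0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # random initial weights
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(data, targets):                # feed tuples one by one
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            o = 1.0 / (1.0 + math.exp(-net))           # activation function
            err = (t - o) * o * (1 - o)                # error term
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, which a single unit can represent
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
w, b = train(data, targets=[0, 0, 0, 1])
```

After training, thresholding the unit's output at 0.5 classifies all four tuples correctly.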

  15. Multi-Layer Perceptron [figure: an input vector xi feeds the input nodes, weighted connections wij lead to the hidden nodes, and the hidden nodes feed the output nodes that produce the output vector]

  16. Chapter Summary • A neural network is a parallel computing system of several interconnected processor nodes. • The input to individual network nodes is restricted to numeric values falling in the closed interval [0,1]. • Because of this, categorical data must be transformed prior to network training.
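The two conversions implied by the summary point above are min-max scaling of numeric attributes into [0,1] and encoding of categorical attributes as 0/1 vectors. A minimal sketch; the helper names and sample values are assumptions for illustration.

```python
def min_max_scale(values):
    """Scale numeric attribute values into the closed interval [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(value, categories):
    """Transform a categorical value into a vector of 0/1 network inputs."""
    return [1.0 if value == c else 0.0 for c in categories]

ages = min_max_scale([20, 35, 50])                 # numeric attribute
color = one_hot("red", ["red", "green", "blue"])   # categorical attribute
```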

  17. Chapter Summary • Developing a neural network involves first training the network to carry out the desired computations and then applying the trained network to solve new problems. • During the learning phase, training data is used to modify the connection weights between pairs of nodes so as to obtain a best result for the output node(s). • The feed-forward neural network architecture is commonly used for supervised learning. • Feed-forward neural networks contain a set of layered nodes and weighted connections between nodes in adjacent layers.

  18. Chapter Summary • Feed-forward neural networks are often trained using a backpropagation learning scheme. • Backpropagation learning works by making modifications in weight values starting at the output layer then moving backward through the hidden layers of the network. • Genetic learning can also be applied to train feed-forward networks.
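The output-layer-first weight modification described above can be sketched for the simplest case: one hidden layer and a single sigmoid output node. This is a hedged illustration, not the book's code; bias terms are omitted and the learning rate, seed, and repetition count are assumptions.

```python
import math, random

def backprop_step(x, target, W_h, W_o, lr=0.5):
    """One backpropagation update for a one-hidden-layer network
    with a single output node (sigmoid activations throughout)."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    # Forward pass: hidden activations, then the output
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in W_h]
    o = sig(sum(w * hi for w, hi in zip(W_o, h)))
    # Backward pass: error term at the output layer first ...
    d_out = (target - o) * o * (1 - o)
    # ... then propagated back through the hidden layer
    d_hid = [hi * (1 - hi) * d_out * W_o[j] for j, hi in enumerate(h)]
    W_o = [w + lr * d_out * hi for w, hi in zip(W_o, h)]
    W_h = [[w + lr * d_hid[j] * xi for w, xi in zip(W_h[j], x)]
           for j in range(len(W_h))]
    return W_h, W_o, o

random.seed(0)
W_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
W_o = [random.uniform(-1, 1) for _ in range(2)]
outputs = []
for _ in range(300):                 # repeated passes on a single instance
    W_h, W_o, o = backprop_step([1.0, 0.0], 1.0, W_h, W_o)
    outputs.append(o)
```

Repeating the step drives the output toward the target, which is the sense in which backpropagation minimizes error.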

  19. Chapter Summary • The self-organizing Kohonen neural network architecture is a popular model for unsupervised clustering. • A self-organizing neural network learns by having several output nodes compete for the training instances. • For each instance, the output node whose weight vector most closely matches the attribute values of the input instance is the winning node.

  20. Chapter Summary • As a result, the winning node has its associated input weights modified to more closely match the current training instance. • When unsupervised learning is complete, output nodes winning the most instances are saved. • After this, test data is applied and the clusters formed by the test set data are analyzed to help determine the meaning of what has been found.
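The winner-take-all step just described (pick the closest weight vector, then move it toward the instance) can be sketched directly. An assumed minimal version: squared Euclidean distance as the match measure and a fixed learning rate of 0.3.

```python
def kohonen_step(instance, weights, lr=0.3):
    """One Kohonen update: the output node whose weight vector is
    closest to the instance wins, and its weights move toward it."""
    dists = [sum((w - x) ** 2 for w, x in zip(wv, instance))
             for wv in weights]
    winner = dists.index(min(dists))
    weights[winner] = [w + lr * (x - w)
                       for w, x in zip(weights[winner], instance)]
    return winner

weights = [[0.1, 0.1], [0.9, 0.9]]      # two competing output nodes
win = kohonen_step([0.8, 0.7], weights)
```

Node 1 is nearer to the instance, so it wins and its weight vector shifts from (0.9, 0.9) toward (0.8, 0.7).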

  21. Chapter Summary • A central issue surrounding neural networks is their inability to explain what has been learned. • Despite this, neural networks have been successfully applied to solve problems in both the business and scientific worlds. • Although we have discussed the most popular neural network models, several other architectures and learning rules have been developed. • Jain, Mao, and Mohiuddin (1996) provide a good starting point for learning more about neural networks.

  22. Key Terms Average member technique. An unsupervised clustering neural network explanation technique where the most typical member of each cluster is computed by finding the average value for each class attribute. Backpropagation learning. A training method used with many feed-forward networks that works by making modifications in weight values starting at the output layer then moving backward through the hidden layers. Delta rule. A neural network learning rule designed to minimize the sum of squared errors between computed and target network output.
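The delta rule defined above adjusts each weight in proportion to the output error times the corresponding input. A hedged one-function sketch; the function name, learning rate, and sample values are assumptions.

```python
def delta_rule_update(w, x, target, output, lr=0.1):
    """Delta rule: move each weight by lr * (target - output) * input,
    the gradient step that reduces the squared output error."""
    return [wi + lr * (target - output) * xi for wi, xi in zip(w, x)]

# error = 1.0 - 0.6 = 0.4, so w0 gains 0.1*0.4*1.0 and w1 gains 0.1*0.4*0.5
w = delta_rule_update([0.2, -0.4], x=[1.0, 0.5], target=1.0, output=0.6)
```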

  23. Key Terms Epoch. One complete pass of the training data through a neural network. Feed-forward neural network. A neural network architecture where all weights at one layer are directed toward nodes at the next network layer. Weights do not cycle back as inputs to previous layers. Fully connected. A neural network structure where all nodes at one layer of the network are connected to all nodes in the next layer. Kohonen network. A two-layer neural network used for unsupervised clustering.

  24. Key Terms Neural network. A parallel computing system consisting of several interconnected processors. Neurode. A neural network processor node. Several neurodes are connected to form a complete neural network structure. Sensitivity analysis. A neural network explanation technique that allows us to determine a rank ordering for the relative importance of individual attributes. Sigmoid function. One of several commonly used neural network evaluation functions. The sigmoid function is continuous and outputs a value between 0 and 1.
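The sigmoid evaluation function defined above has a standard closed form, 1 / (1 + e^(-net)). A minimal sketch for reference:

```python
import math

def sigmoid(net):
    """Sigmoid evaluation function: continuous, output strictly
    between 0 and 1, with sigmoid(0) = 0.5 at its midpoint."""
    return 1.0 / (1.0 + math.exp(-net))

midpoint = sigmoid(0.0)   # 0.5
```

Large positive net inputs push the output toward 1; large negative inputs push it toward 0, which is what makes the function useful for squashing a weighted sum into the [0,1] input range the network expects.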

  25. Key Terms Linearly separable. Two classes, A and B, are said to be linearly separable if a straight line can be drawn to separate the instances of class A from the instances of class B. Perceptron neural network. A simple feed-forward neural network architecture consisting of an input layer and a single output layer.

  26. Reference Data Mining: Concepts and Techniques (Chapter 7 slides for the textbook), Jiawei Han and Micheline Kamber, Intelligent Database Systems Research Lab, School of Computing Science, Simon Fraser University, Canada
