  1. Neural Networks: Delta Rule and Back Propagation
CS570 - Artificial Intelligence
Choi, Hyun-il and Jang, Seung-ick
CSD., KAIST
2001.04.02

  2. Threshold Logic Unit (TLU)
• An artificial neuron: models the functionality of a biological neuron
• Output y = f(activation), where f is a threshold function
• Activation: the weighted sum of the inputs
• Threshold function: hard limiter or sigmoid
[Figure: output y vs. activation a for the sigmoid and hard-limiter threshold functions]
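
The slide's activation and threshold-function formulas are figures that did not survive the transcript. As a minimal sketch of what a TLU computes, assuming the standard definitions (the names `TLU`, `hard_limiter`, and `sigmoid` are illustrative, not from the slides):

```python
import numpy as np

def hard_limiter(a, theta=0.0):
    """Hard-limiting threshold: output 1 if the activation exceeds theta, else 0."""
    return 1.0 if a > theta else 0.0

def sigmoid(a):
    """Smooth 'squashing' threshold function."""
    return 1.0 / (1.0 + np.exp(-a))

class TLU:
    """Threshold Logic Unit: activation is the weighted sum of the inputs."""
    def __init__(self, weights, threshold_fn=hard_limiter):
        self.w = np.asarray(weights, dtype=float)
        self.f = threshold_fn

    def activation(self, x):
        return float(np.dot(self.w, x))    # a = sum_i w_i * x_i

    def output(self, x):
        return self.f(self.activation(x))  # y = f(a)

# Example: a TLU acting as logical AND of two binary inputs
and_unit = TLU(weights=[1.0, 1.0], threshold_fn=lambda a: hard_limiter(a, theta=1.5))
print(and_unit.output([1, 1]), and_unit.output([1, 0]))  # 1.0 0.0
```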

  3. Comparison of 1-, 2-, and 3-Layer Networks: Hard Limiter (1)
• Single-layer perceptron: half-plane decision regions
• Two-layer perceptron: any convex region (possibly unbounded), i.e. a convex hull
• Three-layer perceptron: can form arbitrarily complex decision regions and separate meshed classes
• No more than three layers are required

  4. Comparison of 1-, 2-, and 3-Layer Networks: Hard Limiter (2)

  5. TLUs as Classifiers
• Four classes A, B, C, D: the pairs (A,B) / (C,D) are linearly separable, and so are (A,D) / (B,C)

  6. Information for the Classifier
• Train two output units y1 and y2, then decode them
• Two pieces of information are necessary: the 4 classes can be separated by 2 hyperplanes
• (A,B) / (C,D) and (A,D) / (B,C) are linearly separable
• Decoding: y1 = 1 for (A,B), 0 for (C,D); y2 = 1 for (A,D), 0 for (B,C)
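
A small illustration of the decoding step, assuming the 1/0 assignments reconstructed above from the slide's table (the exact assignment is an assumption):

```python
# Hypothetical decoding of the two trained outputs (y1, y2) into the four classes,
# following the (A,B)/(C,D) and (A,D)/(B,C) splits described on the slide.
decode = {
    (1, 1): 'A',  # y1 says (A,B), y2 says (A,D) -> A
    (1, 0): 'B',  # y1 says (A,B), y2 says (B,C) -> B
    (0, 0): 'C',  # y1 says (C,D), y2 says (B,C) -> C
    (0, 1): 'D',  # y1 says (C,D), y2 says (A,D) -> D
}
print(decode[(1, 0)])  # 'B'
```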

  7. Minimizing Error
• To find the minimum of a function, use gradient descent
• For y = f(x), the slope is Δy/Δx
• Goal: find the x position of the minimum of f(x)
• Suppose we can find the slope (the rate of change of y) at any point

  8. Gradient Descent
• If we keep repeating step (4), we can find the value of x associated with the function's minimum
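
The numbered steps this slide refers to are not in the transcript. Below is a minimal 1-D sketch of the usual loop, evaluating the slope and then stepping downhill, repeated, assuming a fixed learning rate (the names are illustrative):

```python
def gradient_descent_1d(f, df, x0, learning_rate=0.1, steps=100):
    """Repeatedly step x against the slope df(x) to approach a minimum of f."""
    x = x0
    for _ in range(steps):
        slope = df(x)                  # rate of change of y at the current x
        x = x - learning_rate * slope  # move downhill, then repeat (the slide's step (4))
    return x

# Example: the minimum of y = (x - 3)^2 is at x = 3
x_min = gradient_descent_1d(f=lambda x: (x - 3)**2, df=lambda x: 2*(x - 3), x0=0.0)
print(round(x_min, 3))  # close to 3.0
```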

  9. Gradient Descent on an Error
• Calculate the error for each training vector
• Perform gradient descent on the error, considered as a function of the weights
• Find the weights that give minimal error

  10. Error Function
• For each pattern p, the error Ep is a function of the weights
• Typically defined by the squared difference between output and target
• The total error is the sum of Ep over all patterns p
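
The slide's equations are figures that did not survive the transcript; the standard definitions it appears to use, with t_j the target and y_j the output of unit j on pattern p, are:

E^{p} = \tfrac{1}{2} \sum_{j} \bigl( t_{j}^{p} - y_{j}^{p} \bigr)^{2}, \qquad E = \sum_{p} E^{p}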

  11. Gradient Descent on the Error
• To perform gradient descent:
• there must be a well-defined gradient at each point
• the error must be a continuous function of the weights
• So train on the activation rather than the (thresholded) output; the target activation is {-1, 1}
• Learning rule: the delta rule

  12. Delta Rule
• The error will not reach zero: the unit is always learning something from the input
• The term (t - a) is known as delta (δ)
• Comparison of the two rules:
• perceptron rule - error comparison: output; theoretical background: hyperplane manipulation
• delta rule - error comparison: activation; theoretical background: gradient descent on the squared error
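
A minimal sketch of one delta-rule update, assuming the standard form Δw_i = α (t − a) x_i with learning rate α (the function name and signature are illustrative):

```python
import numpy as np

def delta_rule_step(w, x, t, alpha=0.05):
    """One delta-rule update: nudge the activation a = w.x toward the target t (in {-1, 1})."""
    a = np.dot(w, x)      # activation, not the thresholded output
    delta = t - a         # the delta term (t - a); in general it never reaches exactly zero
    return w + alpha * delta * np.asarray(x, dtype=float)  # w_i <- w_i + alpha*(t - a)*x_i

# Example: one update of a 2-input unit toward target +1 (values are illustrative)
w = delta_rule_step(np.array([0.2, -0.4]), np.array([1.0, 0.5]), t=1.0)
print(w)
```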

  13. Delta Rule for Sigmoid Units
• For a sigmoid TLU
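
The slide's formula is not in the transcript; for a sigmoid unit with output y = σ(a), the usual form of the rule is:

\Delta w_i = \alpha \, \sigma'(a) \, (t - y) \, x_i, \qquad \sigma'(a) = \sigma(a)\bigl(1 - \sigma(a)\bigr)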

  14. Multilayer Nets
[Figure: a two-layer network with inputs (x1, x2), hidden units for the (A,B) and (A,D) splits, and an output selecting among (A, B, C, D)]

  15. Backpropagation - Theory
• Consider both hidden and output nodes
• Hidden nodes cannot be directly accessed for the purpose of training
• Use the delta rule for the output nodes

  16. Causal Chain for Determining the Error
• Define δ as δj = σ'(aj)(tj - yj): a measure of the rate of change of the error

  17. δ of a Hidden-Layer Node
• Consider the kth hidden node
• Credit assignment problem: how much influence has this node had on the error?
• For the input i to hidden node k, consider the paths from hidden node k to each output node j:
• how much k can influence output node j: wkj
• via this, how output node j affects the error: δj
• fan-out of the kth node: Ik
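
The slide's formula is not in the transcript; combining the pieces above in the standard way, with the sum running over the fan-out I_k of node k:

\delta_k = \sigma'(a_k) \sum_{j \in I_k} \delta_j \, w_{kj}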

  18. Using the Training Rule
Forward pass:
(1) Present the pattern at the input layer
(2) Let the hidden nodes evaluate their output
(3) Let the output nodes evaluate their output using the results of step (2)
Backward pass:
(4) Apply the target pattern to the output layer
(5) Calculate the δ on the output nodes according to expr. (3)
(6) Train each output node using gradient descent, expr. (4)
(7) Calculate the δ on the hidden nodes according to expr. (6)
(8) Train the hidden nodes using the δ from step (7), according to expr. (5)
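
A minimal sketch of steps (1)-(8) for a network with one hidden layer of sigmoid units; the weight shapes, function names, and learning rate are assumptions for illustration, not taken from the slides:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_step(x, target, W_hid, W_out, alpha=0.1):
    """One forward/backward pass for a net with one hidden layer of sigmoid units."""
    # Forward pass: steps (1)-(3)
    a_hid = W_hid @ x        # hidden activations from the input pattern (1)
    y_hid = sigmoid(a_hid)   # (2) hidden outputs
    a_out = W_out @ y_hid    # output activations
    y_out = sigmoid(a_out)   # (3) network outputs

    # Backward pass: steps (4)-(8), using the target pattern (4)
    d_out = y_out * (1 - y_out) * (target - y_out)      # (5) deltas on the output nodes
    d_hid = y_hid * (1 - y_hid) * (W_out.T @ d_out)     # (7) deltas on the hidden nodes
    W_out = W_out + alpha * np.outer(d_out, y_hid)      # (6) train each output node
    W_hid = W_hid + alpha * np.outer(d_hid, x)          # (8) train each hidden node
    return W_hid, W_out

# Example: a random 2-3-1 network, one training step on a single pattern
rng = np.random.default_rng(0)
W_hid, W_out = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
W_hid, W_out = train_step(np.array([0.5, -1.0]), np.array([1.0]), W_hid, W_out)
```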

  19. An Example
• Approximating the function:
• Network

  20. Non-linearly Separable Problems
• Classes A and B are separated by an arbitrarily shaped decision surface
• More complex decision surfaces need more hidden units
• Difficulties:
• choosing the number of hidden units
• an inadequate training set harms generalization

  21. Generalization
• Test vectors that were not shown during training are still classified correctly

  22. Overfitting the Decision Surface
• The decision planes align themselves as closely to the training data as possible
• Misclassification of test data can occur: too much freedom
[Figure: decision surfaces with 1 hidden unit vs. 2 hidden units]

  23. Overfitting: Curve Fitting
• Actual output y vs. input x
• Fewer hidden units: the curve captures the underlying trend in the data
• More hidden units: the curve follows the training data more sensitively and generalizes poorly

  24. Local Minima
• Start training at point p and perform gradient descent
• Training may reach the minimum Ml: a local minimum
• A local minimum corresponds to a partial solution for the network in response to the training data
• Can be addressed using simulated annealing (Boltzmann machine)

  25. Speeding up Learning: Momentum Term (1)
• The speed of learning is set by the learning rate
• If the learning rate is too big, learning is unstable and oscillates back and forth across the minimum
• Alter the training rule from pure gradient descent to include a term involving the last weight change
• If the previous weight change was large, so will the new one be
• The weight change carries some momentum into the next iteration
• A momentum coefficient governs the contribution of the momentum term
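
The formula on the next slide is not in the transcript; the standard momentum-augmented update described here has the form below, where λ (a symbol assumed here) is the momentum coefficient:

\Delta w_{ij}(n) = -\alpha \frac{\partial E}{\partial w_{ij}} + \lambda \, \Delta w_{ij}(n-1)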

  26. Speeding up Learning: Momentum Term (2)

  27. Number of Nodes in the Hidden Layer: Two-Layer Perceptron
• Must be large enough to form a decision region
• Must not be so large that the required weights cannot be reliably estimated from the training data

  28. Number of Nodes in the Hidden Layer: Three-Layer Perceptron
• Number of nodes in the 2nd layer:
• must be greater than one when the decision region cannot be formed from a single convex area
• in the worst case, equal to the number of disconnected regions
• Number of nodes in the 1st layer:
• must be sufficient to provide three or more edges for each convex area generated by every second-layer node
• typically more than three times the number of second-layer nodes
