Neural Networks Delta Rule and Back Propagation CS570 - Artificial Intelligence Choi, Hyun-il and Jang, Seung-ick CSD., KAIST 2001.04.02
Threshold Logic Unit (TLU) • An artificial neuron • models the functionality of a neuron • Output y = f(activation), where f is the threshold function • Activation: a weighted sum of the inputs • [Plots: output y vs. activation a for a hard limiter and for a sigmoid threshold function]
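The slide's activation formula and threshold-function plots were not preserved in this extraction. Below is a minimal Python sketch, assuming the standard TLU activation a = Σi wi·xi and showing a hard limiter and a sigmoid as threshold functions; the weights, threshold, and AND example are my own illustrative choices.

```python
import math

def activation(weights, inputs):
    # Weighted sum of the inputs: a = sum_i w_i * x_i
    return sum(w * x for w, x in zip(weights, inputs))

def hard_limiter(a, threshold=0.0):
    # Output 1 if the activation exceeds the threshold, else 0
    return 1 if a > threshold else 0

def sigmoid(a):
    # Smooth, differentiable threshold function with output in (0, 1)
    return 1.0 / (1.0 + math.exp(-a))

# Example: a TLU computing logical AND with a hard limiter
weights = [1.0, 1.0]
threshold = 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a = activation(weights, x)
    print(x, hard_limiter(a, threshold), round(sigmoid(a - threshold), 3))
```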
Comparison of 1, 2, 3 Layers: Hard Limiter (1) • Single-layer perceptron • half-plane decision regions • Two-layer perceptron • any convex region (possibly unbounded), e.g. a convex hull • Three-layer perceptron • forms arbitrarily complex decision regions • can separate meshed classes • No more than three layers are required
TLUs as Classifiers • Four classes A, B, C, D: the groups (A,B) and (C,D) are linearly separable from each other, and so are (A,D) and (B,C)
Information for Classifier • Train two output units and decode them • Two pieces of information are necessary • the 4 classes may be separated by 2 hyperplanes • (A,B) / (C,D) and (A,D) / (B,C) are linearly separable • [Figure: y1 outputs 1 for (A,B) and 0 for (C,D); y2 outputs 1 for (A,D) and 0 for (B,C)]
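As an illustration of the decoding step, a tiny Python sketch; the 1/0 output codes follow the figure's apparent layout, so treat the exact assignments as an assumption.

```python
# y1 = 1 for classes in (A, B), 0 for (C, D)
# y2 = 1 for classes in (A, D), 0 for (B, C)
decode = {
    (1, 1): "A",  # in (A,B) and in (A,D)
    (1, 0): "B",  # in (A,B) but not in (A,D)
    (0, 0): "C",  # in neither group
    (0, 1): "D",  # in (A,D) but not in (A,B)
}

def classify(y1, y2):
    return decode[(y1, y2)]

print(classify(1, 0))  # -> "B"
```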
Minimizing Error • Find the minimum of a function: gradient descent • y = f(x), slope = Δy/Δx • find the x position of the minimum of f(x) • suppose we can find the slope (rate of change of y) at any point
Gradient Descent • If we keep repeating step (4), we can find the value of x associated with the function minimum
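A minimal gradient-descent sketch in Python; the example function f(x) = (x − 3)², the starting point, and the step size are my own illustrative choices, not from the slides.

```python
def f(x):
    return (x - 3.0) ** 2          # example function with its minimum at x = 3

def slope(x):
    return 2.0 * (x - 3.0)         # derivative df/dx

x = 0.0                            # starting guess
alpha = 0.1                        # step size (learning rate)
for _ in range(100):
    x = x - alpha * slope(x)       # move against the slope
print(round(x, 4))                 # close to 3.0, the minimum
```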
Gradient Descent on an Error • Calculate the error for each training vector • perform gradient descent on the error considered as a function of the weights • find the weights which give minimal error
Error Function • For each pattern p, the error Ep is a function of the weights • Typically defined as the squared difference between output and target: Ep = ½ Σj (tj − yj)² • The total error is the sum over all patterns: E = Σp Ep
Gradient Descent on Error • To perform gradient descent • there must be a well-defined gradient at each point • the error must be a continuous function of the weights • so train on the activation rather than the output • the target activation is in {-1, 1} • Learning rule: the delta rule
Delta Rule • The error will not be zero • the net is always learning something from the input • the term (t - a) is known as the delta (δ) • Comparison with the perceptron rule: • error comparison — perceptron rule: on the output; delta rule: on the activation • theoretical background — perceptron rule: hyperplane manipulation; delta rule: gradient descent on the square error
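A minimal Python sketch of the delta rule as described above, training on the activation with targets in {−1, 1}; the learning rate, epoch count, and toy data are my own illustrative choices.

```python
def train_delta_rule(patterns, n_inputs, alpha=0.05, epochs=50):
    # patterns: list of (inputs, target_activation) with targets in {-1, 1}
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, t in patterns:
            a = sum(wi * xi for wi, xi in zip(w, x))    # activation
            delta = t - a                               # the "delta" term
            w = [wi + alpha * delta * xi for wi, xi in zip(w, x)]
    return w

# Example: two points in 2D, with the bias folded in as a constant third input
data = [((1.0, 0.0, 1.0), 1), ((0.0, 1.0, 1.0), -1)]
print(train_delta_rule(data, n_inputs=3))
```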
Delta Rule for Sigmoid Units • For a sigmoid TLU the delta term is scaled by the slope of the sigmoid at the current activation: Δwi = α σ'(a)(t − y) xi
Multilayer Nets • [Diagram: a two-layer net with inputs (x1, x2); hidden units implement the (A,B) and (A,D) hyperplanes, and the output layer decodes them into one of the classes (A, B, C, D)]
Backpropagation - Theory • Consider both hidden and output nodes • hidden nodes cannot be directly accessed for the purpose of training • use the delta rule for the output nodes
Causal Chain for Determining the Error • define δj = σ'(aj)(tj − yj) : a measure of the rate of change of the error
δ of a Hidden-Layer Node • Consider the kth hidden node • Credit assignment problem • how much influence has this node had on the error? • for the input i to hidden node k: • from hidden node k to output node j • how much k can influence output node j: wkj • via this, how output node j affects the error: δj • fan-out of the kth node: Ik (the set of output nodes k feeds)
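Putting these bullets together gives the usual backpropagation expression for the hidden-node δ; this is reconstructed from the description above, since the slide's own equation was not preserved.

```latex
% delta for hidden node k: the slope of its activation times the
% error signals of the output nodes it fans out to, weighted by w_kj
\delta_k = \sigma'(a_k) \sum_{j \in I_k} w_{kj}\, \delta_j
```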
Using the Training Rule • Forward pass: (1) Present the pattern at the input layer (2) Let the hidden nodes evaluate their output (3) Let the output nodes evaluate their output using the results of step (2) • Backward pass: (4) Apply the target pattern to the output layer (5) Calculate the δ on the output nodes according to expr. (3) (6) Train each output node using gradient descent, expr. (4) (7) Calculate the δ on the hidden nodes according to expr. (6) (8) Train each hidden node using the δ from step (7) according to expr. (5)
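A compact Python sketch of one training step following steps (1)-(8), for a single-hidden-layer network of sigmoid units; the network size, learning rate, and initialization are my own illustrative choices, and σ'(a) is written as y(1 − y).

```python
import math, random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, target, W_hidden, W_output, alpha=0.5):
    # (1)-(3) Forward pass: hidden outputs, then output-node outputs
    h_out = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h_out))) for row in W_output]

    # (4)-(5) Output deltas: delta_j = sigma'(a_j) * (t_j - y_j)
    d_out = [yj * (1 - yj) * (tj - yj) for yj, tj in zip(y, target)]
    # (7) Hidden deltas: delta_k = sigma'(a_k) * sum_j w_kj * delta_j
    d_hid = [hk * (1 - hk) * sum(W_output[j][k] * d_out[j] for j in range(len(d_out)))
             for k, hk in enumerate(h_out)]

    # (6) and (8) Gradient-descent weight updates for output and hidden nodes
    for j, dj in enumerate(d_out):
        W_output[j] = [w + alpha * dj * hk for w, hk in zip(W_output[j], h_out)]
    for k, dk in enumerate(d_hid):
        W_hidden[k] = [w + alpha * dk * xi for w, xi in zip(W_hidden[k], x)]
    return y

# Example: one step on a 2-input, 2-hidden, 1-output network
random.seed(0)
W_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
W_o = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)]
print(train_step([1.0, 0.0], [1.0], W_h, W_o))
```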
An Example • Approximating a given function [function definition shown in the slide figure] • Network [network diagram shown in the slide figure]
Non-linearly Separable Problems • A and B are separated by an arbitrarily shaped decision surface • more complex decision surfaces need more hidden units • Difficulties: • choosing the number of hidden units • generalization from an inadequate training set
Generalization • The network generalizes well when test vectors that were not shown during training are nevertheless classified correctly
Overfitting the Decision Surface • Decision planes align themselves as closely to the training data as possible • misclassification of test data can occur: too much freedom • [Figures: decision surfaces with 1 hidden unit vs. 2 hidden units]
Overfitting: Curve Fitting • [Plot: actual output y vs. input x] • Fewer hidden units: the curve captures the underlying trend in the data • More hidden units: the curve varies more sensitively with the training data and generalizes poorly
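The same effect can be illustrated with ordinary curve fitting. The NumPy sketch below is my own illustration, not from the slides: it fits the same noisy samples with a low-degree and a high-degree polynomial, where the high-degree fit typically follows the noise and does worse on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.shape)  # noisy samples

trend = np.polyfit(x, y, deg=3)    # low capacity: captures the underlying trend
wiggly = np.polyfit(x, y, deg=9)   # high capacity: can follow the noise exactly

x_test = np.linspace(0.05, 0.95, 7)          # points not used for fitting
true = np.sin(2 * np.pi * x_test)
print("deg 3 test error:", np.mean((np.polyval(trend, x_test) - true) ** 2))
print("deg 9 test error:", np.mean((np.polyval(wiggly, x_test) - true) ** 2))
```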
Local Minima • Start training at point p [on the error surface shown in the figure] • perform gradient descent • reach the minimum Ml : a local minimum • a local minimum corresponds to a partial solution for the network in response to the training data • can be mitigated using simulated annealing (Boltzmann machine)
Speeding up Learning: Momentum Term (1) • Speed of learning: the learning rate • too big: learning is unstable and oscillates back and forth across the minimum • Alter the training rule from pure gradient descent to include a term based on the last weight change • if the previous weight change was large, so will the new one be • the weight change carries along some momentum to the next iteration • a momentum coefficient governs the contribution of the momentum term
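A minimal Python sketch of the momentum update described above; the symbol names, learning rate, and gradient sequence are my own illustrative choices (the slide's own symbol for the momentum coefficient was not preserved).

```python
def momentum_update(w, gradient, prev_change, alpha=0.1, momentum=0.9):
    # New change = gradient-descent step plus a fraction of the previous change
    change = -alpha * gradient + momentum * prev_change
    return w + change, change

# If the previous change was large, the new one stays large, which carries the
# weight through shallow regions and damps oscillation across the minimum.
w, prev = 0.0, 0.0
for grad in [1.0, 0.8, 0.6, 0.4]:      # illustrative gradient sequence
    w, prev = momentum_update(w, grad, prev)
    print(round(w, 3))
```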
Number of Nodes in the Hidden Layer: Two-Layer Perceptron • must be large enough to form the required decision region • must not be so large that the required weights cannot be reliably estimated from the training data
Number of Nodes in the Hidden Layers: Three-Layer Perceptron • Number of nodes in the 2nd layer • must be greater than one when the decision region cannot be formed from a single convex area • in the worst case, equal to the number of disconnected regions • Number of nodes in the 1st layer • must be sufficient to provide three or more edges for each convex area generated by every second-layer node • typically more than 3 times as many nodes as in the 2nd layer