
Kak Neural Network


Presentation Transcript


  1. Kak Neural Network Mehdi Soufifar: Soufifar@ce.sharif.edu Mehdi Hoseini: Me_hosseini@ce.sharif.edu Amir Hosein Ahmadi: A_H_Ahmadi@yahoo.com

  2. Corner Classification approach • Corners for the XOR function: the inputs (0,0), (0,1), (1,0), (1,1) lie at the corners of the unit square and map to the outputs 0, 1, 1, 0 respectively.

  3. Corner Classification approach… • Map n-dimensional binary vectors (input) into m-dimensional binary vectors (output). • The mapping function is f: {0,1}^n → {0,1}^m. • Approaches for learning it include: • Backpropagation (does not guarantee convergence). • …

  4. Introduction • Feedback (Hopfield with delta learning) and feedforward (backpropagation) networks learn patterns slowly: the network must adjust the weights on the links between input and output until it obtains the correct response to the training patterns. • But biological learning is not a single process: some forms are very quick and others relatively slow. Short-term biological memory, in particular, works very quickly, so slow neural network models are not plausible candidates for it.

  5. Training feedforward NN [1] • Kak proposed CC1 and CC2 in January 1993. • Example: • Exclusive-OR mapping

  6. Training feedforward NN [1] • Kak proposed CC1 and CC2 in January 1993. • Example: • Exclusive-OR mapping

  7. CC1 as an example • Initialize all weights to zero. • If the result is correct, do nothing. • If the result is 1 and the supervisor says 0, subtract the input vector x from the weight vector. • If the result is 0 and the supervisor says 1, add the input vector x to the weight vector. • [Network diagram: input layer (X1, X2), hidden layer with one neuron per corner, output layer y1 acting as an OR gate; example weights of 0 and 1 are shown on the connections.]
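
The update rule above is a perceptron-style correction applied per corner. Below is a minimal Python sketch of it for the XOR example; the zero initialization follows this slide, and training one hidden neuron per "1" corner with an OR output neuron is the corner-classification construction described on the surrounding slides.

```python
import numpy as np

def train_corner_neuron(inputs, targets, epochs=20):
    """CC1-style perceptron updates for a single hidden (corner) neuron.

    inputs  : binary input vectors with a constant bias bit appended
    targets : 1 for the corner this neuron must classify, 0 otherwise
    """
    w = np.zeros(len(inputs[0]))                # initialize all weights to zero
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = 1 if np.dot(w, x) > 0 else 0    # binary threshold neuron
            if y == 0 and t == 1:
                w = w + x                       # said no, should have said yes
            elif y == 1 and t == 0:
                w = w - x                       # said yes, should have said no
    return w

# XOR: one hidden neuron per corner whose output is 1; y1 is the OR of them.
X = [np.array([0, 0, 1]), np.array([0, 1, 1]),
     np.array([1, 0, 1]), np.array([1, 1, 1])]
w01 = train_corner_neuron(X, [0, 1, 0, 0])      # hidden neuron for corner (0, 1)
w10 = train_corner_neuron(X, [0, 0, 1, 0])      # hidden neuron for corner (1, 0)
print(w01, w10)
```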

  8. CC1… • Result on first output corner:

  9. CC1… • Result on second output corner:

  10. CC1 Algorithm • Notations: • The mapping is Y = f(X), where X and Y are n- and m-dimensional binary vectors; the training pairs are (X^i, Y^i), i = 1, …, k (k = number of vectors). • The weight of a vector is the number of 1 elements in it; e.g. the weight of (0, 1, 1, 0) is 2. • If the k output vectors are written out in an array, the columns may be viewed as a sequence of m k-dimensional vectors. • The weights of X^i, of Y^i, and of these column vectors are denoted accordingly.

  11. CC1 Algorithm… • Start with a random initial weight vector. • If the neuron says no when it should say yes, add the input vector to the weight vector. • If the neuron says yes when it should say no, subtract the input vector from the weight vector. • Do nothing otherwise. • Note that a main problem is: what is the number of neurons in the hidden layer?

  12. Number of hidden neurons • Consider that: • The number of hidden neurons can then be reduced by the number of duplicated neurons, which equals:

  13. Number of hidden neurons… • Theorem: the number of hidden neurons required to realize the mapping Y^i = f(X^i), i = 1, 2, …, k is equal to: • And since , we can say: • The number of hidden neurons required to realize the mapping is at most k.
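
The counting formulas on these two slides did not survive the transcript. As a rough stand-in, the sketch below counts hidden neurons under the assumption, consistent with the CC1 construction above, that one hidden neuron is created for every distinct training input whose output contains at least one 1 (duplicated corners being shared across output components); this immediately gives the "at most k" bound of the theorem.

```python
def hidden_neurons_needed(samples):
    """Rough count of CC1 hidden neurons for a binary mapping (see lead-in
    for the assumed counting rule; the slides' exact formula is not preserved).

    samples : list of (X, Y) pairs of binary tuples, k pairs in total.
    """
    corners = {tuple(x) for x, y in samples if any(y)}   # corners needing a neuron
    return len(corners)                                  # always <= k

# XOR as a 2-input / 1-output mapping: only two corners need hidden neurons.
xor = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
print(hidden_neurons_needed(xor))                        # -> 2
```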

  14. Real application problems [1] • Comparison of training results:

  15. Proof of convergence [1] • We want to establish that the classification algorithm converges if there is a weight vector W* such that W*·X > 0 for the corner that needs to be classified, and W*·X ≤ 0 otherwise. • W_t is the weight vector at the t-th iteration. • Θ is the angle between W* and W_t. • If the neuron says no when it must say yes:

  16. Proof of convergence… • The numerator of the cosine becomes: • Since W* produces the correct result, we know that: • And: • The same inequality is obtained for the other type of misclassification.

  17. Proof of convergence… • Repeating this process for t iterations produces: • For the cosine’s denominator: • If the neuron says no, we then have: • The same result is obtained for the other type of misclassification.

  18. Proof of convergence… • Repeating the substitution produces: • Since , we have: • Then we have:

  19. Proof of convergence… • From (1) and (2) we can say:
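
The formulas on slides 15–19 are not preserved in this transcript. The LaTeX sketch below reconstructs the standard perceptron-style cosine argument that the slides outline; the margin a, the bound M, and the choice W_0 = 0 are assumptions made for the sketch, not symbols taken from the slides.

```latex
% Cosine argument (sketch): assume an ideal weight vector W^* classifies every
% corner with margin a > 0, that \|X\|^2 \le M for all inputs, and that W_0 = 0.
\begin{align*}
W_{t+1} &= W_t + X
  && \text{(said no when it should say yes; } W_t \cdot X \le 0) \\
W^* \cdot W_{t+1} &\ge W^* \cdot W_t + a
  && \Rightarrow\; W^* \cdot W_t \ge t\,a \quad (1) \\
\|W_{t+1}\|^2 &= \|W_t\|^2 + 2\,W_t \cdot X + \|X\|^2 \le \|W_t\|^2 + M
  && \Rightarrow\; \|W_t\|^2 \le t\,M \quad (2) \\
\cos\Theta &= \frac{W^* \cdot W_t}{\|W^*\|\,\|W_t\|}
  \ge \frac{t\,a}{\|W^*\|\sqrt{t\,M}}
  = \sqrt{t}\;\frac{a}{\|W^*\|\sqrt{M}}
\end{align*}
% Since \cos\Theta \le 1, the number of corrective updates t is bounded above,
% so training converges after finitely many corrections; the symmetric argument
% covers the "said yes when it should say no" case.
```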

  20. Types of memory • Long-term: in AI, networks such as BP and RBF, … • Short-term: learned instantaneously, with good generalization.

  21. Current network characteristics • What is the problem with BP and RBF? • They require iterative training, take a long time to learn, and sometimes do not converge. • Result: they are not applicable in real-time applications, and they could never model short-term, instantaneously-learned memory (one of the most significant aspects of biological working memory).

  22. CC2 algorithm • In this algorithm the weights are given as follows: • The value of s_i (the weight of the i-th input vector) implies the threshold the hidden neuron needs to separate this sequence. • Example: • The result of CC2 on the last example gives the bias weight W3 = -(s_i - 1) = -(1 - 1) = 0. • [Network diagram with the resulting weights 0, 1, -1, 1, 1, 0, 1, -1.]
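
A minimal sketch of this prescriptive assignment, using the convention usually quoted for Kak's corner-classification networks: +1 for input bits equal to 1, -1 for bits equal to 0, and a bias weight of -(s - 1), where s is the number of 1s in the stored corner. The slide's own diagram is not preserved here, so treat these conventions as assumptions consistent with the W3 example above.

```python
def cc2_hidden_weights(x):
    """Prescriptive CC2-style weights for the hidden neuron storing corner x.

    x : binary training vector (tuple of 0/1).
    Assumed convention: +1 for bits equal to 1, -1 for bits equal to 0,
    bias weight = -(s - 1) with s = number of 1s in x, so that only x
    itself pushes the neuron's activation above 0.
    """
    s = sum(x)
    w = [1 if xi == 1 else -1 for xi in x]
    return w, -(s - 1)

w, bias = cc2_hidden_weights((0, 1))                 # the XOR corner (0, 1)
print(w, bias)                                       # -> [-1, 1] 0
for corner in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    act = sum(wi * xi for wi, xi in zip(w, corner)) + bias
    print(corner, act > 0)                           # fires only at (0, 1)
```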

  23. Real application problems • Comparison of training results:

  24. CC2’s Generalization… [3] • The hidden neurons’ weights are: • r is the radius of the generalized region. • If no generalization is needed then r = 0. • For function mapping, where the input vectors are equally distributed into the 0 and 1 classes:

  25. About choice of h [3] • Consider a 2-dimensional problem: • The function of the hidden node can be expressed by the separating line:

  26. About choice of h [3] • Assume that the input pattern being classified is (0 1); then x2 = 1. Also, w1 = h, w2 = 1, and s = 1. The equation of the dividing line represented by the hidden node now becomes:
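
The dividing-line equations themselves are missing from the transcript. The LaTeX sketch below reconstructs them under the assumption that the hidden node fires when w1·x1 + w2·x2 ≥ s; the form of the line, not the slide's exact notation, is what matters here.

```latex
% Reconstruction (not verbatim from the slides): assume the hidden node fires
% when  w_1 x_1 + w_2 x_2 \ge s , so the separating line is  w_1 x_1 + w_2 x_2 = s .
% With  w_1 = h,\; w_2 = 1,\; s = 1  (the pattern (0\;1) above), the line becomes
%   h\,x_1 + x_2 = 1 \quad\Longleftrightarrow\quad x_2 = 1 - h\,x_1 ,
% which passes through the trained corner (0, 1); the choice of h sets the slope
% of the line (compare h = -1 with h = -0.8 on the next slides), and a radius
% r > 0 shifts the threshold so that a neighbourhood of the corner is also
% classified as 1 (the r = 1 case).
```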

  27. About choice of h… (h=-1 and r=0)

  28. About choice of h… (h=-0.8 and r=0)

  29. About choice of h… (h=-1 and r=1)

  30. CC4 [6] • The CC4 network maps an input binary vector X to an output binary vector Y. • The input and output layers are fully connected to the hidden layer. • The neurons are all binary neurons with a binary step activation function, as follows: • The number of hidden neurons is equal to the number of training samples, with each hidden neuron representing one training sample.
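
The activation formula itself is missing from the transcript; the binary step the slide refers to is presumably the usual one:

```latex
f(x) \;=\; \begin{cases} 1, & x > 0 \\ 0, & x \le 0 \end{cases}
```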

  31. CC4 Training [6] • Let w_ij be the weight of the connection from input neuron i to hidden neuron j, and • let x_i^j be the input of the i-th input neuron when the j-th training sample is presented to the network. • Then the weights are assigned as follows:

  32. CC4 Training [6]… • Let u_jk be the weight of the connection from the j-th hidden neuron to the k-th output neuron, and • let y_k^j be the output of the k-th output neuron for the j-th training sample. • The values of u_jk are determined by the following equation:
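
The assignment equations on these two slides did not survive the transcript. The sketch below follows the CC4 prescription as it is usually quoted: input-to-hidden weights of +1 for input bits equal to 1 and -1 for bits equal to 0, a bias weight of r - s + 1 (matching the r - s + 1 example on slide 35), and hidden-to-output weights of +1 where the training output bit is 1 and -1 otherwise. The exact output-weight convention and the variable names are assumptions.

```python
import numpy as np

def cc4_train(X, Y, r=1):
    """CC4-style instantaneous training (sketch, conventions as in the lead-in).

    X : (k, n) binary inputs, last column is the bias bit (always 1).
    Y : (k, m) binary outputs.
    Returns input->hidden weights W (n, k) and hidden->output weights U (k, m).
    """
    X, Y = np.asarray(X), np.asarray(Y)
    W = np.where(X == 1, 1, -1).T.astype(float)   # one hidden neuron per sample
    s = X[:, :-1].sum(axis=1)                     # number of 1s, excluding bias
    W[-1, :] = r - s + 1                          # bias weight: r - s + 1
    U = np.where(Y == 1, 1, -1).astype(float)     # hidden -> output weights
    return W, U

def cc4_predict(W, U, x):
    step = lambda v: (v > 0).astype(int)          # binary step activation
    h = step(np.asarray(x) @ W)                   # hidden layer
    return step(h @ U)                            # output layer

# XOR with a bias bit appended to every input, no generalization (r = 0).
X = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
Y = [[0], [1], [1], [0]]
W, U = cc4_train(X, Y, r=0)
print([cc4_predict(W, U, x).tolist() for x in X])  # -> [[0], [1], [1], [0]]
```

Training is instantaneous in this scheme: the weight matrices are written down directly from the training pairs, and r controls how far each stored corner generalizes.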

  33. Sample of CC4 (Figures 1 and 2) • Consider a 16-by-16 area containing a spiral pattern of 256 binary pixels (black and white), as in Figure 1. • We want to train the system with one exemplar sample, shown in Figure 2, in which a total of 75 points are used for training.

  34. Sample of CC4… • We can code 16 integer values with 4 binary bits. • Therefore, for a location (x, y) in the 16 × 16 grid, we use 4 bits for x, 4 bits for y, and 1 extra bit (always equal to 1) for the bias. • In total we have 9 inputs.
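
A small sketch of this encoding; the bit ordering (most significant bit first) is an assumption, since the slide does not specify it.

```python
def encode_point(x, y):
    """Encode a grid location (x, y), 0 <= x, y <= 15, as the 9-bit CC4 input:
    4 bits for x, 4 bits for y, plus a constant bias bit of 1 (slide 34)."""
    bits = lambda v: [int(b) for b in format(v, '04b')]   # MSB-first, assumed
    return bits(x) + bits(y) + [1]

print(encode_point(5, 6))   # -> [0, 1, 0, 1, 0, 1, 1, 0, 1]
```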

  35. Sample of CC4… • [Table: weight assignment for the training corner at (5, 6); the bit-wise weights shown are +1 and -1, and the bias weight is r - s + 1 = r - 3 + 1 = r - 2.]

  36. Sample of CC4 results… • [Figure panels: original spiral, training sample, and the network output for r = 1, 2, 3, 4.] • Number of points classified / misclassified in the spiral pattern.

  37. FC motivation • Disadvantages of the CC algorithms: • Input and output must be discrete. • Input is best presented in a unary code, which increases the number of input neurons considerably. • The degree of generalization is the same for all nodes.

  38. • In reality this degree varies from node to node, and we need to work on real data. • An iterative version of the CC algorithm that does provide a varying degree of generalization has been devised. • Problem: it is not instantaneous.

  39. Fast classification network • What is FC? A generalization of the CC network. • This network can operate on real data directly and learns instantaneously. • It reduces to CC when the data are binary and the amount of generalization is fixed.

  40. Input • X = (x1, x2, …, xk), Y = F(X). • All xi and Y are real-valued. • k is determined by the nature of the problem. • What to do: • Define the input and output weights. • Define the radius of generalization.

  41. Input

  42. FC network structure

  43. The hidden neurons

  44. The rule base • Rule 1: IF m = 1, THEN assign μi using the single-nearest-neighbor (1NN) heuristic. • Rule 2: IF m = 0, THEN assign μi using the k-nearest-neighbor (kNN) heuristic. • m = the number of hi that are equal to 0. • The value of k is typically a small fraction of the size of the training set. • Membership grades are normalized so that they sum to 1.

  45. 1NN heuristic • Used when exactly one element of the distance vector h is 0, i.e. the input coincides with exactly one stored training vector; that hidden neuron receives membership grade 1 and all others 0.

  46. kNN heuristic • Based on the k nearest neighbors. • Triangular membership function.
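
A sketch of how this rule base might be implemented. The slides' exact triangular membership formula is not preserved, so the weighting used here (grades decreasing linearly with distance over the k nearest stored vectors and normalized to sum to 1) is an assumption, as is the fall-through behaviour when no distance is exactly zero.

```python
import numpy as np

def membership_grades(h, k=3, eps=1e-12):
    """Assign membership grades to hidden neurons from their distances h.

    h : distances between the current input and each stored training vector.
    Rule 1 (1NN): if exactly one distance is 0, that neuron gets grade 1.
    Rule 2 (kNN): otherwise spread grades over the k nearest neighbors.
    The triangular-style weighting below is an assumption, not the slides' formula.
    """
    h = np.asarray(h, dtype=float)
    mu = np.zeros_like(h)
    exact = np.flatnonzero(h < eps)
    if len(exact) == 1:                              # Rule 1: exact match
        mu[exact[0]] = 1.0
        return mu
    nearest = np.argsort(h)[:k]                      # Rule 2: k nearest neighbors
    d = h[nearest]
    w = np.maximum(1.0 - d / (d.sum() + eps), eps)   # closer -> larger weight
    mu[nearest] = w / w.sum()                        # normalize: grades sum to 1
    return mu

print(membership_grades([0.4, 0.1, 0.7, 0.2], k=2))  # mass on the two nearest
```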

  47. Training of the FC network • Training involves two separate steps: • Step 1: the input and output weights are prescribed simply by inspection of the training input/output pairs. • Step 2: the radius of generalization for each hidden neuron is determined as r_i = ½ d_min,i.
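
A short sketch of Step 2, under the assumption that d_min,i is the Euclidean distance from training vector i to its nearest other training vector:

```python
import numpy as np

def generalization_radii(X):
    """r_i = 0.5 * d_min,i for each stored training vector (sketch; see lead-in)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                  # ignore self-distance
    return 0.5 * d.min(axis=1)

print(generalization_radii([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]]))  # -> [0.5 0.5 1. ]
```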

  48. Radius of generalization • Hard generalization: separated decision regions. • Soft generalization: together with interpolation.

  49. Generalization by fuzzy membership • The output neuron then computes the dot product between the output weight vector and the membership grade vector.
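
Continuing the membership sketch above: if the stored output weights are taken directly from the training outputs (as slide 47 prescribes), the prediction is just their membership-weighted sum. The numbers below are illustrative only.

```python
import numpy as np

u = np.array([0.2, 1.0, 0.4, 0.8])       # output weights = stored training outputs
mu = np.array([0.0, 0.67, 0.0, 0.33])    # membership grades from the rule base
print(float(np.dot(u, mu)))              # interpolated output, about 0.93
```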

  50. Other considerations • Other membership functions, e.g. the quadratic function known as the S function.
