IE 585 Competitive Network – I Hamming Net & Self-Organizing Map
Competitive Nets • Unsupervised: MAXNET, Hamming Net, Mexican Hat Net, Self-Organizing Map (SOM), Adaptive Resonance Theory (ART) • Supervised: Learning Vector Quantization (LVQ), Counterpropagation
Clustering Net • The number of input neurons equals the dimension of the input vectors • Each output neuron represents a cluster; the number of output neurons limits the number of clusters that can be formed • The weight vector for an output neuron serves as a representative for the input patterns that the net has placed in that cluster • The weight vector for the winning neuron is adjusted
Winner-Take-All • The squared Euclidean distance is used to determine which weight vector is closest to the pattern vector • Only the neuron whose weight vector has the smallest Euclidean distance from the input vector is allowed to update its weights
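A minimal sketch of this winner-take-all selection (the function name, NumPy usage, and one-row-per-neuron weight layout are my own assumptions, not from the slides):

```python
import numpy as np

def winner_take_all(x, W):
    """Index of the output neuron whose weight vector is closest
    (squared Euclidean distance) to the input pattern x.
    W holds one weight vector per output neuron (one row per cluster)."""
    d = np.sum((W - x) ** 2, axis=1)   # squared distance to every weight vector
    return int(np.argmin(d))           # only this neuron will update its weights
```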
MAXNET • Developed by Lippmann, 1987 • Can be used as a subnet to pick the node whose input is the largest • Completely interconnected (including self-connections) • Symmetric weights • No training • Weights are fixed
[Figure: Architecture of MAXNET: every node is connected to every other node and to itself; the self-connection weights are 1, the mutual connections are inhibitory]
Procedure of MAXNET
1. Initialize activations and weights (self-weights of 1, mutual inhibitory weights of −ε with 0 < ε < 1/m)
2. Update the activation of each node: aj(new) = f(aj(old) − ε Σk≠j ak(old)), where f(x) = max(0, x)
3. If more than one node has a nonzero activation, repeat step 2; otherwise, stop
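A sketch of the MAXNET competition in Python, following the update rule above; the function name, default ε, and iteration cap are illustrative:

```python
import numpy as np

def maxnet(inputs, eps=None, max_iter=1000):
    """MAXNET competition: start from the given activations and apply
    mutual inhibition until at most one node stays positive.
    eps must satisfy 0 < eps < 1/m (m = number of nodes)."""
    a = np.asarray(inputs, dtype=float).copy()
    m = len(a)
    if eps is None:
        eps = 1.0 / (2.0 * m)              # safe choice below 1/m
    for _ in range(max_iter):
        # each node keeps its own activation and is inhibited by all the others
        a = np.maximum(0.0, a - eps * (a.sum() - a))
        if np.count_nonzero(a) <= 1:       # competition is over
            break
    return int(np.argmax(a))
```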
Hamming Net • Developed by Lippmann, 1987 • A maximum likelihood classifier: used to determine which of several exemplar vectors is most similar to an input vector • The exemplar vectors determine the weights of the net • The measure of similarity between the input vector and a stored exemplar vector is (n − HD between the vectors), where n is the number of components and HD is the Hamming distance
[Figure: Architecture of the Hamming net: input units x1, x2, x3, x4 connect to lower-layer output units y1 and y2, each with bias B; a MAXNET on top selects the winner]
Procedure of the Hamming Net
1. Initialize the weights to store the m exemplar vectors: wij = ej(i)/2 (half the i-th component of exemplar j), with bias bj = n/2
2. For each input vector x, compute yin,j = bj + Σi xi wij
3. Use the yin,j values to initialize the activations of MAXNET
4. MAXNET iterates to find the best-matching exemplar
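A sketch of the lower Hamming-net layer under these assumptions (bipolar ±1 vectors; a plain argmax stands in for the MAXNET layer on top):

```python
import numpy as np

def hamming_match(x, exemplars):
    """Lower layer of the Hamming net for bipolar (+1/-1) vectors:
    y_in_j = n/2 + (x . e_j)/2, which equals n minus the Hamming
    distance between x and exemplar e_j.  The MAXNET on top would then
    select the largest y_in_j (argmax is used here for brevity)."""
    E = np.asarray(exemplars, dtype=float)   # one exemplar per row
    x = np.asarray(x, dtype=float)
    n = E.shape[1]
    y_in = n / 2.0 + E @ x / 2.0             # similarity = n - HD(x, e_j)
    return int(np.argmax(y_in))              # index of the best-matching exemplar
```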
Mexican Hat Net • Developed by Kohonen, 1989 • Positive weights to "cooperative" neighboring neurons (close by) • Negative weights to "competitive" neighboring neurons (farther out) • No connections to far-away neurons
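A rough 1-D illustration of this connection pattern; the weight values and radii below are illustrative, not taken from the slides:

```python
import numpy as np

def mexican_hat_step(x, c1=0.6, c2=-0.4, R1=1, R2=2):
    """One contrast-enhancement step of a 1-D Mexican Hat net:
    weight c1 > 0 to "cooperative" neighbours within radius R1,
    weight c2 < 0 to "competitive" neighbours between R1 and R2,
    and no connection beyond R2.  All parameter values are illustrative."""
    n = len(x)
    new = np.zeros(n)
    for i in range(n):
        total = 0.0
        for k in range(-R2, R2 + 1):
            j = i + k
            if 0 <= j < n:
                w = c1 if abs(k) <= R1 else c2   # cooperative vs. competitive weight
                total += w * x[j]
        new[i] = max(0.0, total)                 # ramp activation, clipped below at 0
    return new
```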
Teuvo Kohonen • http://www.cis.hut.fi/teuvo/ (his own home page) • published his work starting in 1984 • LVQ - learning vector quantization • SOM - self-organizing map • Professor at Helsinki University of Technology, Finland
SOM • Also called Topology-Preserving Maps or Self-Organizing Feature Maps (SOFM) • "Winner Take All" learning (also called competitive learning) • the winner has the minimum Euclidean distance • learning only takes place for the winner • final weights are at the centroids of each cluster • Continuous inputs, continuous or 0/1 (winner-take-all) outputs • No bias, fully connected • used for data mining and exploration • a supervised version exists
[Figure: Architecture of the SOM net: n input neurons (a's) in the input layer, fully connected through the weight matrix W to the output neurons (y's)]
Procedure of SOM
1. Initialize weights uniformly and normalize them to unit length
2. Normalize the inputs to unit length
3. Present an input vector x
4. Calculate the Euclidean distance between x and all Kohonen neurons
5. Select the winning output neuron j (the one with the smallest distance)
6. Update the winning neuron
7. Re-normalize the weights of neuron j (sometimes skipped)
8. Present the next training vector
Method
1. Normalize input vectors, a, by: ai ← ai / √(Σk ak²)
2. Normalize weight vectors, w, in the same way: wij ← wij / √(Σk wkj²)
3. Calculate the distance from a to each w by: dj = Σi (ai − wij)²
4. The minimum dj wins (this is the winning neuron)
5. Update w of the winning neuron by: wnew = wold + α(a − wold)
6. Return to step 3 and repeat for all input vectors a
7. Reduce α if applicable
8. Repeat until the weights converge (stop changing)
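Putting the procedure and the formulas above together, a sketch of the winner-take-all SOM training loop (the function name, epoch count, and α decay factor are assumptions; nonzero input rows are assumed for the normalization):

```python
import numpy as np

def train_som(patterns, n_out, alpha=0.25, epochs=50, renormalize=True, seed=0):
    """Winner-take-all SOM training: unit-length inputs and weights,
    squared-Euclidean winner selection, and an update that moves only
    the winning neuron toward the presented input."""
    rng = np.random.default_rng(seed)
    A = np.asarray(patterns, dtype=float)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)        # normalize inputs to unit length
    W = rng.uniform(size=(n_out, A.shape[1]))               # uniform initial weights
    W = W / np.linalg.norm(W, axis=1, keepdims=True)        # normalize weights to unit length
    for _ in range(epochs):
        for a in A:
            d = np.sum((W - a) ** 2, axis=1)                # distance to every Kohonen neuron
            j = int(np.argmin(d))                           # winning neuron
            W[j] += alpha * (a - W[j])                      # w_new = w_old + alpha*(a - w_old)
            if renormalize:
                W[j] /= np.linalg.norm(W[j])                # re-normalization (sometimes skipped)
        alpha *= 0.9                                        # reduce alpha over time
    return W
```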
SOM Example - 4 patterns, α = 0.25
Adding a "conscience" • prevents neurons from winning too many training vectors by using a bias (b) factor • the winner has the minimum (dj − bj), where bj = 10(1/n − fj) (n = # output neurons), fj,new = fj,old + 0.0001(yj − fj,old) with yj = 1 for the winner and 0 otherwise, and finitial = 1/n • for neurons that win often, b becomes negative; for neurons that don't win, b becomes positive
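A sketch of the conscience mechanism, assuming the win frequencies f are kept as a NumPy array initialized to 1/n and updated in place:

```python
import numpy as np

def conscience_winner(d, f):
    """Biased competition with a "conscience": b_j = 10*(1/n - f_j),
    the winner minimizes d_j - b_j, and the win-frequency estimates are
    updated as f_j <- f_j + 0.0001*(y_j - f_j), with y_j = 1 for the
    winner and 0 otherwise."""
    n = len(d)
    b = 10.0 * (1.0 / n - f)           # negative for frequent winners, positive otherwise
    j = int(np.argmin(d - b))          # biased winner selection
    y = np.zeros(n)
    y[j] = 1.0
    f += 0.0001 * (y - f)              # update win frequencies in place
    return j
```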
Supervised Version • Same, except if the winning neuron is "correct" (matches the target class), use the same weight update: wnew = wold + α(a − wold) • if the winning neuron is "incorrect", use: wnew = wold − α(a − wold)
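A sketch of this supervised (LVQ-style) update; the array names and α value are illustrative:

```python
import numpy as np

def supervised_update(W, labels, a, target, alpha=0.1):
    """Supervised step: pick the winner by distance, then attract it if
    its class label matches the target and repel it otherwise."""
    d = np.sum((W - a) ** 2, axis=1)
    j = int(np.argmin(d))
    if labels[j] == target:
        W[j] += alpha * (a - W[j])     # correct winner: w_new = w_old + alpha*(a - w_old)
    else:
        W[j] -= alpha * (a - W[j])     # incorrect winner: w_new = w_old - alpha*(a - w_old)
    return j
```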