AI: Neural Networks lecture 8
Tony Allen
School of Computing & Informatics
Nottingham Trent University
Self-Organizing Maps (SOMs)
• The winning neuron (orange in the diagram) is found by minimising the distance between the input vector and each neuron's weight vector.
• The weights of the winning neuron and its neighbouring neurons (red & purple in the diagram) are then updated towards the input.
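The formulas themselves appeared as images on the original slide; as a reference, the standard Kohonen SOM rules they correspond to (in the notation of the learning algorithm later in this lecture) are:

\[
Y_j = \sum_i \left( x_i - w_{ij} \right)^2, \qquad J = \arg\min_j Y_j
\]
\[
w_{ij}(\text{new}) = w_{ij}(\text{old}) + \alpha \left[ x_i - w_{ij}(\text{old}) \right] \quad \text{for all } j \text{ in the neighbourhood of } J
\]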
Recurrent SOM - STORM
• Recurrency can be built into a SOM by copying all or part of the SOM's information from one time step into the input for the next time step.
[Figure: the vector presented to the map is the current symbol's input bits (e.g. 0000000001) concatenated with a context field (e.g. 1001010100)]
• In the case of STORM, the context vector represents only the previous winning neuron, encoded as an n-bit Gray-code coordinate vector.
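As an illustration of this input/context layout, here is a minimal Python sketch. The bit widths and the helper functions are assumptions chosen to mirror the 10-bit example strings on the slide, not details taken from STORM itself.

def gray_code(n, bits):
    """Return the standard reflected-binary Gray code of n as a bit list (MSB first)."""
    g = n ^ (n >> 1)
    return [(g >> i) & 1 for i in reversed(range(bits))]

def build_input(symbol_index, prev_winner_rc, n_symbols=10, coord_bits=5):
    """Concatenate a one-hot symbol vector with the Gray-coded (row, col)
    coordinates of the previous winning neuron (the context)."""
    one_hot = [0] * n_symbols
    one_hot[symbol_index] = 1
    row, col = prev_winner_rc
    context = gray_code(row, coord_bits) + gray_code(col, coord_bits)
    return one_hot + context  # presented to the map as one combined vector

# e.g. symbol 9 with the previous winner at map position (row=3, col=7)
x = build_input(9, (3, 7))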
Recurrent SOM Learning Algorithm
1. Initialise weights (random values), set topological neighbourhood and learning rate parameters, clear context vector.
2. For each sequence in the training set, do steps 3-8.
3. For each input vector X in the sequence, do steps 4-7.
4. For each neuron j, compute the Euclidean distance: Y_j = Σ_i (x_i − w_ij)²
5. Find index J such that Y_J is a minimum.
6. For all units j within a specified neighbourhood of J, and for all i: w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]
7. Copy the row & column vector of the winning neuron to the context vector.
8. Clear the context vector between sequences.
9. Update the learning rate (α) & reduce the topological neighbourhood.
10. Test stopping condition.
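A minimal NumPy sketch of steps 3-8, assuming a rectangular map whose weight rows are indexed row-major, a combined one-hot-plus-context input as on the previous slide, and a simple square neighbourhood; the schedules in step 9 and the stopping test in step 10 are left to the caller. The encode_context helper is the hypothetical Gray-code encoder from the earlier sketch.

import numpy as np

def train_sequence(W, sequence, alpha, radius, grid, encode_context):
    """One pass of steps 3-8 for a single training sequence.

    W              : (rows*cols, input_dim) weight matrix, modified in place
    sequence       : list of one-hot symbol vectors
    grid           : (rows, cols) shape of the map
    encode_context : maps a winner's (row, col) to its Gray-coded bit vector
    """
    rows, cols = grid
    context = np.zeros(len(encode_context((0, 0))))    # steps 1/8: cleared context
    for symbol in sequence:                             # step 3
        x = np.concatenate([symbol, context])
        dists = np.sum((W - x) ** 2, axis=1)            # step 4: squared Euclidean distance Y_j
        J = int(np.argmin(dists))                       # step 5: winning neuron
        win_r, win_c = divmod(J, cols)
        for j in range(W.shape[0]):                     # step 6: winner + neighbourhood update
            r, c = divmod(j, cols)
            if abs(r - win_r) <= radius and abs(c - win_c) <= radius:
                W[j] += alpha * (x - W[j])
        context = np.asarray(encode_context((win_r, win_c)), dtype=float)  # step 7
    # step 8: the caller clears/resets the context before the next sequence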
STORM: Grammar induction • Simple artificial regular grammar (REBER grammar) • Seven symbols, six states with two recursive states • Example sentence: BTSSXSE
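For reference, a small generator for strings of this grammar; the transition table is the one commonly drawn in the connectionist literature (the state numbering is my own) and is consistent with the example sentence BTSSXSE.

import random

# Transitions of the (non-embedded) Reber grammar: from each state,
# two possible (symbol, next_state) moves; state 6 terminates with E.
TRANSITIONS = {
    1: [('T', 2), ('P', 3)],
    2: [('S', 2), ('X', 4)],
    3: [('T', 3), ('V', 5)],
    4: [('X', 3), ('S', 6)],
    5: [('P', 4), ('V', 6)],
}

def reber_string(rng=random):
    """Generate one valid Reber sentence such as 'BTSSXSE'."""
    out, state = ['B'], 1
    while state != 6:
        symbol, state = rng.choice(TRANSITIONS[state])
        out.append(symbol)
    out.append('E')
    return ''.join(out)

print(reber_string())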
STORM – String memorisation
[Figure: Reber grammar paths across the map, with transitions labelled B, T, S, X, E]
• STORM uses the current input and context to memorise strings:
1. B T X S E
2. B T S X S E
STORM – Rule construction mechanism
[Figure: the two memorised string paths across the map, symbols B, T, S, X, E]
• The network then uses similarities in future-context to identify states (functional-equivalence theory).
• Neurons are bound together into states via a temporal Hebbian learning mechanism.
1. B T X S E
2. B T S X S E
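The referenced paper describes the binding mechanism in detail; as a purely illustrative sketch of the idea, one could accumulate a temporal Hebbian co-activation matrix between successive winning neurons and treat neurons with near-identical rows (i.e. similar future-contexts) as belonging to the same grammar state:

import numpy as np

def temporal_hebbian_matrix(winner_sequences, n_neurons):
    """Count how often neuron b wins immediately after neuron a.
    Rows of H then act as future-context profiles: neurons with
    near-identical rows can be bound together into one state."""
    H = np.zeros((n_neurons, n_neurons))
    for winners in winner_sequences:
        for a, b in zip(winners, winners[1:]):
            H[a, b] += 1.0   # Hebbian increment between successive winners
    return H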
Results
[Figure: state-transition diagram with transitions labelled B, T, S, X, E]
Recurrent SOM – prediction performance
• The network's ability to learn the grammar was measured by its performance at predicting future symbols in a sequence.
[Figure: the context of the current winner (e.g. 1001010100) is compared against the map to find the best and second-best matching neurons]
• STORM predicts the next two symbols by finding the two neurons whose context vectors best match that of the current winning neuron. The input vectors of these two matching neurons then represent the predicted next symbols.
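A sketch of this prediction step, under the same assumptions as the earlier code (each weight vector holds a one-hot input part followed by a context part, and encode_context is the hypothetical Gray-code encoder):

import numpy as np

def predict_next_symbols(W, winner_rc, n_symbols, encode_context):
    """Find the two neurons whose context weights best match the Gray-coded
    position of the current winner; their input-part weights, decoded back
    to symbol indices, give the two predicted next symbols."""
    target = np.asarray(encode_context(winner_rc), dtype=float)
    context_part = W[:, n_symbols:]                  # context half of each weight vector
    dists = np.sum((context_part - target) ** 2, axis=1)
    best_two = np.argsort(dists)[:2]                 # best and second-best matching neurons
    return [int(np.argmax(W[j, :n_symbols])) for j in best_two]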
Results
• 10 identical models were trained on separate, randomly generated Reber grammar sequences.
• Two became perfect grammar recognisers, correctly predicting the next symbol for all training and test sequences.
• The average post-training recognition rate was 71%.
References
McQueen, T.A., Hopgood, A.A., Allen, T.J. and Tepper, J.A., "Extracting finite structure from infinite language", Knowledge-Based Systems, 18 (2005), pp. 135-141. ISSN: 0950-7051.