Artificial Neural Networks: Unsupervised ANNs
Contents
• Unsupervised ANNs
• Kohonen Self-Organising Map (SOM)
  • Structure
  • Processing units
  • Learning
  • Applications
  • Further Topics: Spiking ANNs application
• Adaptive Resonance Theory (ART)
  • Structure
  • Processing units
  • Learning
  • Applications
  • Further Topics: ARTMAP
Unsupervised ANNs
• Usually a 2-layer ANN
• Only input data are given
• The ANN must self-organise its output
• Two main models: Kohonen's SOM and Grossberg's ART
• Clustering applications
[Diagram: feature layer fully connected to output layer]
Learning Rules
• Instar learning rule: the incoming weights of a neuron converge to the input pattern (previous layer)
  • Convergence speed is determined by the learning rate
  • Step size is proportional to the node's output value
  • The neuron learns the association between input vectors and their outputs
• Outstar learning rule: the outgoing weights of a neuron converge to the output pattern (next layer)
  • Learning is proportional to the neuron's activation
  • Step size is proportional to the node's input value
  • The neuron learns to recall a pattern when stimulated
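A minimal NumPy sketch of the two rules (the function names, learning-rate value, and vector shapes are illustrative assumptions, not from the slides):

```python
import numpy as np

def instar_update(w, x, y, lr=0.1):
    """Instar rule: the incoming weight vector w of a neuron moves
    towards the input pattern x, with a step proportional to the
    neuron's output y and the learning rate lr."""
    return w + lr * y * (x - w)

def outstar_update(v, y, act, lr=0.1):
    """Outstar rule: the outgoing weight vector v of a neuron moves
    towards the output pattern y of the next layer, with a step
    proportional to the neuron's own activation act."""
    return v + lr * act * (y - v)
```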
Self-Organising Map (SOM)
• T. Kohonen (1984)
• 2-D map of output neurons
• Input layer and output layer fully connected
• Lateral inhibitory synapses
• A model of biological topographic maps, e.g. the primary auditory cortex in animal brains (cats and monkeys)
• Hebbian learning
• Akin to k-means
• Data clustering applications
[Diagram: feature layer fully connected to output layer]
SOM Clustering
• Neuron = prototype for a cluster
• Weights = reference vector (prototype features)
• Euclidean distance between the reference vector and the input pattern
• Competitive layer (winner-take-all)
• In biological systems, winner-take-all is realised via inhibitory synapses
• The neuron whose reference vector is closest to the input wins
[Diagram: neuron i with inputs x1…x5, weights wi1…wi5, and output yi]
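In symbols, the competition selects the neuron whose reference vector minimises the Euclidean distance to the input:

```latex
i^{*} = \arg\min_{i}\, \lVert \mathbf{x} - \mathbf{w}_{i} \rVert
```

where x is the input pattern and w_i the reference vector of neuron i.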
SOM Learning Algorithm
• Only the weights of the winning neuron and its neighbours are updated
• The weights of the winning neuron are brought closer to the input pattern (instar rule)
• The reference vector is usually normalised
• In biological systems the neighbourhood function is realised via short-range excitatory synapses
• Decreasing the width of the neighbourhood ensures that increasingly finer differences are encoded
• Global convergence is not guaranteed
• Gradually lowering the learning rate ensures stability (otherwise reference vectors may oscillate between clusters)
• At the end, neurons are "tagged"; similar ones become sub-clusters of a larger cluster
[Figure: neighbourhood function N(t) shrinking over training, E(t0) ⊃ E(t1) ⊃ E(t2) ⊃ E(t3)]
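A compact sketch of this training loop, assuming a rectangular grid, a Gaussian neighbourhood, and exponential decay schedules (none of which the slides fix):

```python
import numpy as np

def train_som(data, grid_w, grid_h, epochs=20, lr0=0.5, sigma0=2.0):
    """Sketch of the SOM learning loop: find the winner, then pull
    the winner and its neighbours towards the input, while the
    learning rate and neighbourhood width decay over time."""
    n_features = data.shape[1]
    weights = np.random.rand(grid_w * grid_h, n_features)
    # Grid coordinates of each output neuron, used by the
    # neighbourhood function N(t)
    coords = np.array([(i % grid_w, i // grid_w)
                       for i in range(grid_w * grid_h)], dtype=float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # gradual lowering of the learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood width
        for x in data:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood centred on the winner's grid position
            d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            # Instar update, scaled by the neighbourhood function
            weights += lr * h[:, None] * (x - weights)
    return weights
```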
SOM Mapping
• Adaptive vector quantisation
• Reference vectors are iteratively moved towards the centres of (sub)clusters
• Performs best on Gaussian distributions (the distance measure is radial)
SOM Topology
• The surface of the map reflects the frequency distribution of the input set, i.e. the probability of an input class occurring
• More common vector "types" occupy proportionally more of the output map
• The more frequent the pattern type, the finer-grained the mapping
• Biological correspondence in the brain cortex
• The map allows dimensionality reduction and visualisation of the input data
Some Issues with SOM
• SOM can be used on-line (adaptation)
• Neurons need to be labelled, either manually or by an automatic algorithm
• Sometimes it may not converge
• Precision is not optimal
• Some neurons may be difficult to label
• Results are sensitive to the choice of input features
• Results are sensitive to the order of presentation of the data (epoch learning mitigates this)
SOM Applications
• Natural language processing
  • Document clustering
  • Document retrieval
  • Automatic query
• Image segmentation
• Data mining
• Fuzzy partitioning
• Condition-action association
Further Topics – Spiking ANNs
• Image segmentation task
• SOM of spiking units
• Lateral connections: short-range excitatory, long-range inhibitory
• Trained using Hebbian learning
• Trained by showing one pattern at a time
Spiking SOM Training
• Hebbian learning
• Different learning coefficients:
  • afferent weights: la
  • lateral inhibitory weights: li
  • lateral excitatory weights: le
• Initially learns long-term correlations, for self-organisation
• Then learns activity modulation, for segmentation
[Figure: learning coefficients la, li, le plotted over time t; N = normalisation factor]
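A hedged sketch of what one Hebbian step with separate coefficients could look like (the function name, coefficient values, and the exact normalisation are illustrative assumptions, not the original model's equations):

```python
import numpy as np

def hebbian_step(w_aff, w_exc, w_inh, pre, post,
                 la=0.05, le=0.02, li=0.01):
    """Illustrative sketch: Hebbian updates with separate learning
    coefficients for afferent (la), lateral excitatory (le) and
    lateral inhibitory (li) weights, followed by a normalisation
    step (the factor N on the slide)."""
    w_aff += la * np.outer(post, pre)    # afferent correlations
    w_exc += le * np.outer(post, post)   # lateral excitatory
    w_inh += li * np.outer(post, post)   # lateral inhibitory
    # Normalise afferent weights so self-organisation stays stable
    w_aff /= np.linalg.norm(w_aff, axis=1, keepdims=True) + 1e-12
    return w_aff, w_exc, w_inh
```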
Spiking Neuron Dynamics
[Figure: neuron output trace y(t); after the last firing time tf the membrane potential follows urest plus a term depending on (t − tf)]
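The slides do not give the full model; a leaky integrate-and-fire sketch reproduces the qualitative behaviour in the figure, a potential that relaxes towards rest after each spike at time tf (all constants here are illustrative):

```python
import numpy as np

def lif_trace(inputs, dt=1.0, tau=10.0, u_rest=0.0, threshold=1.0):
    """Hedged sketch of spiking dynamics: the membrane potential
    decays towards u_rest, integrates its input, and is reset to
    u_rest when it crosses the firing threshold (spike time tf)."""
    u = u_rest
    trace, spikes = [], []
    for t, i_t in enumerate(inputs):
        u += dt / tau * (u_rest - u) + i_t   # leak towards rest + input
        if u >= threshold:                   # spike and reset
            spikes.append(t)
            u = u_rest
        trace.append(u)
    return np.array(trace), spikes
```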
Spiking SOM Recall
• Different shapes are shown together
• Bursts of neuron activity
• Each cluster fires alternately
Adaptive Resonance Theory (ART)
• Carpenter and Grossberg (1976)
• Inspired by studies on biological feature detectors
• On-line clustering algorithm
• Leader-follower algorithm
• Recurrent ANN
• Competitive output layer
• Data clustering applications
• Stability-plasticity dilemma
[Diagram: feature layer with recurrent connections to output layer]
ART Types
• ART1: binary patterns
• ART2: binary or analog patterns
• ART3: hierarchical ART structure
• ARTMAP: supervised ART
Stability-Plasticity Dilemma
• Plasticity: the system adapts its behaviour in response to significant events
• Stability: the system's behaviour does not change after irrelevant events
• Dilemma: how to achieve stability without rigidity, and plasticity without chaos?
  • Ongoing learning capability
  • Preservation of learned knowledge
ART Architecture
• Bottom-up weights wij: normalised copy of vij
• Top-down weights vij: store the class template
• Input nodes: vigilance test, input normalisation
• Output nodes: forward matching
• Long-term memory: ANN weights
• Short-term memory: ANN activation pattern
[Diagram: bottom-up (normalised) and top-down connections between input and output layers]
ART Algorithm
• The incoming pattern is matched against stored cluster templates (recognition, then comparison)
• If it is close enough to a stored template (known), it joins the best-matching cluster and the weights are adapted according to the outstar rule
• If not (unknown), a new cluster is initialised with the pattern as its template
[Flowchart: new pattern → categorisation; known → adapt winner node; unknown → initialise uncommitted node]
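A hedged sketch of this leader-follower loop for analog patterns (the inner-product match, the learning rate, and the name art_cluster are illustrative, not the exact ART equations):

```python
import numpy as np

def art_cluster(patterns, rho=0.7, lr=0.5):
    """Leader-follower clustering sketch: match each pattern against
    stored templates, accept the best one that passes the vigilance
    test, otherwise commit a new node with the pattern as template."""
    templates, labels = [], []
    for x in patterns:
        x = x / np.linalg.norm(x)
        # Recognition: rank committed nodes by inner product, best first
        order = np.argsort([-np.dot(x, t) for t in templates]) if templates else []
        for i in order:
            # Comparison: vigilance test on the current hypothesis
            if np.dot(x, templates[i]) >= rho:
                templates[i] += lr * (x - templates[i])  # adapt winner node
                labels.append(i)
                break
        else:
            templates.append(x.copy())                   # initialise uncommitted node
            labels.append(len(templates) - 1)
    return templates, labels
```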
Recognition Phase
• Forward transmission via bottom-up weights
• The input pattern is matched against the bottom-up weights (normalised template) of the output nodes
• Inner product x·wi, summed over the N input features, where x is the input pattern and wi the bottom-up weight vector of neuron i
• Hypothesis formulation: the best-matching node fires (winner-take-all layer)
• Similar to Kohonen's SOM algorithm: the pattern is associated with the closest-matching template
• ART1: the match is the fraction of bits of the template that are also in the input pattern
[Diagram: vectors x and wi with the angle θ between them]
Comparison Phase
• Backward transmission via top-down weights
• Vigilance test: the class template is matched against the input pattern
• Hypothesis validation: if the pattern is close enough to the template, categorisation was successful and "resonance" is achieved
• If not close enough, the winning neuron is reset and the next-best match is tried
• Repeat until either the vigilance test is passed or the hypotheses (committed neurons) are exhausted
• ART1: the match is the fraction of bits of the input pattern that are also in the template, compared against the vigilance threshold r (x = input pattern, vi = top-down weight vector of neuron i)
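Written out, the ART1 vigilance test on this slide is

```latex
\frac{\lvert \mathbf{x} \wedge \mathbf{v}_{i} \rvert}{\lvert \mathbf{x} \rvert} \;\geq\; \rho
```

where ∧ is the component-wise AND, |·| counts the set bits, and ρ is the vigilance threshold (the slide's r).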
Vigilance Threshold
• The vigilance threshold sets the granularity of the clustering
• It defines the basin of attraction of each prototype
• Low threshold: large mismatches accepted → few large clusters, misclassifications more likely
• High threshold: only small mismatches accepted → many small clusters, higher precision
[Figure: small r gives imprecise clusters, large r gives fragmented clusters]
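Illustratively, running the art_cluster sketch from above with two thresholds shows the effect:

```python
import numpy as np

# Illustrative only: reuses the art_cluster sketch defined earlier
data = np.random.rand(100, 4)            # random analog patterns
coarse, _ = art_cluster(data, rho=0.5)   # low vigilance: few, large clusters
fine, _ = art_cluster(data, rho=0.95)    # high vigilance: many, small clusters
print(len(coarse), "clusters vs", len(fine), "clusters")
```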
Adaptation
• Only the weights of the winning node are updated
• ART1: only features common to all members of the cluster are kept; the prototype is the intersection set of its members
• ART2: the prototype is brought closer to the last example; b determines the amount of modification
[Figure: ART1 and ART2 prototype updates]
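The two updates are simple enough to state directly; a sketch (the value of beta is illustrative):

```python
import numpy as np

def adapt_art1(template, x):
    """ART1: the prototype keeps only features common to all cluster
    members, i.e. the intersection (logical AND) of binary vectors."""
    return np.logical_and(template, x).astype(int)

def adapt_art2(template, x, beta=0.3):
    """ART2: the prototype is moved towards the last example; beta
    (the slide's b) sets the amount of modification."""
    return (1 - beta) * template + beta * x
```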
Additional Modules
[Diagram: input pattern → input layer → output layer → categorisation result, with a gain control module and a reset module attached]
Reset Module
• Fixed connection weights
• Implements the vigilance test
• Excitatory connection from the input lines; inhibitory connection from the input layer
• The output of the reset module is inhibitory to the output layer
• It disables a firing output node if the match with the pattern is not close enough
• The reset signal lasts as long as the pattern is present
Operation:
• A new pattern p is presented
• The reset module receives an excitatory signal E from the input lines; all active nodes are reset
• The input layer is activated; the reset module receives an inhibitory signal I from the input layer, with I > E
• If p·v < r, the inhibition weakens and the reset signal is sent
Gain Module
• Fixed connection weights
• Controls the activation cycle of the input layer
• Excitatory connection from the input lines; inhibitory connection from the output layer
• The output of the gain module is excitatory to the input layer
• Shuts the system down if noise produces oscillations
• 2/3 rule for the input layer
Operation:
• A new pattern p is presented; the gain module receives an excitatory signal E from the input lines and the input layer is allowed to fire
• The input layer is activated, then the output layer is activated and the gain module is turned down
• Now it is the feedback from the output layer that keeps the input layer active
• If p·v < r, the output layer is switched off and the gain module allows the input layer to keep firing for another match
2/3 Rule
Two inputs out of three are needed for the input layer to be active:
• A new pattern p is presented
• The input layer is activated
• The output layer is activated
• A reset signal is sent
• New match
• Resonance
• Input off
Issues with ART
• Learned knowledge can be retrieved
• Fast learning algorithm
• The vigilance threshold is difficult to tune
• Noise tends to lead to category proliferation
• New noisy patterns tend to "erode" templates
• ART is sensitive to the order of presentation of the data
• Accuracy is sometimes not optimal
• Assumes the sample distribution is Gaussian (see SOM)
• Only the winning neuron is updated: a more "point-to-point" mapping than SOM
SOM Plasticity vs. ART Plasticity
Given a new pattern, SOM moves a previously committed node and rearranges its neighbours, so prior learning is partly "forgotten"; ART instead initialises a new node and leaves committed ones in place.
[Figure: SOM mapping vs. ART mapping of a new pattern]
ART Applications
• Natural language processing
  • Document clustering
  • Document retrieval
  • Automatic query
• Image segmentation
• Character recognition
• Data mining
  • Data set partitioning
  • Detection of emerging clusters
• Fuzzy partitioning
• Condition-action association
Further Topics - ARTMAP
• Composed of two ART ANNs and a mapping field
• On-line, supervised, self-organising ANN
• Mapping field: connects the output nodes of ART 1 to the output nodes of ART 2
• The mapping field is trained using Hebbian learning
• ART 1 partitions the input space; ART 2 partitions the output space
• The mapping field learns stimulus-response associations
[Diagram: input pattern → ART 1 (input layer, output layer) → mapping field → ART 2 (output layer, input layer) ← desired output]
Conclusions - ANNs
• ANNs can learn where knowledge is not available
• ANNs can generalise from learned knowledge
• There are several different ANN models with different capabilities
• ANNs are robust, flexible and accurate systems
• Parallel distributed processing allows fast computation and fault tolerance
• ANNs require a set of parameters to be defined, e.g. architecture and learning rate
• Training is crucial to ANN performance
• Learned knowledge is often not accessible (black box)
Further Readings
• Mitchell, T. (1997), Machine Learning, McGraw Hill.
• Duda, R. O., Hart, P. E., and Stork, D. G. (2000), Pattern Classification, 2nd Edition, New York: Wiley.
• ANN Glossary: www.rdg.ac.uk/CSC/Topic/Graphics/GrGMatl601/Matlab6/toolbox/nnet/a_gloss.html