Unsupervised Learning & Self-Organizing Maps
Unsupervised Competitive Learning • In Hebbian-like networks, all neurons can fire at the same time • In competitive learning, only a single neuron from each group fires at each time step • Output units compete with one another • These are Winner-Takes-All units ("grandmother cells")
Unsupervised Competitive Learning, Cntd • Such networks cluster the data points • The number of clusters is not predefined, but it is limited by the number of output units • Applications include vector quantization (VQ), medical diagnosis, document classification, and more
Simple Competitive Learning • N input units, P output neurons, P × N weights • (Network diagram: inputs x1 … xN connected to outputs Y1 … YP through weights W11 … WPN)
Simple Model, Cntd • All weights are positive and normalized • Inputs and outputs are binary • Only one unit fires in response to an input
Network Activation • The unit with the highest field h_i = Σ_j w_ij x_j fires • i* denotes the winning unit • Geometrically, the winner's weight vector is the one closest to the current input vector • The winning unit's weight vector is updated to be even closer to the current input vector • Possible variation: adding lateral inhibition
Learning • Starting with small random weights, at each step: • a new input vector is presented to the network • all fields are calculated to find the winner i* • the winner's weight vector w_i* is updated to be closer to the input
Learning Rule • Standard competitive learning: only the winning unit i* is updated, Δw_i*j = η (x_j − w_i*j) • This can be formulated as a Hebbian-like rule, Δw_ij = η y_i (x_j − w_ij), where y_i = 1 for the winner and y_i = 0 for all other units
Result • Each output unit moves to the center of mass of a cluster of input vectors → clustering (see the sketch below)
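A minimal NumPy sketch of this procedure is shown below. The function name, hyper-parameters, and toy data are illustrative assumptions, not taken from the lecture; the winner is chosen by minimum distance, which is equivalent to the highest field h_i when the weights are normalized.

```python
import numpy as np

def competitive_learning(X, P, eta=0.1, epochs=50, seed=0):
    """Hard (winner-take-all) competitive learning with P output units.
    Each row of the returned P x N weight matrix ends up near the
    center of mass of one cluster of the inputs X."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(P, X.shape[1]))     # small random weights break the symmetry
    for _ in range(epochs):
        for x in rng.permutation(X):
            # winner = unit whose weight vector is closest to the input
            i_star = np.argmin(((W - x) ** 2).sum(axis=1))
            W[i_star] += eta * (x - W[i_star])           # pull the winner towards the input
    return W

# toy usage: two well-separated 2-d clusters, two output units
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=+3, size=(100, 2)),
               rng.normal(loc=-3, size=(100, 2))])
print(competitive_learning(X, P=2))   # each row ends up near one cluster center
```

With a learning rate that decays over time, the two weight vectors settle at the centers of mass of the two clusters, as stated in the result above.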
Competitive Learning, Cntd • It is important to break the symmetry in the initial random weights • The final configuration depends on the initialization • A winning unit is more likely to win the next time a similar input is seen • Some output units may never fire • This can be compensated for by also updating the non-winning units, with a smaller update
Model: Horizontal & Vertical Lines (Rumelhart & Zipser, 1985) • Problem: identify whether the signal is a horizontal or a vertical line • Inputs are 6 × 6 arrays • Intermediate layer with 8 WTA units • Output layer with 2 WTA units • The task cannot be solved with a single layer (a code sketch follows the figure below)
Rumelhart & Zipser, Cntd • (Figure: example 6 × 6 input arrays containing a horizontal (H) or a vertical (V) line)
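The slides only fix the layer sizes, so the sketch below is one assumed way to wire the experiment up, reusing competitive_learning() from the earlier sketch. The stimulus generator and the training of the two layers are our assumptions, and this simplified version is not guaranteed to reproduce the exact separation reported by Rumelhart & Zipser.

```python
def line_stimulus(rng):
    """A 6 x 6 binary image containing a single horizontal or vertical line."""
    img = np.zeros((6, 6))
    idx = rng.integers(6)
    if rng.random() < 0.5:
        img[idx, :] = 1.0      # horizontal line
    else:
        img[:, idx] = 1.0      # vertical line
    return img.ravel()

rng = np.random.default_rng(2)
X = np.array([line_stimulus(rng) for _ in range(500)])

# Layer 1: 8 WTA units trained on the raw 36-d inputs
W1 = competitive_learning(X, P=8)
winners = np.argmin(((X[:, None, :] - W1[None, :, :]) ** 2).sum(axis=2), axis=1)
H = np.eye(8)[winners]                      # one-hot activity of the intermediate layer

# Layer 2: 2 WTA units trained on the intermediate-layer activity,
# which ideally split into a "horizontal" and a "vertical" unit
W2 = competitive_learning(H, P=2)
```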
Geometrical Interpretation • So far, the ordering of the output units themselves was not necessarily informative • The location of the winning unit can give us information about similarities in the data • We are looking for an input-output mapping that conserves the topological properties of the inputs → a feature mapping • Given any two spaces, it is not guaranteed that such a mapping exists!
Biological Motivation • In the brain, sensory inputs are represented by topologically ordered computational maps • Tactile inputs • Visual inputs (center-surround, ocular dominance, orientation selectivity) • Acoustic inputs
Biological Motivation, Cntd • Computational maps are a basic building block of sensory information processing • A computational map is an array of neurons representing slightly differently tuned processors (filters) that operate in parallel on sensory signals • These neurons transform input signals into a place-coded structure
Self-Organizing (Kohonen) Maps • Competitive networks (WTA neurons) • Output neurons are placed on a lattice, usually 2-dimensional • Neurons become selectively tuned to various input patterns (stimuli) • The locations of the tuned (winning) neurons become ordered in such a way that a meaningful coordinate system for different input features is created → a topographic map of the input patterns is formed
SOMs, Cntd • The spatial locations of the neurons in the map are indicative of statistical features that are present in the inputs (stimuli) → self-organization
Kohonen Maps • Simple case: 2-d input and 2-d output layer • No lateral connections • Weight update is done for the winning neuron and its surrounding neighborhood
Neighborhood Function • F is maximal at i* and drops to zero far from i*, for example a Gaussian: F(i, i*) = exp( −|r_i − r_i*|² / 2σ² ), where r_i is the position of unit i on the lattice • The update "pulls" the winning unit (weight vector) closer to the input, and also drags along the close neighbors of that unit
• The output layer is a sort of elastic net that wants to come as close as possible to the inputs • The output map conserves the topological relationships of the inputs • Both η and σ can be changed (typically decreased) during learning (see the sketch below)
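Below is a minimal sketch of such a Kohonen map, with a Gaussian neighborhood and with both η and σ shrinking over training. The grid size, decay schedules, and all hyper-parameters are assumptions chosen for illustration only.

```python
def train_som(X, grid=(10, 10), epochs=30, eta0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOM: output units sit on a 2-d lattice; the winner
    and its lattice neighbours are pulled towards each input."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    W = rng.random((rows * cols, X.shape[1]))            # one weight vector per lattice node
    for t in range(epochs):
        frac = t / epochs
        eta = eta0 * (1.0 - frac)                        # learning rate decays over time
        sigma = sigma0 * (1.0 - frac) + 0.5              # neighbourhood width shrinks over time
        for x in rng.permutation(X):
            i_star = np.argmin(((W - x) ** 2).sum(axis=1))        # winning unit
            d2 = ((coords - coords[i_star]) ** 2).sum(axis=1)     # squared lattice distance to winner
            F = np.exp(-d2 / (2.0 * sigma ** 2))                  # Gaussian neighbourhood F(i, i*)
            W += eta * F[:, None] * (x - W)              # pull winner and its neighbours towards x
    return W.reshape(rows, cols, -1)

# usage: fit a 10 x 10 map to points uniform in the unit square; plotting the
# returned grid of weight vectors shows the "elastic net" stretched over the inputs
X = np.random.default_rng(3).random((2000, 2))
grid_weights = train_som(X)
```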
Topographic Maps in the Brain • Examples of topology-conserving mappings between input and output spaces: • Retinotopic mapping between the retina and the cortex • Ocular dominance • Somatosensory mapping (the homunculus)
Models • Goodhill (1993) proposed a model for the development of retinotopy and ocular dominance, based on Kohonen maps • Two retinas project to a single layer of cortical neurons • Retinal inputs were modeled by random-dot patterns • Correlation between the two eyes was added to the inputs • The result is both an ocular dominance map and a retinotopic map
Models, Cntd • Farah (1998) proposed an explanation for the spatial ordering of the homunculus using a simple SOM • In the womb, the fetus lies with its hands close to its face and its feet close to its genitals • This would explain the ordering of the somatosensory areas in the homunculus
Other Models • Semantic self organizing maps to model language acquisition • Kohonen feature mapping to model layered organization in the LGN • Combination of unsupervised and supervised learning to model complex computations in the visual cortex
Examples of Applications • Kohonen (1984): speech recognition – a map of the phonemes of the Finnish language • Optical character recognition – clustering of letters of different fonts • Angéniol et al. (1988) – the travelling salesman problem (an optimization problem) • Kohonen (1990) – learning vector quantization (a pattern classification problem) • Ritter & Kohonen (1989) – semantic maps
Summary • Unsupervised learning is very common • Unsupervised learning requires redundancy in the stimuli • Self-organization is a basic property of the brain's computational structure • SOMs are based on: • competition (WTA units) • cooperation • synaptic adaptation • SOMs conserve the topological relationships between the stimuli • Artificial SOMs have many applications in computational neuroscience