Self-organizing maps: a visualization technique with data dimension reduction. Juan López González, University of Oviedo. Inverted CERN School of Computing, 24-25 February 2014
General overview • Lecture 1 • Machine learning • Introduction • Definition • Problems • Techniques • Lecture 2 • ANN introduction • SOM • Simulation • SOM-based models
LECTURE 1 Self-organizing maps. A visualization technique with data dimension reduction.
1.2. Types • 1.2.1. Feedforward NN • 1.2.2. Recurrent NN • 1.2.3. Self-organizing NN • 1.2.4. Others
1.2.1. Feedforward NN • Single-layer feedforward
1.2.1. Feedforward NN • Multi-layer feedforward • Supervised learning • Backpropagation learning algorithm (a sketch follows below)
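A minimal sketch of a multi-layer feedforward pass with one backpropagation step, assuming a tiny 3-4-1 network, sigmoid activations, and a squared-error loss; none of these choices come from the slides:

```python
import numpy as np

# Illustrative 3-4-1 feedforward network: one forward pass and one
# backpropagation update. Sizes, activation, and loss are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (3) -> hidden (4)
W2 = rng.normal(size=(1, 4))   # hidden (4) -> output (1)

x = rng.normal(size=3)         # one training sample
t = np.array([1.0])            # its target
eta = 0.1                      # learning rate

# Forward pass
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# Backward pass: chain rule for the squared-error loss
delta_out = (y - t) * y * (1 - y)
delta_hid = (W2.T @ delta_out) * h * (1 - h)
W2 -= eta * np.outer(delta_out, h)
W1 -= eta * np.outer(delta_hid, x)
```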
1.2.2. Recurrent neural networks • Elman networks • ‘Context units’ • Maintain state
1.2.2. Recurrent neural networks • Hopfield network • Symmetric connections • Associative memory
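As a hedged illustration of Hopfield-style associative memory, the sketch below stores two ±1 patterns with the Hebbian rule (symmetric weights, zero diagonal) and recalls one from a corrupted cue; the patterns and sizes are made up for the example:

```python
import numpy as np

# Hopfield network as associative memory: a small illustrative sketch.

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
n = patterns.shape[1]

W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)          # Hebbian rule: symmetric connections
np.fill_diagonal(W, 0)

# Recall: iterate from a noisy cue; for small examples like this the
# state typically settles on the nearest stored pattern
state = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of pattern 0
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1
print(state)                      # recovers pattern 0
```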
1.2.2. Recurrent neural networks • Modular neural networks • The human brain is not a massive network but a collection of small networks
1.2.3. Self-organizing networks • A set of neurons learns to map points in an input space to coordinates in an output space
1.2.4. Others • Holographic associative memory • Instantaneously trained networks • Learning vector quantization • Neuro-fuzzy networks • …
2. Self-organizing maps • 2.1. Motivation • 2.2. Goal • 2.3. Main properties • 2.4. Elements • 2.5. Algorithm
2.1. Motivation • Topographic maps • Different sensory inputs (motor, visual, auditory…) are mapped onto areas of the cerebral cortex in an orderly fashion
2.1. Motivation • “The spatial location of an output neuron in a topographic map corresponds to a particular domain or feature drawn from the input space” • (Figures: auditory cortical fields; motor-somatotopic maps)
2.2. Goal • Transform an incoming signal of arbitrary dimension into a discrete one-, two-, or three-dimensional map in a topologically ordered fashion
2.3. Main properties • Transforms a continuous input space into a discrete output space • Dimension reduction • Winner-takes-all neuron • Ordered feature map: inputs with similar characteristics produce similar outputs
2.3.1. Dimension reduction • Curse of dimensionality (Richard E. Bellman): the amount of data needed grows exponentially with the dimensionality • Types • Feature extraction: reduce the input data (feature vector) • Feature selection: select a subset (remove redundant and irrelevant data)
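To make the exponential growth concrete, a few lines of Python; the bin count of 10 is an arbitrary illustration:

```python
# Bellman's curse of dimensionality: covering each axis with just
# 10 bins needs 10**d cells, so the data required to fill the
# space grows exponentially with the dimension d.

bins = 10
for d in (1, 2, 3, 10, 20):
    print(f"{d:>2} dimensions -> {bins**d:.3e} cells to cover the space")
```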
2.4. Elements • …of machine learning • A patternexists • Wedon’tknowhow to solveitmathematically • A lot of data • (a1,b1,..,n1), (a2,b2,..,n2) … (aN,bN,..,nN)
2.4. Elements • Lattice of neurons • Size? • Weights
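A minimal sketch of the lattice as a data structure, assuming a 10x10 grid and 3-dimensional inputs (both sizes are illustrative choices, not from the slides):

```python
import numpy as np

# A SOM lattice: a rows x cols grid of neurons, each holding a weight
# vector of the input dimension.

rows, cols, dim = 10, 10, 3
rng = np.random.default_rng(0)

weights = rng.random((rows, cols, dim))   # one weight vector per neuron

# Grid coordinates of every neuron, used later for neighborhood distances
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)   # shape (rows, cols, 2)
```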
2.4. Elements • Learning rate • Neighborhood function • Learning-rate function
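The usual textbook choices, a Gaussian neighborhood with shrinking width and an exponentially decaying learning rate, can be sketched as follows; the constants sigma0, eta0, and tau are illustrative, not prescribed by the slides:

```python
import numpy as np

# Standard SOM schedules: Gaussian neighborhood that narrows over time,
# and an exponentially decaying learning rate.

def neighborhood(dist_sq, t, sigma0=5.0, tau=1000.0):
    """Gaussian neighborhood h(t) around the BMU; width shrinks with t."""
    sigma = sigma0 * np.exp(-t / tau)
    return np.exp(-dist_sq / (2.0 * sigma**2))

def learning_rate(t, eta0=0.5, tau=1000.0):
    """Exponentially decaying learning rate eta(t)."""
    return eta0 * np.exp(-t / tau)
```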
2.5. Algorithm • Initialization • Input data preprocessing • Normalizing • Discrete-continuous variables? • Weight initialization • Random weights
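A sketch of the initialization step, assuming min-max normalization and uniform random weights; the dataset here is a placeholder:

```python
import numpy as np

# Initialization: min-max normalize the inputs, then draw random weights
# in the same range.

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))          # placeholder input data

# Normalize each feature to [0, 1]
lo, hi = data.min(axis=0), data.max(axis=0)
data = (data - lo) / (hi - lo)

# Random initial weights matching the normalized input range
weights = rng.random((10, 10, 3))
```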
2.5. Algorithm (2) • Sampling: take a sample x(t) from the input space • Matching: find the best-matching unit (BMU), i(x) = arg min_j ‖x(t) − w_j(t)‖ • Update weights: w_j(t+1) = w_j(t) + η(t) h_{j,i(x)}(t) [x(t) − w_j(t)]
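Putting the three steps together, a compact sketch of the standard SOM training loop; grid size, decay schedules, and iteration count are illustrative:

```python
import numpy as np

# Standard SOM training loop: sample, match (BMU), update.

rng = np.random.default_rng(0)
rows, cols, dim, n_iter = 10, 10, 3, 2000

data = rng.random((500, dim))                      # normalized inputs
weights = rng.random((rows, cols, dim))            # random initial weights
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1).astype(float)

for t in range(n_iter):
    eta = 0.5 * np.exp(-t / n_iter)                # learning-rate decay
    sigma = 5.0 * np.exp(-t / n_iter)              # shrinking neighborhood

    # 1. Sampling: draw one input vector
    x = data[rng.integers(len(data))]

    # 2. Matching: BMU = neuron whose weights are closest to x
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)

    # 3. Update: pull every neuron toward x, scaled by a Gaussian
    #    neighborhood centered on the BMU
    d_sq = ((grid - grid[bmu])**2).sum(axis=-1)
    h = np.exp(-d_sq / (2.0 * sigma**2))
    weights += eta * h[..., None] * (x - weights)
```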
4. Map examples • 4.1. Digit recognition • 4.2. Finnish phonetics • 4.3. Semantic map of word context
5. Other SOM-based models • 5.1. TASOM • 5.2. GSOM • 5.3. MuSOM
5.1. TASOM • Time-adaptive self-organizing maps • Deals with non-stationary input distributions • Adaptive learning rates: η(w, x) • Adaptive neighborhood sizes: T(w, x)
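The sketch below illustrates only the idea of per-neuron adaptive learning rates; it deliberately simplifies the actual TASOM update equations, which are given in the original TASOM papers:

```python
import numpy as np

# Conceptual sketch, NOT the exact TASOM rules: each neuron keeps its
# own learning rate, which rises again when inputs drift away from its
# weights, so the map can track a non-stationary distribution.

rng = np.random.default_rng(0)
n_neurons, dim = 25, 3
weights = rng.random((n_neurons, dim))
eta = np.full(n_neurons, 0.5)      # per-neuron adaptive learning rate
beta = 0.1                         # adaptation speed (illustrative)

def tasom_step(x):
    errs = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(errs)
    # Learning rate chases the (clipped) quantization error, so it grows
    # again if the input distribution shifts
    eta[bmu] += beta * (min(errs[bmu], 1.0) - eta[bmu])
    weights[bmu] += eta[bmu] * (x - weights[bmu])

for _ in range(200):               # usage: feed a non-stationary stream here
    tasom_step(rng.random(dim))
```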
5.2. GSOM • Growing self-organizing maps • Deals with identifying a suitable size for SOMs • Spread factor controls growth • New nodes added at the boundaries • Good when clusters are unknown
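A small sketch of GSOM's growth criterion: the growth threshold GT = −D·ln(SF), with D the input dimension and SF the user-chosen spread factor, follows the original GSOM formulation; the surrounding bookkeeping here is simplified:

```python
import numpy as np

# GSOM in one idea: start with a tiny map and insert new nodes at the
# boundary wherever a node's accumulated quantization error exceeds a
# growth threshold derived from the spread factor.

def growth_threshold(dim, spread_factor):
    """Higher spread factor -> lower threshold -> larger, finer map."""
    return -dim * np.log(spread_factor)

print(growth_threshold(dim=3, spread_factor=0.5))   # ~2.08
print(growth_threshold(dim=3, spread_factor=0.9))   # ~0.32

# During training, each node accumulates its quantization error; when a
# boundary node's error exceeds GT, new neighbor nodes are inserted there.
```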
5.3. MuSOM • Multimodal SOM • High-level classification from sensory integration