Neural Networks. Omri Winner, Dor Bracha.
What is a neural network? A collection of interconnected neurons that compute and generate impulses. The components of a neural network include neurons, synapses, and activation functions. Neural networks described by mathematical or computer models are referred to as artificial neural networks (ANNs). Neural networks achieve their functionality through learning/training procedures. Introduction
N inputs: x1, ..., xn. • A weight for each input: w1, ..., wn. • A constant bias input x0 and its associated weight w0. • The output y is a weighted sum of the inputs: y = w0x0 + w1x1 + ... + wnxn. • A threshold function, e.g.: output 1 if y > 0, else -1. Perceptron
Diagram: inputs x0, x1, ..., xn with weights w0, w1, ..., wn feed a summation unit computing y = Σ wixi, followed by a threshold that outputs 1 if y > 0 and -1 otherwise.
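A minimal sketch of this perceptron in Python (the function name and the OR example are illustrative assumptions, not from the slides):

```python
import numpy as np

def perceptron_output(x, w):
    """Weighted sum of the inputs followed by the threshold.

    x: the inputs x1..xn; a constant bias input x0 = 1 is prepended,
    so w must hold the n + 1 weights w0..wn.
    """
    x = np.concatenate(([1.0], x))   # bias input x0 = 1
    y = np.dot(w, x)                 # y = w0*x0 + w1*x1 + ... + wn*xn
    return 1 if y > 0 else -1        # threshold function

# Hypothetical example: a perceptron computing logical OR on {-1, 1} inputs
w = np.array([0.5, 1.0, 1.0])                         # w0 (bias), w1, w2
print(perceptron_output(np.array([1.0, -1.0]), w))    # -> 1
print(perceptron_output(np.array([-1.0, -1.0]), w))   # -> -1
```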
The key to neural networks is the weights. Weights determine the importance of each signal and so determine the network's output. • Training cycles adjust the weights to improve the network's performance (one classic update rule is sketched below). The performance of an ANN depends critically on how well it was trained. • An untrained network is essentially useless. • Different training algorithms suit different network topologies, and some also lend themselves to parallelization. Learning/Training
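As a concrete illustration of weight adjustment, here is the classic perceptron learning rule, continuing the sketch above (the function name, learning rate, and epoch count are assumptions):

```python
import numpy as np

def train_perceptron(examples, w, lr=0.1, epochs=10):
    """Adjust the weights whenever the perceptron misclassifies an example.

    examples: (x, target) pairs with targets in {-1, 1}.
    Update rule: w_i <- w_i + lr * (target - output) * x_i,
    applied to the bias weight w0 as well (its input x0 is 1).
    """
    for _ in range(epochs):                      # training cycles
        for x, target in examples:
            output = perceptron_output(x, w)     # from the sketch above
            if output != target:                 # only update on errors
                x_with_bias = np.concatenate(([1.0], x))
                w = w + lr * (target - output) * x_with_bias
    return w
```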
Network topologies describe how neurons connect to one another. • Many artificial neural networks (ANNs) use network topologies that are highly parallelizable. Network Topologies
Network structure: a layered network with 960 inputs (x1, x2, ..., x960), a layer of 3 hidden units, and an output layer.
Training and evaluating each node in a large network can take an incredibly long time. • However, neural networks are "embarrassingly parallel": the computations for each node are generally independent of all other nodes, as the sketch below illustrates. Why Parallel Computing?
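A minimal sketch of that independence, evaluating every node of one layer in separate processes (the tanh activation, names, and worker count are assumptions):

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def node_output(args):
    """One node: weighted sum of the shared inputs, then a tanh activation."""
    weights, inputs = args
    return np.tanh(np.dot(weights, inputs))

def layer_outputs_parallel(weight_rows, inputs, workers=4):
    """Evaluate every node of a layer concurrently; each node reads only
    its own weights and the shared inputs, so no coordination is needed."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(node_output, [(w, inputs) for w in weight_rows]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 4))   # 8 nodes, 4 inputs each
    inputs = rng.normal(size=4)
    print(layer_outputs_parallel(weights, inputs))
```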
Typical structure of ANN training, with one loop per level (a code sketch of these levels follows the list):
• For each training session
  • For each training example in the session
    • For each layer (forward and backward)
      • For each neuron in the layer
        • For all weights of the neuron
          • For all bits of the weight value
Possible Levels of Parallelization
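A self-contained sketch of those loop levels for a small two-layer network (the tanh activation, squared-error gradient, and all names are assumptions; the innermost weight loop is vectorized per neuron, and bit-level parallelism is left to the hardware):

```python
import numpy as np

def train(W_hidden, W_out, sessions, lr=0.1):
    """Nested training loops matching the levels listed above.

    W_hidden: hidden-layer weights, one row per neuron.
    W_out:    output-layer weights, one row per neuron.
    sessions: iterable of sessions, each an iterable of (x, target) pairs.
    """
    for session in sessions:                          # training-session level
        for x, target in session:                     # training-example level
            h = np.tanh(W_hidden @ x)                 # forward pass, hidden layer
            y = np.tanh(W_out @ h)                    # forward pass, output layer
            d_out = (target - y) * (1 - y ** 2)       # backward pass, output layer
            d_hid = (W_out.T @ d_out) * (1 - h ** 2)  # backward pass, hidden layer
            for j in range(W_out.shape[0]):           # neuron level (output layer)
                W_out[j] += lr * d_out[j] * h         # weight level (vectorized row)
            for j in range(W_hidden.shape[0]):        # neuron level (hidden layer)
                W_hidden[j] += lr * d_hid[j] * x      # weight level (vectorized row)
    return W_hidden, W_out
```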
● Multilayer feedforward networks: layered networks in which each layer of neurons outputs only to the next layer.
● Feedback networks: a single layer of neurons whose outputs loop back to the inputs (see the sketch below).
● Self-organizing maps (SOM): form a mapping from a high-dimensional space to a lower-dimensional one (one layer of neurons).
● Sparse distributed memory (SDM): a special form of two-layer feedforward network that works as an associative memory.
Most Commonly Used Algorithms
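For the feedback case, a minimal Hopfield-style sketch in which the single layer's outputs become the next step's inputs (the synchronous update rule and all names are assumptions):

```python
import numpy as np

def feedback_step(W, state):
    """One synchronous update: each neuron thresholds the weighted
    sum of the previous outputs, which are fed back as inputs."""
    return np.where(W @ state > 0, 1, -1)

def run_feedback(W, state, max_steps=100):
    """Iterate until the state stops changing (a fixed point) or the
    step budget runs out (synchronous updates can oscillate)."""
    for _ in range(max_steps):
        new_state = feedback_step(W, state)
        if np.array_equal(new_state, state):
            return new_state
        state = new_state
    return state
```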
An S.O.M. produces a low-dimensional representation of the training samples. • It uses a neighborhood function to preserve the topological properties of the input space. • S.O.M.s are useful for visualization; one training step is sketched below. Self-Organizing Map (S.O.M.)
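A minimal sketch of one S.O.M. training step on a 2-D grid of units, using a Gaussian neighborhood function (the grid layout, fixed learning rate, and all names are assumptions):

```python
import numpy as np

def som_train_step(weights, x, lr=0.5, sigma=1.0):
    """One S.O.M. update on a (rows, cols, dim) grid of codebook vectors.

    Finds the best-matching unit (BMU) for input x, then pulls every
    unit toward x, scaled by a Gaussian neighborhood function of its
    grid distance to the BMU; nearby units move most, which preserves
    the topology of the input space."""
    dists = np.linalg.norm(weights - x, axis=2)             # distance of x to each unit
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2   # squared grid distance to BMU
    h = np.exp(-grid_d2 / (2 * sigma ** 2))                 # neighborhood function
    weights += lr * h[..., None] * (x - weights)            # move units toward x
    return weights
```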
Training time of a 25-node SOM as a function of the number of processors used.