An Illustrative Example
Apple/Orange Sorter
Sensors → Neural Network → Sorter
Sensor encoding:
Shape: {1 : round ; -1 : elliptical}
Texture: {1 : smooth ; -1 : rough}
Weight: {1 : > 1 lb. ; -1 : < 1 lb.}
Outputs: Apple, Orange
Prototype Vectors
Each fruit is represented by a three-element sensor vector:
Shape: {1 : round ; -1 : elliptical}
Texture: {1 : smooth ; -1 : rough}
Weight: {1 : > 1 lb. ; -1 : < 1 lb.}
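Under this encoding, the two prototype vectors used throughout the example (as given in Hagan, Demuth, and Beale's Neural Network Design, from which this example is drawn) are:

p1 = [1, -1, -1]^T (orange: round, rough, < 1 lb.)
p2 = [1, 1, -1]^T (apple: round, smooth, < 1 lb.)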
Perceptron
Transfer function hardlims:
a = -1, n < 0
a = 1, n ≥ 0
Perceptron (cont.)
With weight vector W = [-1, 1] and bias b = -1 (so n = Wp + b; the bias is implied by the arithmetic below):
(p1, p2) = (-1, 2): n = 2, a = hardlims(2) = 1
(p1, p2) = (1, -3): n = -5, a = hardlims(-5) = -1
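The two evaluations above can be checked with a minimal sketch. The weight W = [-1, 1] comes from the slide; the bias b = -1 is an assumption inferred from the stated values of n:

```python
import numpy as np

def hardlims(n):
    """Symmetric hard limiter: +1 if n >= 0, else -1."""
    return 1 if n >= 0 else -1

# W is given on the slide; b = -1 is inferred so that n = Wp + b
# reproduces n = 2 and n = -5 for the two test points.
W = np.array([-1, 1])
b = -1

for p in (np.array([-1, 2]), np.array([1, -3])):
    n = W @ p + b
    print(p, n, hardlims(n))   # -> n = 2, a = 1 and n = -5, a = -1
```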
Feedforward Layer for Orange/Apple Recognition
First layer of the Hamming network: S = 2 neurons, one per prototype, with the linear transfer function purelin: a = n.
Feedforward Layer (cont.)
Why is it called a Hamming network? The Hamming distance between two vectors is equal to the number of elements that differ. For example, the Hamming distance between [1, -1, -1] and [1, 1, 1] is 2, and the Hamming distance between [1, 1, -1] and [1, 1, 1] is 1.
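The definition translates directly into code; this sketch reproduces the two distances quoted above:

```python
def hamming_distance(u, v):
    """Count the positions where two equal-length vectors differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

print(hamming_distance([1, -1, -1], [1, 1, 1]))  # -> 2
print(hamming_distance([1, 1, -1], [1, 1, 1]))   # -> 1
```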
Recurrent Layer
Transfer function poslin:
a = 0, n < 0
a = n, n ≥ 0
Hamming Operation
First layer: a¹ = W¹p + b¹, where the rows of W¹ are the prototype vectors and each element of b¹ equals the input dimension R = 3. Since the inner product of two bipolar vectors is R minus twice their Hamming distance, each first-layer output is largest for the prototype closest to the input.
Hamming Operation (cont.)
Second layer: competition with ε = 0.5. The outputs are iterated through the recurrent layer until only one neuron remains positive; here the winner is the orange.
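The full two-layer Hamming operation can be sketched as follows. The prototype rows, the bias b¹ = [3, 3], and the test input p = [-1, -1, -1] follow the standard values of this textbook example (an assumption, since the slide does not repeat them); ε = 0.5 is given above:

```python
import numpy as np

# First (feedforward) layer: rows of W1 are the prototypes
# (orange = [1, -1, -1], apple = [1, 1, -1]); b1 = [R, R] with R = 3.
W1 = np.array([[1, -1, -1],
               [1,  1, -1]])
b1 = np.array([3, 3])

def poslin(n):
    """Positive linear transfer function: max(n, 0) elementwise."""
    return np.maximum(n, 0)

# Second (recurrent) layer: lateral inhibition with epsilon = 0.5.
eps = 0.5
W2 = np.array([[1.0, -eps],
               [-eps, 1.0]])

p = np.array([-1, -1, -1])       # assumed test input: elliptical, rough, light
a = W1 @ p + b1                  # first-layer output
while True:
    a_next = poslin(W2 @ a)      # iterate the competition
    if np.array_equal(a_next, a):
        break
    a = a_next
print(a)                         # only the "orange" neuron stays positive
```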
Apple/Orange Problem
The Hopfield network applied to the apple/orange problem uses the transfer function satlins:
a = -1, n < -1
a = n, -1 ≤ n ≤ 1
a = 1, n > 1
Apple/Orange Problem (cont.)
Test: the network output converges to the orange prototype.
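A minimal sketch of the Hopfield test, using the diagonal weights and biases from Hagan et al.'s version of this example (an assumption, as the slide does not repeat them):

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function: clip n to [-1, 1]."""
    return np.clip(n, -1, 1)

# Assumed parameters from the textbook's apple/orange Hopfield network.
W = np.diag([0.2, 1.2, 0.2])
b = np.array([0.9, 0.0, -0.9])

a = np.array([-1.0, -1.0, -1.0])   # assumed test input (elliptical, rough, light)
for _ in range(5):
    a = satlins(W @ a + b)         # a(t+1) = satlins(W a(t) + b)
print(a)                           # -> [ 1. -1. -1.], the orange prototype
```

Unlike the Hamming network, whose output only identifies the winning prototype, the Hopfield network's state itself converges to the stored pattern.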
Summary
• Perceptron
  – Feedforward network
  – Linear decision boundary
  – One neuron for each decision
• Hamming Network
  – Competitive network
  – First layer: pattern matching (inner product)
  – Second layer: competition (winner-take-all)
  – # neurons = # prototype patterns
• Hopfield Network
  – Dynamic associative memory network
  – Network output converges to a prototype pattern
  – # neurons = # elements in each prototype pattern