Simple Perceptrons Or one-layer feed-forward networks
Equation governing the computation of a simple perceptron: O_i = g(h_i) = g(Σ_j w_ij ξ_j), where g is the activation function, usually nonlinear (e.g. a step function or sigmoid), and ξ is the input pattern.
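To make this concrete, here is a minimal Python sketch (my own illustration, not from the slides) of that forward computation; the function names and the choice of a ±1 step activation are assumptions.

```python
import numpy as np

def step(h):
    # Step activation g: +1 if the net input is non-negative, else -1
    return np.where(h >= 0, 1, -1)

def perceptron_output(W, xi):
    # Net input h_i = sum_j W[i, j] * xi[j], then apply the nonlinearity g
    h = W @ xi
    return step(h)
```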
Threshold or no threshold? With threshold: O_i = g(Σ_j w_ij ξ_j − θ_i). Without threshold: the threshold is simulated with a connection to an extra input terminal permanently tied to −1, whose weight is θ_i, so the threshold-free rule O_i = g(Σ_j w_ij ξ_j) applies.
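As an illustrative sketch (assumed names, not from the slides), the two forms give the same result when the extra input tied to −1 carries the threshold as its connection weight:

```python
import numpy as np

def output_with_threshold(W, theta, xi):
    # O_i = g(sum_j W[i, j] * xi[j] - theta[i]): explicit threshold per output unit
    return np.where(W @ xi - theta >= 0, 1, -1)

def output_without_threshold(W, theta, xi):
    # Fold the threshold into the weights: append an input permanently tied to -1
    # whose connection weight is theta, then use the threshold-free rule
    W_aug = np.column_stack([W, theta])
    xi_aug = np.append(xi, -1.0)
    return np.where(W_aug @ xi_aug >= 0, 1, -1)
```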
The General Association (Matching) Task asks that the actual output pattern equal the target pattern: O_i^μ = ζ_i^μ for every output unit i and every input pattern μ.
Threshold Units • Start with the simplest threshold unit, practical for one-level perceptrons • Also assume the targets take only the values ±1, with nothing in between, that is, ζ_i^μ = ±1 • Then all that matters is that, for each input pattern, the net input (weighted sum) h to each output unit has the same sign as the target ζ
A Notational Simplification • To simplify notation, note that the output units are independent • [In a multilayer network, however, the hidden (non-output) layers are not independent] • So consider only one output unit at a time • Drop the i subscripts. The weight vector and each input pattern live in the same space; the advantage is that the two vectors can be represented geometrically together.
New Form for the General Association Task: the requirement that the net input have the same sign as the target can be written ζ^μ (w · ξ^μ) > 0 for every pattern μ; geometrically, the weight vector w must lie on the positive side of the plane perpendicular to each ζ^μ ξ^μ. Another form: define x^μ ≡ ζ^μ ξ^μ, so the condition becomes w · x^μ > 0.
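A small sketch (my own, with assumed names) of this condition as a check over all patterns, using the x^μ ≡ ζ^μ ξ^μ form:

```python
import numpy as np

def all_patterns_satisfied(w, xis, zetas):
    # xis: (P, N) array of input patterns, zetas: (P,) array of +/-1 targets
    # Condition: w . x^mu > 0 for every mu, where x^mu = zeta^mu * xi^mu
    x = zetas[:, None] * xis
    return bool(np.all(x @ w > 0))
```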
A simple learning algorithm • Also called the Perceptron Rule • Go through the input patterns one by one • For each pattern, go through the output units one by one, asking whether the output is the desired one • If so, leave the weights into that unit alone • Otherwise, in the spirit of Hebb, add to each connection something proportional to the product of the input and the desired output
Simplified Simple Learning Algorithm (for the one-neuron case) • Start with w = 0 (not strictly necessary) • Cycle through the learning patterns • For each pattern ξ, if the output O is not the desired output ζ, add the product of the desired output and the input to w (i.e., w ← w + ζ ξ) • Keep cycling through the patterns until done • Convergence is guaranteed provided the two classes of input points are linearly separable; the perceptron convergence theorem guarantees this • A sketch of this procedure in code follows below
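Here is a short Python sketch (my own illustration, not from the slides) of the one-neuron procedure above; the learning rate eta, the stopping rule, and the toy patterns are assumptions for demonstration only.

```python
import numpy as np

def train_perceptron(xis, zetas, eta=1.0, max_epochs=100):
    # xis: (P, N) input patterns; zetas: (P,) desired outputs in {+1, -1}
    w = np.zeros(xis.shape[1])                 # start with w = 0
    for _ in range(max_epochs):                # keep cycling through the patterns
        errors = 0
        for xi, zeta in zip(xis, zetas):       # one pattern at a time
            O = 1 if w @ xi >= 0 else -1       # current output of the single unit
            if O != zeta:                      # wrong output: Hebb-style correction
                w = w + eta * zeta * xi        # w <- w + eta * zeta * xi
                errors += 1
        if errors == 0:                        # every pattern correct: done
            break
    return w

# Usage: a linearly separable toy problem (target = sign of the first input)
xis = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
zetas = np.array([1, 1, -1, -1])
w = train_perceptron(xis, zetas)               # converges, e.g. to w = [2, 0]
```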
Weight Update Formula: the "Hebbian" form given in the blue book is more complicated than needed here.