PSY105 Neural Networks, Lecture 2/5: "A universe of numbers"
Lecture 1 recap
• We can describe patterns at one level of description that emerge due to rules followed at a lower level of description.
• Neural network modellers hope that we can understand behaviour by creating models of networks of artificial neurons.
McCulloch &amp; Pitts, 1943: the first artificial neuron model
• Warren McCulloch (neurophysiologist)
• Walter Pitts (mathematician)
A simple artificial neuron: the threshold logic unit (TLU)
[Figure: inputs are multiplied by weights and summed ("Add"); the activation is compared with a threshold]
Multiply each input by its weight and add the results. If the sum (the activation) is greater than or equal to the threshold, output 1; otherwise output 0.
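The TLU rule above can be sketched in a few lines of Python (the function name and signature are illustrative, not from the lecture):

```python
def tlu(inputs, weights, threshold=1.0):
    """Threshold logic unit: weighted sum of inputs, then a hard threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With a single input, a weight of 1, and threshold 1, the unit
# simply copies its input: tlu([0], [1.0]) gives 0, tlu([1], [1.0]) gives 1.
```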
TLU: the output relation
[Figure: output as a step function of activation, jumping from 0 to 1 at the threshold]
The relation is non-linear: the same small change in activation can leave the output unchanged or flip it completely, depending on where the activation starts relative to the threshold.
A semilinear node
[Figure: inputs are multiplied by weights and summed ("Add"); the activation is then passed through a squashing function rather than a hard threshold]
Semilinear node: the output relation (squashing function)
[Figure: output as a smooth S-shaped curve of activation, rising from 0 towards 1 around the threshold]
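The slide does not name a specific squashing function; a logistic sigmoid centred on the threshold is a common choice and is assumed in this sketch:

```python
import math

def semilinear(inputs, weights, threshold=1.0):
    """Semilinear node: weighted sum of inputs passed through a logistic
    squashing function (the sigmoid is an assumption; the slide only
    shows an S-shaped output curve)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-(activation - threshold)))
```

Unlike the TLU's all-or-nothing step, this output is exactly 0.5 when the activation equals the threshold, and slides smoothly towards 0 or 1 on either side.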
Model neuron function, reminders…
• Inputs vary; each can be 0 or 1
• Weights can change, effectively 'interpreting' the inputs
• There is a weight for each input
• A weight can be a positive number (excitation) or a negative number (inhibition)
• Weights do not change when inputs change
• Activation = weighted sum of inputs
• Activation = input1 × weight1 + input2 × weight2, etc.
• If activation ≥ threshold, output = 1; otherwise output = 0
• Threshold = 1
Computing with neurons: identity (1)
[Figure: a single input feeding a TLU through one weight; State 1 and State 2 show the two possible inputs] Threshold = 1
Computing with neurons: identity (2)
[Figure: a single input feeding a TLU through one weight; State 1 and State 2 show the two possible inputs] Threshold = 1
Computing with neurons: AND
[Figure: two inputs feeding a TLU; States 1–4 cover the four input combinations]
Threshold = 1, Weight 1 = 0.5, Weight 2 = 0.5
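Plugging the slide's AND values into a TLU sketch (the `tlu` helper is illustrative naming, using the greater-than-or-equal threshold rule) shows that only the both-inputs-on state reaches the threshold:

```python
def tlu(inputs, weights, threshold=1.0):
    """Threshold logic unit: weighted sum of inputs, then a hard threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# AND with the slide's values: weights 0.5 and 0.5, threshold 1.
# Each input alone contributes only 0.5, so both must be on to reach 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tlu([a, b], [0.5, 0.5]))
```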
Networks of such neurons are Turing complete. [Pictured: Alan Turing, 1912–1954]
Question: How could you use these simple neurons (TLUs) to compute the NOR (‘NOT OR’) function?
Computing with neurons: NOR (a clue)
[Figure: a TLU with three inputs: Input 1 (varies), Input 2 (varies), and a tonically active input (always = 1)]
Model neuron function, reminders…
• Inputs vary; each can be 0 or 1
• Weights can change, effectively 'interpreting' the inputs
• There is a weight for each input
• A weight can be a positive number (excitation) or a negative number (inhibition)
• Weights do not change when inputs change
• Activation = weighted sum of inputs
• Activation = input1 × weight1 + input2 × weight2, etc.
• If activation ≥ threshold, output = 1; otherwise output = 0
• Threshold = 1
Computing with neurons: NOR (one way)
[Figure: a TLU with three inputs: Input 1 (varies), Input 2 (varies), and a tonically active input (always = 1)]
Threshold = 1, Weight 1 = -1, Weight 2 = -1, Weight 3 = +1
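The slide's NOR weights can be checked in code (helper names are illustrative): the tonically active third input contributes +1 of baseline activation, and either varying input being on subtracts 1, dropping the sum below threshold:

```python
def tlu(inputs, weights, threshold=1.0):
    """Threshold logic unit: weighted sum of inputs, then a hard threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def nor(a, b):
    """NOR from one TLU: two inhibitory weights (-1, -1) on the varying
    inputs, plus +1 on a tonically active input that is always 1."""
    return tlu([a, b, 1], [-1.0, -1.0, 1.0])

# Only the (0, 0) state leaves the baseline +1 intact, so only it fires.
```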