
Biologically Inspired Intelligent Systems


Presentation Transcript


  1. Biologically Inspired Intelligent Systems Lecture 4 Dr. Roger S. Gaborski

  2. Learning and inference in the brain, Karl Friston. Fig. 1. Schematic illustrating hierarchical structures in the brain and the distinction between forward, backward and lateral connections.

  3. QUIZ TODAY

  4. Textbook • Essentials of Metaheuristics • Sean Luke, George Mason University • Available online at no cost: http://cs.gmu.edu/~sean/book/metaheuristics/ Some lecture material taken from pp. 1-49

  5. Neuron Models

  6. Artificial Neurons vk = bk*1 + x1*w1 + x2*w2 + … + xn*wn Redefine vk as net and bk simply as b: net = b + ∑ xi * wi The activation function, f, can take several forms, including the identity function, binary step function, bipolar step function, sigmoid function, or bipolar sigmoid function.
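The weighted-sum-plus-activation computation above can be sketched in Python (the course's own code is MATLAB; this translation is only illustrative, and the function names are chosen here, not taken from the lecture):

```python
import numpy as np

def neuron(x, w, b, f):
    """Compute net = b + sum_i x_i * w_i, then apply activation function f."""
    net = b + np.dot(x, w)
    return f(net)

# The activation functions named on the slide
def identity(net):        return net
def binary_step(net):     return 1 if net >= 0 else 0
def bipolar_step(net):    return 1 if net >= 0 else -1
def sigmoid(net):         return 1.0 / (1.0 + np.exp(-net))
def bipolar_sigmoid(net): return 2.0 / (1.0 + np.exp(-net)) - 1.0
```

For example, neuron([1, 1], [1, 1], -1, bipolar_step) applies the bipolar step to net = -1 + 1 + 1 = 1 and returns +1.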

  7. Artificial Neural Networks/Neural Network Basics http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Neural_Network_Basics

  8. 2 Input Neuron Decision Boundary f(net) = 1 if net ≥ 0, = -1 if net < 0, where net = b + ∑ xi * wi, i = 1 or 2. Decision Boundary: the line separating the positive and negative output values of the net as a function of the weights w1 and w2: b + x1w1 + x2w2 = 0. Assuming w2 ≠ 0: x2 = -(w1/w2)x1 - (b/w2)

  9. 2 Input Neuron Decision Boundary Requirement for positive output: b + x1w1 + x2w2 > 0. During training the values of w1, w2 and b are determined so that the neuron gives a positive output (the correct response) on the training data.

  10. Response for AND Function
  INPUT (x1, x2)   Output (t)
  (1, 1)           +1
  (1, -1)          -1
  (-1, 1)          -1
  (-1, -1)         -1
  Assume the weights have been determined to be b = -1, w1 = 1 and w2 = 1. From b + x1w1 + x2w2 = 0 and x2 = -(w1/w2)x1 - (b/w2): x2 = -x1 - (-1/1) = -x1 + 1
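Plugging the assumed weights into the threshold unit confirms that they reproduce all four AND targets. A Python/NumPy sketch (the lecture's code is in MATLAB):

```python
import numpy as np

b, w = -1, np.array([1, 1])          # weights from the slide: b = -1, w1 = w2 = 1

inputs  = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
targets = np.array([1, -1, -1, -1])  # bipolar AND

net = b + inputs @ w                 # net = b + x1*w1 + x2*w2 for every row
outputs = np.where(net >= 0, 1, -1)  # f(net): bipolar threshold

# Decision boundary: x2 = -(w1/w2)*x1 - (b/w2) = -x1 + 1
slope, intercept = -w[0] / w[1], -b / w[1]
```

Only the input (1, 1) lands on the positive side of the boundary, as required by the AND function.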

  11. Response for given weights (AND) [Figure: decision boundary x2 = -x1 + 1 plotted in the (x1, x2) plane; sample points on the boundary: (0, +1), (-1, +2), (+1, 0). Only the input (1, 1) falls on the positive side.]

  12. Response for OR Function
  INPUT (x1, x2)   Output (t)
  (1, 1)           +1
  (1, -1)          +1
  (-1, 1)          +1
  (-1, -1)         -1
  Assume the weights have been determined to be b = 1, w1 = 1 and w2 = 1. From b + x1w1 + x2w2 = 0 and x2 = -(w1/w2)x1 - (b/w2): x2 = -x1 - (1/1) = -x1 - 1

  13. Response for given weights (OR) [Figure: decision boundary x2 = -x1 - 1 plotted in the (x1, x2) plane; sample points on the boundary: (0, -1), (-1, 0), (+1, -2). Only the input (-1, -1) falls on the negative side.]

  14. Donald Hebb In 1949, Donald Hebb wrote: "When the axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

  15. Hebbian Unsupervised Learning • Used to adjust weights between neurons • The weight represents a relationship between the neurons • A large positive weight implies the firing of the first neuron results in the firing of the second neuron • A large negative weight implies the firing of the first neuron inhibits the firing of the second neuron

  16. How do we find the weight values? Training Neurons – HEBB Rule • Initialize weights to 0 • For each training pair s:t (input vector s, target output t): • Set the inputs: xi = si • Set the output: y = t • Adjust the weights: wi(new) = wi(old) + xiy • Adjust the bias: b(new) = b(old) + y

  17. Learning the AND Function Bipolar inputs and outputs:
  (x1  x2  1)    Target
  (1   1   1)    1
  (1   -1  1)    -1
  (-1  1   1)    -1
  (-1  -1  1)    -1
  For each (training input, target) pair the weight change is the product of the input vector and the target: ∆w1 = x1t, ∆w2 = x2t, ∆b = t. Use only one iteration through the training data.

  18. Bipolar AND Training
  Input (x1 x2 1)   Target   ∆w1  ∆w2  ∆b    w1  w2   b
                                              0    0   0
  ( 1  1  1)          1        1    1   1     1    1   1
  ( 1 -1  1)         -1       -1    1  -1     0    2   0
  (-1  1  1)         -1        1   -1  -1     1    1  -1
  (-1 -1  1)         -1        1    1  -1     2    2  -2
  x2 = -(w1/w2)x1 - (b/w2) = -(2/2)x1 - (-2/2) = -x1 + 1
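The training trace above can be reproduced with a short sketch of the single-pass Hebb update (written here in Python/NumPy; the lecture's own version, on the next slide, is MATLAB):

```python
import numpy as np

inp = np.array([[ 1,  1, 1],      # rows are (x1, x2, 1); last column is the bias input
                [ 1, -1, 1],
                [-1,  1, 1],
                [-1, -1, 1]])
tar = np.array([1, -1, -1, -1])   # bipolar AND targets

W = np.zeros(3)                   # (w1, w2, b), initialized to 0
for x, t in zip(inp, tar):        # one pass through the data
    W += x * t                    # Hebb update: w_i <- w_i + x_i * t

# W ends at (2, 2, -2), the last row of the table above
```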

  19. Hebb Learning
  %OR data: rows are (x1 x2 1), with bipolar targets
  inp = [ 1  1 1;
          1 -1 1;
         -1  1 1;
         -1 -1 1];
  tar = [ 1; 1; 1; -1];
  W = [0, 0, 0];  %w1, w2, b -- initialize weights to 0
  %Pass through the data only once, updating the weights
  %for each data point
  for i = 1:size(inp,1)
      for k = 1:size(inp,2)
          W(k) = W(k) + inp(i,k)*tar(i);
      end
      W
  end
  NOTE: THIS CODE CONTAINS LOOPS. DO NOT USE THIS CODE TO ANSWER HW1 – use matrix operations.

  20. Test Results
  %Check if the response is correct
  %W contains the final weight matrix
  %inp is the input data, tar is the target data
  for cnt = 1:size(inp,1)
      YY(cnt) = W(3) + W(1)*inp(cnt,1) + W(2)*inp(cnt,2);
  end
  result = sign(YY)  %bipolar threshold

  21. Matrix Representation • Weight Calculation: W(new) = W(initial) + targets * inputs • Testing: net = b + ∑ xi * wi; Response = fcn(net) • Let fcn be the bipolar threshold (sign) [Figure: the sign function: output +1 for input values above 0, -1 below]
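The weight calculation then collapses to a single matrix product. A Python/NumPy sketch of this matrix form on the OR data (the homework itself is expected in MATLAB, so this is only an illustration of the idea):

```python
import numpy as np

# OR data: rows are (x1, x2, 1); bipolar targets
inp = np.array([[ 1,  1, 1],
                [ 1, -1, 1],
                [-1,  1, 1],
                [-1, -1, 1]])
tar = np.array([1, 1, 1, -1])

W = np.zeros(3) + tar @ inp     # W(new) = W(initial) + targets * inputs

responses = np.sign(inp @ W)    # testing: response = sign(net) for every row
```

The single product tar @ inp sums x_i * t over all training pairs at once, replacing both loops of the earlier code.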

  22. Feed Forward Operation For OR: w0 = 1, w1 = 1, w2 = 1. net = w0 + ∑ xi * wi (redefining b = w0): net = w0 + x1*w1 + x2*w2 = [w0 w1 w2] * [1; x1; x2]. Response = f(net) = f([w0 w1 w2] * [1; x1; x2])

  23. Feed Forward Operation net = w0 + x1*w1 + x2*w2. For the input (x1, x2) = (-1, -1): net = [1 1 1] * [1; -1; -1] = -1. Response = sign(net) = sign(-1) = -1

  24. Matrix of Data Inputs
  weights = [-1 1 1]
  >> data = [1 -1 1; 1 -1 -1; 1 1 1; 1 1 -1]'
  data =
       1    1    1    1     % bias
      -1   -1    1    1     % x1 values
       1   -1    1   -1     % x2 values
  Responses = sign(weights*data) = [-1 -1 1 -1]

  25. Human Visual System – sensory input The cornea and lens together focus images on the retina. The retina is part of the central nervous system. http://faculty.washington.edu/chudler

  26. Retina • Five types of neurons: photoreceptors, bipolar cells, ganglion cells, horizontal cells, amacrine cells • Information flow: photoreceptor → bipolar cell → ganglion cell (outputs spike train) • Only the ganglion cells spike in the retina

  27. Photoreceptors: Rods and Cones • Two types of photoreceptors – rods and cones • Rods have very low spatial resolution but are extremely sensitive to light – they allow us to see at night in starlight conditions • Cones have high spatial resolution but are relatively insensitive to light – they are responsible for our color vision

  28. [Figure: the electromagnetic spectrum, with cone peak responses marked at 531 and 559 nm]

  29. Cone Responses [Figure: cone spectral response curves.] Rods respond to a wide range of wavelengths.

  30. Retina Diagram

  31. Fovea A few remarks about rod and cone spatial distribution. www.undergrad.ahs.uwaterloo.ca/~tbolton/

  32. [Figure] www.undergrad.ahs.uwaterloo.ca/~tbolton/

  33. Information Flow • Each photoreceptor (rod or cone) does not feed directly to the visual cortex • A number of photoreceptors are connected to a ganglion cell whose axon forms part of the optic nerve • The collection of photoreceptors connected to a particular ganglion cell forms that cell’s receptive field • A photoreceptor may be connected to more than one ganglion cell

  34. Receptive Fields www.yorku.ca/eye

  35. Two Types of Retinal Ganglion Cell Receptive Fields • On Center, Off Surround (maximum response: white spot on a black background) • Off Center, On Surround (maximum response: black spot on a white background)

  36. Response of On Center to a Spot of Light • In darkness the ganglion cell fires at a ‘spontaneous’ rate • When the RF is stimulated with a small-diameter light spot, the cell increases its firing rate – this continues to increase until the light reaches the edge of the on-center region • When the spot is increased further and light strikes the inhibitory surround, the firing rate begins to decrease • It continues to decrease until the whole surround is covered with light

  37. Simple Center Surround Receptive Field MODEL: [Diagram: rods or cones feed a ganglion cell through positive (excitatory) and negative (inhibitory) weights; the ganglion cell produces the output]

  38. Receptive Fields Different sizes, center on or off, and overlapping • One photo-receptive cell (rod or cone) may be a member of several receptive fields • Receptive fields are modeled by a Difference of Gaussians • The outputs of the ganglion cells form the optic nerve
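A center-surround receptive field modeled as a Difference of Gaussians can be sketched in Python (the kernel size and the two sigmas below are illustrative choices, not values from the lecture):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """A 2D Gaussian on a size x size grid, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(size=9, sigma_center=1.0, sigma_surround=2.0):
    """On-center/off-surround: narrow excitatory Gaussian minus wide inhibitory one."""
    return gaussian_kernel(size, sigma_center) - gaussian_kernel(size, sigma_surround)

k = dog_kernel()
# The center weight is positive (excitatory); the corners are negative (inhibitory),
# and the weights sum to ~0, so a uniform stimulus gives no net response.
```

Swapping the two Gaussians models the off-center/on-surround cell instead.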

  39. SUMMARIZE: Retina - Receptive Field Model • Light travels through the layers of retina cells and strikes the cones and rods in the receptive layer • Spatially local collections of rods or cones form receptive fields • The receptive field of a neuron can be defined as the area on the retina from which light can influence that neuron’s activity

  40. Models – Receptive Field Implementation [Diagram: inputs weighted by w1, w2, w3 feed an activation function f] Analog inputs and outputs. Positive weights are excitatory; negative weights are inhibitory.

  41. Linear-Nonlinear Model (LN) • Early visual processing • Response of a neuron: • Dot product of the image and a linear filter • The output of the linear filter is passed to a non-linear function • The output of the non-linear function is the neuron’s firing rate

  42. Basic Neuron Models [Diagram: image → linear filter → non-linear function → neuron firing rate (output spikes). The neuron fires strongest when the image matches the linear filter]

  43. Simplified Example [Figure: a linear filter shown as a grid of -1s (inhibitory surround) containing a 5x5 block of +1s (excitatory center)]

  44. Simplified Example – case 1 Point-by-point multiply and sum the results. [Figure: linear filter * image, where the image is identical to the filter.] Maximum response, because -*- = + and +*+ = + (a large positive number).

  45. Simplified Example – case 2 Point-by-point multiply and sum the results. [Figure: linear filter * image, where the image is the inverse of the filter: +1 wherever the filter is -1, and vice versa.] Minimum response, because -*+ = - and +*- = - (a large negative number).

  46. Filter Responses [Figure: key: black = -1, white = +1. An image matching the linear filter gives the maximum neuron response (large positive number); the inverted image gives the minimum response (large negative number)]
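The two cases can be checked numerically. A Python sketch of the linear stage with a toy filter like the one in the figures (the grid dimensions are illustrative):

```python
import numpy as np

# Toy receptive field: a 5x5 excitatory (+1) center in an inhibitory (-1) surround
filt = -np.ones((12, 18))
filt[3:8, 5:10] = 1

image_match    = filt.copy()   # case 1: image identical to the filter
image_inverted = -filt         # case 2: image is the filter's negative

# Linear stage: point-by-point multiply and sum (a dot product)
resp_max = np.sum(filt * image_match)      # every product is +1 -> large positive
resp_min = np.sum(filt * image_inverted)   # every product is -1 -> large negative
```

The matching image yields +1 at every pixel (one per filter element), and the inverted image yields the same magnitude with the opposite sign.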

  47. Basic Neuron Models [Diagram: image → linear filter → non-linear function (squaring, with no output for negative filter output) → neuron firing rate (output spikes). The neuron fires strongest when the image matches the linear filter]

  48. Basic Neuron Models [Diagram: image → linear filter → non-linear function (squaring) → neuron firing rate (output spikes). The neuron fires strongest when the image matches the linear filter]

  49. Nonlinear function: y = x², where x is the filter output value and y is the neuron firing rate.
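The squaring nonlinearity, and the variant with no output for negative filter outputs (slide 47), can both be sketched in a few lines of Python (the name "half_squaring" is a label chosen here for the rectified variant):

```python
import numpy as np

def squaring(x):
    """y = x^2: strongly positive AND strongly negative filter outputs both fire."""
    return x ** 2

def half_squaring(x):
    """Rectified squaring: zero output for negative filter outputs."""
    return np.maximum(x, 0) ** 2
```

With full squaring an inverted image drives the neuron as hard as a matching one; half-squaring keeps the neuron silent for negative filter outputs, as in the slide 47 model.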

  50. Phase Shift
