Lecture 25. Artificial Intelligence: Recognition Tasks (S&G, §13.4)


Presentation Transcript


  1. Lecture 25 Artificial Intelligence: Recognition Tasks (S&G, §13.4)

  2. The Cognitive Inversion • Computers can do some things very well that are difficult for people • e.g., arithmetic calculations • playing chess & other board games • doing proofs in formal logic & mathematics • handling large amounts of data precisely • But computers are very bad at some things that are easy for people (and even some animals) • e.g., face recognition & general object recognition • autonomous locomotion • sensory-motor coordination • Conclusion: brains work very differently from Von Neumann computers

  3. The 100-Step Rule • Typical recognition tasks take less than one second • Neurons take several milliseconds to fire • Therefore there can be at most about 100 sequential processing steps
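To make the arithmetic explicit, here is a minimal Python sketch (the 1-second task time and 10 ms per firing are assumed round figures, consistent with the slide's "less than one second" and "several milliseconds"):

    # 100-step rule: how many sequential neural steps fit in one recognition task?
    task_time_ms = 1000   # assumed: a typical recognition task takes about 1 second
    step_time_ms = 10     # assumed: several milliseconds per neuron firing
    print(task_time_ms // step_time_ms)   # -> 100 sequential steps at most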

  4. Two Different Approaches to Computing • Von Neumann: narrow but deep • Neural computation: shallow but wide

  5. How Wide? • Retina: 100 million receptors • Optic nerve: one million nerve fibers

  6. A Very Brief Tour of Real Neurons (and Real Brains)

  7. [figure] (fig. from internet)

  8. Left Hemisphere

  9. Typical Neuron

  10. Grey Matter vs. White Matter (fig. from Carter 1998)

  11. Neural Density in Cortex • 148 000 neurons / sq. mm • Hence, at 100 sq. mm per sq. cm, about 15 million neurons / sq. cm

  12. Relative Cortical Areas

  13. Information Representation in the Brain

  14. Macaque Visual System (fig. from Van Essen et al. 1992)

  15. Hierarchy of Macaque Visual Areas (fig. from Van Essen et al. 1992)

  16. Bat Auditory Cortex (figs. from Suga, 1985)

  17. Real Neurons

  18. Typical Neuron

  19. Dendritic Trees of Some Neurons • inferior olivary nucleus • granule cell of cerebellar cortex • small cell of reticular formation • small gelatinosa cell of spinal trigeminal nucleus • ovoid cell, nucleus of tractus solitarius • large cell of reticular formation • spindle-shaped cell, substantia gelatinosa of spinal cord • large cell of spinal trigeminal nucleus • putamen of lenticular nucleus • double pyramidal cell, Ammon’s horn of hippocampal cortex • thalamic nucleus • globus pallidus of lenticular nucleus (fig. from Truex & Carpenter, 1964)

  20. Layers and Minicolumns (fig. from Arbib 1995, p. 270)

  21. Neural Networks in Visual System of Frog (fig. from Arbib 1995, p. 1039)

  22. Frequency Coding (fig. from Anderson, Intr. Neur. Nets)

  23. Variations in Spiking Behavior

  24. Chemical Synapse • Action potential arrives at synapse • Ca ions enter the cell • Vesicles move to the membrane and release neurotransmitter • Transmitter crosses the cleft, causing a postsynaptic voltage change (fig. from Anderson, Intr. Neur. Nets)

  25. Neurons are Not Logic Gates • Speed • electronic logic gates are very fast (nanoseconds) • neurons are comparatively slow (milliseconds) • Precision • logic gates are highly reliable digital devices • neurons are imprecise analog devices • Connections • logic gates have few inputs (usually 1 to 3) • many neurons have >100 000 inputs

  26. Artificial Neural Networks

  27. Typical Artificial Neuron [diagram: inputs, connection weights, threshold, output]

  28. Typical Artificial Neuron [diagram: net input, summation, activation function]
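In symbols (our notation; the slides present this as a diagram): the net input is net = w1·x1 + w2·x2 + … + wn·xn, the weighted sum of the inputs, and the activation function here is a hard threshold: the output is 1 if net exceeds the threshold, otherwise 0.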

  29. Operation of Artificial Neuron: inputs (0, 1, 1), weights (2, –1, 4), threshold 2. Products: 0 × 2 = 0, 1 × (–1) = –1, 1 × 4 = 4. Net input = 0 – 1 + 4 = 3; since 3 > 2, the output is 1.

  30. Operation of Artificial Neuron: inputs (1, 1, 0), weights (2, –1, 4), threshold 2. Products: 1 × 2 = 2, 1 × (–1) = –1, 0 × 4 = 0. Net input = 2 – 1 + 0 = 1; since 1 < 2, the output is 0.
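A minimal Python sketch of this threshold neuron, reproducing the two worked examples above (the function name and structure are ours, for illustration):

    def threshold_neuron(inputs, weights, threshold):
        # weighted sum of the inputs, then a hard threshold
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net > threshold else 0

    weights = [2, -1, 4]                             # connection weights from the slides
    print(threshold_neuron([0, 1, 1], weights, 2))   # net = 3 > 2, output 1 (slide 29)
    print(threshold_neuron([1, 1, 0], weights, 2))   # net = 1 < 2, output 0 (slide 30)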

  31. Feedforward Network [diagram: input layer, hidden layers, output layer]

  32.–39. Feedforward Network in Operation (1)–(8): the input is presented to the input layer, and activation propagates forward through the hidden layers, one layer per step, until the output layer produces the network's response. [animation frames]
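A minimal sketch of such a forward pass, reusing the threshold neuron above (the layer sizes and random weights are placeholders; a real network's weights come from training, as the later slides discuss):

    import random

    def threshold_neuron(inputs, weights, threshold=0.0):
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net > threshold else 0

    def feedforward(inputs, layers):
        # propagate activation layer by layer, as in slides 32-39;
        # each neuron in a layer sees every output of the previous layer
        for weight_matrix in layers:
            inputs = [threshold_neuron(inputs, w) for w in weight_matrix]
        return inputs

    random.seed(0)
    sizes = [4, 3, 3, 2]   # input layer, two hidden layers, output layer
    layers = [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
              for m, n in zip(sizes, sizes[1:])]
    print(feedforward([1, 0, 1, 1], layers))   # the two output-layer activations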

  40. Comparison with Non-Neural Net Approaches • Non-NN approaches typically decide output from a small number of dominant factors • NNs typically look at a large number of factors, each of which weakly influences output • NNs permit: • subtle discriminations • holistic judgments • context sensitivity

  41. Connectionist Architectures • The knowledge is implicit in the connection weights between the neurons • Items of knowledge are not stored in dedicated memory locations, as in a Von Neumann computer • “Holographic” knowledge representation: • each knowledge item is distributed over many connections • each connection encodes many knowledge items • Memory & processing are robust in the face of damage, errors, inaccuracy, noise, …

  42. Differences from Digital Calculation • Information represented in continuous images (rather than language-like structures) • Information processing by continuous image processing (rather than explicit rules applied in individual steps) • Indefiniteness is inevitable (rather than definiteness assumed)

  43. Supervised Learning • Produce desired outputs for training inputs • Generalize reasonably & appropriately to other inputs • Good example: pattern recognition • Neural nets are trained rather than programmed • another difference from Von Neumann computation

  44. Learning for Output Neuron (1): inputs (0, 1, 1), weights (2, –1, 4), threshold 2. Net input 3 > 2, so the output is 1; the desired output is 1. The output is correct: no change!

  45. Learning for Output Neuron (2): same inputs (0, 1, 1); net input 3 > 2 gives output 1, but the desired output is 0. The output is too large, so the weights on the active inputs are reduced: –1 becomes –1.5 and 4 becomes 3.25. The new net input is –1.5 + 3.25 = 1.75 < 2, so the output becomes 0.

  46. Learning for Output Neuron (3): inputs (1, 1, 0), weights (2, –1, 4), threshold 2. Net input 1 < 2, so the output is 0; the desired output is 0. The output is correct: no change!

  47. Learning for Output Neuron (4): same inputs (1, 1, 0); net input 1 < 2 gives output 0, but the desired output is 1. The output is too small, so the weights on the active inputs are increased: 2 becomes 2.75 and –1 becomes –0.5. The new net input is 2.75 – 0.5 = 2.25 > 2, so the output becomes 1.
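A minimal Python sketch of this kind of error-correction learning, in the classic perceptron form Δw = rate × (desired – actual) × input (note: the slides nudge the two active weights by slightly different amounts, so this standard rule reproduces the direction of each change but not the slides' exact numbers):

    def threshold_neuron(inputs, weights, threshold):
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net > threshold else 0

    def train_step(inputs, weights, threshold, desired, rate=0.75):
        # error is +1 (output too small), -1 (too large), or 0 (correct);
        # only weights on active (nonzero) inputs change
        error = desired - threshold_neuron(inputs, weights, threshold)
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    weights = [2, -1, 4]
    # slide 45: desired 0 but output 1, so active weights decrease
    print(train_step([0, 1, 1], weights, 2, desired=0))   # -> [2.0, -1.75, 3.25]
    # slide 47: desired 1 but output 0, so active weights increase
    print(train_step([1, 1, 0], weights, 2, desired=1))   # -> [2.75, -0.25, 4.0]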

  48. Credit Assignment Problem: given a desired output, how do we adjust the weights of the hidden layers? [diagram: input layer, hidden layers, output layer]

  49. Back-Propagation: Forward Pass [diagram: the input is propagated forward through the network]

  50. Back-Propagation: Actual vs. Desired Output [diagram: the network's actual output is compared with the desired output]
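The transcript breaks off here, but the remaining step can be sketched: the output error is propagated backward, layer by layer, to assign credit to the hidden weights. A minimal Python sketch of back-propagation with one hidden layer (sigmoid units, the XOR task, and all sizes and rates are our placeholder choices, not taken from the slides):

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    random.seed(1)
    n_in, n_hid, n_out = 2, 3, 1
    # the extra weight in each row is a bias, driven by a constant 1 input
    W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
    W2 = [[random.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]
    rate = 0.5

    def forward(x):
        h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in W1]
        y = [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in W2]
        return h, y

    def backprop(x, target):
        h, y = forward(x)
        # output layer: (desired - actual) times the sigmoid's slope
        d_out = [(t - v) * v * (1 - v) for t, v in zip(target, y)]
        # hidden layer: each unit gets a share of the output error, weighted
        # by the connections it feeds (the answer to slide 48's question)
        d_hid = [v * (1 - v) * sum(d * W2[k][j] for k, d in enumerate(d_out))
                 for j, v in enumerate(h)]
        for k, d in enumerate(d_out):
            W2[k] = [w + rate * d * v for w, v in zip(W2[k], h + [1.0])]
        for j, d in enumerate(d_hid):
            W1[j] = [w + rate * d * v for w, v in zip(W1[j], x + [1.0])]

    # XOR: a task no single threshold unit can compute
    data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
            ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
    for _ in range(10000):
        for x, t in data:
            backprop(x, t)
    for x, t in data:
        # outputs are typically close to the desired 0/1 targets after training
        print(x, "->", round(forward(x)[1][0], 2), "desired", t[0])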
