Neurons and Neural Nets • How do neurons function? • Synapses and axons • Ion channels, neurotransmitters • How do computer scientists model neural nets? • Similarities and differences from real neurons and brains
Neurons • Cell body • Dendrites receive neuronal inputs • Axon transmits neuronal outputs • Electrochemical transmission occurs across synapses between axons and dendrites
Neuron Types • Different neurons have different structures
What does a neuron do? • “integrate and fire” • Electrical current from other neurons (via dendrites) is added together • Depends on strength of synaptic connection • When the charge exceeds a threshold, a spike of current is sent down the axon • May trigger other firings • Switching time is 1-10 milliseconds
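The “integrate and fire” behavior can be made concrete with a standard leaky integrate-and-fire model. The sketch below is for illustration only and is not from the slides; the time constant, threshold, and input values are arbitrary assumptions.

```python
import numpy as np

def simulate_lif(input_current, dt=0.001, tau=0.02, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: add up input current over time and emit a
    spike whenever the membrane potential crosses the threshold."""
    v = v_reset
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest while input drives it up
        v += dt * (-v / tau + i_in)
        if v >= threshold:
            spike_times.append(step * dt)  # spike time in seconds
            v = v_reset                    # reset after firing (real neurons also
                                           # have a refractory period here)
    return spike_times

# Example: one second of constant input, 1 ms time steps
spikes = simulate_lif(np.full(1000, 60.0))
print(len(spikes), "spikes in 1 second")
```

Stronger input drives the potential to threshold sooner, so the firing rate rises with input strength, which matches the firing-rate behavior described below.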
Whole cell patch clamp http://www.iac-usnc.org/Methods/wholecell/equipment.html
More neuron details • Rate of firing depends on input strength • Refractory (recovery) period after firing • Background firing rates • Both excitatory and inhibitory inputs • Synaptic weights change with experience • Strengthen frequently used synapses • Learning!!!
Neurotransmitters • Small molecules synthesized in neurons • Released into synapses • Later removed • Examples • Dopamine • Norepinephrine • Epinephrine • Histamine • Serotonin • Acetylcholine, glutamate, glycine
Neuron Learning • Synaptic connection strengths change with experience • Neuron connectivity also changes • Up to 80% of the neurons in the developing nervous system die • This ensures that adequate numbers of neurons establish appropriate connections • New neuron connections can be established
A brief history of Neurons • Ramon y Cajal, 1880s • Brain is composed of discrete cells (neurons) • Adrian and Zotterman, 1920s • Record electrical impulses on axons • Hodgkin and Huxley, 1952 • Detailed model of how spikes are generated • Hubel and Wiesel, 1968 • “feature detection” = selectivity of neurons
Brains vs. Computers • In theory one could build brains out of electronic circuitry and vice versa • People are working on this! • Brains and (most) computers differ • Centralized vs. distributed computing • Sequential vs. parallel processing • Fast vs. slow components • Separation vs. integration of memory and computation • Explicit programming vs. learning
Biological Computing • Consider visual scene recognition • Neuronal firing time ~ 10 milliseconds • Recognition occurs in 100-200 milliseconds • At most 10-20 “iterations” • How is this possible? • We’ll see later!
Artificial neural networks • Highly simplified models of neurons • Often combined in layers • Receive inputs, calculate features of them “in parallel” • Learn by adjusting weights • Many different types • Backpropagation, Radial Basis Function (RBF), Hopfield, Boltzmann, Perceptron, CMAC, Kohonen, …
Artificial Neural Networks • Nodes • Processing elements that sum and threshold inputs • Like neurons • Links • Weighted connections • With adjustable parameters
[Figure: network diagram with inputs, a hidden layer, and outputs]
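As a rough illustration of what a single node does, here is a sketch with made-up inputs and weights, using a sigmoid as a smooth stand-in for the threshold:

```python
import numpy as np

def node_output(inputs, weights, bias):
    # Weighted sum of the inputs arriving over the node's incoming links...
    activation = np.dot(weights, inputs) + bias
    # ...pushed through a smooth threshold (sigmoid) to give the node's output
    return 1.0 / (1.0 + np.exp(-activation))

x = np.array([0.5, -1.2, 3.0])   # inputs (like signals arriving on dendrites)
w = np.array([0.8, 0.1, -0.4])   # adjustable link weights
print(node_output(x, w, bias=0.2))
```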
Artificial neurons are artificial • Artificial neurons are different from real neurons
“Backpropagation” Network • The figure represents a big equation
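As a hedged sketch of what that equation looks like for a network with one hidden layer (the layer sizes, random weights, and sigmoid activation are assumptions for illustration, not taken from the slide's figure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden nodes
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # 4 hidden nodes -> 2 outputs

def forward(x):
    hidden = sigmoid(W1 @ x + b1)      # hidden-layer features
    return sigmoid(W2 @ hidden + b2)   # network outputs

# The whole network is one nested expression: g(W2 @ g(W1 @ x + b1) + b2)
print(forward(np.array([0.1, 0.7, -0.3])))
```

Every output is just a nested composition of weighted sums and squashing functions, which is why the entire figure can be read as one big equation.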
Network Training • Goal: adjust weights to fit data • A non-linear optimization problem • Use gradient descent • Backpropagation • Conjugate gradient methods • Other optimization methods • Sequential quadratic programming (SQP) • Random search • Also need to select the structure (links)
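Below is a minimal sketch of the gradient-descent / backpropagation option on a tiny one-hidden-layer network; the XOR data, squared-error loss, learning rate, and layer sizes are illustrative choices, not from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(1, 4)), np.zeros(1)

# Toy data: XOR, a classic non-linear fitting problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
for epoch in range(5000):
    for x, t in zip(X, T):
        # Forward pass
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        # Backward pass: propagate the squared-error gradient through each layer
        delta2 = (y - t) * y * (1 - y)
        delta1 = (W2.T @ delta2) * h * (1 - h)
        # Gradient descent step on every weight and bias
        W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
        W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1

# Predictions should approach 0, 1, 1, 0 (convergence depends on the random init)
preds = [sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)[0] for x in X]
print(np.round(preds, 2))
```

Conjugate gradient, SQP, or even random search could replace the simple update step above; the methods differ only in how the weight adjustments are chosen.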
Neural Network Advantages • Automatic model construction • Needed when equations are unknown • Handles nonlinearities well • Much lower development time than other models or correlations • Potential massive parallelism • All computations are local • Can be implemented in parallel hardware • But most neural nets run on serial machines • Robustness • Inputs can be imprecise • Training data can contain errors • Individual neurons are redundant • Adaptivity • Network can be updated as the process changes
Neural Network Limitations • Mostly stem from the very general form with many parameters • “Training” can be slow • Over-generalization (overfitting) • With 5 parameters one can fit an elephant • With 1000, one can fit a whole herd of elephants!
Neural Networks for Mobile Robot Guidance • Autonomous Land Vehicle in a Neural Network (ALVINN)
ALVINN – Input Representation • Input: sample 3% of the pixels • Remove shadows by using color constancy • Normalize brightness • Average locally
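A hedged sketch of this kind of preprocessing, assuming a coarse regular subsampling grid and average-brightness normalization (ALVINN's exact pipeline, including the color-constancy shadow removal, is not reproduced here):

```python
import numpy as np

def preprocess(image, grid=(30, 32)):
    """Reduce a camera frame to a small, brightness-normalized input vector."""
    h, w = image.shape
    # Keep only a small fraction of the pixels, on a regular grid
    rows = np.linspace(0, h - 1, grid[0]).astype(int)
    cols = np.linspace(0, w - 1, grid[1]).astype(int)
    small = image[np.ix_(rows, cols)].astype(float)
    # Normalize brightness against the average intensity
    small = small / (small.mean() + 1e-6)
    return small.ravel()

frame = np.random.default_rng(2).integers(0, 256, size=(480, 640))
print(preprocess(frame).shape)   # 960 inputs for a 30x32 grid
```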
ALVINN – Output Representation • Output: discretized steering angles • Train on a Gaussian output distribution; pick the center of the best-fit Gaussian • A single real-valued output would be easier to learn • But if half the evidence says “turn left” and half says “turn right”, one should not go straight • Multiple outputs also give a measure of reliability
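A hedged sketch of this output coding: each training target is a Gaussian bump over a row of discretized steering units, and the network's answer is decoded by finding the steering angle whose Gaussian fits the outputs best. The number of units and the Gaussian width below are illustrative assumptions.

```python
import numpy as np

N_UNITS = 31                                 # discretized steering angles
centers = np.linspace(-1.0, 1.0, N_UNITS)    # -1 = hard left, +1 = hard right
SIGMA = 0.1

def encode(angle):
    """Training target: a Gaussian bump centered on the true steering angle."""
    return np.exp(-((centers - angle) ** 2) / (2 * SIGMA ** 2))

def decode(outputs):
    """Pick the center of the best-fitting Gaussian (highest correlation)."""
    fits = [np.dot(outputs, encode(c)) for c in centers]
    return centers[int(np.argmax(fits))]

target = encode(0.25)                        # "turn slightly right"
noisy = target + 0.05 * np.random.default_rng(3).normal(size=N_UNITS)
print(decode(noisy))                         # close to 0.25
```

With this coding, evidence split between “turn left” and “turn right” shows up as two separate bumps rather than averaging into “go straight”, and how well the best Gaussian fits gives the reliability measure mentioned above.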
ALVINN – Training (1) • Training data: five minutes “watching” a human driver • Transform images to capture different viewing angles • Transform the steering direction as well • Produces training data for situations never experienced • Online training: keep a diverse set of images, especially those with high error
ALVINN – Training (2) Train with structured noise to handle “unexpected” variations
ALVINN – Performance • World record for an autonomous robot: crossed the country with minimal human intervention • Can drive on many types of road • Morals • Neural nets are good for high-dimensional inputs • Use knowledge of the problem to preprocess the data
OCHRE ANN Demo • Neural network architecture • The challenge of neural memorization • Learning and generalization • Overfitting http://sund.de/netze/applets/BPN/bpn2/ochre.html
Summary • Neurons “integrate” signals from other neurons • Results are transmitted over synapses • Neurons “learn” • Simple artificial neural networks learn • But do not capture the complexity of real neurons and brains