Artificial Spiking Neural Networks • Sander M. Bohte • CWI Amsterdam, The Netherlands
Overview • From neurones to neurons • Artificial Spiking Neural Networks (ASNN) • Dynamic Feature Binding • Computing with spike-times • Neurons-to-neurones • Computing graphical models in ASNN • Conclusion
Of neurones and neurons • Artificial Neural Networks • (neuro)biology -> Artificial Intelligence (AI) • Model of how we think the brain processes information • New data on how the brain works! • Artificial Spiking Neural Networks
Real Neurons • Real cortical neurons communicate with spikes (action potentials)
Real Neurons • The artificial sigmoidal neuron models the rate at which spikes are generated • the artificial neuron computes a function of its weighted input: $y = \sigma\left(\sum_i w_i x_i\right)$
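As a point of reference before the spiking models below, a minimal numpy sketch of this sigmoidal rate neuron (weights and inputs are illustrative):

```python
import numpy as np

def sigmoid_neuron(x, w, b=0.0):
    """Classic rate neuron: sigmoid of the weighted input sum."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, -1.0, 2.0])   # input activities ("rates")
w = np.array([0.8, 0.3, -0.5])   # synaptic weights
print(sigmoid_neuron(x, w))      # output, interpreted as a firing rate
```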
Artificial Neural Networks • Artificial Neural Networks can: • approximate any function • (Multi-Layer Perceptrons) • act as associative memory • (Hopfield networks, Sparse Distributed Memory) • learn temporal sequences • (Recurrent Neural Networks)
ANNs • BUT... • for AI, neural networks are not competitive • classification/clustering • ... or not suitable • structured learning/representation (the “binding” problem, e.g. grammar) • and they scale poorly • networks of networks of networks... • for understanding the brain, the neuron model is wrong • individual spikes are important, not just the rate
Dynamic Feature Binding • “bind” local features into coherent percepts
Binding • how do we represent multiple objects at once? • like language without grammar! (i.e. no predicates)
Binding • Conjunction coding: a dedicated unit for every combination of features (see the toy example below)
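A toy illustration of why conjunction coding scales badly: one dedicated unit per feature combination (the feature values are made up for the example):

```python
from itertools import product

shapes = ["circle", "square", "triangle"]
colors = ["red", "green", "blue"]

# one dedicated unit per (shape, color) conjunction
units = {pair: i for i, pair in enumerate(product(shapes, colors))}
print(len(units))                    # 9 units for 3 x 3 feature values
print(units[("square", "red")])      # index of the unit that would fire
# n feature dimensions with k values each need k**n units: combinatorial explosion
```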
Binding • Synchronizing spikes?
New Data! • neurons belonging to the same percept tend to synchronize (Gray & Singer, Nature 1987) • timing of (single) spikes can be remarkably reproducible • fly: same stimulus (movie) -> same spike to within 1 ms • spikes are rare: average brain activity < 1 Hz • “rates” are not energy efficient
Computing with Spikes • Computing with precisely timed spikes is more powerful than computing with “rates” (VC dimension of spiking neuron models) [W. Maass and M. Schmitt, 1999] • Artificial Spiking Neural Networks?? [W. Maass, Neural Networks, 10, 1997]
Artificial Spiking Neuron • The “state” (= membrane potential) is a weighted sum of impinging spikes • a spike is generated when the potential crosses the threshold, after which the potential is reset
Artificial Spiking Neuron • Spike-Response Model: $u_j(t) = \sum_i w_{ij} \sum_{t_i^{(f)}} \varepsilon\big(t - t_i^{(f)}\big)$ • where $\varepsilon(t)$ is the kernel describing how a single spike changes the potential: $\varepsilon(t) = \frac{t}{\tau}\, e^{1 - t/\tau}$ for $t > 0$, and $0$ otherwise
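A minimal simulation sketch of this Spike-Response Model; the kernel follows the formula above, while the time constant, weights, spike times, and threshold are illustrative:

```python
import numpy as np

def eps(s, tau=7.0):
    """SRM kernel: effect of a presynaptic spike (fired at s = 0) on the potential."""
    return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

def potential(t, spike_times, weights, tau=7.0):
    """Membrane potential: weighted sum of kernels over all impinging spikes."""
    return sum(w * eps(t - tf, tau)
               for w, times in zip(weights, spike_times) for tf in times)

spike_times = [[1.0, 4.0], [2.5]]   # spike times (ms) of two input neurons
weights = [0.7, 1.2]                # their synaptic weights
theta = 1.0                         # firing threshold

for t in np.arange(0.0, 20.0, 0.5):
    if potential(t, spike_times, weights) >= theta:
        print(f"output spike at t = {t:.1f} ms")  # the potential would then reset
        break
```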
Artificial Spiking Neural Network • Network of spiking neurons
Error-backpropagation in ASNN • Encode “XOR” in (relative) spike-times • binary values map to early vs. late spike times
XOR in ASNN • Change weights according to gradient descent using error-backpropagation (Bohte et al., Neurocomputing 2002) • Also effective for unsupervised learning (Bohte et al., IEEE Trans. Neural Net. 2002)
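A sketch of the spike-time encoding behind these XOR experiments, assuming the common convention of early vs. late input spikes for the two logical values (all times, the weights, and the threshold are illustrative, and the weights are untrained):

```python
import numpy as np

def eps(s, tau=7.0):
    return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

def encode(bit):
    """Logical value -> input spike time: 1 fires early, 0 fires late."""
    return 0.0 if bit else 6.0

def first_spike_time(in_times, w, theta=1.0, t_max=30.0, dt=0.1):
    """Time-to-first-spike of the output neuron; the class is read from
    whether the output fires early or late."""
    for t in np.arange(0.0, t_max, dt):
        if np.sum(w * eps(t - in_times)) >= theta:
            return round(t, 1)
    return None  # no spike: gradient descent on the weights would fix this

w = np.array([0.9, 0.9])
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", first_spike_time(np.array([encode(a), encode(b)]), w))
```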
Computing Graphical Models • What kind of intelligent computing can we do? • recent work: computing Hidden Markov Models in noisy recurrent ASNN (Rao, NIPS 2004; Zemel et al., NIPS 2004)
From Neurons to Neurones • artificial spiking neurons are a fairly accurate model of real neurons • learning rules -> predictions for real neuronal behavior • example: reducing response variance in a stochastic spiking neuron yields a learning rule like the one found in biology (Bohte & Mozer, NIPS 2004)
STDP from variance reduction • neurons fire stochastically as a function of the membrane potential • good idea: minimize the response variability • response entropy: $H(R) = -\sum_r P(r)\,\log P(r)$ • gradient descent on the entropy: $\Delta w_{ij} \propto -\partial H / \partial w_{ij}$ (a sketch follows below)
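A single-neuron sketch of this idea, simplified to a one-time-step Bernoulli spiking neuron (firing probability p = sigmoid of the potential) rather than the full spike-train entropy used in the paper; input, weights, and learning rate are illustrative:

```python
import numpy as np

def entropy_gradient_step(w, x, eta=0.1):
    """One step of gradient descent on the response entropy
    H(p) = -p log p - (1-p) log(1-p) of a Bernoulli spiking neuron."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))   # firing probability
    dH_dp = np.log((1.0 - p) / p)
    # chain rule: dH/dw = dH/dp * dp/du * du/dw, with dp/du = p(1-p), du/dw = x
    return w - eta * dH_dp * p * (1.0 - p) * x

w, x = np.array([0.2, -0.1]), np.array([1.0, 0.5])
for _ in range(100):
    w = entropy_gradient_step(w, x)
# the firing probability is pushed away from 0.5 toward a deterministic response:
print(1.0 / (1.0 + np.exp(-np.dot(w, x))))
```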
STDP? • Spike-timing dependent plasticity: the weight change depends on the relative timing of pre- and postsynaptic spikes (pre-before-post potentiates, post-before-pre depresses)
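For reference, the standard double-exponential parameterization of the STDP window (the amplitudes and time constants below are typical textbook values, not those of the talk):

```python
import numpy as np

def stdp_window(dt, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)

for dt in [-40, -10, -1, 1, 10, 40]:
    print(f"dt = {dt:+3d} ms -> dw = {stdp_window(dt):+.4f}")
```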
Variance Reduction • Simulate the STDP experiment (Bohte & Mozer, 2005): • predicts how the shape of the STDP window depends on the neuron parameters
STDP -> ASNN • Variance reduction replicates experimental results • Suggests: learning in ASNN based on • (mutual) information maximization • minimum description length (MDL) (based on similar entropy considerations) • Suggests: new biological experiments
Hidden Markov Model • Bayesian inference in a simple single-level model (Rao, NIPS 2004): • let $x_t$ be the hidden state of the model at time $t$
Let $y_t$ be the observable output at time $t$ • transition and observation probabilities: $P(x_t \mid x_{t-1})$ and $P(y_t \mid x_t)$ • forward component of belief propagation: $\alpha_t(x) \propto P(y_t \mid x) \sum_{x'} P(x \mid x')\,\alpha_{t-1}(x')$
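The forward recursion itself is easy to state in numpy; Rao's point is that a recurrent spiking network can implement this update. The transition and emission matrices below are illustrative:

```python
import numpy as np

def forward_step(alpha_prev, T, O, y):
    """HMM filtering: alpha_t(x) ∝ P(y_t|x) * sum_x' P(x|x') * alpha_{t-1}(x')."""
    alpha = O[:, y] * (T @ alpha_prev)
    return alpha / alpha.sum()

T = np.array([[0.9, 0.1],    # T[x, x'] = P(x_t = x | x_{t-1} = x')
              [0.1, 0.9]])
O = np.array([[0.8, 0.2],    # O[x, y] = P(y_t = y | x_t = x)
              [0.2, 0.8]])

alpha = np.array([0.5, 0.5])         # uniform prior over the hidden state
for y in [0, 0, 1, 1, 1]:            # observation sequence
    alpha = forward_step(alpha, T, O, y)
    print(alpha)                     # posterior P(x_t | y_1..t)
```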
Bayesian SNN • Recurrent spiking neural network: the recurrent dynamics implement the forward recursion above
Bayesian SNN • Current spike-rate: $r_i(t) \propto P(x_t = i,\, y_t \mid y_1, \ldots, y_{t-1})$ • The probability of spiking is directly proportional to the posterior probability of the neuron's preferred state and the current input, given all past inputs • Generalizes to hierarchical inference
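Under this reading, decoding the network is a direct mapping from posterior to firing rate; a trivial sketch (the rate scale r_max is an assumption):

```python
import numpy as np

def rates_from_posterior(alpha, r_max=100.0):
    """Firing rate of each neuron ∝ posterior probability of its preferred state."""
    return r_max * alpha / alpha.sum()

print(rates_from_posterior(np.array([0.8, 0.2])))  # -> [80. 20.]  (Hz)
```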
Conclusion • new neural networks: Artificial Spiking Neural Networks • can do what traditional ANNs can • we are researching how to use these networks in more interesting ways • many open directions: • Bayesian inference / graphical models in ASNN • MDL / information-theory based learning • distributed coding for the binding problem in ASNN • applying agent-based reward distribution ideas to scale learning in large neural nets