Try to save computation time in large-scale neural network modeling with population density methods, or just fuhgeddaboudit? Daniel Tranchina, Felix Apfaltrer & Cheng Ly, New York University: Courant Institute of Mathematical Sciences, Department of Biology, Center for Neural Science. Supported by NSF grant BNS0090159.
SUMMARY • Can the population density function (PDF) method be made into a practical time-saving computational tool in large-scale neural network modeling? • Motivation for thinking about PDF methods • General theoretical and practical issues • The dimension problem in realistic single-neuron models • Two dimension reduction methods • Moving eigenfunction basis (Knight, 2000) • only good news • Moment closure method (Cai et al., 2004) • good news; bad news; worse news; good news • Example of a model neuron with a 2-D state space • Future directions
Why Consider PDF Methods in Network Modeling? • Synaptic noise makes neurons behave stochastically: synaptic failure; random sizes of unitary events; random synaptic delay. • Noise plays an important physiological role: it is a mechanism for modulation of the gain/kinetics of population responses to synaptic input, and it prevents synchrony in spiking-neuron models, as in the brain. • It is important to somehow capture the properties of noise in realistic models. • A large number of neurons is required for modeling physiological phenomena of interest, e.g. working memory; orientation tuning in primary visual cortex.
Why Consider PDF Methods (continued) • The number of neurons is determined by the functional subunit, e.g. an orientation hypercolumn in V1: • TYPICAL MODELS: ~ (1000 neurons)/(orientation hypercolumn) for input-layer V1 (0.5 × 0.5 mm², or roughly 0.25 × 0.25 deg²). • REALITY: ~34,000 neurons, 75 million synapses. • Many hypercolumns are required to study some problems, e.g. dependence of the spatial integration area on stimulus contrast.
Why PDF? (continued) • Tracking by computation the activity of ~10³–10⁴ neurons and ~10⁴–10⁶ synapses taxes computational resources: time and memory. E.g., an 8 × 8 hypercolumn model of V1 with 64,000 neurons (Jim Wielaard and collaborators, Columbia): 1 day to simulate 4 seconds of real time. • But stunning recent progress by Adi Rangan & David Cai. What to do? Quest for the Holy Grail: a low-dimensional system of equations that approximates the behavior of a truly high-dimensional system. Firing-rate model (Dayan & Abbott, 2001), a system of ODEs, or PDF model, a system of PDEs or integro-PDEs?
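To make the two options concrete, here is a minimal schematic contrast in my own notation (not the talk's equations): a firing-rate model evolves just a scalar rate per population, whereas a PDF model evolves an entire density over the single-neuron state space.

```latex
% Firing-rate model (one ODE per population): r(t) is the population rate,
% s(t) the synaptic drive, F a static gain function, tau a rate time constant.
\tau \frac{dr}{dt} = -r(t) + F\bigl(s(t)\bigr)

% PDF model (one PDE / integro-PDE per population): rho(v,t) is the density
% of neurons over state v, J the probability flux; the population firing
% rate is the flux across threshold.
\frac{\partial \rho}{\partial t}(v,t) = -\frac{\partial J}{\partial v}(v,t),
\qquad r(t) = J(v_{\mathrm{th}},t)
```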
The PDF Approach • A large number of interacting stochastic units suggests a statistical-mechanical approach. • Lump similar neurons into discrete populations. • Example: V1 hypercolumn. Coarse graining over position, orientation preference, receptive-field structure (spatial phase), simple vs. complex, and E vs. I may give ~50 neurons/population (~tens is OK for PDF methods). • Each neuron has a set of dynamical variables that determines its state, e.g. membrane voltage V for a leaky I&F neuron. • Track the distribution of neurons over state space, and the firing rate, for each population.
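As a concrete illustration of what gets tracked per population, here is a minimal sketch of the standard 1-D population density equation for a leaky I&F neuron driven by Poisson excitatory input with instantaneous synaptic kinetics and a fixed voltage jump ε per unitary event; the notation is my own, since the slides do not give the equation explicitly.

```latex
% rho(v,t): density of neurons over membrane voltage v at time t
% nu_e(t): excitatory input rate; eps: EPSP jump size; tau: membrane time constant
% v_th: spike threshold; v_reset: reset voltage; r(t): population firing rate
\frac{\partial \rho}{\partial t}(v,t) =
    \frac{\partial}{\partial v}\!\left[\frac{v - v_{\mathrm{rest}}}{\tau}\,\rho(v,t)\right]
  + \nu_e(t)\,\bigl[\rho(v-\varepsilon,t) - \rho(v,t)\bigr]
  + r(t)\,\delta(v - v_{\mathrm{reset}}),
\qquad
r(t) = \nu_e(t) \int_{v_{\mathrm{th}}-\varepsilon}^{v_{\mathrm{th}}} \rho(v',t)\,dv' .
```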
Rich History of PDF Methods in Computational Neuroscience • Wilbur & Rinzel, 1983 • Kuramoto, 1991 • Abbott & van Vreeswijk, 1993 • Gerstner, 1995 PDF Methods Recently Espoused and Tested as a Faster Alternative to Monte Carlo Simulations • Knight et al., 1996 • Omurtag et al., 2000 • Sirovich et al., 2000 • Casti et al., 2002 • Cai et al., 2004 • Huertas & Smith, 2006
Most applications of PDF methods as a computational tool have involved single-neuron models with a 1-D state space: instantaneous synaptic kinetics, in which V jumps abruptly up/down with each unitary excitatory/inhibitory synaptic input event. • Synaptic kinetics play an enormously important role in determining neural network dynamics. • Bite the bullet and include realistic synaptic kinetics. • Problem with PDF methods: as the underlying neuron model is made more realistic, the dimension of the state space increases, and so does the computation time to solve the PDF equations. • The time-saving advantage of PDF over (direct) MC vanishes. Minimal I&F Model: How Many State Variables? • A minimal I&F model with synaptic kinetics has 3 state variables: voltage, excitatory conductance, and inhibitory conductance (a minimal simulation sketch follows below).
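A minimal sketch of such a 3-state-variable neuron, assuming standard conductance-based I&F dynamics with exponentially decaying synaptic conductances driven by Poisson input; all parameter values and names here are illustrative, not taken from the talk.

```python
# Minimal conductance-based I&F neuron: three state variables (V, g_e, g_i),
# updated with forward Euler; synaptic input arrives as Poisson event trains
# that increment the conductances.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the talk)
dt, n_steps = 1e-4, 10_000           # time step (s), 1 s of simulated time
tau_m = 20e-3                        # membrane time constant (s)
tau_e, tau_i = 5e-3, 10e-3           # synaptic decay time constants (s)
E_rest, E_e, E_i = -70.0, 0.0, -80.0 # reversal potentials (mV)
V_th, V_reset = -55.0, -70.0         # threshold and reset (mV)
nu_e, nu_i = 2000.0, 500.0           # total synaptic input rates (Hz)
dg_e, dg_i = 0.02, 0.04              # conductance jump per event (relative to leak)

V, g_e, g_i = E_rest, 0.0, 0.0
spikes = 0
for _ in range(n_steps):
    # Poisson numbers of excitatory / inhibitory events in this time step
    g_e += dg_e * rng.poisson(nu_e * dt)
    g_i += dg_i * rng.poisson(nu_i * dt)
    # State-variable dynamics
    dV = (-(V - E_rest) - g_e * (V - E_e) - g_i * (V - E_i)) / tau_m
    V += dt * dV
    g_e -= dt * g_e / tau_e
    g_i -= dt * g_i / tau_i
    if V >= V_th:                    # spike-and-reset rule
        spikes += 1
        V = V_reset

print("firing rate ~", spikes, "spikes/s")
```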
Take Baby Steps: Introduce Dimensions One at a Time and See What We Can Do
PDF vs. MC and Mean-Field for the 2-D Problem. (Figure: firing-rate traces labeled "MC, 1000 neurons", "PDF", "mean-field", and "100,000 neurons".) PDF CPU time is roughly that of ~400 single uncoupled neurons; CPU time: 0.8 s for PDF, 2 s per 1000 neurons for MC.
Computation Time Comparison: PDF vs. Monte Carlo (MC). PDF computation time grows linearly with the number of populations; MC grows quadratically, because with fixed fractional connectivity the number of synapses to track grows as the square of the number of neurons. (50 neurons per population; 1 run; 25% connectivity.)
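A back-of-the-envelope sketch of that scaling, under the assumption that MC cost is dominated by the number of synapses while PDF cost is proportional to the number of populations; the 50 neurons/population and 25% connectivity come from the caption above, everything else is illustrative.

```python
# Rough cost-scaling comparison: with fixed fractional connectivity, the
# synapse count (and hence MC cost) grows quadratically with the number of
# populations, while PDF cost grows linearly.
neurons_per_pop = 50
connectivity = 0.25

for n_pops in (2, 4, 8, 16, 32):
    n_neurons = n_pops * neurons_per_pop
    n_synapses = connectivity * n_neurons * (n_neurons - 1)
    print(f"{n_pops:3d} populations: {n_neurons:5d} neurons, "
          f"{int(n_synapses):8d} synapses (MC ~ quadratic), "
          f"PDF cost ~ {n_pops} x (cost per population)")
```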
The PDF method is plenty fast for model neurons with a 2-D state space. • More realistic models (e.g., with both E and I input) require additional state variables. • Explore dimension reduction methods. • Use the 2-D problem as a test problem.
Dimension Reduction by Moving Eigenvector Basis: Bowdlerization of Bruce Knight’s (2000) Idea
Dimension Reduction by Moving Eigenvector Basis: Example with a 1-D state space, instantaneous synaptic kinetics (suggested by Knight, 2000). • Only 3 eigenvectors needed for low, and 7 for high, synaptic input rates. • Large time steps. • The eigen-method is 60 times faster than the full 1-D solution.
Dimension Reduction by Moving Eigenvector Basis: Example with a 2-D state space, state variables V & G_e. • Out of 625 eigenvectors: 10 needed for high, 30 for medium, and 60 for low synaptic input rates. • Large time steps. • The eigen-method is 60 times faster than the full solution.
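A minimal sketch of the moving-eigenvector-basis idea on a toy 1-D discretized density: build the input-rate-dependent evolution operator, keep only its slowest-decaying eigenmodes, and evolve the projection coefficients. The operator and function names below are my own stand-ins, not the operator from the talk, and a least-squares projection is used in place of the proper adjoint-eigenvector (left-eigenvector) projection.

```python
# Toy illustration of dimension reduction by a moving eigenvector basis
# (in the spirit of Knight, 2000), on a discretized 1-D density.
import numpy as np

def build_operator(n, nu, leak=20.0):
    """Toy generator matrix, d(rho)/dt = Q rho: jumps up one bin at rate nu,
    threshold crossings re-injected at bin 0, leak pulls density down."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i + 1, i] += nu          # excitatory jump up one bin
        else:
            Q[0, i] += nu              # jump across threshold -> reset bin
        Q[i, i] -= nu
        if i > 0:
            Q[i - 1, i] += leak        # leak toward rest (bin 0)
            Q[i, i] -= leak
    return Q

def evolve_reduced(rho0, nu, t, n_modes=7):
    """Project rho0 onto the n_modes slowest eigenmodes of Q(nu) and evolve."""
    Q = build_operator(len(rho0), nu)
    lam, V = np.linalg.eig(Q)
    order = np.argsort(-lam.real)[:n_modes]             # slowest-decaying modes
    Vk = V[:, order]
    coeffs = np.linalg.lstsq(Vk, rho0, rcond=None)[0]   # least-squares projection
    rho_t = (Vk * np.exp(lam[order] * t)) @ coeffs
    return rho_t.real

rho0 = np.full(100, 1.0 / 100)        # start from a uniform density
print(evolve_reduced(rho0, nu=400.0, t=0.05)[:5])
```

When the synaptic input rate changes, the operator (and hence the basis) is recomputed, which is why fewer modes suffice at high input rates than at low ones, as the slide reports.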
Dimension Reduction by Moment Closure: 2nd Moment. (Figure: stimulus and firing-rate response.) Near-perfect agreement between results from dimension reduction by 2nd-moment closure and the full 2-D PDF method.
Dimension Reduction by Moment Closure: 3rd Moment. (Figure, with zoomed inset: response to square-wave modulation of the synaptic input rate.) 3rd-moment closure performs better than 2nd-moment closure at high input rates.
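For orientation, here is a schematic of what moment closure means in this setting, in notation of my own choosing (the talk's equations, following Cai et al., 2004, differ in detail): integrate the full density over the conductance variable, track a few conditional conductance moments as functions of voltage, and close the resulting open hierarchy by an assumption on the first untracked centered moment.

```latex
% rho(v,g,t): joint density over voltage v and excitatory conductance g.
% Reduced variables: marginal voltage density and conditional moments of g.
\rho(v,t) = \int \rho(v,g,t)\,dg,
\qquad
\mu_k(v,t) = \mathbb{E}\!\left[\,g^k \mid v\,\right]
           = \frac{1}{\rho(v,t)} \int g^k \rho(v,g,t)\,dg .

% The evolution equations for rho, mu_1, mu_2, ... form an open hierarchy:
% the equation for mu_k involves mu_{k+1}. A "2nd-moment closure" truncates
% it by assuming the conditional third centered moment vanishes:
\mathbb{E}\!\left[\,(g - \mu_1)^3 \mid v\,\right] = 0
\;\;\Longrightarrow\;\;
\mu_3 = 3\,\mu_1\mu_2 - 2\,\mu_1^3 .
```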
Trouble with Moment Closure, and Troubleshooting • Dynamical solutions "break down" when synaptic input rates drop below ~1240 Hz, where the actual firing rate (determined by MC and the full 2-D solution) is ~60 spikes/s. • Is it a numerical problem or a theoretical problem? • Is the moment closure problem ill-posed for some physiological parameters? • Examine the more tractable steady-state problem.
Phase Plane Analysis of Steady-State Moment Closure Problem to Study Existence/Nonexistence of Solutions
Phase Plane and Solution at High Synaptic Input Rate. (Phase-plane figure: the solution trajectory must intersect one curve and must not intersect another; at high input rate such a trajectory exists.)
Steady-State Solution Doesn't Exist for Low Synaptic Input Rate. (Phase-plane figure: two candidate trajectories, each annotated with "must intersect" and "must not intersect" constraints; no trajectory satisfies both.)
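A generic sketch of the phase-plane "shooting" check behind this kind of existence argument: integrate the reduced steady-state ODEs from one boundary and test whether the trajectory reaches the required target before hitting a forbidden curve. The right-hand side, the event, and the function names below are toy stand-ins of my own; they are not the actual steady-state moment-closure equations from the talk.

```python
# Phase-plane existence check by shooting on a toy 2-D ODE system.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(v, y, nu):
    """Toy 2-D right-hand side; nu plays the role of the synaptic input rate."""
    rho, mu = y
    return [mu - rho, nu * (1.0 - mu) - rho * mu]

def hits_forbidden(v, y, nu):
    """Event: trajectory crosses the forbidden curve mu = 0."""
    return y[1]
hits_forbidden.terminal = True

def solution_exists(nu, v_span=(0.0, 1.0), y0=(1.0, 0.5)):
    sol = solve_ivp(rhs, v_span, y0, args=(nu,), events=hits_forbidden,
                    max_step=1e-3)
    # A "solution" here means the trajectory reaches the far boundary
    # without terminating on the forbidden curve.
    return sol.t[-1] >= v_span[1]

for nu in (0.5, 2.0, 8.0):
    print(nu, solution_exists(nu))
```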
Promise of a New Reduced Kinetic Theory with Wider Applicability, Using Moment Closure A numerical method on a fixed voltage grid that introduces a boundary layer with numerical diffusion finds solutions in good agreement with direct simulations. (Cai, Tao, Shelley, McLaughlin, 2004)
SUMMARY • PDF methods show promise. • Small population size is OK, but connectivity cannot be dense. • Realistic synaptic kinetics introduce additional state-space variables. • The time-saving benefit is lost when the state-space dimension is high. • Dimension reduction methods could maintain efficiency: • The moving eigenvector basis speeds up the 2-D PDF method 60×. • The moment closure method (unmodified) has existence problems. • Numerical implementations suggest moment closure can work well. • The challenge is to find methods that work for 3 or more dimensions.
THANKS • Bruce Knight • Charles Peskin • David McLaughlin • David Cai • Adi Rangan • Louis Tao • E. Shea-Brown • B. Doiron • Larry Sirovich
Edges of parameter space: • Minimal input rate. • Minimal EPSP with fixed mean G: fix at the mean-field threshold, increase the EPSP ( ) until a solution exists.