This tutorial session explores the process of parameter searching in neural models, addressing challenges, methods, and approaches to finding optimal parameter values. Topics covered include data collection, model building, match functions, and different search algorithms.
GUM*02 tutorial session, UTSA, San Antonio, Texas
Parameter searching in neural models
Mike Vanier, Caltech
The problem:
• you want to build a model of a neuron
• you have a body of data
• you know a lot about the neuron's
  • morphology
  • physiology
  • ion channel kinetics
• but you don't know everything!
Typical preliminary data set
• anatomy
  • rough idea of morphology
  • detailed reconstruction
Typical preliminary data set
• physiology
  • current clamp
  • synaptic potentials
  • potentiation
  • modulators
Typical preliminary data set
• ion channels
  • identities of the major types
  • kinetics
  • modulation
Missing data?
• ion channels
  • identities of ALL channels
  • densities (µS/µm²)
  • detailed kinetics
• anatomy
  • detailed reconstructions? variability?
• physiology
  • voltage clamp, neuromodulators, etc. ???
Harsh reality
• most experiments not done with models in mind
• more than half of model parameters loosely constrained or unconstrained
• experiments to collect model params are not very sexy
A different approach
• collect the data set the model should match
• collect plausible parameters
  • those known to be correct
  • educated guesses
• build the model
• test model performance
• modify parameters until you get a match
How to modify parameters?
• manually
  • 10 parameters @ 5 values each: 9,765,625 possible simulations
  • 1 sim/minute = 19 years!
  • use previous results to guide searching (non-linear interactions?)
  • tedious!!!
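A quick sanity check of that arithmetic, as a minimal Python sketch (the numbers are just the ones from the slide):

```python
# 10 parameters, 5 candidate values each, tried exhaustively by hand.
n_sims = 5 ** 10                    # 9,765,625 parameter combinations
years = n_sims / (60 * 24 * 365)    # at 1 simulation per minute
print(n_sims, round(years, 1))      # -> 9765625, ~18.6 years
```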
How to modify parameters?
• automatically
  • set ranges for each parameter
  • define an update algorithm
  • start the parameter search
  • go home!
  • check results in a day, week, ...
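A minimal sketch of what that automated loop looks like. The names (`run_model`, `match`, the parameter ranges) are placeholders for illustration; in GENESIS this bookkeeping is handled by the paramtable objects described later.

```python
import random

# Hypothetical parameter ranges: name -> (low, high); values are illustrative.
ranges = {"gmax_Na": (100.0, 2000.0), "gmax_Kdr": (50.0, 1000.0)}

def propose(ranges):
    """Draw one candidate parameter set from within the allowed ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def parameter_search(run_model, match, n_iter=1000):
    """Generic loop: propose parameters, simulate, score, keep the best."""
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        params = propose(ranges)            # the "update algorithm" (here: random)
        score = match(run_model(params))    # match function: 0 = perfect fit
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```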
match function
• need to quantify goodness of fit
• reduce the entire model to one number
  • 0 = perfect match
• match:
  • spike rates
  • spike times
  • voltage waveform
simple match function
• inputs: different current levels
  • e.g. 0.05, 0.1, 0.15, 0.2, 0.25, 0.3 nA
• outputs: spike times
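A hedged sketch of what such a spike-time match might look like in Python; this is an illustrative stand-in, not the exact function used in GENESIS.

```python
import numpy as np

def spike_time_match(model_spikes, data_spikes):
    """Compare spike times at each injected-current level; 0 = perfect match.

    Both arguments map a current level (nA) to a list of spike times (s).
    """
    error = 0.0
    for level, data_t in data_spikes.items():
        model_t = np.asarray(model_spikes.get(level, []))
        data_t = np.asarray(data_t)
        n = min(len(model_t), len(data_t))
        if n > 0:
            error += np.mean(np.abs(model_t[:n] - data_t[:n]))   # timing error
        error += 0.01 * abs(len(model_t) - len(data_t))          # spike-count mismatch
    return error
```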
waveform match function
• inputs: hyperpolarized current levels
  • e.g. -0.05, -0.1 nA
• outputs: Vm(t)
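And a corresponding waveform match, again as an illustrative sketch: a root-mean-square difference between the model's and the cell's Vm(t) traces at the hyperpolarized current levels.

```python
import numpy as np

def waveform_match(model_vm, data_vm):
    """Sum of RMS differences between model and data Vm(t), one per current level.

    Both arguments map a current level (nA) to a voltage trace sampled at the
    same time points; 0 = perfect match.
    """
    error = 0.0
    for level, data_trace in data_vm.items():
        diff = np.asarray(model_vm[level]) - np.asarray(data_trace)
        error += np.sqrt(np.mean(diff ** 2))
    return error
```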
other match functions
• some data might be more important to match than the rest
  • adaptation
  • bursting behavior
• incorporate into more complex match functions
weight early spikes more
• w_ij: weighting params
• set w_i0 < w_i1 < w_i2 < ...
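One plausible general form for such a weighted spike-time error (an assumption for illustration; the exact function used in the tutorial may differ):

```latex
E = \sum_{i}\sum_{j} w_{ij}\,\bigl|\, t^{\mathrm{model}}_{ij} - t^{\mathrm{data}}_{ij} \,\bigr|
```

Here i indexes the current levels, j the spike number within a trace, and the w_ij are the weighting parameters from the slide; their relative sizes determine which spikes dominate the error.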
harder match functions
• bursting
  • Purkinje cell, pyramidal cell
• transitions between complex behaviors
  • regular spiking → bursting
the data set
• need exceptionally clean data set
• noise in data set:
  • model will try to replicate it!
• need wide range of inputs
typical data set for neuron model
• current clamp over wide range
  • hyperpolarized (passive)
  • depolarized (spiking)
the process (1)
• build model
  • anatomy
  • channel params from lit
• match passive data
  • hyperpolarized inputs
the process (2)
• create match function
  • waveform match for hyperpolarized inputs
  • spike match for depolarized inputs
• run a couple of simulations
  • check that results aren't ridiculous
  • get into ballpark of right params
the process (3)
• choose params to vary
  • channel densities
  • channel kinetics: minf(V), tau(V) curves
  • passive params
• choose parameter ranges
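To make "varying channel kinetics" concrete, here is a small Python sketch in which one search parameter shifts the midpoint of a Boltzmann minf(V) and another scales a tau(V) curve. The functional forms and numbers are assumptions for illustration, not the tutorial's actual channel definitions.

```python
import numpy as np

def minf(v, v_half=-35.0, slope=5.0, midpoint_shift=0.0):
    """Boltzmann steady-state activation; midpoint_shift (mV) is a search parameter."""
    return 1.0 / (1.0 + np.exp(-(v - (v_half + midpoint_shift)) / slope))

def tau(v, tau_peak=50.0, tau_scale=1.0):
    """Bell-shaped activation time constant (ms); tau_scale is a search parameter."""
    return tau_scale * tau_peak / np.cosh((v + 35.0) / 20.0)

v = np.linspace(-100.0, 0.0, 201)         # membrane potentials in mV
shifted = minf(v, midpoint_shift=5.0)     # midpoint moved 5 mV depolarized
faster = tau(v, tau_scale=0.5)            # kinetics sped up by a factor of 2
```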
the process (4)
• select a param search method
  • conjugate gradient
  • genetic algorithm
  • simulated annealing
• set meta-params for method
the process (5)
• run parameter search
• periodically check best results
  • marvel at your own ingenuity
  • curse at your stupid computer
• figure out why it did/didn't work
parameter search methods
• different methods have different attributes
  • local or global optima?
  • efficiency?
• depends on nature of parameter space
  • smooth or ragged?
the shapes of space
• smooth parameter landscapes
• ragged parameter landscapes
genesis param search methods
• Conjugate gradient descent (CG)
• Genetic algorithm (GA)
• Simulated annealing (SA)
• Brute Force (BF)
• Stochastic Search (SS)
conjugate gradient (CG)
• "The conjugate gradient method is based on the idea that the convergence to the solution could be accelerated if we minimize Q over the hyperplane that contains all previous search directions, instead of minimizing Q over just the line that points down gradient. To determine x_{i+1} we minimize Q over x_0 + span(p_0, p_1, p_2, ..., p_i), where the p_k represent previous search directions."
no, really...
• take a point in parameter space
• find the line of steepest descent (gradient)
• minimize along that line
• repeat, sort of
  • along conjugate directions only
  • i.e. ignore subspace of previous lines
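In practice you rarely hand-code CG; the sketch below hands a match function to an off-the-shelf conjugate-gradient minimizer. scipy is used purely for illustration here; in the GENESIS tutorial the equivalent role is played by the paramtableCG object.

```python
import numpy as np
from scipy.optimize import minimize

def match(params):
    """Placeholder match function: would run the model and score the fit.
    Here a smooth dummy surface (minimum at all-ones) stands in."""
    return float(np.sum((np.asarray(params) - 1.0) ** 2))

x0 = np.zeros(3)                            # initial guess for 3 parameters
result = minimize(match, x0, method="CG")   # conjugate-gradient descent
print(result.x, result.fun)                 # local minimum found and its score
```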
CG method: good and bad
• for smooth parameter spaces:
  • guaranteed to find local minimum
• for ragged parameter spaces:
  • guaranteed to find local minimum ;-)
  • not what we want...
genetic algorithm
• pick a bunch of random parameter sets
  • a "generation"
• evaluate each parameter set
• create new generation
  • copy the most fit sets
  • mutate randomly, cross over
• repeat until you get acceptable results
genetic algorithm (2)
• amazingly, this often works
• global optimization method
• many variations
• many meta-params
  • mutation rate
  • crossover type (single, double) and rate
• no guarantees
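A toy genetic algorithm in Python, just to make the generation / mutation / crossover loop concrete. The selection scheme, mutation style, and meta-parameter values here are assumptions for illustration, not the GENESIS implementation.

```python
import random

def genetic_search(match, ranges, pop_size=30, n_gen=50,
                   mutation_rate=0.1, n_elite=5):
    """Minimize a match function (lower = fitter) over a dict of parameter ranges."""
    names = list(ranges)

    def random_individual():
        return {n: random.uniform(*ranges[n]) for n in names}

    def crossover(a, b):
        # single-point crossover on the ordered list of parameter names
        cut = 1 if len(names) < 2 else random.randrange(1, len(names))
        return {n: (a if i < cut else b)[n] for i, n in enumerate(names)}

    def mutate(ind):
        return {n: (random.uniform(*ranges[n])
                    if random.random() < mutation_rate else ind[n])
                for n in names}

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=match)[:n_elite]   # copy the most fit sets
        pop = list(elite)
        while len(pop) < pop_size:
            a, b = random.sample(elite, 2)
            pop.append(mutate(crossover(a, b)))    # cross over, then mutate
    return min(pop, key=match)
```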
simulated annealing
• make noise work for you!
• noisy version of "simplex algorithm"
  • evaluate points on simplex
  • add noise to result based on "temperature"
  • move simplex through space accordingly
  • gradually decrease temperature to zero
simulated annealing (2)
• some nice properties:
  • guaranteed to find global optimum
    • but may take forever ;-)
  • when temp = 0, finds local minimum
• how fast to decrease temperature?
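The GENESIS object implements the noisy-simplex variant described above; the simpler Metropolis-style annealing loop below is only a sketch of the temperature idea (all names, step sizes, and the cooling schedule are illustrative assumptions).

```python
import math
import random

def annealing_search(match, ranges, n_iter=5000, t_start=1.0, t_end=1e-3):
    """Metropolis-style simulated annealing over a dict of parameter ranges."""
    names = list(ranges)
    current = {n: random.uniform(*ranges[n]) for n in names}
    current_score = match(current)
    best, best_score = dict(current), current_score

    for i in range(n_iter):
        # exponential cooling schedule from t_start down to t_end
        temp = t_start * (t_end / t_start) ** (i / (n_iter - 1))
        # perturb one randomly chosen parameter, clipped to its range
        cand = dict(current)
        n = random.choice(names)
        lo, hi = ranges[n]
        cand[n] = min(hi, max(lo, cand[n] + random.gauss(0.0, 0.1 * (hi - lo))))
        score = match(cand)
        # always accept improvements; accept worse moves with a temperature-
        # dependent probability, which lets the search escape local minima
        if score < current_score or random.random() < math.exp((current_score - score) / temp):
            current, current_score = cand, score
            if score < best_score:
                best, best_score = dict(cand), score
    return best, best_score
```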
recommendations
• Passive models: SA, CG
• Small active models: SA
• Large active models: SA, GA
• Network models: usually SOL
genesis tutorial (1)
• task:
  • parameterize simple one-compartment neuron
  • Na, Kdr, KM channels
• objects:
  • paramtableGA
  • paramtableSA
  • paramtableCG
genesis tutorial (2)
• parameters:
  • gmax of Na, Kdr, KM
  • KM tau(V) scaling
  • KM minf(V) midpoint
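For orientation, here is that five-dimensional search space written out as a Python dict; the ranges are made-up placeholders, not the values used in the actual tutorial scripts.

```python
# The five parameters varied in the tutorial, with hypothetical search ranges.
tutorial_params = {
    "gmax_Na":          (100.0, 2000.0),  # peak Na conductance density (model units)
    "gmax_Kdr":         (50.0, 1000.0),   # delayed-rectifier K conductance density
    "gmax_KM":          (0.1, 50.0),      # K_M conductance density
    "KM_tau_scale":     (0.2, 5.0),       # multiplicative scaling of the K_M tau(V) curve
    "KM_minf_midpoint": (-10.0, 10.0),    # shift (mV) of the K_M minf(V) midpoint
}
```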
Conclusions
• param search algorithms are useful
  • but: pitfalls, judgment
• modeler must help computer
• failure is not always bad!
• will continue to be active research area