The principles of DCM Vladimir Litvak Wellcome Trust Centre for Neuroimaging
Epigraph "What I cannot create, I do not understand." Richard P. Feynman
What do we need for DCM? • 1. Data (possibly data features). [Figure: example data features for the different DCM variants: input and depolarization time courses with 1st and 2nd order moments (DCM for ERP and second-order mean-field DCM); auto-spectral density LA, auto-spectral density CA1 and cross-spectral density CA1-LA as functions of frequency (DCM for steady-state responses); DCM for induced responses; DCM for phase coupling.]
What do we need for DCM? • 2. Generative model for the data (features). - Usually represented by a system of differential equations. - Can usually be divided into two parts: a model of the underlying neural process and measurement-specific observation model(s). - Also includes specification of the input(s). • Prior values for model parameters (priors). - The specification includes the most likely value and its precision. - Priors are part of the model: changing the priors changes the model. A minimal sketch of such a specification follows below.
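To make these ingredients concrete, here is a minimal Python sketch (hypothetical names and made-up prior values, not SPM code) of the three pieces a DCM specification contains: a neural equation, an observation equation, and priors given as a most likely value plus a precision.

```python
import numpy as np

def neural_model(x, t, theta, u):
    """Differential equation for the hidden neural state x.
    Here: exponential decay towards an input u(t), with time constant t0."""
    return (u(t) - x) / theta["t0"]

def observation_model(x, theta):
    """Maps the hidden state to the measured signal (scaling by 'a')."""
    return theta["a"] * x

# Priors: most likely value (mean) and precision (inverse variance) per parameter.
priors = {
    "t0": {"mean": 0.1, "precision": 16.0},
    "a":  {"mean": 1.0, "precision": 16.0},
}
```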
A really simple example: exponential decay of a state x over time t. [Figure: analytic solution vs. numerical solution of x(t).]
A really simple example, solved by numerical integration: a 'neural' equation (exponential decay of the state x with time constant t0, driven by input U) and an observation equation (output scaling a). [Figure: model structure and simulated response.]
Even simpler: the same 'neural' equation (exponential decay) and observation equation, integrated numerically, with t0 as the only free parameter. A sketch of this numerical integration follows below.
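As an illustration of the numerical integration, the following is a minimal sketch (an assumed forward-Euler scheme with made-up step-input parameters, not the code behind the slides) of simulating the one-parameter exponential-decay model and generating noisy 'data' from it.

```python
import numpy as np

def integrate_decay(t0, u_onset=0.05, u_amp=1.0, dt=1e-3, T=0.3):
    """Forward-Euler integration of dx/dt = (u(t) - x) / t0,
    where u(t) is a step input switched on at u_onset."""
    n = int(T / dt)
    x = np.zeros(n)
    t = np.arange(n) * dt
    for i in range(1, n):
        u = u_amp if t[i] >= u_onset else 0.0
        x[i] = x[i - 1] + dt * (u - x[i - 1]) / t0
    return t, x

# Simulate 'data' from a true t0 and add observation noise
rng = np.random.default_rng(0)
t, x_true = integrate_decay(t0=0.05)
y = x_true + 0.05 * rng.standard_normal(x_true.shape)
```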
What do we need for DCM? • 3. Optimization scheme for fitting the parameters to the data. • The objective function for optimization is the free energy F, which approximates the (log) model evidence: F ≈ log p(y | m). • There are many possible schemes based on different assumptions. Present DCM implementations in SPM use a variational Bayesian scheme. • Once the scheme converges it yields: - the highest value of the free energy the scheme could attain; - the posterior distribution of the free parameters; - simulated data as similar to the original data as the model could generate.
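For reference, the standard variational decomposition of the free energy (a textbook result, not taken from the slides) shows both its relation to the log evidence and the accuracy/complexity trade-off behind the examples that follow:

```latex
F \;=\; \log p(y \mid m) \;-\; \mathrm{KL}\!\left[\, q(\theta) \,\big\|\, p(\theta \mid y, m) \,\right]
  \;\le\; \log p(y \mid m)

F \;=\; \underbrace{\big\langle \log p(y \mid \theta, m) \big\rangle_{q(\theta)}}_{\text{accuracy (goodness of fit)}}
  \;-\; \underbrace{\mathrm{KL}\!\left[\, q(\theta) \,\big\|\, p(\theta \mid m) \,\right]}_{\text{complexity}}
```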
Now let's take a step back What does this mean? • Given a prediction of the model, the probability of the data given the model can be computed. • This requires some assumptions about the distribution of the error (e.g. IID, normal distribution). • Parameters of the error distribution (e.g. variance) also need to be estimated. A sketch of such a likelihood computation follows below.
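As a concrete, purely illustrative example of this step, under an IID Gaussian error model the log-likelihood of the data is just the sum of pointwise Gaussian log-densities of the residuals:

```python
import numpy as np

def gaussian_log_likelihood(y, y_pred, sigma):
    """log p(y | theta, m) assuming IID Gaussian errors with standard deviation sigma."""
    resid = y - y_pred
    n = y.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma**2) - 0.5 * np.sum(resid**2) / sigma**2

# Toy usage: the closer the prediction, the higher the log-likelihood
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 1, 100)) + 0.05 * rng.standard_normal(100)
print(gaussian_log_likelihood(y, np.sin(np.linspace(0, 1, 100)), sigma=0.05))
print(gaussian_log_likelihood(y, np.zeros(100), sigma=0.05))
```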
What does this mean? [Figure: prior distribution of t0, with the goodness of fit (likelihood) and the prior probability plotted as functions of t0.]
What does this mean? Model evidence = goodness of fit (GOF) expected under the priors, i.e. E[GOF] taken over the prior distribution of the parameters. [Figure: goodness of fit and prior probability as functions of t0.]
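To make "expected under the priors" concrete, here is a small sketch (an illustrative Monte Carlo approximation, not how SPM computes the evidence) of the model evidence as the likelihood averaged over parameter values drawn from the prior:

```python
import numpy as np

def log_evidence_mc(log_likelihood, prior_mean, prior_std, n_samples=5000, seed=0):
    """Monte Carlo estimate of log p(y | m) = log E_prior[ p(y | theta, m) ].
    Note the average is of the likelihood itself, not of the log-likelihood."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(prior_mean, prior_std, size=n_samples)
    log_liks = np.array([log_likelihood(th) for th in thetas])
    # log-mean-exp for numerical stability
    m = log_liks.max()
    return m + np.log(np.mean(np.exp(log_liks - m)))
```

Seen this way, tightening the priors or moving them away from parameter values that fit well lowers the evidence even if the best attainable fit is unchanged, which is what the following examples illustrate.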
Now to some consequences – example 1. [Figure: fits of Model 1 (F = -534.51) and Model 2 (F = -536.95).]
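To interpret such differences in free energy (a standard relation, not stated on the slide): for example 1, the difference is F1 - F2 = -534.51 - (-536.95) = 2.44, so the Bayes factor in favour of Model 1 is approximately exp(2.44) ≈ 11.5, corresponding to a posterior probability of roughly 0.92 for Model 1 under equal prior model probabilities.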
Now to some consequences – example 2. [Figure: fits of Model 1 (F = -534.51) and Model 2 (F = -532.59).]
Now to some consequences – example 3. [Figure: fits of Model 1 (F = -554.42) and Model 2 (F = -536.95).]
So that’s where this is coming from • The best model is the one with precise priors that yield a good fit to the data.
One more thing… F = -597.6
FitzHugh-Nagumo model [Photo: Richard FitzHugh with an analogue computer at NIH, ca. 1960 (from Scholarpedia).] • Parameters: - Input: input onset, input amplitude. - Neural model: a, b, c, d, tau (time constant). - Observation model: output scaling. (Model equations from Gerstner and Kistler, 2002.) A sketch of integrating this model numerically follows below.
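To show what integrating such a model looks like, here is a sketch using one common FitzHugh-Nagumo parameterization; the a, b, tau below are illustrative and do not map one-to-one onto the slide's a, b, c, d, tau, and the equations follow a standard textbook variant rather than the exact form used in the lecture.

```python
import numpy as np

def fitzhugh_nagumo(t0_input=0.0, i_amp=0.5, a=0.7, b=0.8, tau=12.5,
                    dt=0.01, T=100.0):
    """Forward-Euler integration of one common FitzHugh-Nagumo parameterization:
        dV/dt = V - V**3/3 - W + I(t)
        dW/dt = (V + a - b*W) / tau
    with a step input I(t) of amplitude i_amp switched on at t0_input."""
    n = int(T / dt)
    t = np.arange(n) * dt
    V = np.zeros(n)
    W = np.zeros(n)
    V[0], W[0] = -1.0, -0.5  # arbitrary initial conditions
    for i in range(1, n):
        I = i_amp if t[i] >= t0_input else 0.0
        V[i] = V[i - 1] + dt * (V[i - 1] - V[i - 1]**3 / 3 - W[i - 1] + I)
        W[i] = W[i - 1] + dt * (V[i - 1] + a - b * W[i - 1]) / tau
    return t, V, W

# Observation model: only a scaled version of V is 'measured'; W stays hidden
t, V, W = fitzhugh_nagumo()
output_scaling = 1.0
y = output_scaling * V
```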
Differences from the 1-parameter case • There is a hidden state variable w that does not affect the output directly but is estimated based on its interaction with the observable variable V. • In addition to prior variances, prior covariances between parameters must also be specified, and the model inversion yields posterior covariances. [Figure: prior covariance and posterior correlation over the parameters (input onset, input amplitude, a, b, c, d, tau).] A small example of reading a covariance as a correlation follows below.
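As a small illustration (with made-up numbers) of how a posterior covariance over two parameters is read as a posterior correlation:

```python
import numpy as np

def cov_to_corr(cov):
    """Convert a covariance matrix into a correlation matrix."""
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

# Made-up 2x2 posterior covariance for two parameters, e.g. t1 and t2
post_cov = np.array([[0.04, -0.03],
                     [-0.03, 0.04]])
print(cov_to_corr(post_cov))  # strong negative posterior correlation (-0.75)
```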
FHN model – posterior variance [Figure: posterior variances of the parameters: input onset, input amplitude, a, b, c, d, tau (time constant).]
Effect of adding a junk parameter [Figure: prior and posterior distributions of t1 and t2; the parameter that does not affect the fit stays at its prior.]
Effect of correlated parameters [Figure: posterior distributions of t1 and t2, and of the contrasts of parameters t1+t2 and t1-t2.] A sketch of computing the variance of such contrasts follows below.
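A brief sketch (made-up covariance values) of why such contrasts are informative: the variance of a contrast c'θ is c'Σc, so with strongly negatively correlated parameters the sum t1+t2 can be determined much more precisely than the difference t1-t2.

```python
import numpy as np

# Made-up posterior covariance of (t1, t2) with strong negative correlation
post_cov = np.array([[0.04, -0.03],
                     [-0.03, 0.04]])

def contrast_variance(c, cov):
    """Variance of the linear contrast c' * theta under covariance cov."""
    c = np.asarray(c, dtype=float)
    return c @ cov @ c

print(contrast_variance([1, 1], post_cov))   # var(t1 + t2) = 0.02
print(contrast_variance([1, -1], post_cov))  # var(t1 - t2) = 0.14
```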
DCM as integrative framework [Diagram: Data + Prior → Posterior.]
And finally – the real thing A. L. Hodgkin A. Huxley
And the winner is… • 1-parameter exponential model • FitzHugh-Nagumo model • Hodgkin-Huxley model Very different models can be compared as long as the data are the same
However… The result of model comparison can be different for different data
Summary • The main principle of DCM is the use of data and generative models in a Bayesian framework to infer parameters and compare models. • Implementation details may vary. • Model inversion is an optimization procedure where the objective function is the free energy which approximates the model evidence. • Model evidence is the goodness of fit expected under the prior parameter values. • The best model is the one with precise priors that yield good fit to the data. • Parameters not affecting the model fit stay at their priors and do not affect the model evidence. • Very different models can be compared as long as they were fitted to the same data. • Models and priors can be gradually refined from one study to the next, making it possible to use DCM as an integrative framework in neuroscience.
Epilogue The first principle is that you must not fool yourself - and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. Richard P. Feynman from “Cargo Cult Science”, 1974