Learn about Bayesian inference in Dynamic Causal Modelling (DCM), including choosing the best model and group analysis. Explore classical inference and Bayesian inference in DCM for testing null hypotheses and calculating posterior probabilities. Understand Bayes' rule and how to estimate the most probable underlying model from observed data.
Bayesian inference • DCM examples • Choosing the best model • Group analysis
Bayesian Inference • Classical inference – tests a null hypothesis: is the effect significantly different from zero? In SPM terms, is an activation due to the effect of the regressor rather than to random noise? • Bayesian inference – the probability that an activation exceeds a set threshold, given the data. Derived from the posterior probability (calculated using Bayes' rule). No false positives (no need for multiple-comparisons correction!)
Bayes' rule • If A and B are two separate but possibly dependent random events, then: • the probability of A and B occurring together is P(A,B) • the conditional probability of A, given that B occurs, is P(A|B) • the conditional probability of B, given that A occurs, is P(B|A)

P(A,B) = P(A|B) P(B) = P(B|A) P(A)   (1)

• Dividing the right-hand pair of expressions by P(B) gives Bayes' rule:

P(A|B) = P(B|A) P(A) / P(B)   (2)

• In probabilistic inference, we try to estimate the most probable underlying model for a random process, based on observed data. If A represents a given set of model parameters, and B represents the set of observed data values, then: • P(A) is the prior probability of the model A (in the absence of any evidence); • P(B) is the probability of the evidence B; • P(B|A) is the likelihood that the evidence B was produced, given that the model was A; • P(A|B) is the posterior probability of the model being A, given that the evidence is B.

Posterior probability ∝ Likelihood × Prior probability
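To make equation (2) concrete, here is a minimal numeric sketch (not from the original slides) with two candidate models and made-up prior and likelihood values; P(B) is obtained by summing over the models:

```python
# A minimal illustration of Bayes' rule (equation 2) for two candidate
# models A1, A2. All numbers are invented for illustration only.
priors = {"A1": 0.5, "A2": 0.5}        # P(A), before seeing any data
likelihoods = {"A1": 0.8, "A2": 0.2}   # P(B|A), for the observed data B

# P(B): total probability of the evidence, summed over the models
p_evidence = sum(priors[m] * likelihoods[m] for m in priors)

# P(A|B) = P(B|A) P(A) / P(B)
posteriors = {m: priors[m] * likelihoods[m] / p_evidence for m in priors}
print(posteriors)   # {'A1': 0.8, 'A2': 0.2}
```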
Bayes' rule in DCM • The likelihood is derived from the error and the confounds (e.g. drift) • Priors – empirical (haemodynamic parameters) and non-empirical (e.g. shrinkage priors, temporal scaling) • The posterior probability of each effect is calculated, and the probability that it exceeds a set threshold is expressed as a percentage
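A minimal sketch of that last step, assuming a Gaussian posterior over a connection parameter (as in DCM's Laplace approximation); the mean and standard deviation below are invented for illustration:

```python
from scipy.stats import norm

def prob_exceeds(post_mean, post_sd, threshold=0.0):
    """P(effect > threshold) under a Gaussian posterior."""
    return norm.sf(threshold, loc=post_mean, scale=post_sd)

# e.g. a connection with posterior mean 0.37 and sd 0.15,
# tested against a threshold of 0, expressed as a percentage
print(f"{100 * prob_exceeds(0.37, 0.15):.0f}%")   # ~99%
```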
An example
[figure: SPM{F} map showing activations in regions A1, A2, and WA]
Model specification (regions A1, A2, WA; inputs: stimulus/perturbation u1, set/context u2):
• Full intrinsic connectivity: a
• u1 activates A1: c
• u1 may modulate self-connections – induced connectivities: b1
• u2 may modulate anything – induced connectivities: b2
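The a, b, and c parameters above are the entries of the matrices in the standard DCM bilinear state equation (standard DCM notation; the equation itself is not on the original slide):

```latex
\dot{x} = \Big( A + \sum_{j} u_j \, B^{(j)} \Big) x + C u
```

Here A holds the intrinsic connections (a), each B^(j) holds the connectivities induced by input u_j (b1 for u1, b2 for u2), and C holds the direct extrinsic influences (c).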
Results (the percentage in parentheses is the posterior probability that the effect exceeds the set threshold):
• Intrinsic connectivity a: .92 (100%), .47 (98%), .38 (94%)
• Extrinsic influence c (u1 → A1): .37 (100%)
• Connectivity induced by u1, b1 (saturation): –.62 (99%), –.51 (99%)
• Connectivity induced by u2, b2 (adaptation): .37 (91%)
[figure: network diagram of A1, A2, and WA with these estimates attached to the corresponding connections]
Another example
• Design: moving-dot stimuli; inputs: photic (u1), attention (u2), motion (u3)
• SPM analysis: V1, V5, SPC, IFG
• Literature: V5 is motion-sensitive
• Previous connectivity analyses: SPC modulates V5, IFG modulates SPC
• Constraints (encoded, as a sketch, below):
 – intrinsic connectivity: V1 → V5 → SPC → IFG
 – u1 (photic) → V1
 – u2 (attention): may modulate connections among V1, V5, SPC, IFG
 – u3 (motion): may modulate connections among V1, V5, SPC, IFG
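A minimal sketch (not SPM code) of how these constraints could be written down as the binary "allowed connection" matrices used to specify a DCM; the matrix layout (rows = target region, columns = source region) and the choice to let u2 and u3 modulate all intrinsic connections are assumptions for illustration:

```python
import numpy as np

regions = ["V1", "V5", "SPC", "IFG"]
inputs = ["photic (u1)", "attention (u2)", "motion (u3)"]
n, m = len(regions), len(inputs)

# A: intrinsic connectivity -- the serial hierarchy V1 -> V5 -> SPC -> IFG
A = np.zeros((n, n))
for src, tgt in [(0, 1), (1, 2), (2, 3)]:
    A[tgt, src] = 1

# C: direct (extrinsic) input -- photic (u1) drives V1
C = np.zeros((n, m))
C[0, 0] = 1

# B[j]: connections input j is allowed to modulate; here attention (u2)
# and motion (u3) may modulate any of the intrinsic connections
B = np.zeros((m, n, n))
B[1] = A.copy()   # attention (u2)
B[2] = A.copy()   # motion (u3)
```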
Another example: results
[figure: estimated DCM for V1, V5, SPC, and IFG with inputs photic (u1), attention (u2), and motion (u3); connection estimates with posterior probabilities: .52 (98%), .37 (90%), .42 (100%), .82 (100%), .56 (99%), .47 (100%), .69 (100%), .65 (100%)]
Comparison of models
• Model 1: attentional modulation of V1 → V5
• Model 2: attentional modulation of SPC → V5
• Model 3: attentional modulation of V1 → V5 and SPC → V5
[figure: the three candidate models with photic, motion, and attention inputs and their estimated connection strengths]
Bayesian model selection: Model 1 is better than Model 2; Models 1 and 3 are about equal → decision for Model 1: in this instance, attention primarily modulates V1 → V5
Comparison of models • Bayesian inference again • Model comparison depends on both the goodness of fit and the complexity of the candidate models (see the sketch below)
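A minimal sketch of Bayesian model comparison: each model's log evidence trades off accuracy against complexity, and models are compared via Bayes factors. The log-evidence values below are invented for illustration:

```python
import numpy as np

log_evidence = {
    "model 1": 105.2,   # fits well, moderately complex
    "model 2": 100.1,   # fits worse
    "model 3": 105.0,   # fits like model 1, but is more complex
}

# Bayes factor of model 1 over model 2: strong evidence for model 1
bf_12 = np.exp(log_evidence["model 1"] - log_evidence["model 2"])
print(f"BF(1 vs 2) = {bf_12:.0f}")   # ~164

# Bayes factor of model 1 over model 3: roughly equal models
bf_13 = np.exp(log_evidence["model 1"] - log_evidence["model 3"])
print(f"BF(1 vs 3) = {bf_13:.2f}")   # ~1.22
```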
Inference about DCM parameters: group analysis • In analogy to "random effects" analyses in SPM, second-level analyses can be applied to DCM parameters: • Fit the same model separately for each subject • Select the bilinear parameters of interest • one-sample t-test: parameter > 0 ? • paired t-test: parameter 1 > parameter 2 ? • rmANOVA: e.g. in the case of multiple sessions per subject (a sketch of the t-tests follows below)
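A minimal sketch of the two t-tests on subject-level DCM parameter estimates; the per-subject values below are made up for illustration:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel

# One bilinear (modulatory) parameter, estimated separately per subject
b_attention = np.array([0.41, 0.28, 0.55, 0.10, 0.37, 0.46, 0.22, 0.33])

# one-sample t-test: is the parameter > 0 across subjects?
t, p_two_sided = ttest_1samp(b_attention, popmean=0.0)
print(f"t = {t:.2f}, one-tailed p = {p_two_sided / 2:.4f}")

# paired t-test: is parameter 1 > parameter 2 across subjects?
b_other = np.array([0.20, 0.15, 0.30, 0.05, 0.25, 0.28, 0.10, 0.21])
t2, p2 = ttest_rel(b_attention, b_other)
print(f"paired t = {t2:.2f}, two-tailed p = {p2:.4f}")
```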
"Laughing is a celebration of the good, and it's also how we deal with the bad. Laughing, like crying, is a good way of eliminating toxins from the body. Since the mind and body are connected, you use an amazing amount of muscles when you laugh." http://www.balloonhat.com/