Hierarchical statistical analysis of fMRI data across runs/sessions/subjects/studies using BRAINSTAT/FMRISTAT
Jonathan Taylor, Stanford; Keith Worsley, McGill
What is BRAINSTAT / FMRISTAT?
• FMRISTAT is a Matlab fMRI stats analysis package
• BRAINSTAT is a Python version
• Main components:
• FMRILM: linear model, AR(p) errors, bias correction, smoothing of the autocorrelation to boost degrees of freedom*
• MULTISTAT: mixed effects linear model, ReML estimation, EM algorithm, smoothing of the random/fixed effects sd to boost degrees of freedom*
• Key idea: IN: effect, sd, df, fwhm; OUT: effect, sd, df, fwhm (sketched below)
• STAT_SUMMARY: best of Bonferroni, non-isotropic random field theory, DLM (Discrete Local Maxima)*
• Treats magnitudes and delays in the same way
*new theoretical results
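The "effect, sd, df, fwhm in / effect, sd, df, fwhm out" interface is what makes the hierarchy composable: every level consumes and produces the same four images. A minimal sketch of that idea, with made-up names (this is not the BRAINSTAT API):

```python
# Illustration of the "effect, sd, df, fwhm in / out" interface only;
# the class and function names here are invented, not BRAINSTAT's.
from dataclasses import dataclass
import numpy as np

@dataclass
class EffectImages:
    effect: np.ndarray   # per-voxel effect (contrast) estimate
    sd: np.ndarray       # per-voxel standard deviation of the effect
    df: float            # effective degrees of freedom
    fwhm: float          # estimated spatial FWHM (mm) of the residuals

def combine(level_inputs: list) -> EffectImages:
    """Placeholder for a MULTISTAT-like stage: any level of the hierarchy
    (runs -> sessions -> subjects -> studies) maps a list of EffectImages
    to a single combined EffectImages of the same form."""
    raise NotImplementedError
```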
FMRILM: smoothing of temporal autocorrelation
• Variability in acor lowers df
• Df depends on contrast
• Smoothing acor brings df back up:
df_acor = df_residual · (2·FWHM_acor²/FWHM_data² + 1)^(3/2)
1/df_eff = 1/df_residual + 2·acor(contrast of data)²/df_acor
[Figure: df_eff vs FWHM_acor for the hot and hot-warm stimulus contrasts (FWHM_data = 8.79 mm, residual df = 110); the target of 100 df is reached at FWHM_acor ≈ 10.3 mm for the contrast with acor = 0.61 and ≈ 12.4 mm for the contrast with acor = 0.79.]
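A quick numeric check of the formula as reconstructed above (a sketch only, not the FMRILM code): scan FWHM_acor for the value that brings df_eff back up to a target of 100.

```python
# Sketch: effective df after smoothing the temporal autocorrelation
# (formula reconstructed from this slide, not taken from the FMRILM source).
import numpy as np

def df_effective(fwhm_acor, fwhm_data, df_residual, acor):
    df_acor = df_residual * (2 * (fwhm_acor / fwhm_data) ** 2 + 1) ** 1.5
    return 1.0 / (1.0 / df_residual + 2 * acor**2 / df_acor)

# Slide example: FWHM_data = 8.79 mm, residual df = 110, target = 100 df
fwhm = np.linspace(0, 30, 3001)
for acor in (0.61, 0.79):
    df = df_effective(fwhm, 8.79, 110, acor)
    print(f"acor = {acor}: target reached at FWHM_acor ~ "
          f"{fwhm[np.argmin(np.abs(df - 100))]:.1f} mm")
# Lands in the 10-13 mm range, in line with the ~10.3 mm and ~12.4 mm
# annotations in the figure.
```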
MULTISTAT: smoothing of random/fixed FX sd
df_ratio = df_random · (2·FWHM_ratio²/FWHM_data² + 1)^(3/2)
1/df_eff = 1/df_ratio + 1/df_fixed
e.g. df_random = 3, df_fixed = 4 × 110 = 440, FWHM_data = 8 mm:
[Figure: df_eff vs FWHM_ratio, rising from df_eff = 3 (random effects analysis, no smoothing) towards df_eff = 440 (fixed effects analysis, infinite smoothing); the target of 100 df is reached at FWHM_ratio ≈ 19 mm.]
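The same recipe at the between-subject level; a quick check of the worked example on this slide (a sketch of the formula only, not MULTISTAT itself):

```python
# df_eff interpolates between a pure random effects analysis (df = 3) and a
# pure fixed effects analysis (df = 440) as the sd ratio is smoothed more.
df_random, df_fixed, fwhm_data = 3, 440, 8.0

def df_eff(fwhm_ratio):
    df_ratio = df_random * (2 * (fwhm_ratio / fwhm_data) ** 2 + 1) ** 1.5
    return 1.0 / (1.0 / df_ratio + 1.0 / df_fixed)

print(round(df_eff(0)))    # ~3: no smoothing = random effects only
print(round(df_eff(19)))   # ~100: the slide's target, near FWHM_ratio = 19 mm
print(round(df_eff(1e9)))  # ~440: infinite smoothing = fixed effects analysis
```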
STAT_SUMMARY
• High FWHM: use Random Field Theory
• Low FWHM: use Bonferroni
• In between: use Discrete Local Maxima (DLM)
• DLM can halve the P-value when FWHM ≈ 3 voxels
[Figure: P-value vs FWHM of smoothing kernel (0–10 voxels) for Gaussian, T (20 df) and T (10 df) statistics, comparing the true P-value with Random Field Theory, Bonferroni, Bonferroni with N = resels, and Discrete Local Maxima.]
STAT_SUMMARY
• High FWHM: use Random Field Theory
• Low FWHM: use Bonferroni
• In between: use Discrete Local Maxima (DLM)
[Figure: Gaussianized threshold vs FWHM of smoothing kernel (0–10 voxels) for Gaussian, T (20 df) and T (10 df) statistics, comparing the true threshold with Random Field Theory, Bonferroni, Bonferroni with N = resels, and Discrete Local Maxima (DLM).]
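For the simplest (Gaussian, 3D) case, the "best of" rule amounts to taking the smallest of the Bonferroni and random-field corrected P-values; the sketch below uses the standard Gaussian-field resel count and EC density and leaves DLM out for brevity (this is an illustration, not STAT_SUMMARY's actual computation):

```python
import numpy as np
from scipy.stats import norm

def best_corrected_p(t, n_voxels, volume_mm3, fwhm_mm):
    """Smallest of the Bonferroni and (leading-term) RFT corrected P-values
    for a peak of height t in a smooth 3D Gaussian field."""
    p_bon = n_voxels * norm.sf(t)                        # Bonferroni over voxels
    resels = volume_mm3 / fwhm_mm**3                     # 3D resel count
    rho3 = ((4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2    # Gaussian EC density
            * (t**2 - 1) * np.exp(-t**2 / 2))
    p_rft = resels * rho3
    return min(p_bon, p_rft, 1.0)

# Low smoothness favours Bonferroni, high smoothness favours RFT:
for fwhm in (2.0, 6.0, 12.0):
    print(fwhm, best_corrected_p(4.8, n_voxels=10**6,
                                 volume_mm3=10**6, fwhm_mm=fwhm))
```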
STAT_SUMMARY example: single run, hot-warm
[Figure: thresholded map; one region is detected by BON and DLM but not by RFT, another is detected by DLM but not by BON or RFT.]
Estimating the delay of the response
• Delay or latency to the peak of the HRF is approximated by a linear combination of two optimally chosen basis functions:
HRF(t + shift) ≈ basis1(t)·w1(shift) + basis2(t)·w2(shift)
• Convolve the bases with the stimulus, then add them to the linear model
[Figure: the shifted HRF and the two basis functions (basis1, basis2) plotted against t (seconds).]
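A sketch of the two-basis idea, using a generic difference-of-gammas HRF and SVD-derived bases (an illustration under those assumptions, not necessarily the HRF or basis choice used by FMRILM/BRAINSTAT):

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 25, 0.1)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)          # generic HRF shape

shifts = np.linspace(-2, 2, 41)                          # delays, in seconds
shifted = np.array([np.interp(t + s, t, hrf, left=0.0, right=0.0)
                    for s in shifts])                    # HRF(t + shift)

# Two "optimally chosen" basis functions: first two right singular vectors
U, S, Vt = np.linalg.svd(shifted, full_matrices=False)
basis1, basis2 = Vt[0], Vt[1]

# Weights w1(shift), w2(shift) and the resulting two-basis approximation
W = shifted @ np.vstack([basis1, basis2]).T
approx = W @ np.vstack([basis1, basis2])
print("max error of the 2-basis approximation:", np.abs(approx - shifted).max())
```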
Example: FIAC data
• 16 subjects
• 4 runs per subject
  • 2 runs: event design
  • 2 runs: block design
• 4 conditions
  • Same sentence, same speaker
  • Same sentence, different speaker
  • Different sentence, same speaker
  • Different sentence, different speaker
• 3T, 200 frames, TR = 2.5 s
Response
[Figure: fitted responses to events and to blocks, aligned to the beginning of the block/run.]
Design matrix for block expt
• B1, B2 are basis functions for magnitude and delay:
[Figure: design matrix.]
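One illustrative way to build such a design matrix: convolve the stimulus with each basis function and sample at the frame times. The timing and basis shapes below are made up; only the structure (two columns per condition from B1 and B2, plus drift terms) follows the slide.

```python
import numpy as np

n_frames, tr = 200, 2.5                       # as in the FIAC runs
frametimes = np.arange(n_frames) * tr
dt = 0.1
t_hi = np.arange(0, n_frames * tr, dt)        # fine grid for convolution

stim = ((t_hi // 20) % 2 == 0).astype(float)  # made-up 20 s on / 20 s off blocks

b_t = np.arange(0, 30, dt)
B1 = np.exp(-(b_t - 5.5) ** 2 / 8)            # stand-in magnitude basis
B2 = np.gradient(B1, dt)                      # stand-in delay basis

def regressor(basis):
    x = np.convolve(stim, basis)[: t_hi.size] * dt
    return np.interp(frametimes, t_hi, x)

X = np.column_stack([regressor(B1), regressor(B2),
                     np.ones(n_frames),                  # constant
                     frametimes / frametimes.max()])     # linear drift
print(X.shape)                                           # (200, 4)
```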
1st level analysis • Motion and slice time correction (using FSL) • 5 conditions • Smoothing of temporal autocorrelation to control the effective df (new!)
Efficiency • Sd of contrasts (lower is better) for a single run, assuming additivity of responses • For the magnitudes, event and block have similar efficiency • For the delays, event is much better.
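The efficiency comparison comes down to the standard formula sd(contrast) = sigma·sqrt(c'(X'X)^(-1)c) evaluated for each design and contrast. The toy regressors below are crude stand-ins with no HRF convolution, so they only illustrate the formula, not the actual event/block comparison on this slide:

```python
import numpy as np

def contrast_sd(X, c, sigma=1.0):
    """Standard deviation of the contrast estimate c'beta_hat in Y = X beta + e."""
    return sigma * np.sqrt(c @ np.linalg.inv(X.T @ X) @ c)

n, tr = 200, 2.5
t = np.arange(n) * tr
block = ((t // 20) % 2 == 0).astype(float)    # crude block regressor
event = (t % 20 < tr).astype(float)           # crude event regressor
c = np.array([1.0, 0.0])                      # contrast on the condition column

for name, reg in [("block", block), ("event", event)]:
    X = np.column_stack([reg, np.ones(n)])
    print(name, contrast_sd(X, c))
```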
2nd level analysis
• Analyse events and blocks separately
• Register contrasts to Talairach (using FSL)
• Bad registration on 2 subjects - dropped
• Combine 2 runs using fixed FX
3rd level analysis
• Combine remaining 14 subjects using random FX
• 3 contrasts × event/block × magnitude/delay = 12
• Threshold using best of Bonferroni, random field theory, and discrete local maxima (new!)
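Per voxel, the two combination steps reduce to familiar formulas; a minimal sketch of the textbook versions (MULTISTAT's actual random-effects step uses ReML/EM with spatial smoothing of the sd ratio, which is not shown here):

```python
import numpy as np

def fixed_effects(effects, sds, dfs):
    """Combine runs within a subject: inverse-variance weighting; dfs add."""
    w = 1.0 / np.asarray(sds, float) ** 2
    effect = np.sum(w * np.asarray(effects, float)) / np.sum(w)
    return effect, np.sqrt(1.0 / np.sum(w)), np.sum(dfs)

def random_effects(effects):
    """Combine subjects: one-sample analysis of the per-subject effects."""
    e = np.asarray(effects, float)
    return e.mean(), e.std(ddof=1) / np.sqrt(e.size), e.size - 1

print(fixed_effects(effects=[1.2, 0.9], sds=[0.4, 0.5], dfs=[98, 101]))
print(random_effects(np.random.default_rng(0).normal(1.0, 0.5, size=14)))
```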
Part of slice z = -2 mm
[Figure panels: Event and Block × Magnitude and Delay maps.]
Events vs blocks for delays in different – same sentence
• Events: 0.14±0.04 s; Blocks: 1.19±0.23 s
• Both significant, P<0.05 (corrected) (!?!)
• Answer: take a look at the blocks:
[Figure: best fitting block responses; the different-sentence condition (sustained interest) shows greater magnitude and greater delay than the same-sentence condition (lose interest).]
Magnitude increase for • Sentence, Event • Sentence, Block • Sentence, Combined • Speaker, Combined at (-54,-14,-2)
Magnitude decrease for • Sentence, Block • Sentence, Combined at (-54,-54,40)
Delay increase for • Sentence, Event at (58,-18,2) inside the region where all conditions are activated
Conclusions
• Greater %BOLD response for
  • different – same sentences (1.08±0.16%)
  • different – same speaker (0.47±0.08%)
• Greater latency for
  • different – same sentences (0.148±0.035 secs)
[Figure: slices at z = -12, 2 and 5 mm with numbered markers.] The main effects of sentence repetition (in red) and of speaker repetition (in blue). 1: Meriaux et al., Madic; 2: Goebel et al., Brain Voyager; 3: Beckman et al., FSL; 4: Dehaene-Lambertz et al., SPM2. BRAINSTAT: combined block and event, threshold at T>5.67, P<0.05.