Bayesian Inference
Lee Harrison, York Neuroimaging Centre, 14/05/2010
• Posterior probability maps (PPMs)
• Spatial priors on activation extent
• Bayesian segmentation and normalisation
• Dynamic Causal Modelling
[Figure: the SPM pipeline: realignment, smoothing, normalisation to a template, general linear model, and statistical inference via Gaussian field theory (p < 0.05)]
Attention to Motion Paradigm
• fixation only
• observe static dots (+ photic: V1)
• observe moving dots (+ motion: V5)
• task on moving dots (+ attention: V5 + parietal cortex)
[Figure: results showing SPC, V3A and V5+ for the Attention minus No-attention contrast]
Büchel & Friston 1997, Cereb. Cortex; Büchel et al. 1998, Brain
Attention to Motion Paradigm: Dynamic Causal Models
• fixation only
• observe static dots
• observe moving dots
• task on moving dots
Model 1 (forward): attentional modulation of V1 → V5
Model 2 (backward): attentional modulation of SPC → V5
[Figure: two DCMs over V1, V5 and SPC, with photic input driving V1, motion modulating V1 → V5, and attention modulating either the forward or the backward connection]
Bayesian model selection: which model is optimal?
Responses to Uncertainty
[Figure: panels contrasting a long-term and a short-term memory model]
Responses to Uncertainty: Paradigm
Stimuli: a sequence of randomly sampled discrete events (40 trials).
Model: a simple computational model of an observer's response to uncertainty, based on the number of past events (the extent of memory).
Question: which regions are best explained by the short-term vs. the long-term memory model?
Overview
• Introductory remarks
• Some probability densities/distributions
• Probabilistic (generative) models
• Bayesian inference
• A simple example: Bayesian linear regression
• SPM applications: segmentation, dynamic causal modeling, spatial models of fMRI time series
Probability distributions and densities
[Figure series: slides illustrating common probability distributions and densities (k = 2 shown); the figures themselves are not recoverable from this extraction]
Generative models
[Figure: generation maps a model to data over space and time; estimation inverts the mapping, from data back to the model]
Bayesian statistics
posterior ∝ likelihood ∙ prior: new data are combined with prior knowledge.
The “posterior” probability of the parameters given the data is an optimal combination of prior knowledge and new data, weighted by their relative precision. Bayes’ theorem allows one to formally incorporate prior knowledge into computing statistical probabilities.
Bayes’ rule
Given data $y$ and parameters $\theta$, their joint probability can be written in two ways:
$$p(y, \theta) = p(y \mid \theta)\, p(\theta) = p(\theta \mid y)\, p(y)$$
Eliminating $p(y, \theta)$ gives Bayes’ rule:
$$\underbrace{p(\theta \mid y)}_{\text{posterior}} = \frac{\overbrace{p(y \mid \theta)}^{\text{likelihood}}\;\overbrace{p(\theta)}^{\text{prior}}}{\underbrace{p(y)}_{\text{evidence}}}$$
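As a minimal numeric illustration of Bayes’ rule (a hypothetical coin-bias example, not from the slides), the posterior over a grid of parameter values is the pointwise product of likelihood and prior, normalised by the evidence:

```python
import numpy as np

# Bayes' rule on a discrete grid of parameter values.
# All numbers here are invented for illustration.
theta = np.linspace(0, 1, 101)            # candidate values of a coin's bias
prior = np.ones_like(theta) / theta.size  # flat prior p(theta)

# Likelihood of 7 heads in 10 flips under each theta (binomial kernel)
k, n = 7, 10
likelihood = theta**k * (1 - theta)**(n - k)

# posterior = likelihood * prior / evidence
evidence = np.sum(likelihood * prior)
posterior = likelihood * prior / evidence

print(theta[np.argmax(posterior)])        # posterior mode, ~0.7
```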
Principles of Bayesian inference
• Formulation of a generative model: likelihood $p(y \mid \theta)$ and prior distribution $p(\theta)$
• Observation of data $y$
• Update of beliefs based upon observations, given a prior state of knowledge
Univariate Gaussian (normal densities)
Posterior mean = precision-weighted combination of the prior mean and the data mean.
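In symbols, this is the standard conjugate-Gaussian result behind the slide (the notation $\mu_0$, $\lambda_0$, $\lambda_e$ is assumed here, not taken verbatim from the slide):

```latex
% Observing y_1,...,y_N ~ N(\mu, \lambda_e^{-1}) with prior \mu ~ N(\mu_0, \lambda_0^{-1}):
\lambda_{\mathrm{post}} = \lambda_0 + N \lambda_e
\qquad
\mu_{\mathrm{post}} = \frac{\lambda_0\,\mu_0 + \lambda_e \sum_{i=1}^{N} y_i}{\lambda_{\mathrm{post}}}
% The posterior mean weights the prior mean and the data mean
% by their respective precisions \lambda_0 and N\lambda_e.
```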
Bayesian GLM: univariate case (normal densities)
Bayesian GLM: multivariate case (normal densities)
[Figure: posterior density contours over the plane spanned by β1 and β2]
One step if Ce and Cp are known; otherwise iterative estimation.
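A minimal sketch of that one-step estimate, assuming known Ce and Cp and synthetic data (all names and values here are illustrative):

```python
import numpy as np

# One-step posterior for a linear-Gaussian model
#   y = X @ beta + e,  e ~ N(0, Ce),  prior beta ~ N(mu_p, Cp),
# valid when Ce and Cp are known (as the slide notes; otherwise iterate).
rng = np.random.default_rng(0)
N, P = 50, 2
X = np.column_stack([np.ones(N), rng.standard_normal(N)])  # design matrix
beta_true = np.array([1.0, 0.5])
Ce = 0.25 * np.eye(N)                                      # noise covariance
Cp = 10.0 * np.eye(P)                                      # prior covariance
mu_p = np.zeros(P)                                         # prior mean
y = X @ beta_true + rng.multivariate_normal(np.zeros(N), Ce)

iCe, iCp = np.linalg.inv(Ce), np.linalg.inv(Cp)
C_post = np.linalg.inv(X.T @ iCe @ X + iCp)                # posterior covariance
m_post = C_post @ (X.T @ iCe @ y + iCp @ mu_p)             # posterior mean
print(m_post)
```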
Approximate inference: optimization
The true posterior is approximated by a mean-field (factorised) posterior, which is iteratively improved by optimising an objective function, the free energy.
[Figure: true posterior vs. approximate posterior, plotted against the value of the parameter]
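In its standard form (not spelled out on the slide), the free energy lower-bounds the log evidence:

```latex
% The log evidence decomposes into free energy plus a KL divergence:
% \ln p(y) = F[q] + \mathrm{KL}\big( q(\theta) \,\|\, p(\theta \mid y) \big)
F[q] \;=\; \big\langle \ln p(y,\theta) - \ln q(\theta) \big\rangle_{q(\theta)}
% Since KL >= 0, F[q] is a lower bound on \ln p(y); maximising F over q
% tightens the bound and pulls q(\theta) toward the true posterior.
```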
Simple example: linear regression (ordinary least squares)
[Figure series: data, then model fits built from bases (explanatory variables), scored by the sum of squared errors]
Simple example: linear regression (ordinary least squares)
Over-fitting: the model fits the noise.
Inadequate cost function: the sum of squared errors is blind to overly complex models.
Solution: include uncertainty in the model parameters. (A numerical sketch of the failure mode follows below.)
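A small sketch of this point, using hypothetical sinusoidal data and polynomial bases (nothing here is from the original slides):

```python
import numpy as np

# OLS with many basis functions fits the noise: the sum of squared
# errors keeps shrinking even as generalisation gets worse.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

for order in (3, 9):                        # modest vs. overly flexible model
    X = np.vander(x, order + 1)             # polynomial design matrix
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ w) ** 2)
    print(f"order {order}: SSE = {sse:.4f}")
# The order-9 fit drives SSE to ~0 by interpolating the noise:
# squared error alone cannot penalise model complexity.
```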
Bayesian linear regression: priors and likelihood
Model, prior and likelihood (equations shown as images on the original slides; not recoverable from this extraction).
[Figure series: sample curves drawn from the prior, before observing any data, together with the mean curve; subsequent slides add the likelihood of successive observations]
Bayesian linear regression: posterior
Combining the prior and likelihood via Bayes’ rule gives the posterior over the regression coefficients (equations shown as images; not recoverable from this extraction).
[Figure series: the posterior over curves sharpening as data are observed]
Posterior Probability Maps (PPMs)
Posterior distribution: the probability of the effect given the data (mean: size of effect; precision: variability).
Posterior probability map: an image of the probability (confidence) that an activation exceeds some specified threshold s_th, given the data y.
Two thresholds:
• activation threshold s_th: a percentage of the whole-brain mean signal (a physiologically relevant size of effect)
• probability threshold p_th that voxels must exceed to be displayed (e.g. 95%)
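Per voxel, with a Gaussian posterior over the effect size this reduces to a tail probability; a minimal sketch with made-up numbers:

```python
from scipy.stats import norm

# PPM computation at one voxel: given a Gaussian posterior over the
# effect (mean mu, std sd), the probability that the effect exceeds
# the activation threshold s_th is an upper-tail probability.
mu, sd = 1.2, 0.4      # posterior mean and std of the effect (illustrative)
s_th = 1.0             # activation threshold (e.g. % of whole-brain mean)
p = norm.sf(s_th, loc=mu, scale=sd)   # P(effect > s_th | y)
display_voxel = p > 0.95              # display only above probability threshold
print(p, display_voxel)
```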
Bayesian linear regression: model selection
In Bayes’ rule, the normalizing constant is the model evidence:
$$p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, d\theta$$
It measures how well model $m$ explains the data, and is the basis for Bayesian model selection.
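For the linear-Gaussian regression sketched earlier, this integral has a closed form; a sketch reusing the hypothetical names X, y, Ce, Cp, mu_p from that example:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Evidence of a linear-Gaussian model, integrating out beta:
#   p(y | m) = N(y; X mu_p, Ce + X Cp X^T)
def log_evidence(y, X, Ce, Cp, mu_p):
    cov = Ce + X @ Cp @ X.T
    return multivariate_normal.logpdf(y, mean=X @ mu_p, cov=cov)
```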
aMRI segmentation
[Figure: PPMs of each voxel belonging to grey matter, white matter, or CSF]
Dynamic Causal Modelling: generative model for fMRI and ERPs
A neural state equation, driven by inputs, is combined with a modality-specific forward model:
• Hemodynamic forward model: neural activity → BOLD (fMRI)
• Electric/magnetic forward model: neural activity → EEG, MEG, LFP (ERPs)
Neural model for fMRI: 1 state variable per region, bilinear state equation, no propagation delays.
Neural model for ERPs: 8 state variables per region, nonlinear state equation, propagation delays.
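For reference, the bilinear neural state equation used by DCM for fMRI (standard form from the DCM literature; the symbols A, B, C are as usually defined, not shown explicitly on this slide):

```latex
% Bilinear neural state equation (one state variable per region):
\dot{x} \;=\; \Big( A + \sum_{j} u_j\, B^{(j)} \Big)\, x \;+\; C u
% A: endogenous (fixed) connectivity between regions
% B^{(j)}: modulation of connections by the j-th input u_j
% C: direct driving effect of inputs on regional activity
```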
Bayesian Model Selection for fMRI
Four models m1..m4 share photic stimulation driving V1 and motion modulating V1 → V5, but differ in which connections among V1, V5 and PPC are modulated by attention.
[Figure: marginal likelihoods of the four models, with m4 clearly best, and the estimated effective synaptic strengths for the best model (m4)]
[Stephan et al., Neuroimage, 2008]
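A sketch of how per-model (log-)evidences are turned into posterior model probabilities under a flat prior over models; the log-evidence values below are invented for illustration:

```python
import numpy as np

# Fixed-effects Bayesian model selection: softmax over log-evidences,
# shifted by the max for numerical stability.
log_ev = np.array([-512.3, -509.8, -508.1, -501.6])   # models m1..m4 (made up)
log_ev -= log_ev.max()
p_model = np.exp(log_ev) / np.exp(log_ev).sum()
print(p_model)   # posterior mass concentrates on the highest-evidence model
```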
fMRI time series analysis with spatial priors (Penny et al., 2005)
[Figure: graphical model linking the observations to GLM coefficients and AR coefficients (correlated noise), with a spatial precision matrix setting the degree of smoothness and prior precisions on the GLM coefficients, the AR coefficients, and the data noise; the VB estimate of β is spatially regularised, unlike the ML estimate (cf. smoothing Y for RFT); aMRI underlay shown]
fMRI time series analysis with spatial priors: posterior probability maps
Posterior density q(βn): the probability of getting an effect, given the data (mean: size of effect; covariance: uncertainty), written out as mean (Cbeta_*.img) and standard deviation (SDbeta_*.img) images.
The probability mass pn above the activation threshold gives the PPM (spmP_*.img); display only voxels that exceed e.g. 95%.
fMRI time series analysis with spatial priors: Bayesian model selection
Compute log-evidence maps for each model (1..K) and subject (1..N); from these, derive BMS maps (a PPM and an EPM per model) giving the probability that model k generated the data at each voxel. (Joao et al., 2009)
Reminder: responses to uncertainty
[Figure: long-term vs. short-term memory models]
Compare two models: a long-term memory model vs. a short-term memory model.
Information-theoretic (IT) indices: H = entropy; h = surprise; I = mutual information; i = mutual surprise.
[Figure: the two models' IT-index regressors and missed-trial onsets; the IT indices are smoother regressors than the raw onsets]
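A rough sketch of how such IT indices could be computed from event counts; the add-one smoothing and the windowing scheme here are assumptions, not the authors' exact model:

```python
import numpy as np

# Entropy and surprise for a sequence of discrete events, estimated
# from event counts accumulated over a memory window.
def surprise_and_entropy(counts):
    """counts: occurrences of each event type so far (add-one smoothing)."""
    p = (counts + 1) / (counts + 1).sum()   # predictive probabilities
    H = -np.sum(p * np.log(p))              # entropy of the prediction
    h = -np.log(p)                          # surprise of each possible event
    return H, h

H, h = surprise_and_entropy(np.array([12, 3, 20, 5]))
print(H, h)
# A long-term-memory model accumulates counts over all past trials;
# a short-term model uses only recent ones, so its indices fluctuate more.
```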
Group data: Bayesian Model Selection maps
[Figure: regions best explained by the short-term memory model and regions best explained by the long-term memory model, including frontal cortex (executive control) and primary visual cortex]