[Figure: geometric view of the estimation - the data Y is projected onto the space spanned by the design matrix X, giving the fit Xb (with b = (X'X)+X'Y) and a residual e orthogonal to that space; the contrasts C1, C2, C3 correspond to projections such as PC1C2 Xb and PC1C3 Xb within the space of X]
SPM 2002. A priori complicated … a posteriori very simple! Jean-Baptiste Poline, Orsay SHFJ-CEA
SPM course 2002 - LINEAR MODELS and CONTRASTS. T and F tests (orthogonal projections). Hammering a Linear Model. The RFT. Use for Normalisation. Jean-Baptiste Poline, Orsay SHFJ-CEA, www.madic.org
[Overview diagram] images → realignment & coregistration → normalisation (Anatomical Reference) → smoothing (spatial filter) → General Linear Model (design matrix; your question: a contrast) → linear fit → adjusted signal and statistical map (statistical image) → uncorrected p-values; Random Field Theory → corrected p-values
Plan
• Make sure we know all about the estimation (fitting) part
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example
One voxel = one test (t, F, ...)
[Figure] fMRI voxel time course (temporal series: amplitude over time) → General Linear Model → fitting → statistical image (SPM)
Regression example…
[Figure] voxel time series = a × (box-car reference function) + m × (mean value) + error. Fit the GLM: m = 100, a = 1.
Regression example…
[Figure] the same series with the box-car reference function scaled differently: voxel time series = a × box-car + m × mean value + error. Fit: m = 100, a = 5 (the estimated amplitude depends on the scaling of the regressor).
…revisited: matrix form
Ys = m × 1 + a × f(ts) + es
Box-car regression: design matrix…
Y = X b + e: data vector (voxel time series) = design matrix × parameters + error vector, with b = [a m]'.
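To make the fitting concrete, here is a minimal numpy sketch (Python, not SPM's MATLAB code; the number of scans, block length, noise level and the true values a = 5, m = 100 are invented for illustration) of building this two-column design matrix and estimating b by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 84                                             # number of scans (arbitrary)
boxcar = ((np.arange(n) // 6) % 2).astype(float)   # alternating off/on blocks of 6 scans
X = np.column_stack([boxcar, np.ones(n)])          # design matrix: [box-car, mean]

# simulate a voxel time course with a = 5, m = 100, plus noise
y = X @ np.array([5.0, 100.0]) + rng.normal(0.0, 2.0, n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)          # least squares: b = (X'X)+ X'Y
e = y - X @ b                                      # residual vector
print(b)                                           # ≈ [5, 100]
```

np.linalg.lstsq returns the same estimate as the pseudo-inverse formula b = (X'X)+X'Y used in the slides that follow.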
Add more reference functions ... Discrete cosine transform basis functions
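The cosine set can be generated directly; a sketch of one common definition (the normalisation constant used by SPM's spm_dctmtx is omitted here, and the scan and basis counts are arbitrary):

```python
import numpy as np

def dct_basis(n_scans, n_basis):
    """Cosine regressors of increasing frequency, to model low-frequency drifts."""
    t = np.arange(n_scans)
    return np.column_stack(
        [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
         for k in range(1, n_basis + 1)]
    )

drift = dct_basis(84, 7)    # 7 drift regressors for 84 scans
```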
…design matrix
Y = X b + e: data vector, design matrix (box-car, cosine regressors and constant), parameters and error vector, with b = [a m b3 b4 b5 b6 b7 b8 b9]', the betas (here: 1 to 9).
Fitting the model = finding some estimate of the betas = minimising the sum of squares of the error:
S² = (sum of the squared residuals) / (number of time points minus the number of estimated betas)
[Figure: raw fMRI time series; the series adjusted for low-Hz effects; fitted box-car; fitted “high-pass filter”; residuals]
Summary ...
• We put in our model regressors (or covariates) that represent how we think the signal is varying (of interest and of no interest alike)
• Coefficients (= estimated parameters, or betas) are found such that the weighted sum of the regressors is as close as possible to our data
• As close as possible = such that the sum of squares of the residuals is minimal
• These estimated parameters (the “betas”) depend on the scaling of the regressors: keep this in mind when comparing!
• The residuals, their sum of squares, and the tests (t, F) do not depend on the scaling of the regressors
Plan
• Make sure we all know about the estimation (fitting) part
• Make sure we understand t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example
T test - one dimensional contrasts - SPM{t}
A contrast = a linear combination of parameters: c'b
c' = [1 0 0 0 0 0 0 0]
box-car amplitude > 0 ?  =  b1 > 0 ?  (b1: the estimate of β1)
=  1×b1 + 0×b2 + 0×b3 + 0×b4 + 0×b5 + ... > 0 ?  =  test H0: c'b > 0 ?
T = contrast of estimated parameters / variance estimate = c'b / sqrt(s² c'(X'X)+ c)  →  SPM{t}
How is this computed? (t-test)
Estimation [Y, X] → [b, s]:
Y = X b + e,   e ~ N(0, s² I)   (Y: the data at one voxel position)
b = (X'X)+ X'Y   (b: estimate of β)   → beta??? images
e = Y - X b   (e: estimate of the error)
s² = e'e / (n - p)   (s: estimate of σ; n: number of time points, p: number of parameters)   → 1 image, ResMS
Test [b, s², c] → [c'b, t]:
Var(c'b) = s² c'(X'X)+ c   (the variance estimate, computed for each contrast c)
t = c'b / sqrt(s² c'(X'X)+ c)   (c'b → spm_con??? images; the t images → spm_t??? images)
Under the null hypothesis H0: t ~ Student(df), df = n - p
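This recipe maps almost line for line onto numpy; a sketch for a single voxel (scipy's Student distribution supplies the uncorrected p-value; in SPM these quantities are written out per voxel as the beta, ResMS, con and t images instead):

```python
import numpy as np
from scipy import stats

def t_contrast(X, Y, c):
    """t-test of the contrast c'b for the GLM Y = X b + e, at one voxel."""
    n, p = X.shape                                   # time points, parameters
    b = np.linalg.pinv(X) @ Y                        # b = (X'X)+ X'Y
    e = Y - X @ b                                    # residuals
    s2 = (e @ e) / (n - p)                           # s2 = e'e / (n - p)
    var_cb = s2 * (c @ np.linalg.pinv(X.T @ X) @ c)  # Var(c'b) = s2 c'(X'X)+ c
    t = (c @ b) / np.sqrt(var_cb)                    # t = c'b / sqrt(Var(c'b))
    return t, stats.t.sf(t, df=n - p)                # one-sided uncorrected p
```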
F test (SPM{F}): a reduced model or ...
Tests multiple linear hypotheses: does X1 model anything?
H0: the true model is X0. This model (X0 alone, residual variance S0²)? Or this one (X0 plus X1, residual variance S²)?
S0² > S², and F ~ (S0² - S²) / S² = additional variance accounted for by the tested effects / error variance estimate
F test (SPM{F}): a reduced model or ... multi-dimensional contrasts?
Tests multiple linear hypotheses. Ex: does the HPF model anything?
H0: the true model is X0, i.e. H0: b3-9 = (0 0 0 0 ...). Test H0: c'b = 0 with
c' = [ 0 0 1 0 0 0 0 0
       0 0 0 1 0 0 0 0
       0 0 0 0 1 0 0 0
       0 0 0 0 0 1 0 0
       0 0 0 0 0 0 1 0
       0 0 0 0 0 0 0 1 ]
(compare the one-dimensional t contrast c' = [+1 0 0 0 0 0 0 0])
This model (X0 alone)? Or this one (X0 with X1 = b3-9)? → SPM{F}
How is this computed? (F-test)
Full model estimation [Y, X] → [b, s]:   Y = X b + e,   e ~ N(0, s² I)
Reduced model estimation [Y, X0] → [b0, s0]:   Y = X0 b0 + e0,   e0 ~ N(0, s0² I)   (X0: part of X; in practice it is not really computed like that)
b0 = (X0'X0)+ X0'Y
e0 = Y - X0 b0   (e0: estimate of the reduced-model error)
s0² = e0'e0 / (n - p0)   (s0: estimate of σ0; n: time points, p0: parameters of X0)
Test [b, s, c] → [ess, F]:
F = [ (e0'e0 - e'e) / (p - p0) ] / s²   =   additional variance accounted for by the tested effects / error variance estimate
→ image of (e0'e0 - e'e)/(p - p0): spm_ess???;   image of F: spm_F???
Under the null hypothesis: F ~ F(p - p0, n - p)
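And the F recipe as a sketch, again for one voxel, using matrix ranks for p and p0. The slide's own caveat "(not really like that)" applies: SPM does not literally refit the reduced model, but the statistic is the same.

```python
import numpy as np
from scipy import stats

def f_reduced(X, X0, Y):
    """F-test of the full model X against the reduced (null) model X0."""
    def rss(M):                                   # residual sum of squares
        b = np.linalg.pinv(M) @ Y
        e = Y - M @ b
        return e @ e

    n = X.shape[0]
    p, p0 = np.linalg.matrix_rank(X), np.linalg.matrix_rank(X0)
    ee, e0e0 = rss(X), rss(X0)
    # additional variance accounted for by the tested effects / error variance
    F = ((e0e0 - ee) / (p - p0)) / (ee / (n - p))
    return F, stats.f.sf(F, p - p0, n - p)        # F ~ F(p - p0, n - p) under H0
```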
Plan
• Make sure we all know about the estimation (fitting) part
• Make sure we understand t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example
A bad model ...
[Figure] True signal and observed signal (---). Model (green, peak at 6 s) and TRUE signal (blue, peak at 3 s). Fit (b1 = 0.2, mean = 0.11). Noise (still contains some signal).
=> Test of the green regressor: not significant.
A bad model ...
Y = X b + e, with b1 = 0.22, b2 = 0.11.
Residual variance = 0.3. P(b1 > 0) = 0.1 (t-test); P(b1 = 0) = 0.2 (F-test).
A « better » model ...
[Figure] True signal and observed signal. Model (green and red) and true signal (blue ---); the red regressor is the temporal derivative of the green one. Global fit (blue) and partial fit (green & red); adjusted and fitted signal. Noise (a smaller variance).
=> Test of the green regressor: significant. => F test: very significant. => Test of the red regressor: very significant.
A better model ...
Y = X b + e, with b1 = 0.22, b2 = 2.15, b3 = 0.11.
Residual variance = 0.2. P(b1 > 0) = 0.07 (t test); P([b1 b2] ≠ [0 0]) = 0.000001 (F test).
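Why the derivative rescues the fit: to first order, a response shifted by d seconds is f(t - d) ≈ f(t) - d·f'(t), so including the temporal derivative lets the model absorb the timing error. A toy sketch (the Gaussian response shape, onsets and 3 s shift are invented, echoing the 6 s vs 3 s peaks above):

```python
import numpy as np

t = np.arange(0.0, 120.0, 1.0)                  # seconds, 1 s sampling
bump = lambda t0: np.exp(-0.5 * ((t - t0) / 4.0) ** 2)   # toy response shape

true = bump(33.0) + bump(83.0)                  # true signal peaks 3 s "late"
f = bump(30.0) + bump(80.0)                     # model regressor (wrong timing)
df = np.gradient(f, t)                          # its temporal derivative

for X in (np.column_stack([f, np.ones_like(t)]),         # the bad model
          np.column_stack([f, df, np.ones_like(t)])):    # the "better" model
    b, *_ = np.linalg.lstsq(X, true, rcond=None)
    e = true - X @ b
    print(X.shape[1], "columns -> residual SS:", round(float(e @ e), 3))
# the residual sum of squares drops sharply once the derivative is included
```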
Summary ... (2)
• The residuals should be looked at ... (non-random structure?)
• We should rather test flexible models if there is little a priori information, and precise ones with a lot of a priori information
• In general, use F-tests to look for an overall effect, then look at the betas or the adjusted signal to characterise the origin of the signal
• Interpreting the test on a single parameter (one function) can be very confusing: cf. the delay or magnitude situation
Plan
• Make sure we all know about the estimation (fitting) part
• Make sure we understand t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example
Correlation between regressors
[Figure] True signal. Model (green and red, correlated regressors). Fit (blue: global fit). Noise.
Correlation between regressors
Y = X b + e, with b1 = 0.79, b2 = 0.85, b3 = 0.06.
Residual var. = 0.3. P(b1 > 0) = 0.08 (t test); P(b2 > 0) = 0.07 (t test); P([b1 b2] ≠ [0 0]) = 0.002 (F test).
Correlation between regressors - 2
[Figure] True signal. Model (green and red): the red regressor has been orthogonalised with respect to the green one, i.e. everything that correlates with the green regressor has been removed from it. Fit. Noise.
Correlation between regressors - 2
Y = X b + e, with b1 = 1.47, b2 = 0.85, b3 = 0.06 (previously 0.79, 0.85, 0.06).
Residual var. = 0.3. P(b1 > 0) = 0.0003 (t test); P(b2 > 0) = 0.07 (t test); P([b1 b2] ≠ [0 0]) = 0.002 (F test).
See « explore design »
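Concretely, « orthogonalised with respect to the green one » means subtracting the projection onto the green regressor. A sketch with simulated regressors (the correlation strength and true betas are invented) that reproduces the pattern above: the beta of the orthogonalised regressor is unchanged, while the untouched one absorbs the shared part.

```python
import numpy as np

def orthogonalize(x, wrt):
    """Remove from x everything that correlates with wrt (projection residual)."""
    return x - wrt * (wrt @ x) / (wrt @ wrt)

rng = np.random.default_rng(1)
g = rng.normal(size=100)                      # the "green" regressor
r = 0.7 * g + rng.normal(size=100)            # the "red" one, correlated with it
y = 1.0 * g + 0.8 * r + rng.normal(size=100)  # simulated data

for X in (np.column_stack([g, r]),                      # original model
          np.column_stack([g, orthogonalize(r, g)])):   # red orthogonalised
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(b, 2))
# the beta (and test) of the orthogonalised red regressor does not change;
# the beta of the untouched green one absorbs the shared variance
```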
Design orthogonality: « explore design »
[Figure: grey-level matrix of Corr(i, j) between design columns] Black = completely correlated; white = completely orthogonal.
Beware: when there are more than two regressors (C1, C2, C3, ...), you may think there is little correlation between them (light grey), but C1 + C2 + C3 may still be correlated with C4 + C5.
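What « explore design » draws is just the matrix of absolute cosines between design-matrix columns; a sketch, including the caveat above that pairwise values can all look harmless while a combination of columns predicts another (the simulated columns are invented):

```python
import numpy as np

def design_orthogonality(X):
    """|cosine| of the angle between every pair of columns of X:
    1 = completely correlated (black), 0 = orthogonal (white)."""
    Xn = X / np.linalg.norm(X, axis=0)
    return np.abs(Xn.T @ Xn)

# pairwise entries stay modest even though column 3 is partly
# the sum of columns 0, 1 and 2
rng = np.random.default_rng(2)
C = rng.normal(size=(200, 4))
C[:, 3] = C[:, 0] + C[:, 1] + C[:, 2] + rng.normal(0.0, 3.0, 200)
print(np.round(design_orthogonality(C), 2))
```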
Implicit or explicit decorrelation (or orthogonalisation)
[Figure: geometry in the space of X - Y, the fit Xb, the residual e, and the regressors C1, C2 with their orthogonalised versions] LC2: test of C2 in the implicit model; LC1^: test of C1 in the explicit (orthogonalised) model.
This GENERALISES when testing several regressors (F tests). See Andrade et al., NeuroImage, 1999.
“Completely” correlated regressors ...
Y = X b + e, with columns [C1 C2 Mean] and Mean = C1 + C2:
X = [ 1 0 1
      0 1 1
      1 0 1
      0 1 1 ]
Parameters are not unique in general! Some contrasts have no meaning: NON ESTIMABLE.
Example here: c' = [1 0 0] is not estimable (= no specific information in the first regressor alone); c' = [1 -1 0] is estimable.
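A standard estimability check (an assumed illustration, not SPM's exact code): c'b is estimable iff c lies in the row space of X, i.e. c'(X⁺X) = c'. A sketch with the rank-deficient design above, columns ordered C1, C2, Mean:

```python
import numpy as np

# columns: cond 1, cond 2, mean (mean = C1 + C2: rank deficient)
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

def estimable(c, X):
    """c'b is estimable iff c is in the row space of X: c' X^+ X = c'."""
    return np.allclose(c @ np.linalg.pinv(X) @ X, c)

print(estimable(np.array([1.0, 0.0, 0.0]), X))    # False: no unique b1
print(estimable(np.array([1.0, -1.0, 0.0]), X))   # True: C1 - C2 has a meaning
```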
Summary ... (3)
• With t tests we are implicitly testing the additional effect only, so we may miss the signal if there is some correlation in the model
• Orthogonalisation is generally not needed: the parameters and the test on the regressor that was orthogonalised do not change
• It is always simpler (when possible!) to have orthogonal (uncorrelated) regressors
• In case of correlation, use F-tests to assess the overall significance; there is generally no way to decide to which regressor the « common » part shared by two regressors should be attributed
• If, in case of correlation, you need to orthogonalise part of the design matrix, there is no need to re-fit a new model: only the contrast should change
Plan
• Make sure we all know about the estimation (fitting) part
• Make sure we understand t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example
A real example (almost!)
[Figure: experimental design and design matrix] Factorial design with 2 factors: modality and category. 2 levels for modality (e.g. Visual/Auditory), 3 levels for category (e.g. 3 categories of words).
Asking ourselves some questions ...
[Design matrix: columns V, A, C1, C2, C3 and a constant]
Two ways to test: 1- write a contrast c and test c'b = 0; 2- select columns of X for the model under the null hypothesis (the contrast manager allows that).
Test V > A: c = [1 -1 0 0 0 0]. Test C1 > C2: c = [0 0 1 -1 0 0].
Test the modality factor: c = ? Use 2-. Test the category factor: c = ? Use 2-. Test the interaction M×C?
• Design matrix not orthogonal • Many contrasts are non-estimable • Interactions M×C are not modelled
Asking ourselves some questions ...
[Design matrix: columns C1V, C1A, C2V, C2A, C3V, C3A and a constant]
Test C1 > C2: c = [1 1 -1 -1 0 0 0]. Test V > A: c = [1 -1 1 -1 1 -1 0].
Test the differences between categories:
c = [ 1 1 -1 -1 0 0 0
      0 0 1 1 -1 -1 0 ]
Test everything in the category factor, leaving out modality:
c = [ 1 1 0 0 0 0 0
      0 0 1 1 0 0 0
      0 0 0 0 1 1 0 ]
Test the interaction M×C:
c = [ 1 -1 -1 1 0 0 0
      0 0 1 -1 -1 1 0
      1 -1 0 0 -1 1 0 ]
• Design matrix orthogonal • All contrasts are estimable • Interactions M×C modelled • But if there is no interaction ... ? The model is too “big”
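These multi-row contrasts can be built systematically: with the columns ordered C1V, C1A, C2V, C2A, C3V, C3A, constant as above, the interaction contrast is the Kronecker product of the between-category differences with the V - A difference (the slide's third interaction row is the sum of the first two, so two independent rows suffice). A sketch:

```python
import numpy as np

cat = np.array([[1, -1, 0],        # C1 - C2
                [0, 1, -1]])       # C2 - C3
mod = np.array([[1, -1]])          # V - A

interaction = np.kron(cat, mod)    # modality-by-category interaction rows
c = np.hstack([interaction, np.zeros((2, 1), dtype=int)])  # pad constant column
print(c)
# [[ 1 -1 -1  1  0  0  0]
#  [ 0  0  1 -1 -1  1  0]]
```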
Asking ourselves some questions ... with a more flexible model
[Design matrix: 13 columns, two regressors per condition C1V, C1A, C2V, C2A, C3V, C3A (the second of each pair coding for the delay), plus a constant]
Test C1 > C2? It becomes “test C1 different from C2?”:
from c = [ 1 1 -1 -1 0 0 0 ]
to   c = [ 1 0 1 0 -1 0 -1 0 0 0 0 0 0
           0 1 0 1 0 -1 0 -1 0 0 0 0 0 ]
and the test becomes an F test!
Test V > A? c = [1 0 -1 0 1 0 -1 0 1 0 -1 0 0] is possible, but is OK only if the regressors coding for the delay are all equal.
We don’t use that one.