This study explores automated methods for measuring differences in brain structure in clinical studies, using deformation morphometry to create high-resolution maps of tissue volume changes. Statistical models and corrections for multiple comparisons are also discussed.
Improving the Efficiency of Statistical Map Creation and Assessment
Valerie A. Cardenas
Center for Imaging of Neurodegenerative Disease
San Francisco Veterans Affairs Medical Center
University of California, San Francisco
Challenge • Clinical studies aim to describe the effect of disease/treatment on brain structure • Where to look for effects? • Anatomic variability • Manual methods: time-consuming, prone to rater error • Goal: automatically measure differences, look everywhere, account for anatomic variability
Deformation Morphometry • Automated • Suited for discerning patterns of structural change • Explore location and extent of variation • Use nonlinear registration or “warping” of images • Within: capture changes in brain over time • Between: measure deviation from atlas brain • Create high resolution maps of local tissue volume or tissue volume change • Model variability using many clinical variables
Creating Deformation Maps • Step 1: Nonlinear registration T(x, y, z) between image pairs • Step 2: Determinant of the Jacobian matrix of the transformation at each voxel, giving the pointwise volume change • Result: maps with 1-2 million voxels
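As a minimal sketch (not the authors' implementation), the Jacobian-determinant map can be computed from a deformation field with NumPy; the `(3, X, Y, Z)` field layout and the function name are assumptions:

```python
import numpy as np

def jacobian_determinant_map(deformation):
    """Pointwise volume change from a deformation field.

    deformation: array of shape (3, X, Y, Z) holding the mapped
    coordinates T(x, y, z) at each voxel (a hypothetical layout).
    Returns an (X, Y, Z) map of Jacobian determinants: values > 1
    indicate local expansion, values < 1 local contraction.
    """
    # Partial derivatives of each component of T along each spatial axis
    J = np.empty(deformation.shape[1:] + (3, 3))
    for i in range(3):                        # component of T
        grads = np.gradient(deformation[i], axis=(0, 1, 2))
        for j in range(3):                    # spatial direction
            J[..., i, j] = grads[j]
    return np.linalg.det(J)                   # one determinant per voxel

# Sanity check: an identity transform has determinant 1 everywhere
X, Y, Z = 8, 8, 8
grid = np.stack(np.meshgrid(np.arange(X, dtype=float),
                            np.arange(Y, dtype=float),
                            np.arange(Z, dtype=float), indexing="ij"))
detmap = jacobian_determinant_map(grid)
```

A real pipeline would compute the gradients of the warp produced by the registration step rather than a synthetic grid.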
Statistical Model • Each subject contributes a map plus clinical variables: Map 1 (diagnosis 0, age 65, score 16), Map 2 (diagnosis 1, age 68, score 8), …, Map n (diagnosis 1, age 73, score 4) • Voxelwise regression yields coefficient maps and statistic maps for each variable
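The voxelwise regression above can be sketched in NumPy; the toy sizes, random data, and covariate names (intercept, diagnosis, age, score) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 12, 1000                 # toy sizes (real maps: 1-2 million voxels)

# Design matrix A: intercept, diagnosis, age, score (hypothetical covariates)
A = np.column_stack([np.ones(n_subj),
                     rng.integers(0, 2, n_subj),
                     rng.normal(70, 5, n_subj),
                     rng.normal(10, 4, n_subj)])
# One Jacobian-determinant value per subject per voxel
Y = rng.normal(size=(n_subj, n_vox))

# Fit all voxels at once: one row of coefficients per covariate
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # shape (4, n_vox)

# t-statistic map per covariate
resid = Y - A @ coef
dof = n_subj - A.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
se = np.sqrt(np.outer(np.diag(np.linalg.inv(A.T @ A)), sigma2))
t_maps = coef / se                               # one t-map per covariate
```

Each row of `coef` is a coefficient map and each row of `t_maps` a statistic map, matching the slide's per-variable outputs.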
The Multiplicity Problem • Map formed of ~1-2 million statistics • Measurements of volume change and statistics are not independent, due to • initial image resolution • spatial transformation • smoothing • Bonferroni procedures too stringent
Corrections for Multiple Comparisons • Cluster analysis • Developed for PET and fMRI analyses • Stationarity/smoothness assumptions violated in deformation morphometry • Nonstationary methods valid for some problems • Permutation testing • Build a null distribution • Create statistic map using permuted labels 1000-10000 times • Need efficient computation here!! • Compare statistic to distribution to assess significance
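A common way to realize the permutation step is the maximum-statistic method, which controls family-wise error without smoothness assumptions. This sketch uses a simple two-group t-test and made-up data; the group sizes and effect are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 20, 500
group = np.array([0] * 10 + [1] * 10)    # hypothetical two-group labels
Y = rng.normal(size=(n_subj, n_vox))
Y[group == 1, :50] += 1.0                # plant an effect in 50 voxels

def t_map(labels, data):
    """Two-sample t-statistic at every voxel."""
    a, b = data[labels == 0], data[labels == 1]
    return (b.mean(0) - a.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a)
                                             + b.var(0, ddof=1) / len(b))

t_obs = t_map(group, Y)

# Null distribution of the map-wise maximum |t| under permuted labels
null_max = np.array([np.abs(t_map(rng.permutation(group), Y)).max()
                     for _ in range(1000)])

# Corrected p-value: fraction of permutations whose maximum exceeds each voxel
p_corr = (null_max[None, :] >= np.abs(t_obs)[:, None]).mean(axis=1)
```

Because every permutation rebuilds a full statistic map, the cost is roughly 1000-10000 times one analysis, which is why the later slides focus on fast computation.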
Ordinary Least Squares • Model: y = Ax + e • y: n×1 observations (one per subject) • A: n×p independent variables • x: p×1 regression coefficients • e: n×1 residuals • Solution x = (A^T A)^-1 A^T y, valid if A^T A is full-rank
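The normal-equations solution can be checked directly; the sizes and noise level below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 4
A = rng.normal(size=(n, p))              # n x p design, full column rank
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true + 0.01 * rng.normal(size=n)

# Normal equations: (A^T A) x = A^T y, valid when A^T A is full-rank
x_hat = np.linalg.solve(A.T @ A, A.T @ y)
e = y - A @ x_hat                        # n x 1 residuals
```

A defining property of OLS is that the residuals are orthogonal to the columns of A (A^T e = 0), which makes a convenient correctness check.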
Computation • Could compute (A^T A)^-1 A^T and solve for estimates x at each voxel • More efficient to use a matrix decomposition • Cholesky decomposition: A^T A = LL^T • Solve L b(vi) = A^T y(vi), then L^T x(vi) = b(vi) • L is lower triangular, so each solve is a simple substitution • L is computed from left to right and top to bottom!
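The two triangular solves can be made explicit; the helper names below are illustrative, and in practice a LAPACK triangular solver would replace the Python loops:

```python
import numpy as np

def forward_sub(L, b):
    """Solve L z = b for lower-triangular L."""
    z = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def back_sub(U, b):
    """Solve U x = b for upper-triangular U."""
    x = np.zeros_like(b, dtype=float)
    for i in reversed(range(len(b))):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 4))
y = rng.normal(size=30)

# Factor A^T A = L L^T, then two cheap triangular solves per voxel
L = np.linalg.cholesky(A.T @ A)
b = forward_sub(L, A.T @ y)              # solve L b = A^T y
x = back_sub(L.T, b)                     # solve L^T x = b
```

Because the triangular solves cost O(p^2) versus O(p^3) for a fresh inversion, the savings compound over millions of voxels and thousands of permutations.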
Cholesky Decomposition: Advantage with A(vi) • To calculate L_pj, only the last row of A^T A and the previously computed L_ij are needed • Most of L can be computed once; only the last row must be updated at each voxel
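A sketch of the last-row update, assuming the design matrix shares its first columns across voxels and only the final (voxel-dependent) column changes; the function names are hypothetical:

```python
import numpy as np

def update_last_row(L, m_last):
    """Recompute only the last row of the Cholesky factor when only the
    last row/column of M = A^T A changes (the voxel-dependent covariate).
    The upper-left block of L is untouched, since Cholesky is computed
    left to right and top to bottom."""
    p = L.shape[0] - 1
    for j in range(p):
        L[p, j] = (m_last[j] - L[p, :j] @ L[j, :j]) / L[j, j]
    L[p, p] = np.sqrt(m_last[p] - L[p, :p] @ L[p, :p])
    return L

rng = np.random.default_rng(6)
A_fixed = rng.normal(size=(30, 3))       # covariates shared by all voxels

def M_of(c):
    B = np.column_stack([A_fixed, c])    # last column varies per voxel
    return B.T @ B

c1, c2 = rng.normal(size=30), rng.normal(size=30)
L = np.linalg.cholesky(M_of(c1))         # full factorization once
L = update_last_row(L, M_of(c2)[-1])     # cheap per-voxel update
```

The updated factor should match a from-scratch factorization of the new A^T A, at a fraction of the cost per voxel.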
Limited RAM: Slice at a Time • First pass: loop over slices; for each slice, read every subject's image and compute estimates x • Second pass: loop over slices and subjects again to compute SSE, t and F statistics • Assume 80 subjects, 100 slices: the disk is accessed 16,000 times!
2+ Gb of RAM: Image in Memory • Read each subject's whole image once, then compute x, SSE, t and F statistics in memory • Assume 80 subjects, 100 slices: the disk is accessed only 80 times! • 1000 permutations: many days -> 10 minutes! • Within 2 Gb, can run 100 subjects at 138x148x115 shorts
Voxel Estimates and Statistics in Parallel • Dual- and quad-core processors common • Voxel estimates and statistics are independent across voxels, so they can also run in parallel • CPU 1 computes SSE, t and F statistics for voxels 0 to num_vox/2 while CPU 2 handles the remaining voxels; the statistic maps are then assembled
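The voxel split can be sketched with a thread pool (threads suffice here because NumPy releases the GIL during its linear-algebra calls); the chunking scheme and names are assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(4)
n_subj, n_vox = 16, 10_000
A = np.column_stack([np.ones(n_subj), rng.integers(0, 2, n_subj)])
Y = rng.normal(size=(n_subj, n_vox))
pinv = np.linalg.pinv(A)                 # shared solver, computed once

def fit_chunk(cols):
    """Estimates and SSE for one chunk of voxels (one chunk per CPU)."""
    coef = pinv @ Y[:, cols]
    sse = ((Y[:, cols] - A @ coef) ** 2).sum(axis=0)
    return coef, sse

halves = [slice(0, n_vox // 2), slice(n_vox // 2, n_vox)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fit_chunk, halves))

# Reassemble the full coefficient and SSE maps
coef = np.hstack([r[0] for r in results])
sse = np.concatenate([r[1] for r in results])
```

With more cores, the same pattern extends to more, smaller chunks.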
Permutation Testing in Parallel • Permutations are independent, so they can run in parallel • Each CPU permutes labels, computes permuted SSE, t and F statistics, and adds them to the null distribution (e.g., 500 permutations per CPU) • p-values are computed from the combined distribution
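A sketch of splitting permutations across workers, each with an independent random stream; the batch size of 500 per CPU mirrors the slide, while the data and seeds are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(5)
n_subj, n_vox = 20, 200
group = np.array([0] * 10 + [1] * 10)    # hypothetical labels
Y = rng.normal(size=(n_subj, n_vox))

def max_t(labels):
    """Map-wise maximum |t| for one labeling."""
    a, b = Y[labels == 0], Y[labels == 1]
    t = (b.mean(0) - a.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a)
                                          + b.var(0, ddof=1) / len(b))
    return np.abs(t).max()

def run_perms(seed, n_perm=500):
    """One CPU's batch: independent permutations with its own RNG stream."""
    local = np.random.default_rng(seed)
    return [max_t(local.permutation(group)) for _ in range(n_perm)]

with ThreadPoolExecutor(max_workers=2) as pool:
    batches = pool.map(run_perms, [101, 202])   # 500 permutations per CPU

null_dist = np.concatenate([np.array(b) for b in batches])
```

Giving each worker its own seeded generator keeps the batches reproducible and statistically independent, after which the combined `null_dist` is used exactly as in the serial case.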
Summary • Need fast computation for morphometry • Several easy improvements • Matrix decomposition • Images in memory • Estimates and statistics in parallel • Permutations in parallel • Any other suggestions?
Thanks to: • Colin Studholme • Mike Weiner • Clinical collaborators at CIND and UCSF