
Improving the Efficiency of Statistical Map Creation and Assessment

This presentation explores automated methods for measuring differences in brain structure in clinical studies, using deformation morphometry to create high-resolution maps of tissue volume change. Statistical modeling, corrections for multiple comparisons, and efficient computation are also discussed.


Presentation Transcript


  1. Improving the Efficiency of Statistical Map Creation and Assessment. Valerie A. Cardenas, Center for Imaging of Neurodegenerative Disease, San Francisco Veterans Affairs Medical Center, University of California, San Francisco

  2. Challenge • Clinical studies aim to describe effect of disease/treatment on brain structure • Where to look for effects? • Anatomic variability • Manual methods: time consuming, rater error • Goal: automatically measure differences, look everywhere, account for anatomic variability

  3. Deformation Morphometry • Automated • Suited for discerning patterns of structural change • Explore location and extent of variation • Use nonlinear registration or “warping” of images • Within-subject: capture changes in the brain over time • Between-subject: measure deviation from an atlas brain • Create high-resolution maps of local tissue volume or tissue volume change • Model variability using many clinical variables

  4. Creating Deformation Maps [figure: transformation T(x1,y1,z1) maps volume V1 onto volume V2] Step 1: Nonlinear registration. Step 2: Compute the determinant of the Jacobian matrix of the transformation at each voxel, giving the pointwise volume change. The resulting maps contain 1-2 million voxels.
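
The Jacobian determinant step can be sketched in a few lines of NumPy. A minimal sketch, assuming the transformation is stored as an array phi of shape (3, X, Y, Z) holding the mapped coordinate of every voxel; the function name and the spacing parameter are illustrative, not from the talk.

    import numpy as np

    def jacobian_determinant(phi, spacing=(1.0, 1.0, 1.0)):
        # phi[i] holds the i-th coordinate of T(x, y, z) at each voxel.
        # J[i][j] approximates d(phi_i)/d(x_j) by finite differences.
        J = [[np.gradient(phi[i], spacing[j], axis=j) for j in range(3)]
             for i in range(3)]
        # Expand the 3x3 determinant voxelwise: the local volume change.
        return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
              - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
              + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

A determinant above 1 marks local expansion relative to the reference, below 1 local contraction.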

  5. Statistical Model [figure: schematic of the voxelwise regression. Map 1: diagnosis 0, age 65, score 16; Map 2: diagnosis 1, age 68, score 8; ...; Map n: diagnosis 1, age 73, score 4] At each voxel, the map values y across subjects are regressed on the clinical variables (diagnosis, age, score), yielding coefficient maps and statistic maps for each variable.
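
As a concrete sketch of the voxelwise model, assuming the n subject maps are flattened and stacked into an n × V array Y and the clinical variables form an n × p design matrix A (all names and the toy sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, V = 80, 100_000                     # subjects, voxels (toy sizes)
    A = np.column_stack([np.ones(n),                # intercept
                         rng.integers(0, 2, n),     # diagnosis
                         rng.uniform(55, 85, n),    # age
                         rng.uniform(0, 30, n)])    # score
    Y = rng.standard_normal((n, V))        # stacked volume-change maps

    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)    # p x V coefficient maps
    resid = Y - A @ coef
    dof = n - A.shape[1]
    sigma2 = (resid ** 2).sum(axis=0) / dof
    se = np.sqrt(np.outer(np.diag(np.linalg.inv(A.T @ A)), sigma2))
    t_maps = coef / se                     # p x V statistic maps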

  6. The Multiplicity Problem • Map formed of ~1-2 million statistics • Measurements of volume change, and hence their statistics, are not independent, due to • initial image resolution • spatial transformation • smoothing • Bonferroni procedures are too stringent

  7. Corrections for Multiple Comparisons • Cluster analysis • Developed for PET and fMRI analyses • Stationarity/smoothness assumptions violated in deformation morphometry • Nonstationary methods valid for some problems • Permutation testing • Build a null distribution • Create statistic map using permuted labels 1000-10000 times • Need efficient computation here!! • Compare statistic to distribution to assess significance
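A permutation loop for one design column might look like the following sketch; max_t_null is a hypothetical helper, and a real analysis would permute under the study's full exchangeability scheme:

    import numpy as np

    def max_t_null(A, Y, col, n_perm=1000, rng=None):
        # Build a null distribution of the maximum |t| over the map by
        # refitting the model with the labels in column `col` permuted.
        if rng is None:
            rng = np.random.default_rng()
        n, p = A.shape
        null = np.empty(n_perm)
        for k in range(n_perm):
            Ap = A.copy()
            Ap[:, col] = rng.permutation(Ap[:, col])
            coef, *_ = np.linalg.lstsq(Ap, Y, rcond=None)
            resid = Y - Ap @ coef
            sigma2 = (resid ** 2).sum(axis=0) / (n - p)
            se = np.sqrt(np.linalg.inv(Ap.T @ Ap)[col, col] * sigma2)
            null[k] = np.abs(coef[col] / se).max()
        return null   # compare the observed max |t| against this

Comparing the observed map's maximum statistic against this distribution controls the family-wise error rate across all voxels, which is what makes the repeated refitting worth its cost.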

  8. y: n1 observations, subjects A: np independent variables Solution valid if ATA full-rank x: p1 regression coefficients e: n1 residuals Ordinary Least Squares

  9. Computation • Compute (AᵀA)⁻¹Aᵀ and solve for the estimates x at each voxel • More efficient to use a matrix decomposition • Cholesky decomposition: AᵀA = LLᵀ • Solve Lb(vi) = Aᵀy(vi) for b by forward substitution • Then solve Lᵀx(vi) = b(vi) for x by back substitution • L is lower triangular, so both solves are easy • L is computed from left to right and top to bottom!
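
In NumPy/SciPy terms the decomposition route is short; a minimal sketch, assuming a fixed design A shared by all voxels (function and variable names are illustrative):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def ols_cholesky(A, Y):
        # A: n x p design; Y: n x V stacked maps. Returns p x V estimates.
        G = A.T @ A                  # p x p Gram matrix, formed once
        c, low = cho_factor(G)       # G = L L^T
        return cho_solve((c, low), A.T @ Y)   # forward then back substitution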

  10. Cholesky Decomposition: Advantage with A(vi) When the design matrix A(vi) varies from voxel to voxel in only its last column (e.g., a voxelwise covariate), only the last row of AᵀA changes. To calculate Lpj, we need only the last row of AᵀA and the previously computed Lij, so most of L can be computed once, with only its last row updated at each voxel.
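
The last-row update follows directly from the standard Cholesky recurrence. A sketch under that assumption (one voxel-dependent design column); L_fixed is the factor of the fixed (p-1) × (p-1) block, computed once, and g_last is the current voxel's last row of AᵀA:

    import numpy as np

    def cholesky_last_row(L_fixed, g_last):
        p = len(g_last)
        L = np.zeros((p, p))
        L[:p-1, :p-1] = L_fixed          # reused at every voxel
        for j in range(p - 1):
            # L[p-1,j] = (G[p-1,j] - sum_k L[p-1,k] * L[j,k]) / L[j,j]
            L[p-1, j] = (g_last[j] - L[p-1, :j] @ L[j, :j]) / L[j, j]
        L[p-1, p-1] = np.sqrt(g_last[-1] - L[p-1, :p-1] @ L[p-1, :p-1])
        return L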

  11. Limited RAM: Slice at a Time [flowchart: two passes over the data, each a nested loop over slices and subjects that reads one slice of one subject's image at a time; the first pass computes the estimates x, the second computes the SSE and the t and F statistics] Assume 80 subjects and 100 slices: the disk is accessed 16,000 times!

  12. 2+ GB of RAM: Image in Memory [flowchart: a single loop over subjects reads each whole image into memory, then computes x and the SSE, t, and F statistics] Assume 80 subjects and 100 slices: the disk is accessed only 80 times! 1000 permutations drop from many days to about 10 minutes. Within 2 GB one can run 100 subjects with 138×148×115 volumes stored as shorts.
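
A sketch of the image-in-memory layout; the raw-file loader here is a hypothetical placeholder, the point being that each volume is read from disk exactly once:

    import numpy as np

    def load_all(subject_files, shape=(138, 148, 115)):
        n = len(subject_files)
        Y = np.empty((n, int(np.prod(shape))), dtype=np.float32)
        for i, fname in enumerate(subject_files):     # one disk access each
            vol = np.fromfile(fname, dtype=np.int16)  # raw 16-bit volume
            Y[i] = vol.astype(np.float32)
        return Y   # n x V; later passes (e.g., permutations) touch only RAM

Every subsequent permutation then refits from RAM rather than disk, which is where the days-to-minutes speedup comes from.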

  13. Voxel Estimates and Statistics in Parallel • Dual- and quad-core processors are common • Voxel estimates and statistics are independent across voxels, so they can also be run in parallel [flowchart: CPU 1 computes the SSE and the t and F statistics for voxels 0 through num_vox/2 while CPU 2 handles voxels num_vox/2 through num_vox; the results are combined into the statistic maps]
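
One way to realize this on a multi-core machine is to split the voxel range across worker processes; a sketch using Python's standard library (the chunking scheme is illustrative, and on some platforms the calls must sit under an if __name__ == "__main__" guard):

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def fit_chunk(args):
        A, Y_chunk = args                 # one contiguous block of voxels
        coef, *_ = np.linalg.lstsq(A, Y_chunk, rcond=None)
        sse = ((Y_chunk - A @ coef) ** 2).sum(axis=0)
        return coef, sse

    def fit_parallel(A, Y, n_workers=2):
        chunks = np.array_split(Y, n_workers, axis=1)   # split the voxels
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            parts = list(pool.map(fit_chunk, [(A, c) for c in chunks]))
        coef = np.hstack([p[0] for p in parts])         # reassemble maps
        sse = np.concatenate([p[1] for p in parts])
        return coef, sse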

  14. Permutation Testing in Parallel • Permutations are independent • Possible to run in parallel [flowchart: CPU 1 and CPU 2 each permute the labels, compute the permuted SSE and t and F statistics, and add them to the null distribution, 500 permutations apiece; p-values are then computed from the combined distribution]
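
The same pattern applies to permutations, with each worker building part of the null distribution; this sketch reuses the hypothetical max_t_null helper from the earlier slide and gives each worker its own seed:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def null_parallel(A, Y, col, n_perm=1000, n_workers=2):
        per_worker = n_perm // n_workers    # e.g., 500 apiece on two CPUs
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            futures = [pool.submit(max_t_null, A, Y, col, per_worker,
                                   np.random.default_rng(seed))
                       for seed in range(n_workers)]
            return np.concatenate([f.result() for f in futures])

p-values are then read off the pooled distribution exactly as in the serial case.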

  15. Summary • Need fast computation for morphometry • Several easy improvements • Matrix decomposition • Images in memory • Estimates and statistics in parallel • Permutations in parallel • Any other suggestions?

  16. Thanks to: • Colin Studholme • Mike Weiner • Clinical collaborators at CIND and UCSF
