Anatomical Image Analysis • John Ashburner • Functional Imaging Lab, 12 Queen Square, London, UK.
A Growing Trend • Larger and more complex models are being produced to explain brain imaging data. • Bigger and better computers • allow more powerful models to be used • More experience among software developers • Older and wiser • More engineers - rather than e.g. psychiatrists & biochemists • This presentation is about combining various preprocessing procedures for anatomical images into a single generative model.
Traditional View of Pre-processing • Brain image processing is often thought of as a pipeline procedure. • One tool is applied after another, and so on. • For example: [Pipeline figure: Original Image; Skull Strip; Non-uniformity Correct; Extract Brain Surfaces; Classify Brain Tissues]
Bias Correction helps Registration • MR images are corrupted by a smooth intensity non-uniformity (bias). • This intensity non-uniformity artefact has a negative impact on most registration approaches. • Registration works much better if the artefact is corrected first. • [Figure: image with bias artefact vs. corrected image]
Bias Correction helps Segmentation • Because of the bias artefact, similar tissues no longer have similar intensities. • The artefact should be corrected to enable intensity-based tissue classification.
Registration helps Segmentation • SPM99 and SPM2 require the tissue probability maps to be registered with (overlaid on) the image prior to segmentation.
Segmentation helps Bias Correction • Bias correction should not eliminate differences between tissue classes. • This can be achieved by: • making all white matter about the same intensity • making all grey matter about the same intensity • etc. • It is now fairly standard practice to combine bias correction and tissue classification.
Segmentation helps Registration • A convoluted method using SPM2: [Flowchart: Original MRI → Affine register with Template → Affine Transform → Segment using Tissue probability maps → Grey Matter → Spatial Normalisation (estimation) → Deformation → Spatial Normalisation (writing) → Spatially Normalised MRI]
Unified Segmentation • The solution to this circularity is to put everything into the same generative model. • A solution is found by repeatedly alternating among classification, bias correction and registration steps. • The generative model involves: • a Mixture of Gaussians (MOG) • a bias correction component • a warping (non-linear registration) component
Gaussian Probability Density • If intensities are assumed to be Gaussian with mean $\mu_k$ and variance $\sigma_k^2$, then the probability of a value $y_i$ is: $P(y_i \mid c_i=k, \mu_k, \sigma_k^2) = \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\!\left(-\frac{(y_i-\mu_k)^2}{2\sigma_k^2}\right)$
Non-Gaussian Probability Distribution • A non-Gaussian probability density function can be modelled by a Mixture of Gaussians (MOG): $P(y_i \mid \boldsymbol{\mu}, \boldsymbol{\sigma}, \boldsymbol{\gamma}) = \sum_{k=1}^{K} \gamma_k \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\!\left(-\frac{(y_i-\mu_k)^2}{2\sigma_k^2}\right)$ • The mixing proportions $\gamma_k$ are positive and sum to one.
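As an illustration, here is a minimal NumPy sketch of evaluating a Mixture-of-Gaussians density at a set of voxel intensities. The function and variable names (gaussian_pdf, mog_density, y, mu, sigma2, gamma) are illustrative choices for this sketch, not SPM code.

```python
import numpy as np

def gaussian_pdf(y, mu, sigma2):
    """Univariate Gaussian density, evaluated element-wise."""
    return np.exp(-(y - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def mog_density(y, mu, sigma2, gamma):
    """Mixture-of-Gaussians density: sum_k gamma_k * N(y_i; mu_k, sigma2_k).

    y      : (N,) voxel intensities
    mu     : (K,) class means
    sigma2 : (K,) class variances
    gamma  : (K,) mixing proportions (positive, summing to one)
    """
    lik = gaussian_pdf(y[:, None], mu[None, :], sigma2[None, :])  # (N, K)
    return lik @ gamma                                            # (N,)
```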
Belonging Probabilities • Belonging probabilities are assigned by normalising over the classes, so that they sum to one: $P(c_i=k \mid y_i, \boldsymbol{\theta}) = \frac{\gamma_k \mathcal{N}(y_i;\, \mu_k, \sigma_k^2)}{\sum_{j=1}^{K} \gamma_j \mathcal{N}(y_i;\, \mu_j, \sigma_j^2)}$
Mixing Proportions • The mixing proportion $\gamma_k$ represents the prior probability of a voxel being drawn from class k, irrespective of its intensity. • So: $P(c_i=k \mid \boldsymbol{\gamma}) = \gamma_k$
Non-Gaussian Intensity Distributions • Multiple Gaussians per tissue class allow non-Gaussian intensity distributions to be modelled.
Probability of Whole Dataset • If the voxels are assumed to be independent, then the probability of the whole image is the product of the probabilities of the individual voxels: $P(\mathbf{y} \mid \boldsymbol{\theta}) = \prod_i P(y_i \mid \boldsymbol{\theta})$ • It is often easier to work with negative log-probabilities: $-\log P(\mathbf{y} \mid \boldsymbol{\theta}) = -\sum_i \log P(y_i \mid \boldsymbol{\theta})$
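Continuing the sketch above, the negative log-probability of the whole image is a sum over voxels; the small epsilon below is an addition of this sketch, purely to avoid taking log(0):

```python
def neg_log_likelihood(y, mu, sigma2, gamma):
    """-log P(y | theta) for the whole image, assuming independent voxels."""
    p = mog_density(y, mu, sigma2, gamma)   # per-voxel MOG density
    return -np.sum(np.log(p + 1e-12))       # epsilon guards against log(0)
```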
Modelling a Bias Field • A bias field is included, such that the required scaling at voxel i, parameterised by $\boldsymbol{\beta}$, is $\rho_i(\boldsymbol{\beta})$. • Replace the means $\mu_k$ by $\mu_k / \rho_i(\boldsymbol{\beta})$. • Replace the variances $\sigma_k^2$ by $\left(\sigma_k / \rho_i(\boldsymbol{\beta})\right)^2$.
Modelling a Bias Field • After rearranging: $P(y_i \mid c_i=k, \mu_k, \sigma_k^2, \boldsymbol{\beta}) = \frac{\rho_i(\boldsymbol{\beta})}{\sqrt{2\pi\sigma_k^2}} \exp\!\left(-\frac{\left(\rho_i(\boldsymbol{\beta})\, y_i - \mu_k\right)^2}{2\sigma_k^2}\right)$ • [Figure: $y$, $y\,\rho(\boldsymbol{\beta})$ and $\rho(\boldsymbol{\beta})$]
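A small sketch of this bias-modified likelihood for a single class, assuming the bias field has already been evaluated at every voxel as an array rho (how rho is parameterised by β, e.g. with smooth basis functions, is left out of this sketch):

```python
def biased_gaussian_pdf(y, rho, mu_k, sigma2_k):
    """Likelihood of class k with the bias scaling rho_i(beta) folded in:
    rho * N(rho * y; mu_k, sigma2_k)."""
    z = rho * y - mu_k   # bias-corrected intensity minus the class mean
    return rho * np.exp(-z ** 2 / (2.0 * sigma2_k)) / np.sqrt(2.0 * np.pi * sigma2_k)
```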
Tissue Probability Maps • Tissue probability maps (TPMs) are used instead of the proportion of voxels in each Gaussian as the prior. ICBM Tissue Probabilistic Atlases. These tissue probability maps are kindly provided by the International Consortium for Brain Mapping, John C. Mazziotta and Arthur W. Toga.
“Mixing Proportions” • Tissue probability maps for each class are included, with value $b_{ik}$ for class k at voxel i. • The probability of obtaining class k at voxel i, given the weights $\boldsymbol{\gamma}$, is then: $P(c_i=k \mid \boldsymbol{\gamma}) = \frac{\gamma_k b_{ik}}{\sum_{j} \gamma_j b_{ij}}$
Deforming the Tissue Probability Maps • The tissue probability images are deformed according to parameters $\boldsymbol{\alpha}$. • The probability of obtaining class k at voxel i, given the weights $\boldsymbol{\gamma}$ and parameters $\boldsymbol{\alpha}$, is then: $P(c_i=k \mid \boldsymbol{\gamma}, \boldsymbol{\alpha}) = \frac{\gamma_k b_{ik}(\boldsymbol{\alpha})}{\sum_{j} \gamma_j b_{ij}(\boldsymbol{\alpha})}$
The Extended Model • By combining the modified $P(c_i=k \mid \boldsymbol{\theta})$ and $P(y_i \mid c_i=k, \boldsymbol{\theta})$, the overall objective function (E) becomes: $E = -\sum_i \log \sum_{k=1}^{K} \frac{\gamma_k b_{ik}(\boldsymbol{\alpha})}{\sum_j \gamma_j b_{ij}(\boldsymbol{\alpha})} \, \frac{\rho_i(\boldsymbol{\beta})}{\sqrt{2\pi\sigma_k^2}} \exp\!\left(-\frac{\left(\rho_i(\boldsymbol{\beta})\, y_i - \mu_k\right)^2}{2\sigma_k^2}\right)$
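Putting the pieces together, a sketch of evaluating this objective, assuming the warped tissue probability maps have already been resampled into an (N, K) array b and the bias field into an (N,) array rho; regularisation terms are omitted here:

```python
def objective_E(y, rho, b, mu, sigma2, gamma):
    """Sketch of the combined objective E (no regularisation terms).

    y, rho : (N,) voxel intensities and bias-field values
    b      : (N, K) warped tissue-probability-map values b_ik(alpha)
    mu, sigma2, gamma : (K,) Gaussian means, variances and class weights
    """
    prior = (gamma * b) / np.sum(gamma * b, axis=1, keepdims=True)             # (N, K)
    lik = (rho[:, None] / np.sqrt(2.0 * np.pi * sigma2)
           * np.exp(-(rho[:, None] * y[:, None] - mu) ** 2 / (2.0 * sigma2)))  # (N, K)
    return -np.sum(np.log(np.sum(prior * lik, axis=1) + 1e-12))
```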
Optimisation • The “best” parameters are those that minimise this objective function. • Optimisation involves finding them. • Begin with starting estimates, and repeatedly change them so that the objective function decreases each time.
Steepest Descent • [Figure: contour plot of the objective function, with a descent path from the starting estimate (“Start”) to the “Optimum”] • Alternate between optimising different groups of parameters.
Schematic of Optimisation • Repeat until convergence: • Hold $\boldsymbol{\gamma}$, $\boldsymbol{\mu}$, $\boldsymbol{\sigma}^2$ and $\boldsymbol{\alpha}$ constant, and minimise E w.r.t. $\boldsymbol{\beta}$ - Levenberg-Marquardt strategy, using $\partial E/\partial\boldsymbol{\beta}$ and $\partial^2 E/\partial\boldsymbol{\beta}^2$ • Hold $\boldsymbol{\gamma}$, $\boldsymbol{\mu}$, $\boldsymbol{\sigma}^2$ and $\boldsymbol{\beta}$ constant, and minimise E w.r.t. $\boldsymbol{\alpha}$ - Levenberg-Marquardt strategy, using $\partial E/\partial\boldsymbol{\alpha}$ and $\partial^2 E/\partial\boldsymbol{\alpha}^2$ • Hold $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ constant, and minimise E w.r.t. $\boldsymbol{\gamma}$, $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}^2$ - use an Expectation Maximisation (EM) strategy. • end
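The loop below is a schematic of this alternating scheme only; the three update callables are placeholders for the Levenberg-Marquardt and EM steps described on these slides, and params is assumed to be a dict carrying the current parameters together with the current value of E:

```python
def unified_segmentation(y, tpm, params, update_bias, update_warp, update_mog,
                         max_iter=50, tol=1e-4):
    """Skeleton of the alternating optimisation; the update callables stand in
    for the LM (bias, warp) and EM (Gaussian parameter) steps."""
    E_old = float("inf")
    for _ in range(max_iter):
        params = update_bias(y, tpm, params)   # minimise E w.r.t. beta  (LM)
        params = update_warp(y, tpm, params)   # minimise E w.r.t. alpha (LM)
        params = update_mog(y, tpm, params)    # minimise E w.r.t. gamma, mu, sigma2 (EM)
        if E_old - params["E"] < tol:          # stop once E no longer decreases
            break
        E_old = params["E"]
    return params
```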
Levenberg-Marquardt Optimisation • LM optimisation is used for the nonlinear registration ($\boldsymbol{\alpha}$) and bias correction ($\boldsymbol{\beta}$) steps. • It requires the first and second derivatives of the objective function (E). • Parameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are updated by: $\boldsymbol{\theta}^{(n+1)} = \boldsymbol{\theta}^{(n)} - \left(\frac{\partial^2 E}{\partial\boldsymbol{\theta}^2} + \lambda\mathbf{I}\right)^{-1} \frac{\partial E}{\partial\boldsymbol{\theta}}$ • Increasing $\lambda$ improves stability (at the expense of a slower rate of convergence).
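A minimal sketch of a single update of this form, assuming the gradient and Hessian of E with respect to the chosen parameter block have already been computed:

```python
def lm_update(theta, grad, hess, lam):
    """One Levenberg-Marquardt step: theta - (H + lam*I)^(-1) * dE/dtheta.
    A larger lam gives smaller, more stable steps, but slower convergence."""
    damped = hess + lam * np.eye(len(theta))
    return theta - np.linalg.solve(damped, grad)
```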
Expectation Maximisation is used to update $\boldsymbol{\mu}$, $\boldsymbol{\sigma}^2$ and $\boldsymbol{\gamma}$ • For iteration (n), alternate between: • E-step: estimate the belonging probabilities $p_{ik}^{(n)} = \frac{P(y_i \mid c_i=k, \boldsymbol{\theta}^{(n)})\, P(c_i=k \mid \boldsymbol{\theta}^{(n)})}{\sum_j P(y_i \mid c_i=j, \boldsymbol{\theta}^{(n)})\, P(c_i=j \mid \boldsymbol{\theta}^{(n)})}$ • M-step: set $\boldsymbol{\theta}^{(n+1)}$ to values that reduce $-\sum_i \sum_k p_{ik}^{(n)} \log P(y_i, c_i=k \mid \boldsymbol{\theta})$
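A sketch of one such EM iteration for the Gaussian parameters, using the gaussian_pdf helper from the earlier sketch and, for brevity, a stationary prior rather than the warped tissue priors and bias field:

```python
def em_step(y, mu, sigma2, gamma):
    """One EM iteration for the MOG parameters (means, variances, weights)."""
    # E-step: belonging probabilities p_ik, shape (N, K)
    lik = gaussian_pdf(y[:, None], mu[None, :], sigma2[None, :]) * gamma
    p = lik / (np.sum(lik, axis=1, keepdims=True) + 1e-12)

    # M-step: closed-form updates weighted by the belonging probabilities
    nk = np.sum(p, axis=0)                       # effective number of voxels per class
    mu_new = (p.T @ y) / nk
    sigma2_new = np.sum(p * (y[:, None] - mu_new) ** 2, axis=0) / nk
    gamma_new = nk / len(y)
    return mu_new, sigma2_new, gamma_new
```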
Linear Regularisation • Some bias fields and distortions are more probable (a priori) than others. • Encoded using Bayes rule: $P(\boldsymbol{\alpha}, \boldsymbol{\beta} \mid \mathbf{y}) \propto P(\mathbf{y} \mid \boldsymbol{\alpha}, \boldsymbol{\beta})\, P(\boldsymbol{\alpha})\, P(\boldsymbol{\beta})$ • The prior probability distributions can be modelled by multivariate normal distributions: mean vectors $\mathbf{m}_{\alpha}$ and $\mathbf{m}_{\beta}$, covariance matrices $\mathbf{S}_{\alpha}$ and $\mathbf{S}_{\beta}$. • $-\log P(\boldsymbol{\alpha}) = \tfrac{1}{2}(\boldsymbol{\alpha}-\mathbf{m}_{\alpha})^{\mathrm{T}} \mathbf{S}_{\alpha}^{-1} (\boldsymbol{\alpha}-\mathbf{m}_{\alpha}) + \text{const}$ (and similarly for $\boldsymbol{\beta}$).
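As a small sketch of how such a penalty enters the optimisation, the regularised objective simply adds the -log prior term to the data term (shown here for the α block only; β would get an analogous term):

```python
def penalised_objective(E_data, alpha, m_alpha, S_alpha_inv):
    """Data term E plus the -log prior penalty for the warp parameters alpha."""
    d = alpha - m_alpha
    return E_data + 0.5 * d @ S_alpha_inv @ d   # 0.5 * (a-m)^T S^-1 (a-m)
```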
Initial Affine Registration • The procedure begins with a Mutual Information (MI) affine registration of the image with the tissue probability maps. • MI is computed from a 4×256 joint probability histogram, with background voxels excluded. • See D'Agostino, Maes, Vandermeulen & Suetens. “Non-rigid Atlas-to-Image Registration by Minimization of Class-Conditional Image Entropy”. Proc. MICCAI 2004, LNCS 3216, pages 745-753, 2004. • [Figure: joint probability histogram]
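A generic sketch of computing mutual information from such a joint histogram (this is the textbook formula rather than the SPM implementation; the 4×256 shape corresponds to four tissue classes against 256 intensity bins):

```python
def mutual_information(hist):
    """Mutual information from a joint histogram, e.g. of shape (4, 256)."""
    p = hist / np.sum(hist)                  # joint probabilities
    px = np.sum(p, axis=1, keepdims=True)    # marginal over intensity bins
    py = np.sum(p, axis=0, keepdims=True)    # marginal over tissue classes
    nz = p > 0                               # skip empty bins to avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
```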
Background Voxels are Excluded An intensity threshold is found by fitting image intensities to a mixture of two Gaussians. This threshold is used to exclude most of the voxels containing only air.
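One plausible way of finding such a threshold is sketched below: fit a two-component mixture with the em_step helper from the earlier sketch, then pick the intensity between the two means where the weighted densities cross. The exact criterion used in the software may differ.

```python
def background_threshold(y, n_iter=100):
    """Fit a two-Gaussian mixture to the intensities and return a cut-off
    between the low-intensity (air) and higher-intensity (head) components."""
    mu = np.array([np.percentile(y, 25.0), np.percentile(y, 75.0)])
    sigma2 = np.array([np.var(y), np.var(y)])
    gamma = np.array([0.5, 0.5])
    for _ in range(n_iter):
        mu, sigma2, gamma = em_step(y, mu, sigma2, gamma)   # EM sketch from above
    # intensity between the two means where the weighted densities cross
    grid = np.linspace(mu.min(), mu.max(), 1000)
    dens = gaussian_pdf(grid[:, None], mu[None, :], sigma2[None, :]) * gamma
    return grid[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))]
```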
[Figure: spatially normalised BrainWeb phantoms (T1, T2 and PD), with tissue probability maps of GM and WM] • Cocosco, Kollokian, Kwan & Evans. “BrainWeb: Online Interface to a 3D MRI Simulated Brain Database”. NeuroImage 5(4):S425, 1997.
Further Reading • Ashburner & Friston. “Unified Segmentation”. • To appear in NeuroImage. • SPM Web Pages • Look out for SPM5 • http://www.fil.ion.ucl.ac.uk/spm/ • Koen Van Leemput’s page • contains some nice slides on tissue classification • http://users.tkk.fi/~vanleemp/
A View of Science • Science is about building models that can make predictions about the world. • If it’s not predictive, then it’s not science. • Biological sciences are messy and kind of fuzzy. • Need to work probabilistically. • The only consistent system for working with probabilities is Bayesian. • “Dutch Book” arguments.
Bayes Rule • $P(\theta \mid y) = \frac{P(y \mid \theta)\, P(\theta)}{P(y)}$ • y - the data • $\theta$ - a theory, model, or set of parameters • $P(\theta \mid y)$ - probability of $\theta$ given y (posterior probability) • $P(y \mid \theta)$ - probability of y given $\theta$ (likelihood) • $P(\theta)$ - probability of $\theta$ (prior probability) • $P(y)$ - probability of y (evidence) • $P(\theta, y)$ - probability of $\theta$ and y (joint probability)