Introduction

Several neuropathologies of the central nervous system, such as multiple sclerosis (MS), schizophrenia, epilepsy, Alzheimer's disease, and Creutzfeldt-Jakob disease (CJD), are related to morphological and/or functional changes in the brain. Studying such diseases by objectively measuring these changes, instead of assessing the clinical symptoms, is of great social and economic importance. These changes can be measured in three dimensions in a noninvasive way using current medical imaging modalities. Magnetic resonance imaging (MRI), in particular, is well suited for studying diseases of the nervous system due to its high spatial resolution and its inherently high soft-tissue contrast.

Manual analysis of MRI by a trained human expert is a tedious and difficult task, because the structures of interest show complex edge configurations in 3-D and may lack clearly visible anatomical borders. In clinical trials, the number of MR images is often so large that manual analysis by human experts is too time-consuming. Furthermore, it is not clear how a human rater combines information obtained from different channels when multispectral MR data are examined. Also, the intra- and interobserver variability associated with manual delineations complicates the analysis of the results. Therefore, there is a need for fully automated methods for MR brain image quantification that can analyze large amounts of multispectral MR data in a reproducible way that correlates well with expert analyses.


A key component in image analysis and interpretation is image segmentation, defined as the delineation of anatomical structures and other regions of interest. In this chapter we will present a framework for the accurate segmentation of brain tissues (gray matter, white matter, CSF) from multispectral MR images of the brain. We will also discuss how this framework can be used to quantify pathology-related abnormalities (mostly in intensity but also in morphology) within these tissues.

The overall strategy is to build statistical models for normal brain MR images, with emphasis on accurate intensity models. Signal abnormalities are detected as model outliers, i.e., voxels that cannot be well explained by the model. Special attention is paid to automatically estimate all model parameters from the data itself and, hence, to eliminate subjective manual tuning and training.

1.1.1 Segmentation Methodology

The simplest image segmentation methods, such as region growing and edge detection, rely entirely on local image operators and a heuristic grouping of neighboring pixels with similar local photometric characteristics. These approaches are simple to understand and to implement, and are very generic since they do not assume specific knowledge about the objects to be analyzed. However, these methods ultimately fail when the image data, the object model (shape, context), or both are complex, as in cross-sectional images of the brain. Indeed, the complex 3-D shape of the brain and its affected areas, and ambiguities in the images induced by the imaging process, such as limited resolution, partial volume effects, noise, low contrast, intensity inhomogeneities, and other artifacts, make brain tissue segmentation difficult, even for human experts.

In order to effectively deal with this complex morphology and MR imaging ambiguity, brain image segmentation methods must incorporate models that describe prior knowledge about the expected geometry and intensity characteristics of the anatomical objects of interest in the images.

• Intensity-based methods fit appropriate photometric models to the data.

In these approaches, the objects are simple voxels with an associated scalar, such as the gray value, or vector of characteristic features, such as (ρ, T1, T2)-weighted intensity values in MRI. If the feature vectors are represented in a multidimensional feature space, the segmentation strategy then consists of partitioning this feature space into a number of nonoverlapping regions that separate different voxel types. Unclassified voxels then receive the label of their class in feature space. The boundaries between the regions in feature space are obtained by optimizing any of a set of decision criteria, depending on the a priori assumptions about the feature space distributions. Parametric classification approaches assume that the distributions in feature space follow a certain parametric model; typically, the multivariate Gaussian model is used. This model can be extended to explicitly include imaging artifacts such as the partial volume effect [1-3] and the intensity inhomogeneity present in MR images [4-6]. Parameter values for the distribution of each class can be learned from a representative set of samples in a supervised training phase, usually requiring cumbersome user interaction. Fully automated, unsupervised learning procedures instead estimate the distribution parameters from the image to be segmented itself (a minimal sketch of such a parametric classifier is given after this list).

• Geometry-based methods use prior knowledge about the overall shape of an object to separate it from its surroundings in the image. Typically, a surface deforms under the influence of external, image-derived forces (attracting the surface to object edges, etc.) and internal elasticity constraints (e.g., surface continuity and smoothness) [7]. An extensive survey of these methods in medical image analysis is given in [8]; recent examples include [9-12]. Within the context of brain image analysis, deformable brain atlas-guided approaches have been proposed. Here, prior knowledge about the image scene is represented iconically (i.e., as an image-like representation). The image is segmented by matching it with the iconic atlas representation. The matching process must have sufficient degrees of freedom and robustness to cope with the biological variability and pathological abnormalities. However, even with nonlinear registration methods, accurate brain tissue segmentations are difficult to obtain due to the anatomical variability of the cortical folding.
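To make the parametric classification concrete, here is a minimal Python sketch (our own illustration, not code from the chapter; the function name and the use of SciPy are assumptions) that assigns each voxel to the class whose prior-weighted Gaussian density best explains its feature vector:

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_voxels(features, means, covs, priors):
    """Hard parametric classification in feature space: label each voxel
    with the class k maximizing pi_k * G(y_j - mu_k).

    features : (J, C) array of C-channel voxel intensities
    means, covs : K per-class Gaussian means and covariance matrices
    priors   : (K,) prior class probabilities pi_k
    """
    scores = np.stack(
        [priors[k] * multivariate_normal.pdf(features, mean=means[k], cov=covs[k])
         for k in range(len(means))], axis=1)
    return scores.argmax(axis=1)  # one tissue label per voxel
```

In a supervised setting, `means` and `covs` would be learned from hand-picked training samples; the unsupervised EM procedure of section 1.1.2 instead estimates them from the image itself.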

Intensity-based tissue classification and geometry-based atlas-driven methods are seemingly complementary segmentation strategies.

• The advantage of MR-intensity-based tissue classification is its ability to produce high-quality definitions of tissue boundaries. This is especially important for human brain tissue classification, where highly curved interfaces between tissues (such as between gray and white matter) are difficult to recover from finite-resolution images. However, it is unsuccessful when different structures have overlapping feature distributions (e.g., brain tissue and extracranial tissue in T1-weighted MRI).

• Geometry-based methods have been successfully applied to the localization of particular anatomical structures, where sufficient information on shape and context is available. These methods, however, often require accurate initialization and, more importantly, can fail in the case of highly variable structures such as the cerebral cortex and in the presence of abnormal anatomical structures.

Following Warfield et al. [13] and Collins et al. [14], who developed the idea that the inherent limitations of intensity-based classification can be alleviated by combining it with an elastically deformable atlas, we propose here to combine photometric and geometric models in a single framework. Tissues are classified using an unsupervised parametric approach by using a mixture of Gaussians as the feature space distribution model in an iterative expectation-maximization loop. Classifications are further initialized and constrained by iconic matching of the image to a digital atlas containing spatial maps of prior tissue probabilities.

Section 1.1.2 presents the standard intensity model and optimization approach that we will use throughout this chapter. Section 1.1.3 discusses the basic geometric model (a brain atlas) and its iconic matching to image data using linear and nonlinear registration algorithms. Section 1.1.4 summarizes the changes made to the basic intensity model in order to model the MR imaging process more faithfully and to make the segmentation procedure more robust in the presence of abnormalities.

1.1.2 Intensity Model and the Expectation-Maximization (EM) Algorithm

The intensity model proposed here is the so-called mixture of normal distributions [15-17]. Let $Y = \{\mathbf{y}_j,\ j = 1, 2, \ldots, J\}$ be a $C$-channel MR image with a total of $J$ voxels, where $\mathbf{y}_j$ denotes the possibly multispectral intensity of voxel $j$. Suppose that there are $K$ tissue types present in the imaged area, and let $l_j \in \{1, 2, \ldots, K\}$ denote the tissue type to which voxel $j$ belongs. In the mixture model, it is assumed that each tissue type $k$ has a typical intensity $\mu_k$ in the image, with tissue-specific normally distributed intensity fluctuations in the voxels. In other words, the probability density that voxel $j$ of tissue type $l_j$ has intensity $\mathbf{y}_j$ is given by

$$f(\mathbf{y}_j \mid l_j, \Phi) = G_{\Sigma_{l_j}}(\mathbf{y}_j - \mu_{l_j}) \tag{1.1}$$

Here $G_{\Sigma}(\cdot)$ denotes a zero-mean normal distribution with covariance $\Sigma$, and $\Phi = \{\mu_k, \Sigma_k,\ k = 1, 2, \ldots, K\}$ represents the total set of model parameters.

For notational convenience, let all the voxel labels $l_j$ be grouped in a label image $L = \{l_j,\ j = 1, 2, \ldots, J\}$. It is assumed that the label $l_j$ of each voxel is drawn independently from the labels of the other voxels, with an a priori known probability $\pi_k$, i.e.

$$f(L) = \prod_j \pi_{l_j}, \qquad \pi_k = f(l_j = k) \tag{1.2}$$

The overall probability density for image $Y$ given the model parameters $\Phi$ is then given by

$$f(Y \mid \Phi) = \sum_L f(Y \mid L, \Phi)\, f(L) = \prod_j f(\mathbf{y}_j \mid \Phi)$$

with

$$f(\mathbf{y}_j \mid \Phi) = \sum_k f(\mathbf{y}_j \mid l_j = k, \Phi) \cdot \pi_k \tag{1.3}$$

Equation (1.3) is the well-known mixture model (see Fig. 1.1). It models the histogram of image intensities as a sum of normal distributions, each distribution weighted with its prior probability $\pi_k$.

Image segmentation aims to reconstruct the underlying tissue labels $L$ based on the image $Y$. If an estimate of the model parameters is somehow available, then each voxel can be assigned to the tissue type that best explains its intensity. Unfortunately, the result depends largely on the model parameters used. Typically, these are estimated by manually selecting representative points in the image for each of the classes considered. However, once all the voxels are classified, the model parameter estimation can, in turn, be improved automatically based on all the voxels instead of on the subjectively selected ones alone.

Figure 1.1: The mixture model fitted to a T1-weighted brain MR image: (a) the intracranial volume and (b) its intensity histogram with a mixture of normal distributions overlaid. The normal distributions correspond to white matter, gray matter, and CSF.

Intuitively, both the segmentation and the model parameters can be estimated in a more objective way by interleaving the segmentation with the estimation of the model parameters.

The expectation-maximization (EM) algorithm [18] formalizes this intuitive approach. It estimates the maximum likelihood (ML) parameters $\Phi$ by iteratively filling in the unknown tissue labels $L$ based on the current parameter estimate $\Phi$, and recalculating the $\Phi$ that maximizes the likelihood of the so-called complete data $\{Y, L\}$. More specifically, the algorithm interleaves two steps:

Expectation step: find the function

$$Q(\Phi \mid \Phi^{(m-1)}) = E_L\big[\log f(Y, L \mid \Phi) \mid Y, \Phi^{(m-1)}\big]$$

Maximization step: find

$$\Phi^{(m)} = \arg\max_{\Phi} Q(\Phi \mid \Phi^{(m-1)})$$

with $m$ the iteration number. It has been shown that the likelihood $\log f(Y \mid \Phi)$ is guaranteed to increase at each iteration for EM algorithms [19].


With the image model described above, the expectation step results in a statistical classification of the image voxels

$$f(l_j = k \mid Y, \Phi^{(m-1)}) = \frac{f(\mathbf{y}_j \mid l_j = k, \Phi^{(m-1)}) \cdot \pi_k}{\sum_{k'} f(\mathbf{y}_j \mid l_j = k', \Phi^{(m-1)}) \cdot \pi_{k'}} \tag{1.4}$$

and the subsequent maximization step involves

$$\mu_k^{(m)} = \frac{\sum_j f(l_j = k \mid Y, \Phi^{(m-1)})\, \mathbf{y}_j}{\sum_j f(l_j = k \mid Y, \Phi^{(m-1)})} \tag{1.5}$$

$$\Sigma_k^{(m)} = \frac{\sum_j f(l_j = k \mid Y, \Phi^{(m-1)})\, (\mathbf{y}_j - \mu_k^{(m)})(\mathbf{y}_j - \mu_k^{(m)})^T}{\sum_j f(l_j = k \mid Y, \Phi^{(m-1)})} \tag{1.6}$$

Thus, the algorithm iteratively improves the model parameters by interleaving two steps (see Fig. 1.2): classification of the voxels based on the estimation of the normal distributions (Eq. 1.4) and estimation of the normal distributions based on the classification (Eqs. 1.5 and 1.6). Upon convergence, Eq. (1.4) yields the final classification result.
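The following Python sketch makes this two-step loop explicit for the unispectral case (a minimal reimplementation under our own naming conventions, assuming the priors $\pi_k$ are known and kept fixed, as in the model above):

```python
import numpy as np

def em_mixture(y, mu, sigma, pi, n_iter=50):
    """EM estimation of a K-class normal mixture (Eqs. 1.4-1.6) for
    unispectral intensities y of shape (J,); mu, sigma, pi are (K,)
    arrays holding initial class means, standard deviations, priors."""
    for _ in range(n_iter):
        # Expectation step (Eq. 1.4): classify voxels with current parameters
        lik = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2.0 * np.pi))
        post = lik * pi
        post /= post.sum(axis=1, keepdims=True)
        # Maximization step (Eqs. 1.5, 1.6): re-estimate the distributions
        w = post.sum(axis=0)
        mu = (post * y[:, None]).sum(axis=0) / w
        sigma = np.sqrt((post * (y[:, None] - mu) ** 2).sum(axis=0) / w)
    return post, mu, sigma  # final classification and parameters
```

Upon convergence, `post` corresponds to the classification of Eq. (1.4).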

Figure 1.2: Estimating the model parameters of the mixture model with an expectation-maximization algorithm results in an iterative two-step process that interleaves classification of the voxels with reestimation of the normal distributions.

1.1.3 Geometry Model and the MMI Algorithm

The iterative model parameter estimation algorithm described above needs to be initialized with a first estimate of the parameters. One possibility to obtain such an estimate is to have the user manually select voxels that are representative for each of the classes considered. However, to eliminate the variability induced by such a preceding training phase, we avoid manual intervention by the use of a digital brain atlas that contains spatially varying prior probability maps for the location of white matter, gray matter, and CSF (see Fig. 1.3). These probability maps were obtained by averaging binary white matter, gray matter, and CSF segmentations of MR brain images from a large number of subjects, after normalization of all images into the same space using an affine transformation [20]. To apply this a priori information, the atlas is first normalized to the space of the study image by matching the study image to a T1 template associated with the atlas (see Fig. 1.3) by using the affine multimodality registration technique based on maximization of mutual information (MMI) of Maes et al. [21].

Figure 1.3: Digital brain atlas with spatially varying a priori probability maps for (a) white matter, (b) gray matter, and (c) CSF. High intensities indicate high a priori probabilities. The atlas also contains a T1 template image (d), which is used for registration of the study images to the space of the atlas. (Source: Ref. [23].)

Mutual information (MI) is a basic concept from information theory, which is applied in the context of image registration to measure the amount of information that one image contains about the other. The MMI registration criterion postulates that MI is maximal when the images are correctly aligned. Mutual information does not rely on the intensity values directly to measure correspondence between different images, but on their relative occurrence in each of the images separately and their co-occurrence in both images combined. Unlike other voxel-based registration criteria, based on, for instance, intensity differences or intensity correlation, the MI criterion does not make limiting assumptions about the nature of the relationship between the image intensities of corresponding voxels in the different modalities, which is highly data-dependent, and does not impose constraints on the image content of the modalities involved. This explains the success of MMI for multimodal image registration in a wide range of applications involving various modality combinations [22]. It has furthermore been shown [21] that this registration criterion is fairly insensitive to moderate bias fields, such that it can be applied fully automatically and reliably to MR images with a bias field inhomogeneity (see section 1.2). The properly registered and reformatted a priori tissue probability maps of the atlas provide an initial estimate of the classification, from which initial values for the class-specific distribution parameters $\mu_k$ and $\Sigma_k$ can be computed. This approach frees us from having to interactively indicate representative voxels of each class, which makes our method more objective and reproducible and allows the method to be fully automated.
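As an illustration of the criterion itself (not of the full registration method of [21], which additionally involves an interpolation scheme for the joint histogram and an optimization loop over the affine parameters), the mutual information of two spatially aligned images can be computed from their joint intensity histogram; the sketch below is our own, with an assumed bin count:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two aligned images, estimated from
    the joint intensity histogram; an MMI registration loop would
    maximize this value over the transformation parameters."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()              # joint probability p(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal p(a)
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = p_ab > 0                           # avoid log(0) on empty bins
    return (p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum()
```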

The classification and intensity distribution parameters are then updated using the iterative scheme based on the EM procedure as outlined above. During iterations, the atlas is further used to spatially constrain the classification by assigning its prior probability maps to the a priori class probabilities $\pi_k$. Thus, the voxels are not only classified based on their intensities, but also based on their spatial position. This makes the algorithm more robust, especially when the images are corrupted with a heavy bias field.
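In terms of the classification step of Eq. (1.4), this simply means replacing the global priors $\pi_k$ by voxel-specific priors read from the registered atlas maps; a minimal sketch under our own naming:

```python
import numpy as np

def classify_with_atlas(lik, atlas_priors):
    """Eq. (1.4) with spatially varying priors: each voxel j has its own
    prior vector taken from the registered atlas, so classification
    depends on spatial position as well as intensity.

    lik          : (J, K) class-conditional densities f(y_j | l_j = k)
    atlas_priors : (J, K) reformatted atlas prior probability maps
    """
    post = lik * atlas_priors
    return post / post.sum(axis=1, keepdims=True)
```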

However, in the presence of gross morphological differences between atlas and patient images, intensity-based atlas-guided pixel classification as described above, using only affine iconic matching of atlas to image, may fail. Figure 1.4 (left) shows a cross section through the brain of a patient affected by periventricular leukomalacia (PVL). This brain presents gross morphological differences compared to a normal brain, especially around the ventricles, which are strongly enlarged. Brain tissue segmentation of such images using affine iconic matching of atlas and patient images will fail if the gross morphological differences between atlas and patient images are not corrected for. Indeed, in this particular case, the affinely matched atlas labels large portions of the enlarged ventricles as white matter. The initial estimate of the tissue intensity parameters (i.e., mean and variance) is thus not reliable, and it is therefore unlikely that the iterative segmentation process, which alternates between pixel classification and parameter estimation, converges to the correct segmentation solution (Fig. 1.4 (top row, middle)).

Figure 1.4: Top row (from left to right): Original T1 MPRAGE patient image; CSF segmented by atlas-guided intensity-based tissue classification using affine registration to atlas template; CSF segmented after nonrigid matching of the atlas. Middle row: Atlas priors for gray matter, white matter, and CSF affinely matched to patient image. Bottom row: Idem after nonrigid matching.

Nonrigid registration of atlas and patient images using the method presented in [24] can better cope with this extra variability. Note how the segmentation of the enlarged ventricles is much improved (see Fig. 1.4 (top row, right)).

1.1.4 Model-Based Brain Tissue Classification: Overview

The standard intensity model described in section 1.1.2 can be extended in several ways in order to better represent real MR images of the brain. The overall strategy is to build increasingly better statistical models for normal brain MR images and to detect disease-related (e.g. MS or CJD) signal abnormalities as voxels that cannot be well explained by the model. Figure 1.5 shows typical samples of the proposed models in increasing order of complexity, starting from the original mixture model, shown in Fig. 1.5(a). For each of the models, the same EM framework is applied to estimate the model parameters. An unusual aspect of the methods presented here is that all model parameters are estimated from the data itself, starting from an initialization that is obtained without user intervention. This ensures that the models retrain themselves fully automatically on each individual scan, allowing the method to analyze images with previously unseen MR pulse sequences and voxel sizes. In this way, subjective manual tuning and training is eliminated, which would make the results not fully reproducible.

The first problem for the segmentation technique of section 1.1.2 is the corruption of MR images with a smoothly varying intensity inhomogeneity or bias field [25, 26], which results in a nonuniform intensity of the tissues over the image area, as shown in Fig. 1.6. This bias is inherent to MR imaging and is caused by equipment limitations and patient-induced electrodynamic interactions [26]. Although not always visible to a human observer, it can cause serious misclassifications when intensity-based segmentation techniques are used. In section 1.2 the mixture model is therefore extended by explicitly including a parametric model for the bias field. Figure 1.5(b) shows a typical sample of the resulting model. The model parameters are then iteratively estimated by interleaving three steps: classification of the voxels; estimation of the normal distributions; and estimation of the bias field. The algorithm is initialized with information from a digital brain atlas about the a priori expected location of tissue classes. This allows full automation of the method without need for user interaction, yielding fully objective and reproducible results.

As a consequence of the assumption that the tissue type of each voxel is independent from the tissue type of the other voxels, each voxel is classified independently, based only on its intensity. However, the intensity of some tissues surrounding the brain is often very similar to that of brain tissue, which makes a correct classification based on intensity alone impossible. Therefore, the model is further extended in section 1.3 by introducing a Markov random field (MRF) prior on the underlying tissue labels of the voxels. Such a MRF takes into account that the various tissues in brain MR images are not just randomly distributed over the image area, but occur in clustered regions of homogeneous tissue. This is illustrated in Fig. 1.5(c), which shows a sample of the total resulting image model.

Figure 1.5: Illustration of the statistical models for brain MR images used in this study. The mixture model (a) is first extended with an explicit parametric model for the MR bias field (b). Subsequently, an improved spatial model is used that takes into account that tissues occur in clustered regions in the images (c). Then the presence of pathological tissues, which are not included in the statistical model, is considered (d). Finally, a downsampling step introduces partial voluming in the model (e).

The MRF takes general spatial and anatomical constraints into account during the classification, facilitating discrimination between tissue types with similar intensities, such as brain and nonbrain tissues.

The method is further extended in section 1.4 in order to quantify lesions or disease-related signal abnormalities in the images (see Fig. 1.5(d)). Adding an explicit model for the pathological tissues is difficult because of the wide variety of their appearance in MR images, and because not every individual scan contains sufficient pathology for estimating the model parameters. These problems are circumvented by detecting lesions as voxels that are not well explained by the statistical model for normal brain MR images. Based on principles borrowed from the robust statistics literature, tissue-specific voxel weights are introduced that reflect the typicality of the voxels in each tissue type. Inclusion of these weights results in a robustized algorithm that simultaneously detects lesions as model outliers and excludes these outliers from the model parameter estimation. In section 1.5, this outlier detection scheme is applied for fully automatic segmentation of MS lesions from brain MR scans. The method is validated by comparing the automatic lesion segmentations to manual tracings by human experts.
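The precise weighting scheme is developed in section 1.4; as a generic robust-statistics illustration (our own simplification, with a hypothetical rejection threshold `kappa`), typicality weights can be derived from the Mahalanobis distance of each voxel's intensity to each class:

```python
import numpy as np

def typicality_weights(y, mu, sigma, kappa=3.0):
    """Per-class typicality weights in {0, 1} for unispectral data:
    voxels whose squared Mahalanobis distance to a class exceeds
    kappa**2 are treated as outliers for that class and excluded
    from its parameter estimation. This is a generic variant; the
    chapter's actual scheme differs in detail."""
    d2 = ((y[:, None] - mu) / sigma) ** 2       # (J, K) squared distances
    return (d2 <= kappa ** 2).astype(float)     # hard rejection weights
```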

Thus far, the intensity model assumes that each voxel belongs to a single tissue type. Because of the complex shape of the brain and the finite resolution of the images, a large part of the voxels lies on the border between two or more tissue types. Such border voxels are commonly referred to as partial volume (PV) voxels, as they contain a mixture of several tissues at the same time. In order to accurately segment the major tissue classes as well as to detect subtle signal abnormalities such as those in MS, the model for normal brain MR images can be further refined by explicitly taking this PV effect into account. This is accomplished by introducing a downsampling step in the image model, adding up the contributions of a number of underlying subvoxels to form the intensity of a voxel. In voxels where not all subvoxels belong to the same tissue type, this causes partial voluming, as can be seen in Fig. 1.5(e). The derived EM algorithm for estimating the model parameters provides a general framework for partial volume segmentation that encompasses and extends existing techniques. A full presentation of this PV model is outside the scope of this chapter, however; the reader is referred to [27] for further details.
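The forward model behind this refinement can be sketched as follows (our own minimal illustration of the downsampling idea; the full PV segmentation framework is given in [27]):

```python
import numpy as np

def downsample(subvoxel_image, factor=2):
    """Forward model of partial voluming: each observed voxel intensity
    is the average of factor**3 underlying subvoxels. Assumes a 3-D
    array whose dimensions are multiples of `factor`; voxels whose
    subvoxels straddle a tissue border become PV voxels."""
    x, y, z = subvoxel_image.shape
    f = factor
    return subvoxel_image.reshape(x // f, f, y // f, f, z // f, f).mean(axis=(1, 3, 5))
```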

Figure 1.6: The MR bias field in a proton density-weighted image. (a) Two axial slices; (b) the same slices after intensity thresholding.

1.2 Automated Bias Field Correction

A major problem for automated MR image segmentation is the corruption with a smoothly varying intensity inhomogeneity or bias field [25, 26]. This bias is inherent to MR imaging and is caused by equipment limitations and patient-induced electrodynamic interactions. Although the bias is not always visible to a human observer, as illustrated in Fig. 1.6, correcting the image intensities for bias field inhomogeneity is a necessary requirement for robust intensity-based image analysis techniques. Early methods for bias field estimation and correction used phantoms to empirically measure the bias field inhomogeneity [28]. However, this approach assumes that the bias field is patient independent, which it is not [26]. Furthermore, it requires that the phantom's scan parameters be the same as the patient's, making this technique impractical and even useless as a retrospective bias correction method. In a similar vein, bias correction methods have been proposed for surface coil MR imaging using an analytic correction of the MR antenna reception profile [29], but these suffer from the same drawbacks as the phantom-based methods. Another approach, using homomorphic filtering [30], assumes that the frequency spectra of the bias field and the image structures are well separated, but this assumption is generally not valid for MR images [28, 31].

While bias field correction is needed for good segmentation, many approaches have exploited the idea that a good segmentation helps to estimate the bias field. Dawant et al. [31] manually selected some points inside white matter and estimated the bias field as the least-squares spline fit to the intensities of these points. They also presented a slightly different version where the reference points are obtained by an intermediate classification operation, using the estimated bias field for final classification. Meyer et al. [32] also estimated the bias field from an intermediate segmentation, but they allowed a region of the same tissue type to be broken up into several subregions which creates additional but sometimes undesired degrees of freedom.

Wells et al. [4] described an iterative method that interleaves classification with bias field correction based on ML parameter estimation using an EM algorithm. However, for each set of similar scans to be processed, their method, as well as its refinement by other authors [5, 6], needs to be supplied with specific tissue class conditional intensity models. Such models are typically constructed by manually selecting representative points of each of the classes considered, which may result in segmentations that are not fully objective and reproducible.

In contrast, the method presented here (and in [23]) does not require such a preceding training phase. Instead, a digital brain atlas is used with a priori probability maps for each tissue class to automatically construct intensity models for each individual scan being processed. This results in a fully automated algorithm that interleaves classification, bias field estimation, and estimation of class-conditional intensity distribution parameters.

1.2.1 Image Model and Parameter Estimation

The mixture model outlined in section 1.1.2 is extended to include a model for the bias field. We model the spatially smoothly varying bias fields as a linear combination of $P$ polynomial basis functions $\phi_p(\mathbf{x}_j)$, $p = 1, 2, \ldots, P$, where $\mathbf{x}_j$ denotes the spatial position of voxel $j$. Not the observed intensities $\mathbf{y}_j$ but the bias-corrected intensities $\mathbf{u}_j$ are now assumed to be distributed according to a mixture of class-specific normal distributions, such that Eq. (1.1) above is replaced by

$$f(\mathbf{u}_j \mid l_j, \Phi) = G_{\Sigma_{l_j}}(\mathbf{u}_j - \mu_{l_j}), \qquad u_{jc} = y_{jc} - \sum_p b_{pc}\, \phi_p(\mathbf{x}_j)$$

with $\mathbf{b}_c$, $c = 1, 2, \ldots, C$ indicating the bias field parameters of MR channel $c$, and $\Phi = \{\mu_k, \Sigma_k, \mathbf{b}_c,\ k = 1, 2, \ldots, K,\ c = 1, 2, \ldots, C\}$ as the total set of model parameters.

With the addition of the bias field model, estimation of the model parameters with an EM algorithm results in an iterative procedure that now interleaves three steps (see Fig. 1.7): classification of the image voxels (Eq. 1.4); estimation of the normal distributions (Eqs. (1.5) and (1.6), but with the bias-corrected intensities $u_j$ replacing the original intensities $y_j$); and estimation of the bias field. For the unispectral case, the bias field parameters are given by the following expression³:

$$\mathbf{b}^{(m)} = (A^T W^{(m)} A)^{-1} A^T W^{(m)} R^{(m)}$$

with

$$A = \begin{pmatrix} \phi_1(\mathbf{x}_1) & \phi_2(\mathbf{x}_1) & \cdots \\ \phi_1(\mathbf{x}_2) & \phi_2(\mathbf{x}_2) & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}, \qquad R^{(m)} = \begin{pmatrix} y_1 - \tilde{y}_1^{(m)} \\ y_2 - \tilde{y}_2^{(m)} \\ \vdots \end{pmatrix}, \qquad \tilde{y}_j^{(m)} = \frac{\sum_k w_{jk}^{(m)} \mu_k^{(m)}}{\sum_k w_{jk}^{(m)}}$$

$$W^{(m)} = \operatorname{diag}\big(w_j^{(m)}\big), \qquad w_j^{(m)} = \sum_k w_{jk}^{(m)}, \qquad w_{jk}^{(m)} = \frac{f(l_j = k \mid Y, \Phi^{(m-1)})}{\big(\sigma_k^{(m)}\big)^2}$$

This can be interpreted as follows (see Fig. 1.8). Based on the current classification and distribution estimation, a prediction $\{\tilde{y}_j,\ j = 1, 2, \ldots, J\}$ of the MR image without the bias field is constructed (Fig. 1.8(b)). A residue image $R$ (Fig. 1.8(c)) is obtained by subtracting this predicted signal from the original image (Fig. 1.8(a)). The bias (Fig. 1.8(e)) is then estimated as a weighted least-squares fit through the residue image using the weights $W$ (Fig. 1.8(d)), each voxel's weight being inversely proportional to the variance of the class that voxel belongs to. As can be seen from Fig. 1.8(d), the bias field is therefore computed primarily from voxels that belong to classes with a narrow intensity distribution, such as white and gray matter. The smooth spatial model extrapolates the bias field from these regions, where it can be confidently estimated from the data, to regions where such an estimate is ill-conditioned (CSF, nonbrain tissues).

³ For more general expressions for the multispectral case we refer to [23]; in the unispectral case $\mathbf{y}_j = y_j$, $\mu_k$ reduces to a scalar mean, and $\Sigma_k = \sigma_k^2$.
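A minimal sketch of this bias-estimation step for the unispectral case follows (our own reimplementation under stated assumptions: monomial basis functions up to a chosen order, and the weight and residue definitions given above):

```python
import numpy as np

def estimate_bias(y, post, mu, sigma, coords, order=4):
    """One weighted least-squares bias update, b = (A^T W A)^-1 A^T W R.

    y      : (J,) observed intensities
    post   : (J, K) current classification f(l_j = k | ...)
    mu, sigma : (K,) current class means and standard deviations
    coords : (J, 3) voxel positions x_j
    """
    w_jk = post / sigma ** 2                  # class-specific weights w_jk
    w = w_jk.sum(axis=1)                      # diagonal of W
    y_pred = (w_jk * mu).sum(axis=1) / w      # predicted bias-free signal
    r = y - y_pred                            # residue image R
    # Polynomial basis phi_p(x_j): monomials x^i y^j z^k with i+j+k <= order
    A = np.stack([coords[:, 0] ** i * coords[:, 1] ** j * coords[:, 2] ** k
                  for i in range(order + 1)
                  for j in range(order + 1 - i)
                  for k in range(order + 1 - i - j)], axis=1)
    Aw = A * w[:, None]                       # rows of A scaled by weights
    b = np.linalg.solve(A.T @ Aw, Aw.T @ r)   # weighted least-squares fit
    return A @ b                              # estimated bias at each voxel
```

Subtracting the returned bias field from `y` gives the corrected intensities used in the next classification and distribution-estimation steps.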


Figure 1.7: Adding a model for the bias field results in a three-step expectation-maximization algorithm that iteratively interleaves classification, estimation of the normal distributions, and bias field correction.

Figure 1.8: Illustration of the bias correction step on a 2D slice of a T1-weighted MR image: (a) original image; (b) predicted signal based on previous iterations; (c) residue image; (d) weights; (e) estimated bias field; (f) corrected image. (Source: Ref. [23].)

1.2.2 Examples and Discussion

Examples of the performance of the method are shown in Figs. 1.9 and 1.10. Figure 1.9 depicts the classification of a high-resolution sagittal T1-weighted MR image, both for the original two-step algorithm without bias correction of section 1.1.2 and for the new three-step algorithm with bias correction. Because a relatively strong bias field reduces the intensities at the top of the head, bias correction is necessary, as white matter is otherwise wrongly classified as gray matter in that area. Figure 1.10 clearly shows the efficiency of the bias correction on a 2-D multislice T1-weighted image. Such multislice images are acquired in an interleaved way, and are typically corrupted with a slice-by-slice constant intensity offset, commonly attributed to gradient eddy currents and crosstalk between slices [25], and clearly visible as an interleaved bright-dark intensity pattern in a cross section orthogonal to the slices in Fig. 1.10.

Figure 1.9: Slices of a high-resolution T1-weighted MR image illustrating the performance of the method: (a) original data; (b) white matter classification without bias correction; (c) white matter classification with bias correction; (d) estimated bias field. (Source: Ref. [23].)

Figure 1.10: An example of bias correction of a T1-weighted 2-D multislice image corrupted with slice-by-slice offsets. From top row to bottom row: original data, estimated bias, corrected data. (Source: Ref. [23].)

In earlier EM approaches for bias correction, the class-specific intensity distribution parameters $\mu_k$ and $\Sigma_k$ were determined by manual training and kept fixed during the iterations [4-6]. It has been reported [6, 33, 34] that these methods are sensitive to the training of the classifier, i.e., they produce different results depending on which voxels were selected for training. In contrast, our algorithm estimates its tissue-specific intensity distributions fully automatically on each individual scan being processed, starting from a digital brain atlas. This avoids all manual intervention, yielding fully objective and reproducible results. Moreover, it eliminates the danger of overlooking some tissue types during a manual training phase, which is typically a problem in regions surrounding the brain, consisting of several different tissues, and which may cause severe errors in the residue image and the bias field estimation [6, 34]. Guillemaud and Brady [6] proposed to model nonbrain tissues by a single class with a uniform distribution, artificially assigning the nonbrain tissue voxels a zero weight for the bias estimation. This is not necessary with our algorithm: the class distribution parameters are updated at each iteration from all voxels in the image, and classes consisting of different tissue types are automatically assigned a large variance. Since the voxel weights for the bias correction are inversely proportional to the variance of the class each voxel is classified to, such tissues are therefore automatically assigned a low weight for the bias estimation.

A number of authors have proposed bias correction methods that do not use an intermediate classification. Styner, Brechbuhler et al. [33, 35] find the bias field for which as many voxels as possible have an intensity in the corrected image that lies close to that of a number of predefined tissue types. Other approaches search for the bias field that makes the histogram of the corrected image as sharp as possible [36-38]. The method of Sled et al. [36], for instance, is based on deconvolution of the histogram of the measured signal, assuming that the histogram of the bias field is Gaussian, while Mangin [37] and Likar et al. [38] use entropy to measure the histogram sharpness. Contrary to our approach, these methods treat all voxels alike for bias estimation. This looks rather unnatural, since it is obvious that the white matter voxels, which have a narrow intensity histogram, are much more suited for bias estimation than, for instance, the tissues surrounding the brain or ventricular CSF. As argued above, our algorithm takes this explicitly into account by the class-dependent weights assigned to each voxel. Furthermore, lesions can be so large in a scan of a MS patient that the histogram of the corrected image may be sharpest when the estimated bias field follows the anatomical distribution of the lesions. As will be shown in section 1.4, our method can be made robust against the presence of such pathologic tissues in the images, estimating the bias field only from normal brain tissue.

1.3 Modeling Spatial Context

As a consequence of the assumption that the tissue type of each voxel is independent from the tissue type of the other voxels, each voxel is classified independently based on its intensity. This yields acceptable classification results as long as the different classes are well separated in intensity feature space, i.e. have a clearly discernible associated intensity distribution. Unfortunately, this is not always true for MR images of the brain, especially not when only one MR channel is available. While white matter, gray matter, and CSF usually have a characteristic intensity, voxels surrounding the brain often show an MR intensity that is very similar to brain tissue. This may result in erroneous classifications of small regions surrounding the brain as gray matter or white matter.

We therefore extend the model with a Markov random field (MRF) prior, introducing general spatial and anatomical constraints during the classification [39]. The MRF is designed to facilitate discrimination between brain and nonbrain tissues while preserving the detailed interfaces between the various tissue classes within the brain.

1.3.1 Regularization Using a MRF Model

Previously, it was assumed that the label $l_j$ of each voxel is drawn independently from the labels of the other voxels, leading to Eq. (1.2) for the prior probability distribution of the underlying label image $L$. Now a more complex model for $L$ is used, more specifically a MRF. Such a MRF model assumes that the probability that voxel $j$ belongs to tissue type $k$ depends on the tissue type of its neighbors. The Hammersley-Clifford theorem states that such a random field is a Gibbs random field (see [40] and the references therein), i.e. its configurations obey a Gibbs distribution

$$f(L \mid \Phi) = Z(\Phi)^{-1} \exp[-U(L \mid \Phi)]$$

where $Z(\Phi) = \sum_L \exp[-U(L \mid \Phi)]$ is a normalization constant called the partition function and $U(L \mid \Phi)$ is an energy function dependent on the model parameters $\Phi$.

A simple MRF is used that is defined on a so-called first-order neighborhood system, i.e. only the six nearest neighbors on the 3-D image lattice are used. Let $\mathcal{N}_j^i$ denote the set of the four neighbors of voxel $j$ in the plane and $\mathcal{N}_j^o$ its two neighbors out of the plane. Since the voxel size in the $z$ direction is usually different from the within-plane voxel size in MR images, the following Potts model (the extension of the binary Ising model [41] to more than two classes) is used, with separate in-plane and out-of-plane interaction weights:

$$U(L \mid \Phi) = \sum_j \left[\, \beta_i \sum_{j' \in \mathcal{N}_j^i} \delta(l_j \neq l_{j'}) + \beta_o \sum_{j' \in \mathcal{N}_j^o} \delta(l_j \neq l_{j'}) \,\right]$$

where $\delta(\cdot)$ equals 1 when its argument is true and 0 otherwise, and the parameters $\beta_i$ and $\beta_o$ weight the in-plane and out-of-plane interactions, respectively.
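A sketch of this energy for a 3-D label volume (our own illustration, assuming the slice direction is the last array axis and scalar interaction weights $\beta_i$ and $\beta_o$):

```python
import numpy as np

def potts_energy(labels, beta_in, beta_out):
    """Potts energy on the six-nearest-neighbor lattice: count label
    mismatches between neighboring voxels, weighting in-plane (x, y)
    and out-of-plane (z) pairs separately to account for anisotropic
    voxel sizes."""
    mism_x = labels[1:, :, :] != labels[:-1, :, :]
    mism_y = labels[:, 1:, :] != labels[:, :-1, :]
    mism_z = labels[:, :, 1:] != labels[:, :, :-1]
    return beta_in * (mism_x.sum() + mism_y.sum()) + beta_out * mism_z.sum()
```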
