f(L) ∝ exp( − U(L) ),   U(L) = Σ_j ( Σ_{j'∈N_j^p} μ_{l_j l_j'} + Σ_{j'∈N_j^o} ν_{l_j l_j'} )

Here, the MRF parameters μ_kk' and ν_kk' denote the costs associated with a transition from class k to class k' between neighboring voxels in the plane and out of the plane, respectively. If these costs are higher for neighbors belonging to different classes than for neighbors of the same tissue class, the MRF favors configurations of L in which each tissue is spatially clustered. An example of this is shown in Fig. 1.5(c).

It has been described in the literature that fine structures, such as the interface between white matter and gray matter in brain MR images, can be erased by the Potts/Ising MRF model [5, 42]. The MRF may overregularize such subtle borders and tend to produce overly smooth interfaces. Therefore, a modification is used that penalizes anatomically impossible combinations, such as a gray matter voxel surrounded by nonbrain tissues, while at the same time preserving edges between tissues that are known to border each other. We impose that a voxel surrounded by white matter and gray matter voxels must have the same a priori probability for white matter as for gray matter by adding appropriate constraints on the MRF transition costs μ_kk' and ν_kk'. As a result, voxels surrounded by brain tissues have a low probability for CSF and other nonbrain tissues, and a high but equal probability for white and gray matter. The actual decision between white and gray matter is therefore based only on the intensity, so that the interface between white and gray matter is unaffected by the MRF. Similar constraints are applied for other interfaces as well.

Since the voxel labels are not independent with this model, the expectation step of the EM algorithm no longer yields the classification of Eq. (1.4). Because of the interaction between the voxels, the exact calculation of f(l_j | Y, Φ^(m−1)) involves summing over all possible realizations of the MRF, which is not computationally feasible. Therefore, an approximation was adopted that was proposed by Zhang [43] and Langan et al. [44], based on the mean field theory from statistical mechanics. This mean field approach suggests an approximation to f(l_j | Y, Φ^(m−1)) based on the assumption that the influence of the labels l_j' of all other voxels j' ≠ j in the calculation of f(l_j | Y, Φ^(m−1)) can be approximated by the influence of their classification f(l_j' | Y, Φ^(m−2)) from the previous iteration.

This yields

f(l_j | Y, Φ^(m−1)) ≈ f(y_j | l_j, Φ^(m−1)) · π_j^(m−1)(l_j) / Σ_{l_j} f(y_j | l_j, Φ^(m−1)) · π_j^(m−1)(l_j)    (1.7)

for the classification, where

π_j^(m−1)(l_j) ∝ exp( − Σ_{j'∈N_j^p} Σ_k f(l_j' = k | Y, Φ^(m−2)) · μ_{l_j k} − Σ_{j'∈N_j^o} Σ_k f(l_j' = k | Y, Φ^(m−2)) · ν_{l_j k} )

The difference with Eq. (1.4) is that the a priori probability that a voxel belongs to a specific tissue class now depends on the classification of its neighboring voxels.
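As a concrete illustration of how such a mean-field prior could be computed for a single voxel, the sketch below assumes soft neighbor classifications from the previous iteration and K × K transition-cost matrices `mu` (in-plane) and `nu` (out-of-plane); it is a simplified stand-in, not the authors' implementation:

```python
import numpy as np

def mean_field_prior(neighbor_probs_in, neighbor_probs_out, mu, nu):
    """Mean-field MRF prior pi_j(k) for one voxel (illustrative sketch).

    neighbor_probs_in  -- (N_in, K) previous-iteration class probabilities
                          of the in-plane neighbors
    neighbor_probs_out -- (N_out, K) likewise for out-of-plane neighbors
    mu, nu             -- (K, K) in-plane / out-of-plane transition costs
    """
    # Expected transition cost for each candidate class, averaged over the
    # neighbors' soft classifications from the previous iteration.
    energy = mu @ neighbor_probs_in.sum(axis=0) + nu @ neighbor_probs_out.sum(axis=0)
    prior = np.exp(-energy)
    return prior / prior.sum()  # normalize over the K classes
```

With zero cost between same-class neighbors and a positive cost otherwise, the prior favors whichever class dominates the neighborhood, which is exactly the clustering behavior described above.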

With the addition of the MRF, the subsequent maximization step in the EM algorithm not only involves updating the intensity distributions and recalculating the bias field, but also estimating the MRF parameters {μ_kk'} and {ν_kk'}. As a result, the total iterative scheme now consists of four steps, shown in Fig. 1.11.

Figure 1.11: The extension of the model with a MRF prior results in a four-step algorithm that interleaves classification, estimation of the normal distributions, bias field correction, and estimation of the MRF parameters.

The calculation of the MRF parameters poses a difficult problem for which a heuristic, noniterative approach is used. For each neighborhood configuration (N^p, N^o), the number of times that the central voxel belongs to class k in the current classification is compared to the number of times it belongs to class k', for every pair of classes (k, k'). This results in an overdetermined linear system of equations that is solved for the MRF parameters (μ_kk', ν_kk') using a least squares fit procedure [40].
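The exact construction of [40] is not reproduced here, but a much-simplified reading of the idea can be sketched: for two classes with a single symmetric in-plane cost μ and out-of-plane cost ν (and zero same-class cost), each neighborhood configuration yields one linear equation relating the log-ratio of observed class counts to the costs, and stacking all configurations gives an overdetermined system for `np.linalg.lstsq`. The counts below are synthetic, generated from the model itself so the fit is exact:

```python
import numpy as np

# Hypothetical two-class sketch: for a configuration with n0p/n1p in-plane
# and n0o/n1o out-of-plane neighbors of class 0/1, the mean-field prior
# implies log(N0 / N1) = mu * (n0p - n1p) + nu * (n0o - n1o), i.e. one
# linear equation per configuration in the unknown costs (mu, nu).
mu_true, nu_true = 0.8, 0.4
rows, rhs = [], []
for n0p in range(5):          # 4 in-plane neighbors
    for n0o in range(3):      # 2 out-of-plane neighbors
        d_p, d_o = 2 * n0p - 4, 2 * n0o - 2
        rows.append([d_p, d_o])
        # synthetic log count ratio, generated from the model itself
        rhs.append(mu_true * d_p + nu_true * d_o)
A, b = np.array(rows, float), np.array(rhs)
mu_est, nu_est = np.linalg.lstsq(A, b, rcond=None)[0]  # least squares fit
```

On real classifications the log-ratios would be noisy empirical counts, and the least squares solution averages over all configurations.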

1.3.2 Example

Figure 1.12 demonstrates the influence of each component of the algorithm on the resulting segmentations of a T1-weighted image. First, the method of section 1.2, where each voxel is classified independently, was used without the bias

Figure 1.12: Example of how the different components of the algorithm work. From left to right: (a) T1-weighted image, (b) gray matter segmentation without bias field correction and MRF, (c) gray matter segmentation with bias field correction but without MRF, (d) gray matter segmentation with bias field correction and MRF, and (e) gray matter segmentation with bias field correction and MRF without constraints. (Source: Ref. [39].)


correction step (Fig. 1.12(b)). It can be seen that white matter at the top of the brain is misclassified as gray matter. This was clearly improved when the bias field correction step was added (Fig. 1.12(c)). However, some tissues surrounding the brain have intensities that are similar to brain tissue and are wrongly classified as gray matter. With the MRF model described in section 1.3.1, a better distinction is obtained between brain tissues and tissues surrounding the brain (Fig. 1.12(d)). This is most beneficial in the case of single-channel MR data, where it is often difficult to differentiate such tissues based only on their intensity. The MRF cleans up the segmentations of brain tissues, while preserving the detailed interface between gray and white matter, and between gray matter and CSF. Figure 1.13 depicts a 3-D volume rendering of the gray matter segmentation map when the MRF is used.

To demonstrate the effect of the constraints on the MRF parameters μ_kk' and ν_kk' described in section 1.3.1, the same image was processed without such constraints (Fig. 1.12(e)). The resulting segmentation shows nicely distinct regions, but small details, such as small ridges of white matter, are lost. The MRF prior has overregularized the segmentation and should therefore not be used in this form.

Figure 1.13: A 3-D volume rendering of the gray matter segmentation of the data of Fig. 1.12 with bias field correction and MRF. (Source: Ref. [39].)

1.3.3 Validation and Conclusions

The method was validated on simulated MR images of the head that were generated by the BrainWeb MR simulator [45], for varying numbers of MR channels, noise levels, and bias field severities. The automatic segmentations of each tissue k were compared with the known ground truth by calculating the similarity index

S_k = 2 V_k^{12} / ( V_k^1 + V_k^2 )    (1.8)

where V_k^{12} denotes the volume of the voxels classified as tissue k by both raters, and V_k^1 and V_k^2 the volume of class k assessed by each of the raters separately. This metric, first described by Dice [46] and recently re-introduced by Zijdenbos et al. [47], attains the value of 1 if both segmentations are in full agreement, and 0 if there is no overlap at all. For all the simulated data, it was found that the total brain volume was accurately segmented, but the segmentation of gray matter and white matter individually generally did not attain the same accuracy. This was caused by misclassification of the white matter-gray matter interface, where PV voxels do not belong to either white matter or gray matter, but are really a mixture of both.
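For binary segmentations the similarity index is only a few lines of NumPy; a minimal sketch:

```python
import numpy as np

def similarity_index(seg1, seg2):
    """Dice similarity index: 2 * V12 / (V1 + V2) for binary segmentations."""
    seg1, seg2 = np.asarray(seg1, bool), np.asarray(seg2, bool)
    v12 = np.logical_and(seg1, seg2).sum()  # volume labeled by both raters
    v1, v2 = seg1.sum(), seg2.sum()
    return 2.0 * v12 / (v1 + v2)
```

Identical segmentations yield 1, disjoint ones yield 0, in line with the description above.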

The automated method was also validated by comparing its segmentations of nine brain MR scans of children to manual tracings by a human expert. The automated and manual segmentations showed an excellent similarity index of 95% on average for the total brain, but a more moderate similarity index of 83% for gray matter. Figure 1.14 depicts the location of misclassified gray matter voxels for a representative dataset. It can be seen that the automatic algorithm segments the gray matter-CSF interface in more detail than the manual tracer. Some tissue surrounding the brain is still misclassified as gray matter, although this error is already reduced compared to the situation where no MRF prior is used. By far the most misclassifications, however, are due to the assignment of gray-white matter PV voxels to gray matter by the automated method. The human observer has consistently segmented white matter as a thicker structure than the automatic algorithm.

Finally, Park et al. [48] tested the method presented in this section on a database of T1-weighted MR scans of 20 normal subjects. These data along with expert manual segmentations are made publicly available on the WWW by the

Figure 1.14: Comparison between manual delineation and automated tissue classification on a representative dataset. From left to right: (a) axial and coronal slice, (b) corresponding manual segmentation of gray matter, (c) automatic segmentation of gray matter without MRF prior, (d) automatic segmentation of gray matter with MRF, and (e) difference between manual and automatic segmentation with MRF shown in white. (Source: Ref. [39].)


Center for Morphometric Analysis at the Massachusetts General Hospital.4 The segmented image for each case was compared with the corresponding expert manual segmentation by means of an overlap metric. The overlap measure is simply the intersection of two coregistered images divided by their union. Thus, similar pairs of images approach an overlap value of 1, while dissimilar pairs approach 0. Overall, the method presented here outperformed the other listed automated methods (see [49] for details on these other methods) with average overlap measures of 0.681 and 0.708 for GM and WM, respectively (compare to a more recent result: (0.662, 0.683) in [50]). It performed particularly well on the difficult cases where significant bias fields were associated with the image.
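The overlap measure described here is the Jaccard coefficient; a minimal sketch for binary segmentations:

```python
import numpy as np

def overlap_metric(seg1, seg2):
    """Jaccard overlap: intersection over union of two binary segmentations."""
    seg1, seg2 = np.asarray(seg1, bool), np.asarray(seg2, bool)
    inter = np.logical_and(seg1, seg2).sum()
    union = np.logical_or(seg1, seg2).sum()
    return inter / union
```

Note that Jaccard and Dice are monotonically related (J = D / (2 − D)), so a Jaccard overlap of about 0.7 corresponds to a Dice similarity index of about 0.82.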

Partial voluming violates the model assumption that each voxel belongs to only one single class. In reality, PV voxels are a mixture of tissues and every segmentation method that tries to assign them exclusively to one class is doomed to fail. The problem is especially important in images of the brain since the interface between gray and white matter is highly complex, which results in a high

4 http://www.cma.mgh.harvard.edu/ibsr.

volume of PV voxels compared to the volume of pure tissue voxels. Misclassification of this thin interface therefore immediately gives rise to considerable segmentation errors [51].

1.4 Model Outliers and Robust Parameter Estimation

So far, only the segmentation of MR images of normal brains has been addressed. In order to quantify MS lesions or other neuropathology-related signal abnormalities in the images, the method needs to be further extended. Adding an explicit model for the pathological tissues is difficult because of the wide variety of their appearance in MR images, and because not every individual scan contains sufficient pathology for estimating the model parameters. These problems can be circumvented by detecting lesions as voxels that are not well explained by the statistical model for normal brain MR images [52]. The method presented here simultaneously detects the lesions as model outliers and excludes these outliers from the model parameter estimation.

1.4.1 Background

Suppose that J samples y_j, j = 1, 2, ..., J are drawn independently from a multivariate normal distribution with mean μ and covariance matrix Σ, grouped in Φ = {μ, Σ} for notational convenience. Given these samples, the ML parameters Φ can be assessed by maximizing the log-likelihood

log f(Y | Φ) = Σ_j log f(y_j | Φ)    (1.9)

which yields

μ = (1/J) Σ_j y_j,    Σ = (1/J) Σ_j (y_j − μ)(y_j − μ)^T    (1.10)
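For reference, these ML estimates are a few lines of NumPy; note the division by J rather than J − 1, which gives the (biased) ML covariance:

```python
import numpy as np

def ml_normal(Y):
    """ML mean and covariance of samples Y with shape (J, d)."""
    mu = Y.mean(axis=0)
    diff = Y - mu
    sigma = diff.T @ diff / len(Y)  # divide by J, not J - 1 (ML estimate)
    return mu, sigma
```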

In most practical applications, however, the assumed normal model is only an approximation to reality, and estimation of the model parameters $ should not be severely affected by the presence of a limited amount of model outliers.

Considerable research efforts in the field of robust statistics [53] have resulted in a variety of methods for robust estimation of model parameters in the presence of outliers, from which the so-called M-estimators [53] present the most popular family.

Considering Eq. (1.9), it can be seen that the contribution to the log-likelihood of an observation that is atypical of the normal distribution is high, since lim_{f(y|Φ)→0} log f(y | Φ) = −∞. The idea behind M-estimators is to alter Eq. (1.9) slightly in order to reduce the effect of outliers. A simple way to do this, which has recently become very popular in image processing [54] and medical image processing [6, 16, 17, 55], is to model a small fraction of the data as being drawn from a rejection class that is assumed to be uniformly distributed. It can be shown that assessing the ML parameters is now equivalent to maximizing

Σ_j log ( f(y_j | Φ) + λ )    (1.11)

with respect to the parameters Φ, where λ is an a priori chosen threshold [54]. Since lim_{f(y|Φ)→0} log( f(y | Φ) + λ ) = log(λ), the contribution of atypical observations to the log-likelihood is reduced compared to Eq. (1.9).

One possibility to numerically maximize Eq. (1.11) is to iteratively calculate the weights

t(y_j | Φ^(m−1)) = f(y_j | Φ^(m−1)) / ( f(y_j | Φ^(m−1)) + λ )    (1.12)

based on the parameter estimate Φ^(m−1) in iteration (m − 1), and subsequently update the parameters Φ^(m) accordingly:

μ^(m) = Σ_j t(y_j | Φ^(m−1)) · y_j / Σ_j t(y_j | Φ^(m−1))

Σ^(m) = Σ_j t(y_j | Φ^(m−1)) · (y_j − μ^(m))(y_j − μ^(m))^T / Σ_j t(y_j | Φ^(m−1))    (1.13)

Solving an M-estimator by iteratively recalculating weights and updating the model parameters based on these weights is commonly referred to as the W-estimator [56]. The weight t(y_j | Φ) ∈ [0, 1] reflects the typicality of sample j with respect to the normal distribution. For typical samples, t(y_j | Φ) ≈ 1, whereas t(y_j | Φ) ≈ 0 for samples that deviate far from the model. Comparing Eq. (1.13) with Eq. (1.10), it can therefore be seen that the M-estimator effectively down-weights observations that are atypical for the normal distribution, making the parameter estimation more robust against such outliers.
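The full W-estimator loop can be sketched as follows; `lam` plays the role of the uniform rejection-class density λ, and the data and parameter values are purely illustrative:

```python
import numpy as np

def w_estimator(Y, lam=0.05, n_iter=100):
    """Robust mean/covariance of a normal model with a uniform rejection
    class (the W-estimator described in the text).

    Each iteration computes typicality weights t = f / (f + lam) as in
    Eq. (1.12) and reweights the ML updates as in Eq. (1.13).
    """
    Y = np.asarray(Y, float)
    J, d = Y.shape
    mu = Y.mean(axis=0)
    diff = Y - mu
    sigma = diff.T @ diff / J  # ordinary ML estimates as initialization
    for _ in range(n_iter):
        diff = Y - mu
        maha2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)
        dens = np.exp(-0.5 * maha2) / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
        t = dens / (dens + lam)                         # typicality weights
        mu = (t @ Y) / t.sum()                          # weighted mean
        diff = Y - mu
        sigma = (diff * t[:, None]).T @ diff / t.sum()  # weighted covariance
    return mu, sigma, t
```

On data containing gross outliers, the ordinary mean is dragged toward them, while the reweighted mean stays near the inlier cluster and the outliers end up with typicality weights near zero.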

1.4.2 From Typicality Weights to Outlier Belief Values

Since each voxel j contributes only t(y_j | Φ) to the parameter estimation, the remaining fraction

1 − t(y_j | Φ)    (1.14)

reflects the belief that it is a model outlier. The ultimate goal in our application is to identify these outliers, as they are likely to indicate pathological tissues. However, the dependence of Eq. (1.14), through t(y_j | Φ), on the determinant of the covariance matrix Σ prevents its direct interpretation as a true outlier belief value.

In statistics, an observation y is said to be abnormal with respect to a given normal distribution if its so-called Mahalanobis distance

d = √( (y − μ)^T Σ^{−1} (y − μ) )

exceeds a predefined threshold. Regarding Eq. (1.12), the Mahalanobis distance at which the belief that a voxel is an outlier exceeds the belief that it is a regular sample decreases with increasing |Σ|. Therefore, the Mahalanobis distance threshold above which voxels are considered abnormal changes over the iterations as Σ is updated. Because of this problem, it is not clear how λ should be chosen.

Therefore, Eq. (1.12) is modified into

t(y_j | Φ^(m−1)) = f(y_j | Φ^(m−1)) / ( f(y_j | Φ^(m−1)) + exp(−κ²/2) / ( (2π)^{n/2} |Σ^(m−1)|^{1/2} ) )    (1.15)

where |Σ| is explicitly taken into account and where λ is reparameterized using the more easily interpretable κ. This κ > 0 is an explicit Mahalanobis distance threshold that specifies a statistical significance level, as illustrated in Fig. 1.15. The lower κ is chosen, the more easily voxels are considered outliers. On the other hand, choosing κ = ∞ results in t(y_j | Φ^(m−1)) = 1, ∀j, which causes no outliers to be detected at all.
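One way to realize this reparameterization, sketched below under the assumption that the rejection-class density is set to the value the normal density takes at Mahalanobis distance κ, makes the typicality weight cross 0.5 exactly at distance κ, independently of |Σ|:

```python
import numpy as np

def typicality(y, mu, sigma, kappa):
    """Typicality weight with an explicit Mahalanobis threshold kappa
    (illustrative sketch of the reparameterization described in the text)."""
    d = len(mu)
    diff = np.asarray(y) - mu
    maha2 = diff @ np.linalg.inv(sigma) @ diff
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    dens = np.exp(-0.5 * maha2) / norm
    lam = np.exp(-0.5 * kappa ** 2) / norm  # normal density at distance kappa
    return dens / (dens + lam)
```

Because the rejection density scales with |Σ| in the same way as the normal density, the crossover point stays at Mahalanobis distance κ regardless of the current covariance estimate.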

Figure 1.15: The threshold κ defines the Mahalanobis distance at which the belief that a voxel is a model outlier exceeds the belief that it is a regular sample (this figure depicts the unispectral case, where Σ = σ²).


1.4.3 Robust Estimation of MR Model Parameters

Based on the same concepts, the EM framework used in the previous sections for estimating the parameters of models for normal brain MR images can be extended to detect model outliers such as MS lesions. In the original EM algorithm, a statistical classification f(l_j | Y, Φ^(m−1)) is performed in the expectation step, and the subsequent maximization step involves updating the model parameters according to this classification. The weights f(l_j = k | Y, Φ^(m−1)), k = 1, 2, ..., K represent the degree to which voxel j belongs to each of the K tissues. However, since Σ_k f(l_j = k | Y, Φ^(m−1)) = 1, an observation that is atypical for each of the normal distributions cannot have a small membership value for all tissue types simultaneously.

A similar approach as the one described above, where Eq. (1.9) was replaced with the more robust Eq. (1.11) and solved with a W-estimator, results in a maximization step in which model outliers are down-weighted. The resulting equations for updating the model parameters are identical to the original ones, provided that the weights f(l_j | Y, Φ^(m−1)) are replaced everywhere with a combination of two weights f(l_j | Y, Φ^(m−1)) · t(y_j | l_j, Φ^(m−1)), where t(y_j | l_j, Φ^(m−1)), defined as in Eq. (1.15) with the class-conditional distribution f(y_j | l_j, Φ^(m−1)), reflects the degree of typicality of voxel j in tissue class l_j. Since Σ_k f(l_j = k | Y, Φ^(m−1)) · t(y_j | l_j = k, Φ^(m−1)) is not constrained to be unity, model outliers can have a small degree of membership in all tissue classes simultaneously. Therefore, observations that are atypical for each of the K tissue types have a reduced weight on the parameter estimation, which makes the EM procedure robust. Upon convergence of the algorithm, the belief that voxel j is a model outlier is given by

1 − Σ_k f(l_j = k | Y, Φ) · t(y_j | l_j = k, Φ)    (1.16)
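Given per-class membership weights and per-class typicality weights for a voxel, the outlier belief of Eq. (1.16) is a single dot product; a minimal sketch:

```python
import numpy as np

def outlier_belief(memberships, typicalities):
    """Eq. (1.16): belief that a voxel is a model outlier.

    memberships  -- (K,) classification weights f(l_j = k | Y, phi)
    typicalities -- (K,) per-class typicality weights t(y_j | l_j = k, phi)
    """
    return 1.0 - np.dot(memberships, typicalities)
```

A voxel that is typical for its classes gets belief near 0; one that is atypical for every class gets belief near 1.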

Section 1.5 discusses the use of this outlier detection scheme for fully automated segmentation of MS lesions from brain MR images.

1.5 Application to Multiple Sclerosis

In [52], the outlier detection scheme of section 1.4 was applied for fully automatic segmentation of MS lesions from brain MR scans consisting of T1-, T2-, and PD-weighted images. Unfortunately, outlier voxels also occur outside MS lesions. This is typically true for partial volume voxels that, contrary to the model assumptions, do not belong to one single tissue type but are rather a mixture of more than one tissue. Since these are perfectly normal brain tissue, they are prevented from being detected as MS lesions by introducing intensity and contextual constraints on the weights t(y_j | l_j, Φ) calculated in Eq. (1.15).

1.5.1 Intensity and Contextual Constraints

• Since MS lesions appear hyperintense on both the PD- and the T2-weighted images, only voxels that are brighter than the mean intensity of gray matter in these channels are allowed to be outliers.

• Since around 90-95% of MS lesions are white matter lesions, the contextual constraint is added that MS lesions should be located in the vicinity of white matter. In each iteration, the normal white matter is fused with the lesions to form a mask of the total white matter. Using an MRF as in section 1.3, a voxel is discouraged from being classified as an MS lesion in the absence of neighboring white matter. Since the MRF parameters are estimated from the data in each iteration as in section 1.3, these contextual constraints automatically adapt to the voxel size of the data.
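The intensity constraint in the first bullet amounts to a simple masking operation; the array names and thresholds below are illustrative, not the authors' implementation:

```python
import numpy as np

def apply_intensity_constraint(belief, t2, pd, gm_mean_t2, gm_mean_pd):
    """Suppress lesion candidates that are not hyperintense on both T2 and PD.

    Voxels darker than the gray matter mean in either channel are forced
    back to 'typical' (outlier belief 0), per the first constraint above.
    All arguments are hypothetical per-voxel arrays / scalar means.
    """
    hyper = (t2 > gm_mean_t2) & (pd > gm_mean_pd)
    return np.where(hyper, belief, 0.0)
```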

Figure 1.16: The complete method for MS lesion segmentation iteratively interleaves classification of the voxels into normal tissue types, MS lesion detection, estimation of the normal distributions, bias field correction, and MRF parameter estimation.


The complete method is summarized in Fig. 1.16. It iteratively interleaves statistical classification of the voxels into normal tissue types; assessment, for each voxel, of the belief that it is not part of an MS lesion, based on its intensity and on the classification of its neighboring voxels; and, based only on what is considered normal tissue, estimation of the MRF, intensity distribution, and bias field parameters. Upon convergence, the belief that voxel j is part of an MS lesion is obtained from Eq. (1.16). The method is fully automated, with only one single parameter that needs to be experimentally tuned: the Mahalanobis threshold κ in Eq. (1.15). A 3-D rendering of the segmentation maps, including the segmentation of MS lesions, is shown in Fig. 1.17.

1.5.2 Validation

As part of the BIOMORPH project [57], we analyzed MR data acquired during a clinical trial in which 50 MS patients were repeatedly scanned with an interval of approximately 1 month over a period of about 1 year. The serial image data consisted at each time point of a PD/T2-weighted image pair and

Figure 1.17: A 3-D rendering of (a) gray matter, (b) white matter, and (c) MS lesion segmentation maps. (Color slide)


a T1-weighted image with 5 mm slice thickness. From 10 of the patients, two consecutive time points were manually analyzed by a human expert who traced MS lesions based only on the T2-weighted images. The automatic algorithm was repeatedly applied with values of the Mahalanobis distance κ varying from 2.7 (corresponding to a significance level of p = 0.063) to 3.65 (corresponding to p = 0.004), in steps of 0.05. The automatic delineations were compared with the expert segmentations by comparing the so-called total lesion load (TLL), measured as the number of voxels classified as MS lesion, on these 20 scans. The TLL value calculated by the automated method decreased when κ was increased, since the higher the κ, the less easily voxels are rejected from the model. Varying κ from 2.7 to 3.65 resulted in an automatic TLL ranging from 150% down to only 25% of the expert TLL. However, despite the strong influence of κ on the absolute value of the TLL, the linear correlation between the automated TLLs of the 20 scans and the expert TLLs was remarkably insensitive to the choice of κ. Over this wide range, the correlation coefficient varied between 0.96 and 0.98.

Comparing the TLL of two raters does not take into account any spatial correspondence of the segmented lesions. We therefore calculated the similarity index defined in Eq. (1.8), which is simply the volume of intersection of the two segmentations divided by the mean of the two segmentation volumes. For the 20 scans, Fig. 1.18(a) depicts the value of this index for varying κ, both with and without the bias correction step included in the algorithm, clearly demonstrating the need for bias field correction. The best correspondence, with a similarity index of 0.45, was found for κ ≈ 3. For this value of κ, the automatic TLL was virtually equal to the expert TLL, and therefore, a similarity index of


Figure 1.18: (a) Similarity index between the automatic and the expert lesion delineations on 20 images for varying k, with and without the bias field correction component enabled in the automated method. (b) The 20 automatic total lesion load measurements for k = 3 shown along with the expert measurements. (Source: Ref. [52].)

0.45 means that less than half of the voxels labeled as lesion by the expert were also identified by the automated method and vice versa.

For illustration purposes, the expert TLLs of the 20 scans are depicted along with the automatic ones for k = 3 in Fig. 1.18(b). A paired t test did not reveal a significant difference between the manual and these automatic TLL ratings (p = 0.94). Scans 1 and 2 are two consecutive scans from one patient, 3 and 4 from the next and so on. Note that in 9 out of 10 cases, the two ratings agree over the direction of the change of the TLL over time. Figure 1.19 displays the MR data of what is called scan 19 in Fig. 1.18(b) and the automatically calculated classification along with the lesion delineations performed by the human expert.

1.5.3 Discussion

Most of the methods for MS lesion segmentation described in the literature are semiautomated rather than fully automated methods, designed to facilitate the tedious task of manually outlining lesions by human experts, and to reduce the inter- and intrarater variability associated with such expert segmentations. Typical examples of user interaction in these approaches include accepting or rejecting automatically computed lesions [58] or manually drawing regions of


Figure 1.19: Automatic classification of one of the 20 serial datasets that were also analyzed by a human expert. (a) T1-weighted image; (b) T2-weighted image; (c) PD-weighted image; (d) white matter classification; (e) gray matter classification; (f) CSF classification; (g) MS lesion classification; (h) expert delineation of the MS lesions. (Source: Ref. [52].)


pure tissue types for training an automated classifier [58-61]. While these methods have proven to be useful, they remain impractical when hundreds of scans need to be analyzed as part of a clinical trial, and the variability of manual tracings is not totally removed. In contrast, the method presented here is fully automated, as it uses a probabilistic brain atlas to train its classifier. Furthermore, the atlas provides spatial information that prevents nonbrain voxels from being classified as MS lesions, allowing the method to work without the often-used tracing of the intracranial cavity in a preprocessing step [58-63].

A unique feature of our algorithm is that it automatically adapts its intensity models and contextual constraints when analyzing images that were acquired with a different MR pulse sequence or voxel size. Zijdenbos et al. described [64] and validated [65] a fully automated pipeline for MS lesion segmentation based on an artificial neural network classifier. Similarly, Kikinis, Guttmann et al. [62, 66] have developed a method with minimal user intervention that is built on the EM classifier of Wells et al. [4] with dedicated pre- and postprocessing steps. Both methods use a fixed classifier that is trained only once and that is subsequently used to analyze hundreds of scans. In clinical trials, however, interscan variations in cluster shape and location in intensity space cannot be excluded, not only because of hardware fluctuations of MR scanners over a period of time, but also because different imagers may be used in a multicenter trial [66]. In contrast to the methods described above, our algorithm retrains its classifier on each individual scan, making it adaptive to such contrast variations.

Often, a postprocessing step is applied to automatically segmented MS lesions, in which false positives are removed based on a set of experimentally tuned morphologic operators, connectivity rules, size thresholds, etc. [59, 60, 62]. Since such rules largely depend on the voxel size, they may need to be retuned for images with a different voxel size. Alternatively, images can be resampled to a specific image grid before processing, but this introduces partial voluming that can reduce the detection of lesions considerably, especially for small lesion loads [66]. To avoid these problems, we have added explicit contextual constraints on the iterative MS lesion detection that automatically adapt to the voxel size. Similar to other methods [59, 61, 63, 64], we exploit the knowledge that the majority of MS lesions occur inside white matter. Our method fuses the normal white matter with the lesions in each iteration, producing, in combination with MRF constraints, a prior probability mask for white matter that is automatically updated during the iterations. Since the MRF parameters are reestimated for each individual scan, the contextual constraints automatically adapt to the voxel size of the images.

Although the algorithm we present is fully automatic, an appropriate Mahalanobis distance threshold κ has to be chosen in advance. When evaluating the role of κ, a distinction has to be made between the possible application areas of the method. In clinical trials, the main requirement for an automated method is that its measurements change in response to a treatment proportionately to manual measurements, rather than being exactly equivalent to them [67, 68]. In section 1.5.2 it was shown that the automatic measurements changed proportionately to the manual measurements over a wide range of κ, with high correlation coefficients between 0.96 and 0.98. Therefore, the actual choice of κ is fairly unimportant for this type of application. However, the role of κ is much more critical when the goal is to investigate basic MS mechanisms or time correlations of lesion groups in MS time series, as these applications require that the lesions are also spatially correctly detected. In general, the higher the resolution and the better the contrast between lesions and unaffected tissue in the images, the easier MS lesions are detected by the automatic algorithm and the higher κ should be chosen. Therefore, the algorithm presumably needs to be tuned for different studies, despite the automatic adaptation of the tissue models and the MRF parameters to the data.

1.6 Application to Epilepsy

Epilepsy is the most frequent serious primary neurological illness. Around 30% of epilepsy patients have epileptic seizures that are not controlled by medication. Epilepsy surgery is by far the most effective treatment for these patients. The aim of any presurgical evaluation in epilepsy surgery is to delineate the epileptogenic zone as accurately as possible. The epileptogenic zone is that part of the brain that has to be removed surgically in order to render the patient seizure-free.

We applied the framework presented in this chapter to detect structural abnormalities related to focal cortical dysplasia (FCD) epileptic lesions in the cortical and subcortical grey matter in high-resolution MR images of the human brain. FCD is characterized by a focal thickening of the cerebral cortex, loss of definition between the gray and the white matter at the site of the lesion, and a hypointense T1-weighted MR signal in the gray matter. The approach is volumetric: feature images isomorphic to the original MR image are generated, representing the spatial distribution of grey matter density or, following Antel et al. [69, 70], related features such as the ratio of cortical thickness over local intensity gradient. Since these feature images show consistently thick regions in certain parts of the normal anatomy (e.g., the cerebellum), the specificity (reduction of the number of false responses) of intrasubject detection of epileptogenic lesions can be increased by comparing the feature response images of patients with those of a group of normal controls. We used the machinery of statistical parametric mapping (SPM) [71], as is standard in functional imaging studies, to make voxel-specific inferences.

First, each 3-D MR image is automatically segmented (using the method presented in this chapter) into grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF), resulting in an image representing the individual spatial distribution of GM. The statistical priors (Fig. 1.3) for each tissue class are warped to each subject using a nonrigid multimodal free-form (involving many degrees of freedom) registration algorithm [24]. Segmentation using a combination of intensity-based tissue classification and geometry-based atlas registration helps in reducing the misclassification of extra-cerebral tissues as gray matter and aids in the reduction of false positive findings during the statistical analysis. The continuous gray and white matter classifications are binarized by deterministically assigning each voxel to the class for which it has the maximum probability of occupancy among all classes considered. Next, we estimate the cortical thickness by the method described in [72]. The method essentially solves an Eikonal PDE in the domain of the gray matter and gives the cortical thickness at each GM voxel as the sum of the minimum Euclidean distance of the voxel to the GM-WM interface and to the GM-CSF interface. Following the method of [69, 70], we generate the feature maps, for each subject (normals and patients), by dividing the cortical thickness by the signal gradient in the gray matter region. Next, each feature image is geometrically normalized to a standard space using a 12-parameter affine MMI registration [21]. Subsequently, individual feature maps of patients are compared to those of 64 control subjects in order to detect statistically significant clusters of abnormal feature map values.
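The binarization and feature-map steps above can be sketched as follows. This is an illustrative NumPy sketch, not the actual implementation: the local intensity gradient is approximated here by simple finite differences, and all function names are hypothetical:

```python
import numpy as np

def binarize_segmentation(prob_maps):
    """Assign each voxel to the class with maximum posterior probability.

    prob_maps: probabilistic segmentation of shape (n_classes, X, Y, Z).
    Returns an integer label image of shape (X, Y, Z).
    """
    return np.argmax(prob_maps, axis=0)

def feature_map(thickness, image, gm_mask, eps=1e-6):
    """Ratio of cortical thickness over local intensity gradient,
    evaluated within the grey-matter region only.

    thickness: cortical thickness per voxel, shape (X, Y, Z).
    image: the MR intensity image, same shape.
    gm_mask: boolean grey-matter mask, same shape.
    """
    # Gradient magnitude via central finite differences along each axis
    grad = np.linalg.norm(np.gradient(image.astype(float)), axis=0)
    feat = np.zeros_like(thickness, dtype=float)
    feat[gm_mask] = thickness[gm_mask] / np.maximum(grad[gm_mask], eps)
    return feat
```

The feature is deliberately large both where the cortex is thick and where the GM-WM intensity transition is blurred, matching the two MR characteristics of FCD described above.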

Figure 1.20 shows three orthogonal views of overlays of clusters of significantly abnormal grey matter voxels in a 3-D MR image of a single epileptic patient.

Figure 1.20: Cross-sectional overlay of the detected focal cortical thickening locus. The colour scale increases with statistical significance. Significance is measured by comparison of feature measurements (cortical thickness over intensity gradient) to a control subject database (Color Slide).

1.7 Application to Schizophrenia

Several studies have reported morphological differences in the brains of schizophrenic patients when compared to normal controls [73], such as enlargement of the cerebral ventricles and decreased or reversed cerebral asymmetries. These findings suggest the presence of structural brain pathology in schizophrenia. It has been hypothesized that schizophrenia is a syndrome of anomalies in normal brain development [74], whose origin may be genetically determined. Characterization of the morphological processes in schizophrenia may lead to new pharmacological treatments aimed at prevention and cure rather than suppression of symptoms. There is, in addition, a particular focus on asymmetry as a defining characteristic of the human brain and a correlate of language. Techniques are therefore required for describing and quantifying cerebral asymmetries, to determine where these asymmetries are located, how variable they are between individuals, and how their distribution in individuals with schizophrenia differs from that in the general population.

As an application of the framework proposed in this chapter, we present here a method for fully automatic quantification of cerebral grey and white matter asymmetry from MR images. White and grey matter are segmented after bias correction by the intensity-based tissue classification algorithm presented in section 1.2. Separation of the computed white and grey matter probability maps into left and right halves is achieved by nonrigid registration of the study image to a template MR image in which left and right hemispheres have been carefully segmented. The delineations of left and right hemispheres in the template image were transformed into binary label images, which were subsequently edited by morphological operations to match the brain envelope rather than the individual gyri and sulci, so as to be more robust against differences in local cortical topology. The template image is matched to the study image by a combination of affine [21] and locally nonrigid transformations [24]. The resulting deformation field is then applied to the label images, such that matched outlines of left and right hemispheres are obtained. These are used to separate left and right halves in the original grey and white matter segmentations and, at the same time, to remove nonrelevant structures such as the cerebellum and brain stem. Finally, grey and white matter volumes are computed for each half of the brain separately by integrating the corresponding probability maps within the brain regions of interest defined by the matched template image. Figure 1.21 illustrates that the grey and white matter segmentation maps obtained from the original images are correctly split into separate maps for left and right hemispheres by nonrigid registration with the labelled template image.
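The final volume computation reduces to integrating each tissue probability map within the warped hemisphere masks. A minimal sketch (assuming the warped label image marks left and right hemispheres with labels 1 and 2, voxels outside the cerebrum with 0; names are illustrative):

```python
import numpy as np

def hemisphere_volumes(prob_map, hemi_labels, voxel_volume_mm3):
    """Integrate a tissue probability map (e.g., grey matter) within
    left/right hemisphere masks to obtain tissue volumes in mm^3.

    prob_map: tissue probability per voxel, values in [0, 1].
    hemi_labels: integer label image (1 = left, 2 = right, 0 = excluded),
                 same shape as prob_map.
    voxel_volume_mm3: physical volume of one voxel.
    """
    vols = {}
    for name, lab in (("left", 1), ("right", 2)):
        # Summing probabilities (rather than counting binarized voxels)
        # retains the partial-volume information of the soft segmentation.
        vols[name] = float(prob_map[hemi_labels == lab].sum()) * voxel_volume_mm3
    return vols
```

An asymmetry index can then be derived from the left and right volumes, e.g. their difference divided by their mean.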

Various authors have presented alternative techniques to separate the brain hemispheres by the so-called midsagittal plane, defined as the plane that best fits the interhemispheric fissure of the brain [76] or as the plane that maximizes similarity between the image and its reflection relative to this plane [77, 78]. The advantage of the approach of intensity-driven nonrigid registration to a labelled template image, as presented here, is that it does not assume the boundary between both hemispheres to be planar. We refer the reader to [75] for further details and a discussion of the results obtained.

Figure 1.21: Left-right hemisphere grey matter and white matter separation on high-quality, high-resolution data, consisting of a single sagittal T1-weighted image (Siemens Vision 1.5 T scanner, 3D MPRAGE, 256 × 256 matrix, 1.25 mm slice thickness, 128 slices, FOV = 256 mm, TR = 11.4 ms, TE = 4.4 ms) with good contrast between grey matter, white matter, and the tissues surrounding the brain. (a): Coronal sections through the grey matter segmentation map before (top row) and after left-right and cerebrum-cerebellum separation. (b) and (c): 3D renderings of the segmented white and grey matter, respectively. (Source: Ref. [75].)
