Preprocessing

6.1 Image Registration

The concept of voxel-based multispectral image segmentation requires anatomically correct alignment of the data sets acquired in different image acquisition procedures. As pointed out in Section 5, for MRI data sets of the human brain this may already be achieved during the acquisition procedure itself by stabilizing the subject's head position and applying a constant field of view in the different MRI sequences. In general, this is sufficient to obtain an acceptable co-registration of the different MRI data sets. Nevertheless, there may be situations in which motion artifacts cannot be avoided.

In these cases additional image registration techniques have to be applied. Registration methods can be classified with regard to the level of human interaction required within the procedure (see, e.g., [24,27]):

(1) Manual interactive registration by a human observer.

(2) Semiautomatic procedures that require less human interaction. An example is the so-called "mark-and-link" method, in which a human observer identifies corresponding anatomical landmarks that serve as reference points for the registration procedure.

(3) Fully automatic procedures that do not require any human interaction (e.g., [18,29,4]). These methods are frequently applied for the superposition of data sets obtained with different medical imaging modalities, such as MRI, PET, SPECT, or CT, in order to exploit the complementary diagnostic information provided by the different modalities.
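A minimal sketch of the intensity-based idea behind fully automatic registration: an exhaustive search over integer translations that minimizes the sum of squared gray-level differences (SSD). This is an illustrative toy example, not one of the cited methods; practical implementations also optimize rotations and frequently use similarity measures such as mutual information. The function name `register_translation` and the synthetic images are hypothetical.

```python
import numpy as np

def register_translation(reference, moving, max_shift=5):
    """Exhaustively search integer (dy, dx) shifts and return the one
    minimizing the sum of squared gray-level differences (SSD)."""
    best_shift, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((reference.astype(float) - shifted) ** 2)
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

# Synthetic example: a bright square displaced by a known shift.
ref = np.zeros((32, 32))
ref[10:20, 10:20] = 1.0
mov = np.roll(np.roll(ref, -2, axis=0), 3, axis=1)
print(register_translation(ref, mov))  # (2, -3)
```

The brute-force search is only feasible for small shift ranges; gradient-based or multi-resolution optimization is used in practice.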

The subject of registration is addressed in detail in the Registration section of this Handbook. Figure 5 shows an example of manual interactive registration: for this purpose, the information of two data sets has to be condensed within a single image. To obtain simultaneous visibility of image information, the gray levels of the first data set are represented by the pixel intensity of the merged image, whereas the gray levels of the second data set are represented by the pixel color. In Fig. 5 a T2 weighted image (Fig. 5a) and a T1 weighted image (Fig. 5b) are superimposed in Fig. 5c. Misalignment of the anatomical structures can be identified. The T1 weighted image is then moved by translation and rotation under interactive visual

FIGURE 5 Interactive matching of two corresponding images. (a) Gray level representation of a T2 weighted image (reference image). (b) Gray level representation of a T1 weighted image. (c) Superposition of the two images; in practice, the T2 weighted image is represented by the color of the merged image, whereas the T1 weighted image is represented by the pixel intensity; here, only black-and-white representations are shown. Misalignment with respect to corresponding anatomical features can clearly be identified. (d) The T1 weighted image is moved in order to obtain a correct match with the T2 weighted image.

FIGURE 6 Presegmentation by masking of extracerebral structures. (a) Original image. (b) Presegmented image.

control of a human observer until a correct anatomical alignment with respect to the T2 weighted image is achieved (Fig. 5d).
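The condensation of two co-registered slices into a single control image can be sketched as follows. As a simplifying assumption, the two slices are coded into the red and green channels of an RGB array rather than the exact intensity/color scheme described above; aligned structures then appear yellow, while misaligned edges show up as pure red or green fringes. The function names are illustrative.

```python
import numpy as np

def overlay(reference, moving):
    """Stack two normalized slices into the red and green channels of an
    RGB image for visual inspection of the registration quality."""
    def norm(img):
        img = img.astype(float)
        return (img - img.min()) / (img.max() - img.min() + 1e-12)
    r, g = norm(reference), norm(moving)
    return np.stack([r, g, np.zeros_like(r)], axis=-1)

# Toy slices: the "moving" image is the reference shifted by one pixel.
ref = np.zeros((8, 8))
ref[2:6, 2:6] = 100
mov = np.roll(ref, 1, axis=1)
rgb = overlay(ref, mov)
print(rgb.shape)  # (8, 8, 3)
```

Displaying `rgb` with any image viewer makes the residual misalignment directly visible, which is the feedback loop the human observer uses during interactive matching.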

6.2 Presegmentation

After correct anatomical registration of the n image data sets, an additional preprocessing step can be performed: All the extracerebral structures that are not required for the tissue classification task should be excluded from the data set.

Figure 6a shows a T1 weighted image of a coronal cross-section through the head. Besides the brain, various other structures can be identified, such as the skull and the pharynx. By defining a mask, these extracerebral structures are removed. Finally, only the structures belonging to the brain remain in the data set (Fig. 6b).
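The masking step itself is straightforward; the following is a minimal sketch assuming the brain mask is already available as a boolean array of the same shape (e.g., obtained by manual contour tracing). The name `apply_brain_mask` is hypothetical.

```python
import numpy as np

def apply_brain_mask(image, mask):
    """Zero out all extracerebral voxels; `mask` marks voxels that
    belong to the brain."""
    return np.where(mask, image, 0)

# Toy 4x4 slice with a 2x2 "brain" region in the center.
slice_ = np.arange(16).reshape(4, 4)
brain = np.zeros((4, 4), dtype=bool)
brain[1:3, 1:3] = True
masked = apply_brain_mask(slice_, brain)
print(masked.sum())  # 5 + 6 + 9 + 10 = 30
```

In the multispectral setting the same mask is applied to all n data sets, since after registration the brain occupies the same voxel positions in each of them.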

The restriction to brain structures by excluding all the voxels in the surrounding tissue structures provides important advantages for the subsequent segmentation task:

(1) Vector quantization is restricted to voxels that are relevant with respect to the segmentation task. The resulting codebook thus represents the gray level distribution of the brain voxels without the contribution of irrelevant extracerebral voxels.

(2) The gray level range is restricted. Without presegmentation, some codebook vectors would specialize in tissue classes outside the brain, which would lead to a coarser representation of the tissue classes within the brain. This could only be compensated by increasing the total number of codebook vectors applied in the vector quantization procedure.

(3) Voxels inside and outside the brain with a similar gray level representation do not cause problems for the brain tissue segmentation task. If presegmentation were omitted, such voxels would be attributed to the same codebook vector, i.e., they could not be separated from each other. This could only be achieved by considerably increasing the number of codebook vectors in order to obtain a more fine-grained resolution of the gray-level feature space, which, in turn, would increase the computational expense of vector quantization.
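The collision described in item (3) can be illustrated with a toy one-dimensional codebook. The gray values and class labels below are hypothetical and chosen only to show that two anatomically different tissues with similar gray levels are assigned to the same codebook vector by a nearest-neighbor rule.

```python
import numpy as np

def nearest_codebook(values, codebook):
    """Assign each gray value to the index of its nearest codebook
    vector (nearest-neighbor rule in a 1-D gray-level feature space)."""
    return np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)

# Hypothetical 1-D codebook, e.g. CSF, white matter, and fat.
codebook = np.array([30.0, 120.0, 200.0])
wm = np.array([119.0, 121.0])      # white matter voxels (inside brain)
skull = np.array([117.0, 118.0])   # skull voxels with similar gray level

print(nearest_codebook(wm, codebook))     # [1 1]
print(nearest_codebook(skull, codebook))  # [1 1] -> same codebook vector
```

Both tissue types collapse onto codebook vector 1, so they cannot be separated in the gray-level feature space alone; removing the skull voxels beforehand avoids the conflict without enlarging the codebook.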

The last item in particular justifies the additional effort for presegmentation. In analogy to image registration, there is a wide scope of methods for presegmentation ranging from manual contour tracing to semiautomatic or fully automatic procedures. The latter exist in numerous implementations and are available on a commercial basis. They are frequently based on filter operations or region-growing algorithms (see, e.g., [5,11]). For presegmentation of the data sets included in this chapter, we performed manual contour tracing by human observers.

6.3 Rescaling

In a last preprocessing step, the gray levels of each data set are normalized to the unit interval [0, 1]. Let G ∈ ℝ^(mx × my × l × n) represent the gray levels of a presegmented multispectral data set according to the explanations of the previous sections, consisting of n data sets with l images each of size mx × my (n = 4, l = 63, mx = my = 256 for the data applied in this chapter). G_rstu thus represents the gray level of a voxel, where u denotes the index of the single data set (e.g., 1 = T1 weighted, 2 = T2 weighted, 3 = proton density weighted, 4 = inversion recovery), t the image, i.e., slice number, and r, s the x- and y-position within the image, respectively. Let g_min(u) and g_max(u) denote the minimal and maximal gray level of data set u; each gray level is then rescaled according to (G_rstu − g_min(u)) / (g_max(u) − g_min(u)).
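The rescaling can be sketched as follows, assuming min-max normalization applied independently to each of the n spectral data sets; the array layout (mx, my, l, n) follows the definition above, and the function name is illustrative.

```python
import numpy as np

def rescale_to_unit(data):
    """Normalize each of the n spectral data sets (last axis)
    independently to [0, 1] via min-max scaling."""
    out = data.astype(float)
    for u in range(data.shape[-1]):
        g = out[..., u]
        g_min, g_max = g.min(), g.max()
        out[..., u] = (g - g_min) / (g_max - g_min)
    return out

# Toy multispectral volume: 8x8 images, 3 slices, 2 spectral channels.
vol = np.random.default_rng(0).integers(0, 4096, size=(8, 8, 3, 2))
scaled = rescale_to_unit(vol)
print(scaled.min(), scaled.max())  # 0.0 1.0
```

Normalizing each data set separately ensures that no single MRI sequence dominates the distance computations in the subsequent vector quantization merely because of a larger raw gray level range.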

