
The image edges can be found by locating pixels where the Laplacian makes a transition through zero (zero crossings). Figure 8A shows the result of a 3 × 3 Laplacian applied to the image in Fig. 7A. The zero crossings of the Laplacian are shown in Fig. 8C.

FIGURE 8 Results of Laplacian and Laplacian of Gaussian (LoG) applied to the original image shown in Fig. 7A. (A) 3 × 3 Laplacian image, (B) result of a 7 × 7 Gaussian smoothing followed by a 7 × 7 Laplacian, (C) zero crossings of the Laplacian image A, (D) zero crossings of the LoG image B.

All edge detection methods that are based on a gradient or Laplacian are very sensitive to noise. In some applications, noise effects can be reduced by smoothing the image before applying an edge operation. Marr and Hildreth [72] proposed smoothing the image with a Gaussian filter before application of the Laplacian (this operation is called Laplacian of Gaussian, LoG). Figure 8B shows the result of a 7 × 7 Gaussian followed by a 7 × 7 Laplacian applied to the original image in Fig. 7A. The zero crossings of the LoG operator are shown in Fig. 8D. The advantage of the LoG operator compared to the Laplacian is that the edges of the blood vessels are smoother and better outlined. However, in both Figs. 8C and D, nonsignificant edges are detected in regions of almost constant gray level. To solve this problem, the information about the edges obtained using the first and second derivatives can be combined [107]. This approach was used by Goshtasby and Turner [38] to extract the ventricular chambers in flow-enhanced MR cardiac images. They used a combination of the zero crossings of the LoG operator and the local maxima of the gradient magnitude image, followed by a curve-fitting algorithm.
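To make the operations concrete, here is a small NumPy sketch of a LoG edge detector with zero-crossing detection. The synthetic test image, the kernel sizes, and the sigma value are assumptions chosen for illustration, not the exact filters used to produce Fig. 8:

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2D convolution, written out explicitly for clarity."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian smoothing kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def zero_crossings(img):
    """Mark pixels where the sign changes against the right or lower neighbor."""
    zc = np.zeros(img.shape, dtype=bool)
    zc[:, :-1] |= np.signbit(img[:, :-1]) != np.signbit(img[:, 1:])
    zc[:-1, :] |= np.signbit(img[:-1, :]) != np.signbit(img[1:, :])
    return zc

# 3 x 3 Laplacian kernel (4-neighbor form)
laplacian = np.array([[0.0, 1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0, 1.0, 0.0]])

# Synthetic test image: a bright square on a dark background
img = np.zeros((32, 32))
img[10:22, 10:22] = 100.0

smoothed = convolve2d(img, gaussian_kernel(7, sigma=1.5))  # 7 x 7 Gaussian
log_img = convolve2d(smoothed, laplacian)                  # then 3 x 3 Laplacian
edges = zero_crossings(log_img)                            # LoG zero crossings
```

The zero crossings trace a closed contour around the square's border, while the flat background produces none, which mirrors the behavior described for Figs. 8C and 8D.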

The Marr-Hildreth operator was used by Bomans et al. [12] to segment the MR images of the head. In a study of coronary arteriograms, Sun et al. [110] used a directional low-pass filter to average image intensities in the direction parallel to the vessel border. Other edge-finding algorithms can be found in Refs. [24,30,36,96].

6 Multispectral Techniques

Most traditional segmentation techniques use images that represent only one type of data, for example MR or CT. If different images of the same object are acquired using several imaging modalities, such as CT, MR, PET, or ultrasound, or by collecting images over time, they can provide different features of the objects, and this spectrum of features can be used for segmentation. Segmentation techniques based on the integration of information from several images are called multispectral or multimodal [20,22,29,90,103,118].

6.1 Segmentation Using Multiple Images Acquired by Different Imaging Techniques

In the case of a single image, pixel classification is based on a single feature (gray level), and segmentation is done in one-dimensional (single-channel) feature space. In multispectral images, each pixel is characterized by a set of features and the segmentation can be performed in multidimensional (multichannel) feature space using clustering algorithms. For example, if the MR images were collected using T1, T2, and proton-density imaging protocols, the multispectral data set for each tissue class results in the formation of a tissue cluster in three-dimensional feature space. The simplest approach is to construct a 3D scatter plot, where the three axes represent pixel intensities for T1, T2, and proton density images. The clusters on such a scatter plot can be analyzed and the segmentation rules for different tissues can be determined using automatic or semiautomatic methods [13,19].
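As a sketch of clustering in such a three-channel feature space, a plain k-means loop (Lloyd's algorithm) can separate two synthetic "tissue" clusters. The T1/T2/proton-density intensity means below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (T1, T2, PD) intensities for two synthetic tissue classes
n = 200
tissue_a = rng.normal(loc=[30.0, 80.0, 60.0], scale=2.0, size=(n, 3))
tissue_b = rng.normal(loc=[90.0, 20.0, 50.0], scale=2.0, size=(n, 3))
features = np.vstack([tissue_a, tissue_b])   # one 3D feature vector per pixel

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd's k-means clustering in feature space."""
    gen = np.random.default_rng(seed)
    centers = x[gen.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to the nearest cluster center
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([x[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(features, k=2)
```

With well-separated clusters like these, the two recovered labels partition the pixels cleanly into the two tissue classes; real MR tissue clusters overlap more and usually need the more elaborate classifiers cited below.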

There are many segmentation techniques used in multimodality images. Some of them are k-nearest neighbors (kNN) [19,55,76], k-means [111,118], fuzzy c-means [12,40], artificial neural network algorithms [19,89], expectation/maximization [31,58,125], and adaptive template moderated spatially varying statistical classification techniques [122]. All multispectral techniques require images to be properly registered. In order to reduce noise and increase the performance of the segmentation techniques, images can be smoothed. Excellent results have been obtained with adaptive filtering [20], such as Bayesian processing, nonlinear anisotropic diffusion filtering, and filtering with wavelet transforms [32,49,50,103,124,130].
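A minimal sketch of the kNN classifier in the same multichannel feature space follows; the training samples, class labels, and query pixels are hypothetical values chosen so the vote is easy to follow:

```python
import numpy as np

# Hypothetical labeled training pixels: (T1, T2, PD) feature vectors
train_x = np.array([[30.0, 80.0, 60.0],
                    [32.0, 78.0, 61.0],
                    [90.0, 20.0, 50.0],
                    [88.0, 22.0, 49.0]])
train_y = np.array([0, 0, 1, 1])   # 0 and 1 are illustrative tissue classes

def knn_classify(x, train_x, train_y, k=3):
    """Label each query vector by majority vote of its k nearest training samples."""
    d = np.linalg.norm(train_x[None, :, :] - x[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]      # indices of the k closest samples
    votes = train_y[nearest]                    # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])

pixels = np.array([[31.0, 79.0, 60.0],
                   [89.0, 21.0, 50.0]])
pred = knn_classify(pixels, train_x, train_y, k=3)
```

Each query pixel lands next to one cluster of training samples, so the majority vote assigns it that cluster's class (here, `[0, 1]`).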

To illustrate the advantages of using multispectral segmentation, we show in Fig. 9 the results of adaptive segmentation by Wells et al. [125] applied to dual-echo (T2-weighted and proton-density weighted) images of the brain. The adaptive segmentation technique is based on the expectation/maximization algorithm (EM) [26a] and uses knowledge of tissue properties and intensity inhomogeneities to correct and segment MR images. The technique has been very effective in segmenting brain tissue in a study including more than 1000 brain scans [125]. Figures 9A and B present the original T2 and proton-density images, respectively. Both images were obtained from a healthy volunteer on a 1.5-T MR scanner.

FIGURE 9 The results of adaptive segmentation applied to dual-echo images of the brain. (A) Original T2-weighted image, (B) original proton-density weighted image, (C) result of conventional statistical classification, (D) result of EM segmentation. The tissue classes are represented by colors: blue, CSF; green, white matter; gray, gray matter; pink, fat; black, background. See also Plate 4. (Courtesy of Dr. W. M. Wells III, Surgical Planning Lab, Department of Radiology, Brigham and Women's Hospital, Boston.)


Figure 9C shows a result of conventional statistical classification, using nonparametric intensity models derived from images of the same type from a different individual. The classification overestimates white matter and shows asymmetry in the gray matter thickness due to intrascan inhomogeneities. Considerable improvement is evident in Fig. 9D, which shows the result of EM segmentation after convergence at 19 iterations.

Adaptive segmentation [125] is a generalization of standard intensity-based classification that, in addition to the usual tissue class conditional intensity models, incorporates models of the intra- and interscan intensity inhomogeneities that usually occur in MR images. The EM algorithm is an iterative algorithm that alternates between conventional statistical tissue classification (the "E" step) and the reestimation of a correction for the unknown intensity inhomogeneity (the "M" step).

The EM approach may be motivated by the following observations. If an improved intensity correction is available, it is a simple matter to apply it to the intensity data and obtain an improved classification. Similarly, if an improved classification is available, it can be used to derive an improved intensity correction, for example, by predicting image intensities based on tissue class, comparing the predicted intensities with the observed intensities, and smoothing. Eventually, the process converges, typically in less than 20 iterations, and yields a classification and an intensity correction.
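A toy one-dimensional version of this alternation, with known class means and a multiplicative gain standing in for the intensity inhomogeneity (all numbers below are invented for the sketch), might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D "scan line": two tissue classes under a slowly varying gain artifact
means = np.array([40.0, 80.0])                  # tissue class intensity models
true_labels = rng.integers(0, 2, size=256)
bias = 1.0 + 0.2 * np.linspace(-1.0, 1.0, 256)  # multiplicative inhomogeneity
signal = means[true_labels] * bias + rng.normal(0.0, 1.0, 256)

def smooth(x, width=31):
    """Edge-normalized moving average (stands in for the smoothing step)."""
    k = np.ones(width)
    return np.convolve(x, k, "same") / np.convolve(np.ones_like(x), k, "same")

gain = np.ones(256)                             # current inhomogeneity estimate
for _ in range(10):
    corrected = signal / gain
    # "E" step: classify each pixel by the nearest class mean
    labels = np.abs(corrected[:, None] - means[None, :]).argmin(axis=1)
    # "M" step: predict intensities from the labels, compare with the
    # observations, and smooth the ratio to re-estimate the gain
    predicted = means[labels]
    gain = smooth(signal / predicted)
```

After a few iterations the smoothed ratio tracks the true gain and the corrected intensities classify correctly, illustrating why the alternation converges quickly in practice.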

In recent work, the algorithm has been extended in a number of directions. A spline-based modeling of the intensity artifacts associated with surface coils has been described by Gilles et al. [34]. The addition of an "unknown" tissue class and other refinements have been described by Guillemaud and Brady [39]. Also, Markov models of tissue homogeneity have been added to the formalism in order to reduce the thermal noise that is usually apparent in MR imagery. Held et al. [42] used the method of iterated conditional modes to solve the resulting combinatorial optimization problem, while Kapur [59] used mean field methods to solve a related continuous optimization problem.

6.2 Segmentation Using Multiple Images Acquired over Time

Multispectral images can also be acquired as a sequence of images, in which intensities of certain objects change with time, but the anatomical structures remain stationary. One example of such a sequence is a CT image series generated after intravenous injection of a contrast medium that is carried to an organ of interest. Such an image sequence has constant morphology of the imaged structure, but regional intensity values may change from one image to the next, depending upon the local pharmacokinetics of the contrast agent.

The most popular segmentation technique that employs both intensity and temporal information contained in image sequences is the parametric analysis technique [44,45,79a,89a]. In this technique, for each pixel or region of interest, the intensity is plotted versus time. Next, the plots are analyzed, with the assumption that the curves have similar time characteristics. Certain parameters are chosen, such as the maximum or minimum intensity, the distance between maximum and minimum, or the time of occurrence of the maximum or minimum. The appropriate set of parameters depends on the functional characteristics of the object being studied. Then, an image is calculated for each of the chosen parameters. In such images the value of each pixel is made equal to the value of the parameter at that point. Therefore, the method is called parametric imaging. The disadvantage of the method of parametric analysis is that it assumes that all pixel intensity sequence plots have the same general pattern across the image. In fact, however, many images have pixels or regions of pixels that do not share the same characteristics in the time domain and, therefore, will have dissimilar dynamic intensity plots.
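As a sketch, parametric images (maximum intensity, time-to-peak, and max-min range) can be computed directly from a synthetic image sequence; the gamma-variate-like curves below are invented stand-ins for contrast kinetics:

```python
import numpy as np

# Hypothetical sequence: N temporal frames of an H x W image
N, H, W = 20, 8, 8
t = np.arange(N, dtype=float)
seq = np.zeros((N, H, W))

# Upper half enhances early, lower half enhances late (t * exp(-t/tau) curves)
seq[:, :4, :] = (t * np.exp(-t / 3.0))[:, None, None]
seq[:, 4:, :] = 0.5 * (t * np.exp(-t / 8.0))[:, None, None]

# Parametric maps: one image per chosen parameter
max_map = seq.max(axis=0)              # maximum intensity per pixel
ttp_map = seq.argmax(axis=0)           # time of occurrence of the maximum
range_map = max_map - seq.min(axis=0)  # distance between maximum and minimum
```

The time-to-peak map cleanly separates the early-enhancing and late-enhancing halves, which is exactly the kind of functional distinction parametric imaging is meant to reveal.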

An interesting application of the parametric mapping technique to the 3D segmentation of multiple sclerosis lesions on series of MR images was proposed by Gerig et al. [33]. Temporal images were acquired at intervals of 1, 2, or 4 weeks during a period of 1 year. The parameters chosen for parametric maps were based on lesion characteristics, such as lesion intensity variance, time of appearance, and time of disappearance. The 3D maps displayed patterns of lesions that show similar temporal dynamics.

Another technique for temporal segmentation was introduced by Rogowska [91]. The correlation mapping (also called similarity mapping) technique identifies regions (or objects) according to their temporal similarity or dissimilarity with respect to a reference time-intensity curve obtained from a reference region of interest (ROI). Assume that we have a sequence of N spatially registered temporal images of stationary structures. The similarity map NCOR_{ij} based on normalized correlation is defined for each pixel (i, j) as

$$\mathrm{NCOR}_{ij} = \frac{\sum_{n=1}^{N}\left(A_{ij}[n]-\mu_A\right)\left(R[n]-\mu_R\right)}{\sqrt{\sum_{n=1}^{N}\left(A_{ij}[n]-\mu_A\right)^{2}\,\sum_{n=1}^{N}\left(R[n]-\mu_R\right)^{2}}}$$

where A_{ij}[n] is the time sequence of image intensity values for the consecutive N images: A_{ij}[1], A_{ij}[2], ..., A_{ij}[N] (i = 1, 2, ..., I; j = 1, 2, ..., J; n = 1, 2, ..., N; I is the number of image rows, J is the number of image columns), R[n] is the reference sequence of mean intensity values from a selected reference ROI, μ_A is the mean value of the time sequence for pixel (i, j), and μ_R is the mean value of the reference sequence.

Pixels in the resulting similarity map whose temporal sequence is similar to the reference have high correlation values and are bright, while those with low correlation values are dark. Therefore, similarity mapping segments structures in an image sequence based on their temporal responses rather than spatial properties. In addition, similarity maps can be displayed in pseudocolor or color-coded and superimposed on one image. Figure 10 shows an application of the correlation mapping technique to the temporal sequence of images acquired from a patient with a brain tumor after a bolus injection of contrast agent (Gd-DTPA) on a 1T MR scanner. The first image in a sequence of 60 MR images, with the reference region of interest in the tumor area and a normal ROI, is shown in Fig. 10A. Figure 10B plots the average intensities of the reference and normal ROIs. The correlation map is displayed with a pseudocolor lookup table in Fig. 10C.
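A direct NumPy transcription of this normalized-correlation similarity map, applied to an invented two-region test sequence, could be:

```python
import numpy as np

def similarity_map(seq, ref_mask):
    """Normalized-correlation similarity map for an (N, H, W) image sequence.

    seq: temporal stack of spatially registered images
    ref_mask: boolean H x W mask selecting the reference ROI
    """
    ref = seq[:, ref_mask].mean(axis=1)      # reference curve R[n] (ROI mean)
    a = seq - seq.mean(axis=0)               # A_ij[n] - mu_A, per pixel
    r = ref - ref.mean()                     # R[n] - mu_R
    num = np.tensordot(r, a, axes=(0, 0))    # sum over n of the products
    den = np.sqrt((a ** 2).sum(axis=0) * (r ** 2).sum())
    return num / np.maximum(den, 1e-12)      # guard against flat pixels

# Synthetic sequence: upper half follows the reference curve, lower half opposes it
N, H, W = 30, 6, 6
t = np.arange(N, dtype=float)
seq = np.zeros((N, H, W))
seq[:, :3, :] = np.sin(t / 5.0)[:, None, None] + 2.0
seq[:, 3:, :] = -np.sin(t / 5.0)[:, None, None] + 2.0
seq += 0.01 * np.random.default_rng(2).normal(size=seq.shape)

mask = np.zeros((H, W), dtype=bool)
mask[0, 0] = True                            # single-pixel reference ROI
ncor = similarity_map(seq, mask)
```

Pixels sharing the reference dynamics come out near +1 (bright) and pixels with opposite dynamics near -1 (dark), matching the behavior described above.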

The technique of correlation mapping has found numerous applications. Some of them are included in Refs. [92,93]. Other investigators have adopted this technique in brain activation studies [7], segmentation of breast tumors [71], and renal pathologies [108].

A modification of the correlation mapping technique, called delay mapping, is also used to segment temporal sequences of images. It segments an image into regions with different time lags, which are calculated with respect to the reference [94].
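One way to sketch delay mapping, assuming the lag is measured in whole frames and found by maximizing correlation against a truncated, shifted reference (the bolus-like test curves are synthetic):

```python
import numpy as np

def ncorr(x, y):
    """Normalized correlation between 1D curve x and per-pixel curves y (len, H, W)."""
    xd = x - x.mean()
    yd = y - y.mean(axis=0)
    num = np.tensordot(xd, yd, axes=(0, 0))
    den = np.sqrt((xd ** 2).sum() * (yd ** 2).sum(axis=0))
    return num / np.maximum(den, 1e-12)

def delay_map(seq, ref, max_lag=6):
    """For each pixel, the lag (in frames) maximizing correlation with the reference."""
    N = seq.shape[0]
    best_lag = np.zeros(seq.shape[1:], dtype=int)
    best_c = np.full(seq.shape[1:], -np.inf)
    for lag in range(max_lag + 1):
        # drop the first `lag` frames of each pixel curve and compare with
        # the correspondingly truncated reference
        c = ncorr(ref[:N - lag], seq[lag:])
        update = c > best_c
        best_lag[update] = lag
        best_c[update] = c[update]
    return best_lag

# Synthetic demo: the lower half enhances 3 frames later than the upper half
N, H, W = 40, 4, 4
t = np.arange(N, dtype=float)
curve = np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)   # bolus-like bump at frame 15
seq = np.zeros((N, H, W))
seq[:, :2, :] = curve[:, None, None]
seq[:, 2:, :] = np.exp(-0.5 * ((t - 18.0) / 4.0) ** 2)[:, None, None]

lags = delay_map(seq, ref=curve)
```

The resulting map assigns lag 0 to the on-time region and lag 3 to the delayed region, segmenting the image by timing rather than intensity.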

Parametric maps, similarity maps, and delay maps are all tools for segmenting and visualizing temporal sequences of images.
