Methods of Correction

4.1 Physical Factors

Most of the distortions due to physical factors described above can, to a great extent, be corrected for by appropriate preprocessing of the raw data prior to image reconstruction, or the correction can be incorporated directly into the image reconstruction. For instance, the correction for the problem of nonuniform sampling depends on the image reconstruction method that is used. If a conventional filtered backprojection algorithm is used, each projection needs to be interpolated onto equidistant sampling points prior to the filtering step. The filtering can then be done in the frequency domain using conventional FFTs. On the other hand, interpolation should be avoided if a statistically based image reconstruction method is used, in order to maintain the statistical nature of the original data. Instead, the nonuniform sampling should be incorporated into the forward- and back-projection steps of the algorithm.
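As a minimal sketch of the preprocessing step described above, the following Python/NumPy fragment resamples one nonuniformly sampled projection onto equidistant bins and then applies a frequency-domain ramp filter via the FFT. The function names and parameters are illustrative assumptions, not taken from any particular system.

```python
import numpy as np

def resample_projection(samples, positions, n_bins, fov_width):
    """Interpolate a nonuniformly sampled projection onto equidistant bins.

    samples   : measured projection values at `positions`
    positions : transaxial sample positions (nonuniform, increasing order)
    n_bins    : number of equidistant output bins
    fov_width : width of the field of view covered by the projection
    """
    uniform_pos = np.linspace(-fov_width / 2.0, fov_width / 2.0, n_bins)
    return uniform_pos, np.interp(uniform_pos, positions, samples)

def ramp_filter(projection):
    """Apply the FBP ramp filter in the frequency domain using the FFT."""
    n = projection.shape[0]
    freqs = np.fft.fftfreq(n)                      # cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))
```

A statistically based reconstruction would skip `resample_projection` entirely and instead model the nonuniform sample positions inside its forward- and back-projectors.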

The losses in image information due to sampling and limited spatial resolution caused by the system geometry and the detector systems are factors that typically cannot be corrected for. There are, however, approaches to reduce these losses in the design of a PET system. Axial sampling can be improved either by the use of smaller detector elements or by mechanical motion in the axial direction. Both methods have the drawback of increasing the overall scan time in order to maintain noise levels. By increasing the system diameter, resolution losses become more uniform for a given width of the FOV. However, a large system diameter is not always practical or desirable for several reasons: a large-diameter system reduces the detection efficiency; there is an additional loss in resolution caused by the noncolinearity of the annihilation photons; and there is a substantial increase in the cost of the system due to more detector material, associated electronics, and data handling hardware. For these reasons, later generation systems tend to have a relatively small system diameter, which on the other hand makes the detector penetration problem more severe. This problem can be resolved to a certain degree by using shallower detectors, at the cost of reduced detection efficiency. A more desirable approach is to design a system that can accurately measure the depth of interaction in the detector, which allows a more accurate localization of the events. Over the years there have been several proposed designs for depth-of-interaction systems. These include multilayered detection systems [7,8,41], detectors with additional photodetector readouts [30,32], and detectors with depth-dependent signals [28,33,36]. Although all these ideas have been shown to work in principle, their actual implementation in a full system remains to be demonstrated.
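The trade-off between system diameter and noncolinearity blur mentioned above can be made concrete with a common textbook approximation (not from this chapter) that combines the blur contributions in quadrature. The coefficients below, including the 0.0022·D noncolinearity term, the 1.25 reconstruction factor, and the default positron-range and block-decoding values, are illustrative assumptions for a sketch, not measured values for any particular scanner.

```python
import math

def pet_fwhm_mm(detector_width_mm, ring_diameter_mm,
                positron_range_mm=0.5, block_decoding_mm=2.0):
    """Approximate reconstructed spatial resolution (FWHM, mm) of a PET ring.

    Combines the usual blur contributions in quadrature:
      - detector width d contributes ~d/2 at the ring center,
      - photon noncolinearity contributes ~0.0022 * D (ring diameter D),
      - positron range and block-decoding error are system dependent.
    The 1.25 factor approximates the broadening from the reconstruction filter.
    """
    d = detector_width_mm / 2.0
    nc = 0.0022 * ring_diameter_mm
    return 1.25 * math.sqrt(d ** 2 + nc ** 2 +
                            positron_range_mm ** 2 + block_decoding_mm ** 2)
```

For example, with 4 mm detectors, halving the ring diameter from 800 mm to 400 mm shrinks only the noncolinearity term, illustrating why small-diameter systems trade penetration problems for reduced noncolinearity blur.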

Although there are ways to reduce the physical resolution losses by careful subject positioning, there is in general not much one can do about the resolution losses in a given system. Qi et al. [35] have shown that resolution losses due to depth of interaction and other processes can be recovered if the physical properties of the PET system are properly modeled in the reconstruction algorithm. Like most iterative image reconstruction algorithms, this method is computationally expensive; however, it may be an immediate and practical solution to overcome the problems of resolution losses and distortions in PET.

4.2 Elastic Mapping to Account for Deformation and Intersubject Variability

Medical images can differ in shape or in distribution/pattern for many reasons, and such differences cannot be adjusted retrospectively by a rigid-body transformation. Methods to address the problem therefore vary greatly, depending not only on the originating source but also on the final goal one wants to achieve. However, one type of method that elastically maps (or warps) one set of images to another is of particular interest to scientists in many fields (e.g., mathematics, computer science, engineering, and medicine). This type of method, commonly called elastic mapping, can be used to address the elastic deformation and intersubject variability problems discussed in Section 3. In the following, an elastic mapping method is described to illustrate the general characteristics and objective of this type of method.

An elastic mapping method generally involves two major criteria, regardless of the algorithm used to achieve them. One is to specify the key features that need to be aligned (or to define the cost function to be minimized); the other is to constrain the changes in relative positions of adjacent pixels that are allowed in the mapping. In our laboratory, we have used the correlation coefficient or the sum of squared differences between image values at the same locations in subregions of the two image sets. Figure 6 shows a schematic diagram of the procedure. The entire image volume of one image set is first subdivided into smaller subvolumes. Each subvolume of this image set is moved around to search for a minimum of the cost function (e.g., sum of squared differences) in matching with the reference image set. The location with the least squares is then considered to be the new location of the center of the subvolume, thus establishing a mapping vector for the center of the subvolume. After this is done for all subvolumes, a set of mapping vectors is obtained for the center pixels of all subvolumes. A relaxation factor can also be applied to the magnitude of the mapping vectors to avoid potential overshoot and oscillation problems. The mapping vectors for all the pixels can then be defined as a weighted average of the mapping vectors of the neighboring subvolume centers. The weightings should be chosen such that no mapping vectors intersect or cross one another. The selection of the relaxation factor and the weighting function provides the constraint on the allowable changes in relative positions between adjacent pixels. The sequence of steps just described can be repeated until some convergence criterion is met. The relaxation factor and the weighting function can also vary from iteration to iteration. This elastic mapping method was first proposed by Kosugi et al. [21] for two-dimensional image mapping and was later extended to three-dimensional mapping by Lin and Huang [24], Lin et al. [22,23], and Yang et al. [45]. Figure 7 shows the results of applying this method to coregister MR brain images of five individuals to a common reference image set. The method has also been used by Tai et al. [39] to align X-ray CT thorax images to PET FDG images of the same subject to aid in accurate localization of lung tumors detected on the PET FDG images.

FIGURE 6 Schematic diagram illustrating the major steps of a 3D elastic mapping algorithm that can be used to correct for elastic anatomical deformation and for intersubject differences. A set of 3D images (upper left) is first subdivided into smaller subvolumes. Each subvolume of this image set is moved around to search for a minimum of the cost function (e.g., sum of squares of differences) in matching with the reference image set (center of figure). The location with the least squares is then considered to be the new location of the center of the subvolume, thus establishing a mapping vector for the center of the subvolume. The mapping vectors for all the pixels can then be obtained as a weighted average of mapping vectors of the neighboring subvolume centers. The sequence of steps can be repeated over and over until some convergence criterion is met. (From Tai et al. [39]. © 1997 IEEE.)
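The subvolume-matching step of this procedure can be sketched in a few lines of Python/NumPy. The sketch below is a 2D illustration (the 3D method works analogously, with an extra loop over the axial direction), uses the sum of squared differences as the cost function, and omits the relaxation factor and weighted-average interpolation stages; all names and parameters are assumptions made for this example.

```python
import numpy as np

def block_match(moving, reference, block=8, search=4):
    """One pass of the subvolume block-matching step (2D illustration).

    Slides each `block` x `block` subregion of `moving` over a +/- `search`
    pixel neighborhood in `reference` and records the shift minimizing the
    sum of squared differences (SSD) as the mapping vector for the block
    center. Returns an array of rows (center_y, center_x, dy, dx).
    """
    ny, nx = moving.shape
    vectors = []
    for y0 in range(0, ny - block + 1, block):
        for x0 in range(0, nx - block + 1, block):
            patch = moving[y0:y0 + block, x0:x0 + block]
            best, best_shift = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > ny or xs + block > nx:
                        continue  # candidate window falls outside the image
                    ssd = np.sum((patch - reference[ys:ys + block,
                                                    xs:xs + block]) ** 2)
                    if ssd < best:
                        best, best_shift = ssd, (dy, dx)
            vectors.append((y0 + block // 2, x0 + block // 2, *best_shift))
    return np.array(vectors)
```

In the full method, the per-block vectors returned here would be smoothed into a dense, non-crossing displacement field and the images resampled and rematched until convergence.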


FIGURE 7 An example of intersubject image coregistration. Images in each row of (a) are MR images (T1 weighted) of four brain cross-sections of an individual. Different appearance of the images in different rows reflects the shape and size differences among the five subjects. After the elastic mapping method of Fig. 6 is applied to match the images of different subjects (using images of subject 1 as the reference set), the results are shown in (b). The images of different subjects are seen to match well in shape and configuration after the elastic coregistration. (Figure is taken from a Ph.D. dissertation by K.P. Lin [27].)

Many other elastic mapping methods have also been proposed and used, each with specific features and limitations [3,6,9–11,13,40, and this Handbook]. For a specific application, one needs to define the requirements first and then look for a method that can satisfy the need.
