Iterative Reconstruction

Alternatively, emission tomographic images can be reconstructed with iterative, statistically based reconstruction methods. Instead of using an analytical solution to produce an image of the radioactivity distribution from its projection data, iterative reconstruction makes a series of image estimates, compares forward-projections of these estimates with the measured projection data, and refines the estimates by iteratively optimizing an objective function until a satisfactory result is obtained. Improved reconstruction compared with FBP can be achieved with these approaches because they allow accurate modeling of statistical fluctuations (noise) in emission and transmission data and of other physical processes [34,35]. In addition, appropriate constraints (e.g. nonnegativity) and a priori information about the object (e.g. anatomic boundaries) can be incorporated into the reconstruction process so that better image quality is achieved [36,37].

An iterative reconstruction algorithm consists of three components: (1) a data model that describes the data and acquisition artifacts, (2) an objective function that quantifies the agreement between the image estimate and the measured data, and (3) an optimization algorithm that determines the next image estimate based on the current estimate. The measured data can be modeled by

$$p = C\lambda$$

where $p = \{p_j,\ j = 1, 2, \ldots, M\}$ is a vector containing the values of the measured projection data (i.e. the sinogram); $\lambda = \{\lambda_i,\ i = 1, 2, \ldots, N\}$ is a vector containing all the voxel values of the image to be reconstructed; and $C = \{C_{ji}\}$ is a transformation (or system) matrix that defines a mapping (forward-projection) from $\lambda$ to $p$. The element $C_{ji}$ of the matrix is the probability that a positron annihilation event occurring at voxel $i$ is detected along projection ray $j$. Other physical processes such as nonuniform attenuation and scattered and random coincidences can be incorporated into the data model in the form of additive terms that corrupt the acquired projection data. A detailed discussion of more complex data models is beyond the scope of this chapter. The objective function can include a priori constraints such as nonnegativity and smoothness. Depending on the assumed counting statistics, the objective function can include the Poisson likelihood or the Gaussian likelihood for maximization. The iterative algorithm seeks successive estimates of the image that best match the measured data; it should converge to a solution that maximizes the objective function and is stopped according to a chosen termination criterion.
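To make the data model concrete, the following minimal sketch (in Python/NumPy) forward-projects a small image through a system matrix $C$ and simulates Poisson-distributed projection counts. The image and sinogram sizes and the random, column-normalized system matrix are illustrative assumptions, not from the text; a real $C$ would be derived from the scanner geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16 * 16          # number of image voxels (toy 16x16 image; assumed size)
M = 24 * 18          # number of projection rays (toy sinogram; assumed size)

# Toy system matrix C (M x N): C[j, i] plays the role of the probability
# that an event in voxel i is detected along ray j. Here it is random and
# column-normalized purely for illustration.
C = rng.random((M, N))
C /= C.sum(axis=0, keepdims=True)

# Ground-truth activity image (nonnegative), flattened to a vector lambda.
lam_true = rng.exponential(scale=10.0, size=N)

# Noise-free forward projection p = C @ lambda, then Poisson counting noise.
p_mean = C @ lam_true
p = rng.poisson(p_mean).astype(float)
```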

Iterative reconstruction methods based on the maximum-likelihood (ML) criterion have been studied extensively, and the expectation maximization (EM) algorithm [38,39] is the most popular. The ML-EM algorithm seeks to maximize the Poisson likelihood. In practical implementations, the logarithm of the likelihood function is maximized instead for computational reasons:

$$L(\lambda) = \sum_{j=1}^{M} \left[ p_j \ln\left( \sum_{i=1}^{N} C_{ji} \lambda_i \right) - \sum_{i=1}^{N} C_{ji} \lambda_i \right]$$

(up to a constant that does not depend on $\lambda$).

The EM algorithm updates the image values by

$$\lambda_i^{k+1} = \frac{\lambda_i^k}{\sum_{j=1}^{M} C_{ji}} \sum_{j=1}^{M} C_{ji} \frac{p_j}{\sum_{i'=1}^{N} C_{ji'} \lambda_{i'}^{k}}$$

where $\lambda^k$ and $\lambda^{k+1}$ are the image estimates obtained from iterations $k$ and $k+1$, respectively. The ML-EM algorithm has some special properties (illustrated in the sketch following this list):

• The objective function increases monotonically at each iteration, i.e. $L(\lambda^{k+1}) \geq L(\lambda^k)$;

• The estimate $\lambda^k$ converges to an image $\hat{\lambda}$ that maximizes the log-likelihood function as $k \to \infty$; and

• All successive estimates $\lambda^k$ are nonnegative if the initial estimate is nonnegative.
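Continuing the toy example above (reusing the assumed `C` and `p`; the iteration count is arbitrary), a minimal sketch of the ML-EM update might look like the following. The log-likelihood is tracked to illustrate the monotonic-increase property:

```python
import numpy as np

def ml_em(C, p, n_iters=50, eps=1e-12):
    """Plain ML-EM: lambda_i^{k+1} = (lambda_i^k / sum_j C_ji)
    * sum_j C_ji * p_j / (C @ lambda^k)_j."""
    M, N = C.shape
    sens = C.sum(axis=0)          # sensitivity image: sum_j C_ji
    lam = np.ones(N)              # nonnegative initial estimate
    log_lik = []
    for _ in range(n_iters):
        proj = C @ lam            # forward projection of current estimate
        ratio = p / np.maximum(proj, eps)
        lam *= (C.T @ ratio) / np.maximum(sens, eps)
        # Poisson log-likelihood (dropping the constant log(p_j!) term);
        # it should increase monotonically with iterations.
        log_lik.append(np.sum(p * np.log(np.maximum(proj, eps)) - proj))
    return lam, log_lik

lam_hat, log_lik = ml_em(C, p)
# Monotonic increase, up to floating-point tolerance:
assert all(b >= a - 1e-6 for a, b in zip(log_lik, log_lik[1:]))
```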

The major drawback of iterative reconstruction methods, however, has been their excessive computational burden, which is the main reason these methods have been less practical to implement than FBP. Considerable effort has been directed toward the development of accelerated reconstruction schemes that converge much more rapidly. The ordered subsets EM (OS-EM) algorithm proposed by Hudson and Larkin [40], which subdivides the projection data into "ordered subsets", accelerates convergence by at least an order of magnitude compared with the standard EM algorithm. Practical application of the OS-EM algorithm has demonstrated marked improvement in tumor detection in whole-body PET [41].
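As a rough sketch of the ordered-subsets idea (again reusing the assumed `C` and `p`), one full OS-EM iteration applies the EM update once per subset, using only that subset's projection rays. The interleaved subset partition below is chosen only for simplicity; practical implementations typically order subsets by angular spacing.

```python
import numpy as np

def os_em(C, p, n_subsets=6, n_iters=8, eps=1e-12):
    """OS-EM: each sub-update uses only one subset's rows of C and
    elements of p, so the image is updated n_subsets times per pass."""
    M, N = C.shape
    # Interleaved partition of the M projection rays into ordered subsets.
    subsets = [np.arange(s, M, n_subsets) for s in range(n_subsets)]
    lam = np.ones(N)
    for _ in range(n_iters):
        for idx in subsets:
            Cs, ps = C[idx], p[idx]
            sens_s = Cs.sum(axis=0)               # subset sensitivity
            ratio = ps / np.maximum(Cs @ lam, eps)
            lam *= (Cs.T @ ratio) / np.maximum(sens_s, eps)
    return lam

lam_os = os_em(C, p)  # roughly n_subsets-fold fewer full passes needed
```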

A problem with iterative reconstruction algorithms is that they all produce images with larger variance as the number of iterations increases. Some form of regularization is required to control the visual quality of the reconstructed image. Regularization can be accomplished in many different ways, including post-reconstruction smoothing, stopping the algorithm early by limiting the effective number of reconstruction parameters (the number of iterations and subsets for OS-EM), and incorporating constraints and a priori information as described earlier. However, caution should be taken when using regularization, because too much regularization can adversely affect the bias of the physiologic parameter estimates obtained from kinetic modeling, which will be described later in this chapter. Nevertheless, with the development of fast algorithms and improvements in computational hardware, application of iterative reconstruction techniques on a routine basis has become practical.
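As one simple illustration of the first option, post-reconstruction smoothing of the ML-EM image from the sketch above could be done with a Gaussian filter; the kernel width `sigma` is an assumed tuning parameter, and heavier smoothing trades noise for bias in exactly the way the caution above describes.

```python
from scipy.ndimage import gaussian_filter

# Reshape the reconstructed vector back into the toy 16x16 image grid and
# apply a Gaussian post-filter; sigma (in voxels) controls the strength
# of the regularization and is chosen here purely for illustration.
img = lam_hat.reshape(16, 16)
img_smoothed = gaussian_filter(img, sigma=1.0)
```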
