Illustrative Visualization

This class of visualization is the earliest one to develop and includes the first three generations of systems, with their 1D, 2D, and 3D visualization displays. The concepts developed here are also applied in the other two phenotypes of visualization.

2.1 First Generation Systems

Real-Time Patient Monitoring

Prior to the arrival of image-based information in radiology, the most basic forms of visualization in biomedicine were the one-dimensional (1D) waveforms seen on devices such as ECG monitors. Although elementary, this form of visualization is a very powerful tool for conveying the physiological state of the subject during a clinical intervention. Depending on the time scale of these waveforms, one can assess both the present state of the subject and the trend of the subject's condition. This information would be difficult to convey in any form other than a simple visual aid.

Subsequently, more advanced forms of these 1D displays were developed to convey the combined physiological state using different signals such as ECG, blood pressure, and respiratory waveforms. These signals were appropriately combined and presented in a form that helps clinicians rapidly observe potential problems during a clinical intervention. This "multimodality" of information helped clinicians understand the "functional" state of the subject's physiology. These early developments already indicated the potential benefit of visualization in patient care. Anesthesiologists, who operate various instruments for delivering and maintaining proper respiratory and hemodynamic states during an intraoperative procedure, must monitor numerous displays while guarding against human error. Their task is often described as analogous to that of a pilot in a cockpit. Just as heads-up display techniques helped revolutionize the organization of the cockpit, high-end visualization became common even for the simplest forms of biomedical signals. Thus, the term "cockpit visualization technology" became popular for describing the impact of visualization in medicine.

2.2 Second Generation Systems

Two-dimensional (2D) image processing techniques and displays formed the second generation systems. Some of the earliest 2D visualization tools were image processing techniques for enhancing image features that might otherwise have been overlooked. Feature extraction techniques, expert systems, and neural network applications were developed along with second generation visualization systems.

Interpolation

In medical images, the number of pixels varies with the imaging modality and is usually in the range of 128 × 128 to 512 × 512. The resolution of graphic displays is usually high, above 72 pixels/inch, making these images appear relatively small. Suitable interpolation techniques are required to enlarge these images to a proper display size. A common interpolation technique, bilinear interpolation, refers to linear interpolation along the horizontal and vertical directions. The pixel value at any display point is computed as a weighted sum of the pixel values at the corner locations of the smallest rectangular cell in the image that surrounds the point. The weighting factor for each corner is the ratio of the area of the rectangle formed by the display point and the diagonally opposite corner to the area of the cell itself. Although bilinear interpolation may provide satisfactory results for many applications, more elaborate interpolation techniques may also be required. Interpolation techniques are discussed in detail in Chapter 25.
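As a simple illustration of this area-weighting rule, the following sketch (Python/NumPy; the function and variable names are illustrative, not from any particular toolkit) computes one bilinearly interpolated value from the four surrounding image pixels.

```python
import numpy as np

def bilinear_interpolate(image, x, y):
    """Estimate the value at a fractional pixel location (x, y).

    Each corner of the enclosing cell is weighted by the area of the
    sub-rectangle formed between the query point and the diagonally
    opposite corner, normalized by the cell area (here 1 x 1).
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0          # fractional offsets inside the cell

    return (image[y0, x0] * (1 - fx) * (1 - fy) +   # weight = opposite area
            image[y0, x1] * fx       * (1 - fy) +
            image[y1, x0] * (1 - fx) * fy +
            image[y1, x1] * fx       * fy)

# Enlarging a 128 x 128 slice to a 512 x 512 display image would call this
# once per display pixel, with (x, y) mapped back into the source lattice.
```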

2D Contours and Deformable Models

Manipulation of the entire 2D image appeared to be a cumbersome approach when the feature of interest could be represented as contour lines delineating structures of interest. Besides, such contours can provide quantitative information such as area and perimeter, and they can also be used to build 3D models. Considering such benefits, both manual (supervised) and automatic approaches were developed. Earlier automatic contour extraction techniques suffered setbacks due to insufficient image quality. Later, deformable models were developed to preserve the continuity of the contour and its topology. The user provides a simple initial contour line that serves as a reference. The deformable model then shrinks or expands the contour to minimize a cost function associated with its shape, evaluated at each iterative step, as the contour progressively approaches the boundary of interest. By defining appropriate penalty values to prevent irregularities in the contour, continuity across poor image boundaries is preserved. This approach, also called snakes or active contours, created considerable technical interest in the field. Deformable models are discussed in Chapter 8, and related topics are addressed in Chapters 9, 10, and 16.
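As a rough illustration of this supervised workflow, the sketch below uses scikit-image's active_contour routine (assuming a recent scikit-image release; the circular initialization and parameter values are illustrative choices, not values prescribed by the text).

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def extract_boundary(image, center, radius, n_points=200):
    """Deform an initial circular contour toward a nearby image boundary."""
    # Initial reference contour supplied by the user: a circle sampled as
    # (row, col) points (the coordinate order used by recent scikit-image).
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(s),
                            center[1] + radius * np.cos(s)])

    # alpha and beta penalize stretching and bending of the contour, which
    # preserves continuity across poor image boundaries; gamma is the step size.
    return active_contour(gaussian(image, sigma=3), init,
                          alpha=0.015, beta=10.0, gamma=0.001)
```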

Contour Models

In early systems, contours from a stack of serial slices could be arranged to display a topographic view of the 3D form representing the boundary of structures. Later, "contour stitching" techniques were developed to sew the contours and provide a 3D surface model view of the contour stacks. Although simple in appearance, these techniques required significant attention, especially when the contour shapes changed considerably between slices and led to aliasing problems in the rendered surface. A case of particular interest was the Y branch frequently encountered in vascular or airway trees. In these cases, one contour in a slice had to be stitched to two contours in the adjoining slice to produce the Y branch. Several approaches that could solve the problem in special cases were developed, but they did not lead to a general solution. Subsequently, isosurfaces were developed and used for this purpose, as described later. Because of the drawback of dealing with a large number of triangles to represent the branch, other approaches based on an implicit analytical representation were investigated and used in a limited number of applications. New techniques under investigation show promise, especially model-based approaches where a model representing the Y branch can be appropriately parameterized to adapt to points in the contour lines.

2D Texture Mapping and Parametrizing Images

Texture mapping is a concept introduced in computer graphics for providing high visual realism in a scene [28]. Painting the elements of a drawing with realistic effects each time they appear in the scene may be an unwarranted, computationally intensive task, especially when the purpose is to provide only a visual context, such as the background field in a game. In those cases, an image piece representing the elements, such as a photograph, could be used to create the illusion that the elements appear as if they were drawn into the scene. The simplicity and popularity of this technique enabled both graphics software developers and graphics hardware manufacturers to adopt it widely, and the technology of texture mapping rapidly advanced and became inexpensive. In the scientific visualization field, texture mapping contributes to realism and speed, but more importantly it provides a geometric representation for the image, separating the spatial information from its image-pixel fragments. This substantially simplifies the tasks involved in visualization problems. By solving the problem in its geometric formulation, which in many cases is more straightforward, and letting the texture mapping technique take care of the pixel association with the geometry, more ingenious visualization systems could be built.

2.3 Third Generation Systems

With the arrival of 3D images in the biomedical field, researchers developed various methods to display volume information [17]. The effectiveness of a technique depends primarily on the source of the image. Probably the most important factor in the development of volume visualization is the fact that the data have one dimension more than the computer display. Thus, in some sense, every technique ultimately has to project the 3D information onto a 2D image, a process in which information can be lost or occluded. Although stereo displays may have eliminated some of these problems, the fundamental problem of presenting a 3D volume of information in a form that the user can quickly interpret remains an elusive visualization problem. In this context, it may be justified to say that the routine use of clinical visualization is waiting for a smart visualization system that can quickly determine the main interest of the user and present this information in a convenient manner. Unlike 1D and 2D systems, which appear to have gained quick clinical entry, 3D systems are still used primarily in clinical research programs rather than routine clinical applications. The cost of clinically useful 3D visualization systems also remains relatively high, because of demanding volumetric computations that require very fast hardware.

Surface Visualization

Although presenting a 3D volume image is a fairly complex problem, there are other ways of displaying or extracting geometric information that have been well accepted for certain applications. One approach, similar to isocontour lines in topographic data, extends that concept to create a 3D surface characterizing the 3D image data. The technique came to be known as isosurface extraction and was proposed by Marc Levoy and Bill Lorensen [19]. The method works very successfully for CT volume image data, where the high signal-to-noise ratio allows effective classification of constituent structures.

Isosurface Extraction ("Marching Cubes") Volumetric images consist of a stack of 2D images and can be considered as a 3D matrix of image data points. The smallest fragment of the image is called a voxel, in analogy to the concept of pixel in a 2D image. The surface is extracted using a thresholding algorithm for each cube of the lattice, marching through the entire volume. In each cube, each pair of connected corners is examined for a threshold-crossover point based on linear interpolation. These points along each of the edges are linked to form the isosurface on each cube, and the process is repeated for the rest of the volume. Special interpretation is required to handle cases that correspond to multiple surfaces within the cube. The surfaces are usually represented with triangles. A detailed description of this method and its recent advances are presented in Chapter 44.
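A minimal usage sketch, assuming a volume already stored as a 3D NumPy array and scikit-image's marching_cubes implementation (the file name and iso-value below are illustrative):

```python
import numpy as np
from skimage import measure

# `volume` is a 3D NumPy array of image data (e.g., a CT study loaded
# elsewhere); the file name and threshold (iso-value) are purely illustrative.
volume = np.load("ct_volume.npy")
verts, faces, normals, values = measure.marching_cubes(volume, level=300)

# The surface comes back as triangles: `verts` holds the threshold-crossover
# points found by linear interpolation along cube edges, and each row of
# `faces` indexes three of those vertices.
print(f"{len(verts)} vertices, {len(faces)} triangles")
```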

The advantage of this method is its fairly detailed surface representation for objects of interest, as long as the objects are easily separable in the data. However, its computational load is high, and each time a new threshold value is selected the generation of the new surface may cause delays. The number of triangles produced by the method in a typical set of volume image data is very large, typically on the order of tens of thousands. Thus, displaying them all can be an intensive graphic task. Adaptive surface extraction techniques were later developed to address this problem using an approach that coalesces coplanar triangles to be represented by a larger polygon. This can improve performance substantially, as the number of vertices that need to be processed in the transformation pipeline is significantly reduced. Reduction can be obtained with Delaunay triangulation, also known as thin-plate techniques, where coalescing can be extended to include triangles that are approximately coplanar, within a given tolerance. This can reduce the triangle population at the expense of a negligible drop in quality.

Deformable Surfaces, Balloons, Shrink-Wrap Surfaces Surface extraction proves to be very effective when the signal-to-noise ratio of the data is high and structures are well segmented. However, extraction results could become unpredictable when the data are noisy or when structures cannot be segmented well, as is often the case with some MR images. Following the approach used to solve similar problems in 2D images using deformable contours [46], elastic surface approaches were proposed to solve the problem in 3D data. These surfaces are sometimes called balloons, for their expanding properties, or shrink-wrapping surfaces with elastic properties, or, in general, deformable surfaces. Like snakes, these techniques usually tend to be computationally intensive because of their iterative steps.

"Statistical" Surfaces Recent approaches that attempt to produce efficient results for noisy data are "statistical" surfaces that employ space partitioning techniques based on local statistical measures to produce a mean estimated surface within a given error deviation. This technique may not preserve the topology connectivity that deformable techniques could provide.

Wavelet Surfaces As discussed earlier, one of the problems of surface representation is the number of triangles used to represent the surface. Recently, wavelet techniques, which are inherently multiresolution, have been applied to describe surfaces. One major advantage of this approach is that the desired resolution can be chosen at display time; thus, a low-resolution surface can be displayed during periods of interaction, and when the interaction ceases a higher resolution rendering of the surface can be generated.

Volume Visualization

The inherent limitation of the surface extraction method is that it represents a specific threshold value in the data and becomes restrictive or selective. Also, occasionally, false surface fragments may be produced because of the interpolation involved. Volumetric visualization methods overcome these limitations and could help visualize as much information as possible from the 3D volume image, without being restrictive. This chapter presents a brief survey of issues in volume visualization. More detailed descriptions of these methods are presented in Chapters 42 and 43.

Maximum Intensity Projection The earliest volume visualization method is known as maximum intensity projection (MIP). As the name suggests, in this approach, the maximum intensity value in the volume data is projected on the viewing plane along each ray of the projection. This technique is particularly useful for displaying vascular structures acquired using angiographic methods.
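For axis-aligned viewing directions, MIP reduces to taking the maximum over one axis of the volume array; the sketch below (Python/NumPy, with illustrative names) shows this simple but common case.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Project the maximum value along each ray parallel to `axis`.

    For axis-aligned viewing directions the rays coincide with rows of
    voxels, so the projection reduces to a max over that axis.
    """
    return volume.max(axis=axis)

# Example: an angiographic volume shaped (slices, rows, cols); projecting
# along the slice axis yields a 2D image dominated by the bright vessels.
# mip_image = maximum_intensity_projection(mra_volume, axis=0)
```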

Volume Rendering The "ray-casting" approach was further explored to produce new visualization concepts in medicine that shared a common principle but approached the solution differently. Instead of choosing the maximum value along each ray, the general idea was to accumulate all data points along the ray with appropriate weights and to produce an aggregate value that is projected on the viewing plane. This led to the development of various volume rendering techniques [23-27].

Various accumulation models were employed to produce different results. For instance, a simple summation of all the points along the ray can produce line-integral projections similar to X-ray images. The simple summation operation is less intensive than other, more complex accumulation operations. This may be useful for simulating X-ray-type images, but because the image will be dominated by structures that have maximum data values irrespective of their spatial distance from the viewing plane, this technique is less desirable for producing a 3D effect in the rendered image. The most popular technique turned out to be one that used a "blending" operation similar to the one used for antialiasing during scan conversion or scene composition [22]. The blending operation corresponds to a weighted mean between the data point and the background value. The weighting factor can be given the physical meaning of "opacity" of the data point, from which the transparency of the data point, i.e., 1 minus the opacity, can be computed. To generate the rendered image, each ray is traversed through the volume, from back to front with respect to the viewing plane, starting with an initial background value of zero. Data points are accumulated as the sum of the data value times its opacity and the background times its transparency. The sum gives an effective blended value for the data points, which are treated as semitransparent. An interesting aspect of this technique is that the hardware graphics pipelines in advanced graphics-accelerated systems had blending functions that supported this operation on a per-pixel basis, used for scene composition and antialiasing. The weighting factor became equivalent to the "alpha coefficient," the fourth component in the four-components-per-pixel representation (red, green, blue, alpha) of high-end graphics workstations.
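A minimal sketch of this back-to-front blending rule for a single ray (Python; the sample ordering and names are illustrative assumptions):

```python
import numpy as np

def composite_ray(samples, opacities):
    """Back-to-front blending of the data points sampled along one ray.

    Each sample is treated as semitransparent: the new accumulated value is
    sample * opacity + accumulated * (1 - opacity), starting from a
    background value of zero at the far end of the ray.
    """
    accumulated = 0.0
    # `samples` is assumed ordered from the viewing plane outward, so the
    # reversed order runs from back to front.
    for value, alpha in zip(samples[::-1], opacities[::-1]):
        accumulated = value * alpha + accumulated * (1.0 - alpha)
    return accumulated

# A full renderer casts one such ray per pixel of the viewing plane,
# resampling the volume (e.g., by trilinear interpolation) along each ray.
```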

Trilinear Interpolation During ray traversal through the volume, the rays need not pass through exact data points, except when the rays are parallel to the three orthogonal coordinate axes of the volume. In all other orientations the ray passes between data points. In this case, it becomes necessary to estimate the data values along the ray at equal distances. One simple way would be to use the "nearest-neighbor" value, which can produce fast results. However, this gives rise to poor resolution of the features, and the nearest neighbor can flip under even a simple rotation, causing undesirable artifacts. Thus, a smooth interpolation along the ray becomes very important, and the quality of rendering improves substantially with a good interpolation method. One popular and simple interpolation technique is the trilinear interpolation approach, similar to the well-known bilinear approach. The data value at any given location inside the image lattice is computed using the smallest cubic cell that contains the point. The interpolated value is a linear weighted sum of the contributions from all eight corners of this cell. The weighting factor for each corner is computed as the ratio of the volume of the box whose diagonal axis is formed by the given point and the diagonally opposite corner to the volume of the cell.
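This volume-weighting rule reduces to a product of one-dimensional linear weights; a sketch, assuming the volume is indexed as (z, y, x) with unit voxel spacing and the query point lies strictly inside the lattice:

```python
import numpy as np

def trilinear_interpolate(volume, x, y, z):
    """Interpolate the value at a fractional location inside the lattice.

    Each of the eight corners of the enclosing cubic cell contributes with a
    weight equal to the volume of the box spanned by the query point and the
    diagonally opposite corner (unit voxel spacing is assumed).
    """
    x0, y0, z0 = int(x), int(y), int(z)   # lower corner of the enclosing cell
    fx, fy, fz = x - x0, y - y0, z - z0   # fractional offsets inside the cell

    value = 0.0
    for dz, wz in ((0, 1 - fz), (1, fz)):
        for dy, wy in ((0, 1 - fy), (1, fy)):
            for dx, wx in ((0, 1 - fx), (1, fx)):
                # The volume is assumed indexed as volume[z, y, x].
                value += wx * wy * wz * volume[z0 + dz, y0 + dy, x0 + dx]
    return value
```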

The volume rendering operation requires a fairly large number of data samples, to prevent the blending from being dominated by a few data values, which would produce a strong aliasing effect. When the number of sample points is low, the results may have poor quality. Also, in some applications, the number of rays needed to produce the desired rendering may be greater than the data points can support. In such cases, interpolation becomes the single major factor that can determine the quality of the rendered image. More elaborate interpolation techniques can take into account not just eight surrounding data points but the 26 neighboring cells to compute a cubic polynomial interpolation. Such interpolation techniques give very good results, but the additional speed penalty may be quite high. When the original volume of data is fairly large (>256x256x64), trilinear interpolation may provide good results to fulfill many visualization purposes.

Lighting and Shading One of the important benefits of surface rendering compared to volume rendering is the lighting and shading effects that can improve visual cues in the displayed 2D image; this is figuratively called a 2.5D image. Thus, the lack of lighting in the early volume rendering models and in MIP represented a notable drawback. Lighting calculations were introduced into the blending equations by adding a weighting factor that represented the local gradient in the scalar 3D field data. The gradient vector was estimated using the gradient along the three principal axes of the volume or the 26 neighbors.
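One common way to realize this is sketched below, under the assumption of central differences along the three principal axes and a simple Lambertian (diffuse) lighting term; the function names are illustrative.

```python
import numpy as np

def central_difference_gradient(volume, x, y, z):
    """Estimate the local gradient of the 3D scalar field at a voxel.

    The gradient along the three principal axes serves as the surface
    normal in the lighting term added to the blending equation.
    """
    g = np.array([
        (float(volume[z, y, x + 1]) - float(volume[z, y, x - 1])) / 2.0,
        (float(volume[z, y + 1, x]) - float(volume[z, y - 1, x])) / 2.0,
        (float(volume[z + 1, y, x]) - float(volume[z - 1, y, x])) / 2.0,
    ])
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

def diffuse_weight(normal, light_dir):
    """Simple Lambertian (diffuse) weighting factor for a sample."""
    return max(0.0, float(np.dot(normal, light_dir)))
```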

Lighting effects are particularly useful with CT image data where the surface features in the data are well pronounced because of the limited number of features that CT imaging resolves. However, with MR images, lighting does not always produce better results. Its contributions depend on the data contrast and noise. Noise in the data can affect gradient calculation much more strongly than the linear interpolation estimations.

Transfer Functions The hidden difficulty of volume rendering resides in the weighting factor of the accumulation equations. The weighting factor, or opacity value, can be assigned to different data points in the volume to enhance or suppress their influence in the rendered image. The transfer function, or lookup table for the weighting factor, serves this purpose. Since the data points are usually represented as discrete data of 8, 12, or 16 bits, such tables are not very large. However, for floating-point representations of the data, a transfer function expressed as a piecewise linear or continuous polynomial function can be used instead.
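A sketch of a piecewise-linear opacity transfer function stored as a lookup table for 12-bit data (the window values below are arbitrary illustrations, not values from the text):

```python
import numpy as np

def opacity_lookup_table(n_levels=4096, window=(1000, 1800)):
    """Piecewise-linear opacity transfer function stored as a lookup table.

    Data values below the window are fully transparent, values above it are
    fully opaque, and opacity ramps linearly in between. For 8-, 12-, or
    16-bit data, the table holds one entry per possible value.
    """
    lo, hi = window
    values = np.arange(n_levels, dtype=np.float32)
    return np.clip((values - lo) / float(hi - lo), 0.0, 1.0)

# During rendering, each (interpolated and quantized) sample indexes the
# table to obtain its opacity: alpha = lut[int(sample)].
lut = opacity_lookup_table()
```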

Although the transfer function provides considerable flexibility for controlling transparency or opacity, it can be very difficult to enhance the feature of interest that the user wants to visualize. A small change in the transfer function may cause large differences in the final image, because many interpolated samples are rendered along each ray, and this large accumulation can produce oversaturation or under-representation in the computations. Many automatic approaches to computing transfer functions have been proposed, but none has become widely accepted. Hardware-accelerated volume rendering [29-32] and dramatic improvements in the speed of computing and graphics systems led to the development of interactive approaches where the user can quickly see the result of a selected transfer function and so determine the desired one. However, a highly intelligent transfer function technique that can operate with minimal user interaction is critical for wider clinical acceptance of volume rendering. Alternatively, more versatile rendering techniques that do not depend on sensitive input parameters may also be developed.

Shell Rendering Many techniques were investigated soon after the shortcomings of volume rendering and surface rendering techniques were realized. Notable among them is a shell rendering technique [20] whose general principle is to combine the strengths of surface rendering and volume rendering and suppress their weaknesses. Surface rendering is very selective in extracting particular structures from the volume data. In cases where the anatomical structures of interest cannot be extracted with a unique threshold, surface rendering may be difficult to use. Volume rendering blends the data over a range suitably weighted by a transfer function. However, in its original form, it does not take into account the spatial connectivity between various structures in the data, making it sometimes difficult to select a particular structure of interest. Shell rendering combines the spatial connectivity information addressed by surface rendering with the voxel-level blending of data provided by volume rendering. Thus, if surface rendering is considered a "hard shell" based on its specific threshold value, shell rendering can be thought of as a "soft shell" based on its fuzzy threshold.

Volume Encoding (Octrees) During the development of various rendering techniques, encoding of volume data emerged as a research topic. When volume data sets first became available, they were relatively large in comparison to the limited disk storage capacity of the time, and their representation and processing required a great deal of memory and computing power. Octree encoding of the volume takes advantage of clusters of data points that can be grouped: appropriate grouping of these data points can yield a substantial reduction in storage and high speed in volume rendering, since a cluster of cells partitioned by octree encoding can be processed in one step instead of the several steps that may otherwise be required.
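A sketch of the grouping idea, assuming a NumPy volume and a homogeneity tolerance (the dictionary-based node representation is only illustrative):

```python
import numpy as np

def build_octree(block, tolerance=0.0):
    """Recursively group voxels of a 3D NumPy array into octree nodes.

    If all values in the block differ by no more than `tolerance`, the whole
    block is stored as a single leaf (and can later be processed in one step);
    otherwise it is split into eight octants that are encoded in turn.
    """
    if block.max() - block.min() <= tolerance or min(block.shape) <= 1:
        return {"leaf": True, "value": float(block.mean()), "shape": block.shape}

    zc, yc, xc = (s // 2 for s in block.shape)
    children = [build_octree(block[z0:z1, y0:y1, x0:x1], tolerance)
                for z0, z1 in ((0, zc), (zc, block.shape[0]))
                for y0, y1 in ((0, yc), (yc, block.shape[1]))
                for x0, x1 in ((0, xc), (xc, block.shape[2]))]
    return {"leaf": False, "children": children}
```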

A more versatile approach would be a 3D wavelet-based encoding, which could help maintain multilevel resolution of the volume: it could provide image compression and improve rendering speed during interaction by using a low-resolution representation.

Texture Mapping for 3D Images The concept of 2D texture mapping was extended to 3D images. However, unlike 2D texture mapping, the 3D texture mapping technique does not produce a 3D rendered image directly from a texture map. It only provides a fundamental but powerful capability called multiplanar reformatting (MPR) that can be used for any 3D rendering application. MPR enables one to extract any arbitrary plane of image data from the 3D volume image that has been defined as a 3D texture. Applications can use this capability to produce a 3D-rendered image. For instance, if the volume is sliced into stacks of slices parallel to the viewing plane that are then blended back-to-front (with respect to the viewing plane), the volume-rendered image can be obtained. This process is also known as depth composition. The alpha component and the hardware-accelerated, per-pixel blending operation can be used for rendering.
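A sketch of the MPR resampling step using SciPy's map_coordinates (the plane parameterization and function names are illustrative; a hardware 3D-texture path would perform the equivalent sampling per fragment):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_oblique_slice(volume, origin, u_dir, v_dir, size=(256, 256)):
    """Multiplanar reformatting: resample an arbitrary plane from a 3D volume.

    The plane is defined by an origin point and two in-plane direction
    vectors, all in voxel coordinates ordered (z, y, x). Each output pixel is
    mapped to a 3D location and the volume is interpolated there.
    """
    rows, cols = size
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    coords = (np.asarray(origin, float)[:, None, None]
              + np.asarray(u_dir, float)[:, None, None] * r
              + np.asarray(v_dir, float)[:, None, None] * c)
    # order=1 performs trilinear interpolation of the volume samples.
    return map_coordinates(volume, coords, order=1, mode="constant")
```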
