Volume Rendering

One of the most versatile and powerful image display and manipulation techniques is volume rendering [6,11,31,57]. Volume rendering techniques based on ray-casting algorithms have generally become the method of choice for visualization of 3D biomedical volume images [23,54]. These methods provide direct visualization of the volume images without the need for prior surface or object segmentation, preserving the values and context of the original image data. Volume rendering techniques allow various rendering algorithms to be applied during the ray-casting process. Because surface extraction is not necessary, the entire volume image is used and the original volume image data are maintained throughout rendering. This provides the capability to section the rendered image, to visualize the actual image data within the volume, and to make voxel value-based measurements on the rendered image. The rendered surface can be dynamically determined by changing the ray-casting and surface recognition conditions during the rendering process. However, 3D biomedical volume image data sets are characteristically large, taxing the computational capabilities of volume rendering techniques and the systems on which they are implemented. This is particularly true when the rendering process must preserve resolution of detail with sufficient fidelity for appropriate visualization of the displayed structures. In addition, given the discrete voxel-based nature of the volume image, there is no direct connection to other geometric objects; such a connection may be desired for inclusion of other objects in the rendering or for output of the rendered structure to other devices.

Volume rendering display techniques have the characteristic of being able to display shaded surfaces and other parts of the volume simultaneously. An important advantage is the ability to display data directly from the gray-scale volume. The selection of the data that will appear on the screen is made during the projection of the voxels (sometimes called ray-casting). A function of different attributes of the voxels, such as their absolute density, gradient values, and/or spatial coordinates, can be invoked during the projection process to produce "on-the-fly" segmented surfaces, cutting planes anywhere in the volume, and/or selected degrees of transparency/opacity within the volume. Volume set operations (union, intersection, and difference of volumes) can also be invoked during the projection process.
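
To make this projection-time classification concrete, the sketch below assigns each voxel an opacity from its density and imposes a cutting plane by zeroing opacity on one side of a plane. It is a minimal illustration in Python/NumPy; the density window, opacity ceiling, and plane parameters are illustrative assumptions, not values from any particular system.

```python
import numpy as np

def voxel_opacity(volume, lo=300.0, hi=1500.0, max_opacity=0.8):
    """Assign each voxel an opacity from its density via a linear ramp.

    The window [lo, hi] and max_opacity are illustrative; real
    transfer functions are tissue- and modality-specific.
    """
    alpha = np.clip((volume - lo) / (hi - lo), 0.0, 1.0)
    return alpha * max_opacity

def apply_cut_plane(alpha, normal=(1.0, 0.0, 0.0), offset=64.0):
    """Zero the opacity on one side of the plane n . x = offset,
    producing an "on-the-fly" cutting plane anywhere in the volume."""
    n = np.asarray(normal, dtype=float)
    ii, jj, kk = np.indices(alpha.shape)
    side = ii * n[0] + jj * n[1] + kk * n[2] - offset
    return np.where(side >= 0.0, alpha, 0.0)
```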

The most common camera model for volume rendering by ray casting consists of a source point (called the eye), a focal point (where the eye is looking), and a matrix of pixels (the screen) [57]. The visible object to be displayed (called the scene) is in front of the camera within a truncated volume called the viewing pyramid, as shown in Fig. 6. The purpose of a ray-tracing model is to define the geometry of the rays cast through the scene. To connect the source point to the scene, for each pixel of the screen a ray is defined as a straight line from the source point passing through that pixel, as shown in Fig. 7. To generate the picture, the pixel values are assigned appropriate intensities "sampled" by the rays as they pass through the scene (volume of data). For instance, for shaded surface display, the pixel values are computed according to light models (intensity and orientation of light source(s), reflections, textures, surface orientations, etc.) where the rays have intersected the scene.
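
As a concrete illustration of this camera model, the following sketch constructs the ray through one pixel of the screen for a divergent-ray (perspective) camera; replacing the per-pixel direction with a single fixed direction yields the parallel-ray case of Fig. 6. The function name and parameters are illustrative assumptions, not from the cited systems.

```python
import numpy as np

def pixel_ray(eye, focal, up, fov_deg, width, height, i, j):
    """Return (origin, unit direction) of the ray through pixel (i, j).

    The screen is modeled as a plane one unit in front of the eye,
    perpendicular to the viewing direction; pixel (0, 0) is the
    upper-left corner. All names here are illustrative.
    """
    eye, focal, up = (np.asarray(v, dtype=float) for v in (eye, focal, up))
    w = focal - eye
    w /= np.linalg.norm(w)                       # viewing direction
    u = np.cross(w, up); u /= np.linalg.norm(u)  # screen x axis
    v = np.cross(u, w)                           # screen y axis
    half = np.tan(np.radians(fov_deg) / 2.0)     # half-height of screen
    aspect = width / height
    x = (2.0 * (i + 0.5) / width - 1.0) * half * aspect
    y = (1.0 - 2.0 * (j + 0.5) / height) * half
    d = w + x * u + y * v                        # divergent (perspective) ray
    return eye, d / np.linalg.norm(d)
```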

FIGURE 5 Shaded surface displays of segmented skull and brain (top left and right, respectively) with underlying polygonal representations (bottom).

There are two general classes of volume display: transmission and reflection [57]. An example of a transmitted image is an X-ray radiograph; an example of a reflected image is a photograph. For the radiograph, the film is located behind the scene, and only the rays transmitted through and filtered by the objects in the scene are recorded on the film. For the photograph, the film is located in front of the scene, so the film records the light reflected by the objects in the scene. For transmission-oriented displays, there is no surface identification involved. A ray passes entirely through the volume, and the pixel value is computed as an integrated function of the voxel values encountered along the ray. There are three important display subtypes in this family: brightest voxel, weighted summation, and surface projection (projection of a thick surface layer). For all reflection display types, voxel density values are used to specify surfaces within the volume image. For example, if the value of a voxel intersected by a ray is between a specified minimum and maximum (a threshold), this voxel is defined as being on the surface. Three types of functions may be specified to compute the shading: depth shading, depth gradient shading, and real gradient shading.
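
The two simplest transmission-oriented subtypes can be written down directly once the samples along a ray are in hand. The sketch below uses nearest-neighbor sampling in array-index coordinates for brevity; the sampling scheme, function names, and step sizes are illustrative assumptions (production renderers typically interpolate, e.g., trilinearly).

```python
import numpy as np

def ray_samples(volume, origin, direction, n_steps=512, step=0.5):
    """Nearest-neighbor samples of a 3D volume along one ray.

    origin and direction are given directly in array-index
    coordinates; this is a sketch, not a production sampler.
    """
    t = np.arange(n_steps) * step
    pts = np.asarray(origin, float) + t[:, None] * np.asarray(direction, float)
    idx = np.rint(pts).astype(int)
    inside = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    idx = idx[inside]
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

def brightest_voxel(samples):
    """Brightest-voxel (maximum intensity) projection for one pixel."""
    return float(samples.max()) if samples.size else 0.0

def weighted_summation(samples, weights=None):
    """Radiograph-like display: integrate densities along the ray."""
    if weights is None:
        weights = np.ones_like(samples)
    return float(np.sum(weights * samples))
```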

In depth shading, the value of the display pixel is simply a function of depth (close objects appear brighter than far objects), that is, of the distance between the screen and the intersected voxel on the surface. It is a common method of conveying 3D perspective. In depth gradient shading, depth shading is used, but postprocessing is performed on the resulting image to enhance contrast. A sham normal function is computed from the current pixel depth and the neighboring pixel depths. Rather than computing a normal vector within the scene space, a sham vector is computed in the screen space, where the z component of the normal is always equal to 1. In practice, the depth gradient model often produces artifacts due to discontinuities and noise in the gradient approximation, so image smoothing is performed to reduce these artifacts.
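
A minimal rendering of these two ideas in code, assuming a per-pixel depth map has already been produced by the ray caster; the value ranges and light direction are illustrative assumptions.

```python
import numpy as np

def depth_shade(depth, d_min, d_max):
    """Depth shading: brightness falls off linearly with distance.

    depth holds, per pixel, the distance from the screen to the first
    surface voxel hit by the ray (np.inf where nothing was hit).
    """
    shade = 1.0 - (depth - d_min) / (d_max - d_min)
    return np.clip(np.where(np.isfinite(depth), shade, 0.0), 0.0, 1.0)

def depth_gradient_shade(depth, light=(0.0, 0.0, 1.0)):
    """Depth gradient shading via a 'sham' screen-space normal.

    The x and y components of the normal come from differences of
    neighboring pixel depths; z is fixed at 1, as described above.
    Assumes finite depths (smooth or fill the depth map first).
    """
    gy, gx = np.gradient(depth)                   # screen-space depth gradient
    n = np.stack([-gx, -gy, np.ones_like(depth)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light, float) / np.linalg.norm(light)
    return np.clip(n @ l, 0.0, 1.0)               # Lambertian term per pixel
```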

FIGURE 7 Diagram of ray-tracing geometry and spatial gradient shading using 6-voxel or 26-voxel neighborhoods.

In real gradient shading, as shown in Fig. 7, an improved computation of the normal vector associated with the pixel is used to implement the diffuse reflection lighting model without producing such artifacts [57].

FIGURE 6 Camera models depicting viewing pyramid for divergent rays (left) and parallel rays (right) commonly used in volume rendering methods.

A natural way to compute a normal vector at a point (voxel) is to compute the gradient associated with that point. In a 3D volume image, this gradient can be computed using a 6-, 18-, or 26-neighborhood about the voxel, as illustrated in Fig. 7. A rendering method known as volume compositing [6,11,31] is often used on medical anatomic scan data, wherein each voxel is assigned a fractional opacity based on its tissue or element type (e.g., bone, muscle, blood, air), and these partitioned voxel densities contribute to homologous tissue or object surfaces deposited on the screen during the ray-casting process.
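
The 6-neighborhood case reduces to central differences of the density along each axis, as in the sketch below; the 18- and 26-neighborhood variants add the diagonal neighbors with distance-dependent weights. The function names and the Lambertian shading step are illustrative assumptions.

```python
import numpy as np

def gradient_normal_6(volume, i, j, k):
    """Unit surface normal from the 6-voxel neighborhood
    (central differences of the density along each axis)."""
    g = np.array([
        volume[i + 1, j, k] - volume[i - 1, j, k],
        volume[i, j + 1, k] - volume[i, j - 1, k],
        volume[i, j, k + 1] - volume[i, j, k - 1],
    ], dtype=float) / 2.0
    norm = np.linalg.norm(g)
    return g / norm if norm > 0.0 else g

def real_gradient_shade(volume, i, j, k, light_dir):
    """Diffuse (Lambertian) reflection using the real gradient normal."""
    n = gradient_normal_6(volume, i, j, k)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return max(0.0, float(n @ l))
```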

Full gradient volume rendering methods can incorporate transparency in order to display two different structures, one seen through the other [54]. The basic principle is to define the two structures with two segmentation functions; to accomplish this, a double threshold on the voxel density values is used. The opaque and transparent structures are specified according to the thresholds used, and a transparency coefficient is also specified. The transparent effect for each pixel on the screen is computed as a weighted function of the reflection due to the transparent structure, the light transmission through that structure, and the reflection due to the opaque structure. The general model for providing transparent renderings is illustrated in Fig. 8.
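
The text does not give the weighted function explicitly, but one plausible form, assuming the ray caster has already shaded both surfaces, is sketched below; the parameter names and the specific blend are assumptions for illustration, not the formula from [54].

```python
def transparent_pixel(r_trans, r_opaque, tau):
    """One plausible weighted transparency function for two structures.

    r_trans  - reflection (shading) computed at the transparent surface
    r_opaque - reflection computed at the opaque surface behind it
    tau      - transparency coefficient in [0, 1]: tau = 0 shows only
               the transparent structure's surface; tau = 1 transmits
               all light and shows only the opaque structure

    The (1 - tau) and tau weights model reflection at the front
    surface and transmission through it, respectively.
    """
    return (1.0 - tau) * r_trans + tau * r_opaque
```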

Examples of several capabilities for interactive transformation and volume rendering of 3D biomedical image data are illustrated in Figs. 9 and 10.
