Image-order volume rendering techniques are fundamentally different from object-order rendering techniques. Instead of determining how a data sample affects the pixels on the image plane, in an image-order technique, we determine how all the corresponding data samples contribute to each pixel on the image plane.

One of the first image-order volume rendering techniques, which may be called binary ray casting [72], was developed to generate images of surfaces contained within binary volumetric data without the need to explicitly perform boundary detection and hidden-surface removal. For each pixel on the image plane, a ray is cast from that pixel to determine if it intersects the surface contained within the data. For parallel projections, all rays are parallel to the view direction, whereas for perspective projections, rays are cast from the eye point according to the view direction and field of view. If an intersection does occur, shading is performed at the intersection point, and the resulting color is placed in the pixel. To determine the first intersection along the ray, a stepping technique is used in which the value is determined at regular intervals along the ray, using zero-order interpolation, until the object is intersected. For a step size d, the ith point sample pi is taken at a distance i × d along the ray. For a given ray, either all point samples along the ray have a value of 0 (background, where the ray missed the object entirely), or there is some sample pi taken at a distance i × d along the ray, such that all samples pj, j < i, have a value of 0, and sample pi has a value of 1 (object). Point sample pi is then considered to be the first intersection along the ray. In this algorithm, the step size d must be chosen carefully: if d is too large, small features in the data may not be detected; if d is small, the intersection point is estimated more accurately, but at the cost of higher computation time.
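The stepping loop described above can be sketched as follows. The function name, the NumPy array representation of the binary volume, and rounding to the nearest grid point as the zero-order interpolant are illustrative choices, not part of the original algorithm description:

```python
import numpy as np

def first_hit(volume, origin, direction, step, max_dist):
    """Step along a ray through a binary volume; return the distance to
    the first sample classified as object (value 1), or None on a miss.
    Zero-order (nearest-neighbor) interpolation is used at each sample."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    i = 0
    while i * step <= max_dist:
        p = origin + i * step * direction        # sample p_i at distance i*d
        idx = np.round(p).astype(int)            # nearest grid point (zero-order)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            if volume[tuple(idx)] == 1:          # first intersection found
                return i * step
        i += 1
    return None                                  # ray missed the object
```

Note that the trade-off on d is visible here directly: halving `step` doubles the number of loop iterations but reduces the worst-case error in the returned intersection distance.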

The previous algorithm deals with the display of surfaces within binary data. A more general algorithm can be used to generate surface and composite projections of multivalued data. Instead of traversing a continuous ray and determining the closest data sample for each step with a zero-order interpolation function, a discrete representation of the ray could be traversed. This discrete ray is generated using a 3D Bresenham-like algorithm or a 3D line scan-conversion (voxelization) algorithm [27,30]. As in the previous algorithm, for each pixel in the image plane, the data samples that contribute to it need to be determined. This could be done by casting a ray from each pixel along the viewing direction. This ray would be discretized (voxelized), and the contribution from each voxel along the path would be considered when producing the final pixel value. This technique is referred to as discrete ray casting [85].

In order to generate a 3D discrete ray using a voxelization algorithm, the 3D discrete topology of 3D paths has to be understood. There are three types of connected paths: 6-connected, 18-connected, and 26-connected, which are based upon the three adjacency relationships between consecutive voxels along the path. An example of these three types of connected paths is given in Fig. 2. Assuming a voxel is represented as a box centered at the grid point, two voxels are said to be 6-connected if they share a face; they are 18-connected if they share a face or an edge; and they are

FIGURE 2 6-, 18-, and 26-connected paths.

26-connected if they share a face, an edge, or a vertex. A 6-connected path is a sequence of voxels, where each pair of consecutive voxels is 6-connected. Similar definitions exist for 18- and 26-connected paths.
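The three adjacency relationships reduce to counting the number of coordinate axes along which two voxels differ. A small sketch (the function name is illustrative) makes this concrete:

```python
def connectivity(a, b):
    """Classify the adjacency of two voxels given as integer (x, y, z)
    grid coordinates: 6-connected if they share a face, 18-connected if
    they share a face or an edge, 26-connected if they share a face, an
    edge, or a vertex.  Returns None if the voxels are not adjacent."""
    dx, dy, dz = (abs(a[i] - b[i]) for i in range(3))
    if max(dx, dy, dz) > 1 or (dx, dy, dz) == (0, 0, 0):
        return None                  # not neighbors (or the same voxel)
    shared = dx + dy + dz            # number of axes along which they differ
    return {1: 6, 2: 18, 3: 26}[shared]
```

Differing along one axis means a shared face, along two axes a shared edge, and along all three a shared vertex.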

In discrete ray casting, a ray is discretized into a 6-, 18-, or 26-connected path, and only voxels along this path are considered when determining the final pixel value. If a surface projection is required, the path is traversed until the first voxel that is part of the object is encountered. This voxel is then shaded, and the resulting color value is stored in the pixel. Six-connected paths contain almost twice as many voxels as 26-connected paths, so an image created using 26-connected paths would require less computation, but a 26-connected path may miss an intersection that would be detected using a 6-connected path.
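A simple way to produce a 26-connected path is to step one unit along the dominant axis and round the other coordinates; this is a simplified sketch of the Bresenham-like voxelization the text refers to, not the full algorithm from [27,30], and the function names are illustrative:

```python
import numpy as np

def discrete_ray_26(p0, p1):
    """Voxelize the segment p0 -> p1 into a 26-connected path by taking
    one step per unit of the dominant axis and rounding the remaining
    coordinates (a simplified 3D Bresenham-like scheme)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    n = int(np.max(np.abs(d)))               # steps along the dominant axis
    if n == 0:
        return [tuple(np.round(p0).astype(int))]
    return [tuple(np.round(p0 + d * (i / n)).astype(int)) for i in range(n + 1)]

def surface_voxel(volume, path):
    """Traverse a discrete path front to back; return the first voxel
    belonging to the object (nonzero), as in a surface projection."""
    for v in path:
        if volume[v]:
            return v
    return None
```

Because each coordinate changes by at most one unit per step, consecutive voxels in the returned path are guaranteed to be 26-connected.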

To produce a shaded image, the distance to the closest intersection is stored at each pixel in the image, and then this image is passed to a 2D discrete shader, such as those described previously. However, better results can be obtained by performing a 3D discrete shading operation at the intersection point. One 3D discrete shading method, known as normal-based contextual shading [5], can be employed to estimate the normal when zero-order interpolation is used. The normal for a face of a voxel that is on the surface of the object is determined by examining both the orientation of that face and the orientation of the four faces on the surface that are edge-connected to that face. Since a face of a voxel can have only six possible orientations, the error in the approximated normal can be significant. More accurate results can be obtained using a technique known as gray-level shading [4,8,23,69,70]. If the intersection occurs at location (x, y, z) in the data, then the gray-level gradient at that location can be approximated with a central difference,

Gx = ( f(x + 1, y, z) - f(x - 1, y, z) ) / (2Dx)
Gy = ( f(x, y + 1, z) - f(x, y - 1, z) ) / (2Dy)
Gz = ( f(x, y, z + 1) - f(x, y, z - 1) ) / (2Dz)

where (Gx, Gy, Gz) is the gradient vector, and Dx, Dy, and Dz are the distances between neighboring samples in the x, y, and z directions, respectively. The gradient vector is used as a normal vector for the shading calculation, and the intensity value obtained from shading is stored in the image. A normal estimation can be performed at point sample pi, and this information, along with the light direction and the distance i × d, can be used to shade pi.
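The central-difference gradient and its use as a shading normal can be sketched as below; the simple Lambertian (diffuse) shading model is one common choice, used here for illustration, and the function names are not from the original text:

```python
import numpy as np

def gray_level_gradient(f, x, y, z, dx=1.0, dy=1.0, dz=1.0):
    """Approximate the gray-level gradient (Gx, Gy, Gz) at grid point
    (x, y, z) with central differences; dx, dy, dz are the sample
    spacings in each direction."""
    gx = (f[x + 1, y, z] - f[x - 1, y, z]) / (2.0 * dx)
    gy = (f[x, y + 1, z] - f[x, y - 1, z]) / (2.0 * dy)
    gz = (f[x, y, z + 1] - f[x, y, z - 1]) / (2.0 * dz)
    return np.array([gx, gy, gz])

def shade(normal, light):
    """Lambertian intensity: cosine of the angle between the estimated
    normal and the light direction, clamped to zero."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light, float)
    l = l / np.linalg.norm(l)
    return max(0.0, float(n @ l))
```

For a volume whose values increase linearly along z, the estimated gradient points along +z, as one would expect of a surface normal for that ramp.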

Actually, stopping at the first opaque voxel and shading there is only one of many operations that can be performed on the voxels along a discrete path or continuous ray. Instead, the whole ray could be traversed, storing in the image plane pixel the maximum value encountered along the ray. Figure 3 is a first-opaque, or surface, projection of a human brain, which was reconstructed from MRI data, while Fig. 4 is a maximum projection of the same brain. As opposed to a surface projection, a maximum projection is capable of revealing some internal parts of the data, specifically those with high density, such as blood vessels in MRI data. Another option is to display the sum (simulating X-rays) or average of all values along the ray (see Fig. 5). More complex techniques, described later, may involve defining an opacity and color for each scalar value, and then accumulating intensity along the ray according to some compositing function, capturing and revealing 3D structure information and 3D internal features (see Fig. 6).
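The surface, maximum, sum, and average projections differ only in how the samples gathered along one ray are combined into a pixel value, which a small sketch makes explicit (the function name and mode strings are illustrative):

```python
import numpy as np

def project(samples, mode="max"):
    """Combine the scalar samples collected along one ray into a single
    pixel value: 'max' gives a maximum projection, 'sum' an X-ray-like
    projection, and 'average' the mean of all values along the ray."""
    s = np.asarray(samples, dtype=float)
    if mode == "max":
        return s.max()
    if mode == "sum":
        return s.sum()
    if mode == "average":
        return s.mean()
    raise ValueError("unknown projection mode: " + mode)
```

A maximum projection lets a single bright sample (e.g., a blood vessel) dominate the pixel, whereas sum and average projections blend everything the ray passes through.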

The two previous rendering techniques, binary ray casting and discrete ray casting, use zero-order interpolation to define the scalar value at any location in R3. The advantages of zero-order interpolation are simplicity and speed; the disadvantage is the aliasing it produces in the image. Higher-order interpolation functions can be used to create a more accurate image, but generally at the cost of algorithm complexity and computation time. The next three algorithms described in this section all use higher-order interpolation functions.
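First-order (trilinear) interpolation is the most common higher-order choice: the sample value is a weighted blend of the eight surrounding grid points rather than simply the nearest one. A sketch, with an illustrative function name:

```python
import numpy as np

def trilinear(volume, p):
    """First-order (trilinear) interpolation of a scalar volume at the
    continuous location p = (x, y, z).  Contrast with zero-order
    sampling, which just rounds p to the nearest grid point."""
    x, y, z = p
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0          # fractional offsets in the cell
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *    # weight of each of the
                     (fy if dy else 1 - fy) *    # eight cell corners
                     (fz if dz else 1 - fz))
                v += w * volume[x0 + dx, y0 + dy, z0 + dz]
    return v
```

On a volume that ramps linearly in z, trilinear interpolation reproduces the ramp exactly between grid points, which is precisely where zero-order interpolation produces staircase aliasing.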

When creating a composite projection of a data set, there are two important parameters: the color at a sample point, and the opacity at that location. An image-order volume rendering algorithm developed by Levoy [39] states that given an array of data samples S, two new arrays Sc and Sa, which define the color and opacity at each grid location, can be precomputed.
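One common compositing function accumulates color and opacity front to back along the ray, weighting each sample by the transparency accumulated so far. This is a sketch of that scheme under the usual alpha-blending assumptions; the function name and the early-termination threshold are illustrative:

```python
def composite(colors, alphas):
    """Front-to-back compositing along one ray.  colors and alphas are
    the per-sample color and opacity values encountered along the ray,
    nearest sample first; returns the accumulated (color, opacity)."""
    c_acc, a_acc = 0.0, 0.0
    for c, a in zip(colors, alphas):
        c_acc += (1.0 - a_acc) * a * c   # remaining transparency scales the sample
        a_acc += (1.0 - a_acc) * a
        if a_acc >= 0.999:               # early ray termination: pixel is opaque
            break
    return c_acc, a_acc
```

Early termination is a natural optimization of this formulation: once the accumulated opacity saturates, samples farther along the ray cannot affect the pixel.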

FIGURE 3 A surface projection of an MRI scan of a brain. See also Plate 127.

FIGURE 5 An X-ray projection of an MRI scan of a brain.

