Object-Order Techniques

Object-order techniques involve mapping data samples onto the image plane. One way to accomplish a projection of a surface contained within the volume is to loop through data samples, projecting each sample that is part of the object onto the image plane. For this algorithm, data samples are binary voxels, with a value of 0 indicating background and a value of 1 indicating the object. Also, data samples are on a grid with uniform spacing in all three directions.

If an image is produced by projecting all voxels with a value of 1 to the image plane in an arbitrary order, we are not guaranteed a correct image. If two voxels project to the same pixel on the image plane, the one that was projected later will prevail, even if it is farther from the image plane than the earlier projected voxel. This problem can be solved by traversing the data samples in a back-to-front order. For this algorithm, the strict definition of back-to-front can be relaxed to require only that if two voxels project to the same pixel on the image plane, the first processed voxel must be farther away from the image plane than the second one. This can be accomplished by traversing the data plane-by-plane, and row-by-row inside each plane. For arbitrary orientations of the data in relation to the image plane, some axes may be traversed in an increasing order, while others may be considered in a decreasing order. The traversal can be accomplished with three nested loops, indexing on x, y, and z. Although the relative orientations of the data and image planes specify whether each axis should be traversed in an increasing or decreasing manner, the ordering of the axes in the traversal is arbitrary.
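As a concrete illustration, the sketch below implements this back-to-front traversal for the simplest case of an axis-aligned orthographic view (image plane at z = 0, so a voxel at (x, y, z) projects to pixel (x, y) with depth z). The setup, function name, and NumPy data layout are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

def back_to_front_project(volume):
    """Back-to-front projection of a binary voxel volume (sketch).

    Assumed setup: axis-aligned orthographic view, image plane at z = 0,
    so voxel (x, y, z) projects to pixel (x, y) and its depth is z.
    """
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny), dtype=np.uint8)   # projected voxel values
    zbuf = np.full((nx, ny), np.inf)             # depth of the voxel kept at each pixel

    for z in range(nz - 1, -1, -1):   # outermost loop: plane by plane, back to front
        for y in range(ny):           # row by row inside each plane
            for x in range(nx):
                if volume[x, y, z] == 1:
                    image[x, y] = 1   # a later (never farther) voxel simply overwrites
                    zbuf[x, y] = z
    return image, zbuf
```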

An alternative to back-to-front projection is a front-to-back method in which voxels are traversed in the order of increasing distance from the image plane. Although a back-to-front method is easier to implement, a front-to-back method has the advantage that once a voxel is projected onto a pixel, other voxels that project to the same pixel are ignored, since they would be hidden by the first voxel. Another advantage of front-to-back projection methods is that if the axis that is most parallel to the viewing direction is chosen to be the outermost loop of the data traversal, meaningful partial image results can be displayed to the user. This allows the user to better interact with the data and terminate the image generation if, for example, an incorrect view direction was selected.
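Under the same assumed setup, a front-to-back variant only needs to skip voxels whose pixel has already been written; the per-plane comment marks where a partial image could be handed to the user. This is a sketch, not a prescribed implementation.

```python
import numpy as np

def front_to_back_project(volume):
    """Front-to-back variant of the projection sketch above (same assumed view)."""
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny), dtype=np.uint8)
    zbuf = np.full((nx, ny), np.inf)

    for z in range(nz):               # outermost loop: axis most parallel to the view direction
        for y in range(ny):
            for x in range(nx):
                if volume[x, y, z] == 1 and image[x, y] == 0:
                    image[x, y] = 1   # first (closest) voxel wins; later ones are ignored
                    zbuf[x, y] = z
        # after each plane, (image, zbuf) is a meaningful partial result that
        # could be shaded and shown to the user before the traversal finishes
    return image, zbuf
```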

Clipping planes orthogonal to the three major axes and clipping planes parallel to the view plane are easy to implement using either a back-to-front or a front-to-back algorithm. For orthogonal clipping planes, traversal of the data is limited to a smaller rectangular region within the full data set. To implement clipping planes parallel to the image plane, data samples whose distance to the image plane is less than the distance between the cut plane and the image plane are ignored. This ability to explore the whole data set is a major difference between volume rendering techniques and surface rendering techniques. In surface rendering techniques, the geometric primitive representation of the object needs to be changed in order to implement cut planes, which can be a time-consuming process. In a back-to-front method, cut planes can be achieved by simply modifying the bounds of the data traversal and adding a condition when placing depth values in the image plane pixels.
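A hedged sketch of how both kinds of cut planes could fit into the back-to-front loop above: orthogonal clipping planes become loop bounds, and a cut plane parallel to the image plane becomes a depth test. The parameter names are invented for illustration.

```python
import numpy as np

def clipped_back_to_front_project(volume, lo=(0, 0, 0), hi=None, cut_depth=None):
    """Back-to-front projection with clipping planes (sketch).

    lo/hi bound the traversal for clipping planes orthogonal to the major
    axes; cut_depth realizes a cut plane parallel to the image plane by
    ignoring voxels that are closer to the image plane than the cut plane.
    """
    nx, ny, nz = volume.shape
    if hi is None:
        hi = (nx, ny, nz)
    image = np.zeros((nx, ny), dtype=np.uint8)
    zbuf = np.full((nx, ny), np.inf)

    for z in range(hi[2] - 1, lo[2] - 1, -1):     # traversal limited to the clipped region
        if cut_depth is not None and z < cut_depth:
            continue                              # voxel lies in front of the cut plane
        for y in range(lo[1], hi[1]):
            for x in range(lo[0], hi[0]):
                if volume[x, y, z] == 1:
                    image[x, y] = 1
                    zbuf[x, y] = z
    return image, zbuf
```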

The distance of a voxel to the image plane could be stored in the pixel to which it maps along with the voxel value. At the end of a data traversal, a 2D array of depth values called a Z-buffer is created, where the value at each pixel in the Z-buffer is the distance to the closest nonempty voxel. A 2D discrete shading technique can then be applied to the image, resulting in a shaded image suitable for display. The 2D discrete shading techniques described here take as input a 2D array of depth values and a 2D array of projected voxel values, and produce as output a 2D image of intensity values. The simplest 2D discrete shading method is known as depth shading, or depth-only shading [21,74], where only the Z-buffer is used and the intensity value stored in each pixel of the output image is inversely proportional to the depth of the corresponding input pixel. This produces images where features far from the image plane appear dark, while close features are bright. Since surface orientation is not considered in this shading method, most details such as surface discontinuities and object boundaries are lost.
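The depth-shading step itself is a purely 2D operation on the Z-buffer. A minimal sketch follows, assuming depths lie in [0, max_depth] and a linear falloff, neither of which the text mandates.

```python
import numpy as np

def depth_shade(zbuf, max_depth):
    """Depth-only shading sketch: intensity inversely related to depth.

    Pixels with no projected voxel (depth = inf) stay black; features near
    the image plane come out bright, distant features dark.
    """
    intensity = np.zeros(zbuf.shape, dtype=float)
    hit = np.isfinite(zbuf)
    intensity[hit] = 1.0 - zbuf[hit] / max_depth   # simple linear falloff with depth
    return np.clip(intensity, 0.0, 1.0)
```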

A more accurately shaded image can be obtained by passing the 2D depth image to a gradient-shader [15], which can take into account the object surface orientation and the distance from the light at each pixel to produce a shaded image. This method evaluates the gradient at each (x, y) pixel location in the input image by

∇z = (∂z/∂x, ∂z/∂y, 1), (1)

where z = D(x, y) is the depth stored in the Z-buffer associated with pixel (x, y). The estimated gradient vector at each pixel is then used as a normal vector for shading purposes.

The ∂z/∂x term can be approximated using a backward difference D(x, y) − D(x − 1, y), a forward difference D(x + 1, y) − D(x, y), or a central difference ½ (D(x + 1, y) − D(x − 1, y)); similar expressions are used for approximating ∂z/∂y. In general, the central difference is a better approximation of the derivative, but along object edges where, for example, pixels (x, y) and (x + 1, y) belong to two different objects, a backward difference would provide a better approximation. A context-sensitive normal estimation method [87] was developed to provide more accurate normal estimates by detecting image discontinuities. In this method, two pixels are considered to be in the same "context" if their depth values and the first derivatives of the depth at these locations do not differ greatly. The gradient vector at a pixel p is then estimated by considering only those pixels that lie within a user-defined neighborhood and belong to the same context as p. This ensures that sharp object edges and slope changes are not smoothed out in the final image.
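A sketch of a plain (non-context-sensitive) gradient shader is given below, assuming central differences, a normal of the form (−∂z/∂x, −∂z/∂y, 1), and a simple Lambertian light term; these specific choices, and the function name, are illustrative rather than prescribed by the text.

```python
import numpy as np

def gradient_shade(zbuf, light=(0.0, 0.0, 1.0)):
    """Gradient shading sketch using central differences on the depth image."""
    D = np.where(np.isfinite(zbuf), zbuf, 0.0)
    dzdx = np.zeros_like(D)
    dzdy = np.zeros_like(D)
    dzdx[1:-1, :] = 0.5 * (D[2:, :] - D[:-2, :])    # central difference in x
    dzdy[:, 1:-1] = 0.5 * (D[:, 2:] - D[:, :-2])    # central difference in y

    # per-pixel normal from the depth gradient (assumed convention)
    normals = np.stack([-dzdx, -dzdy, np.ones_like(D)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    shade = np.clip(normals @ l, 0.0, 1.0)          # Lambertian (diffuse) term
    return np.where(np.isfinite(zbuf), shade, 0.0)  # background pixels stay black
```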

The previous rendering methods consider only binary data samples, where a value of 1 indicates the object and a value of 0 indicates the background. Many forms of data acquisition (e.g., CT) produce data samples with 8, 12, or even more bits of data per sample. If these data samples represent values at sample points, and the value between sample points is given by some convolution (reconstruction kernel) applied to the data samples, then a scalar field that approximates the original 3D signal has been defined.

One way to reconstruct the original signal is, as previously described, to define a function f(x, y, z) that determines the value at any location in space. This technique is typically employed by backward-mapping (image-order) algorithms. In forward-mapping algorithms, the original signal is reconstructed by spreading the value at a data sample into space. Westover describes a splatting algorithm [80] for approximating smooth object-ordered volume rendering, in which the value of the data samples represents a density. Each data sample s = (xs, ys, zs, p(s)), s ∈ S, has a function Cs defining its contribution to every point (x, y, z) in space,

Cs(x, y, z) = hv(x − xs, y − ys, z − zs) p(s), (3)

where hv is the volume reconstruction kernel and p(s) is the density of sample s, which is located at (xs, ys, zs). The contribution of a sample s to an image plane pixel (x, y) can then be computed by integration,

∫ Cs(x, y, u) du = p(s) ∫ hv(x − xs, y − ys, u) du, (4)

where the u coordinate axis is parallel to the view ray. Since this integral is independent of the sample density and depends only on its (x, y) projected location, a footprint function F can be defined as

F(x, y) = ∫ hv(x, y, u) du, (5)

where (x, y) is the displacement of an image sample from the center of the sample's image plane projection. The weight w at each pixel can then be expressed as

w(x, y)s = F(x − xs, y − ys), (6)

where (x, y) is the pixel location, and (xs, ys) is the image plane location of the sample s.

A footprint table can be generated by evaluating the integral in Eq. (5) on a grid with a resolution much higher than the image plane resolution. All table values lying outside of the footprint table extent have zero weight and, therefore, need not be considered when generating an image. A footprint table for a data sample s can be centered on the projected image plane location of s and be sampled in order to determine the weight of the contribution of s to each pixel on the image plane. Multiplying this weight by p(s) then gives the contribution of s to each pixel.
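The following sketch shows how a precomputed footprint table might be applied to a single sample, assuming a square table centered on the sample's projected location and nearest-neighbor table lookups; the function name, table layout, and scaling parameter are assumptions made for illustration.

```python
import numpy as np

def splat_sample(image, footprint, xs, ys, density, table_scale):
    """Splat one sample into the image via a precomputed footprint table (sketch).

    footprint: square 2D table sampled more finely than the image;
    table_scale: table entries per image pixel;
    (xs, ys): the sample's projected image-plane location, on which the
    table is centered.
    """
    n = footprint.shape[0]
    half = n / (2.0 * table_scale)                 # footprint radius in pixels
    x0, x1 = int(np.floor(xs - half)), int(np.ceil(xs + half))
    y0, y1 = int(np.floor(ys - half)), int(np.ceil(ys + half))

    for px in range(max(x0, 0), min(x1 + 1, image.shape[0])):
        for py in range(max(y0, 0), min(y1 + 1, image.shape[1])):
            # displacement of this pixel from the projected sample center,
            # mapped into footprint-table coordinates (nearest-neighbor lookup)
            tx = int(round((px - xs) * table_scale + n / 2))
            ty = int(round((py - ys) * table_scale + n / 2))
            if 0 <= tx < n and 0 <= ty < n:
                image[px, py] += footprint[tx, ty] * density   # weight times p(s)
    return image
```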

Computing a footprint table can be difficult because of the integration required. Discrete integration methods can be used to approximate the continuous integral, but generating a footprint table is still a costly operation. Luckily, for orthographic projections, the footprint of each sample is the same except for an image plane offset. Therefore, only one footprint table needs to be calculated per view. Since this would still require too much computation time, only one generic footprint table is precomputed for the kernel. For each view, a view-transformed footprint table is created from the generic footprint table.

Generating a view-transformed footprint table from the generic footprint table can be accomplished in three steps. First, the image plane extent of the projection of the reconstruction kernel is determined. Next, a mapping is computed between this extent and the extent that surrounds the generic footprint table. Finally, the value for each entry in the view-transformed footprint table is determined by mapping the location of the entry to the generic footprint table, and sampling. The extent of the reconstruction kernel either is a sphere, or is bounded by a sphere, so the extent of the generic footprint table is always a circle. If grid spacing of the data samples is uniform along all three axes, then the reconstruction kernel is a sphere and the image plane extent of the reconstruction kernel will be a circle. The mapping from this extent to the extent of the generic footprint table is simply a scaling operation. If grid spacing differs along the three axes, then the reconstruction kernel is an ellipsoid and the image plane extent of the reconstruction kernel will be an ellipse. In this case, a mapping from this ellipse to the circular extent of the generic footprint table must be computed.
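For the common case of uniform grid spacing, the generic footprint table can be built once by numerically integrating the kernel along the view-ray direction u, per Eq. (5); the view-transformed table is then just this table scaled to the projected kernel extent. The sketch below assumes a truncated Gaussian kernel, a circular extent, and a simple midpoint rule, none of which is mandated by the algorithm.

```python
import numpy as np

def generic_gaussian_footprint(table_size=64, radius=2.0, sigma=1.0):
    """Generic footprint table for a truncated Gaussian kernel (sketch of Eq. (5)).

    Each table entry holds the kernel integrated along the view-ray
    direction u, evaluated with a midpoint rule; the table extent is the
    circle of the given radius.
    """
    xs = np.linspace(-radius, radius, table_size)
    us = np.linspace(-radius, radius, 256)
    du = us[1] - us[0]
    table = np.zeros((table_size, table_size))
    for i, x in enumerate(xs):
        for j, y in enumerate(xs):
            r2 = x * x + y * y + us * us
            kernel = np.where(r2 <= radius * radius,
                              np.exp(-r2 / (2.0 * sigma * sigma)), 0.0)
            table[i, j] = kernel.sum() * du        # integral of the kernel along u
    return table
```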

There are three modifiable parameters in this algorithm that can greatly affect image quality. First, the size of the footprint table can be varied. Small footprint tables produce blocky images, while large footprint tables may smooth out details and require more space. Second, different sampling methods can be used when generating the view-transformed footprint table from the generic footprint table. Using a nearest-neighbor approach is fast, but may produce aliasing artifacts. On the other hand, using bilinear interpolation produces smoother images at the expense of longer rendering times. The third parameter that can be modified is the reconstruction kernel itself. The choice of, for example, a cone function, Gaussian function, sinc function, or bilinear function affects the final image.

Drebin, Carpenter, and Hanrahan [11] developed a technique for rendering volumes that contain mixtures of materials, such as CT data containing bone, muscle, and flesh. In this method, various assumptions about the volume data are made. First, it is assumed that the scalar field was sampled above the Nyquist frequency, or that a low-pass filter was used to remove high frequencies before sampling. The volume contains either several scalar fields, or one scalar field representing the composition of several materials. In the latter case, it is assumed that materials can be differentiated either by the scalar value at each point, or by additional information about the composition of each volume element.

The first step in this rendering algorithm is to create new scalar fields from input data, known as material percentage volumes. Each material percentage volume is a scalar field representing only one material. Color and opacity are then associated with each material, with composite color and opacity obtained by linearly combining the color and opacity for each material percentage volume. A matte volume — that is, a scalar field on the volume with values ranging between 0 and 1 — is used to slice the volume or perform other spatial set operations. Actual rendering of the final composite scalar field is obtained by transforming the volume so that one axis is perpendicular to the image plane. The data is then projected plane by plane in a back-to-front manner and composited to form the final image.
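A compact sketch of this pipeline follows, assuming the material percentage volumes are already available as scalar fields, colors are premultiplied by opacity, and the volume has been transformed so the z axis is perpendicular to the image plane. The linear mixing and back-to-front "over" compositing follow the description above, while the data layout and function name are assumptions.

```python
import numpy as np

def render_material_volumes(percent_volumes, colors, opacities, matte=None):
    """Back-to-front compositing of material percentage volumes (sketch).

    percent_volumes: dict mapping material name -> scalar field in [0, 1];
    colors, opacities: per-material RGB triples and opacities;
    matte: optional matte volume in [0, 1] used to slice the data.
    """
    shape = next(iter(percent_volumes.values())).shape
    nx, ny, nz = shape

    # composite per-voxel opacity and (premultiplied) color as a linear
    # combination over the material percentage volumes
    alpha = np.zeros(shape)
    color = np.zeros(shape + (3,))
    for name, pct in percent_volumes.items():
        alpha += pct * opacities[name]
        color += pct[..., None] * opacities[name] * np.asarray(colors[name], dtype=float)
    if matte is not None:
        alpha *= matte
        color *= matte[..., None]

    # project plane by plane, back to front, compositing with the "over" operator
    image = np.zeros((nx, ny, 3))
    for z in range(nz - 1, -1, -1):
        a = alpha[:, :, z][..., None]
        image = color[:, :, z] + (1.0 - a) * image
    return np.clip(image, 0.0, 1.0)
```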
