FIGURE 5 An X-ray projection of an MRI scan of a brain. See also Plate 129.

FIGURE 6 A composited projection of an MRI scan of a brain. See also Plate 130.

color and opacity at each grid location, can be generated using preprocessing techniques. The interpolation functions f(x, y, z), fc(x, y, z), and fa(x, y, z), which specify the sample value, color, and opacity at any location in R³, are then defined. fc and fa are often referred to as transfer functions.
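As a concrete illustration, the evaluation of such an interpolation function over the sample array might be sketched as follows, assuming trilinear interpolation, a NumPy array holding the samples, and coordinates strictly inside the grid (the function name is illustrative, not from the original):

    import numpy as np

    def trilinear(f, x, y, z):
        # Sketch of an interpolation function f(x, y, z) over a 3D sample
        # array: blend the eight surrounding voxels by trilinear weights.
        # Assumes 0 <= x < f.shape[0] - 1, and likewise for y and z.
        x0, y0, z0 = int(x), int(y), int(z)
        dx, dy, dz = x - x0, y - y0, z - z0
        value = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    w = ((dx if i else 1 - dx)
                         * (dy if j else 1 - dy)
                         * (dz if k else 1 - dz))
                    value += w * f[x0 + i, y0 + j, z0 + k]
        return value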

Generating the array Sc of color values involves performing a shading operation, such as gray-level shading, at each data sample in the original array S. For this purpose, the Phong illumination model can be used, with the normal at each data sample taken to be the unit gradient vector at that location. The gradient vector at any location can be computed by partially differentiating the interpolation function with respect to x, y, and z to obtain each component of the gradient. If the interpolation function is not first-derivative continuous, aliasing artifacts will occur in the image because of the discontinuous normal vectors. A smoother set of gradient vectors can be obtained using a central-differencing method similar to the one described earlier in this section.
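A minimal sketch of this computation, assuming the samples are held in a NumPy array (np.gradient uses central differences in the interior and one-sided differences at the boundaries):

    import numpy as np

    def gradient_normals(S):
        # Central-difference gradient of the scalar field S, normalized to
        # unit length so it can serve as the shading normal at each sample.
        gx, gy, gz = np.gradient(S.astype(float))
        g = np.stack((gx, gy, gz), axis=-1)
        mag = np.linalg.norm(g, axis=-1, keepdims=True)
        return g / np.maximum(mag, 1e-12)  # avoid division by zero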

Calculating the array Sa is essentially a surface-classification operation. There are different ways to classify surfaces within a scalar field, and each requires a different mapping from S(x, y, z) to Sa(x, y, z). When an isosurface at some constant value v is to be viewed with opacity av, Sa(x, y, z) is simply set to av wherever S(x, y, z) = v, and to 0 elsewhere. This produces aliasing artifacts, which can be reduced by setting Sa(x, y, z) close to av when S(x, y, z) is close to v. The best results are obtained when the thickness of this transition region is constant throughout the volume, which can be approximated by having the opacity fall off at a rate inversely proportional to the magnitude of the local gradient vector. Multiple isosurfaces can be displayed in a single image by separately applying the classification mappings and then combining the opacities.
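This gradient-scaled falloff might be sketched as follows; the transition-thickness parameter r and the array names are illustrative assumptions, with the gradient magnitudes taken as precomputed:

    import numpy as np

    def isosurface_opacity(S, grad_mag, v, alpha_v, r):
        # Fuzzy classification of the isosurface S = v: full opacity alpha_v
        # on the surface, falling off at a rate inversely proportional to
        # the local gradient magnitude so that the transition thickness
        # stays roughly constant (r is the assumed thickness in voxels).
        Sa = np.zeros_like(S, dtype=float)
        flat = grad_mag == 0
        Sa[flat & (S == v)] = alpha_v
        steep = ~flat
        t = 1.0 - np.abs(v - S[steep]) / (r * grad_mag[steep])
        Sa[steep] = alpha_v * np.maximum(t, 0.0)
        return Sa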

Once the Sc(x, y, z) and Sa(x, y, z) arrays have been determined, rays are cast from the pixels through these two arrays, sampling at evenly spaced locations. To determine the value at a sample location, the trilinear interpolation functions fc and fa are used. Once these point samples along the ray have been computed, a fully opaque background is added in, and the values are composited in back-to-front order to produce a single color that is placed in the pixel.
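The compositing step for a single ray might look like this, assuming the color and opacity point samples have already been obtained through fc and fa and are ordered front to back:

    import numpy as np

    def composite_back_to_front(colors, alphas, background):
        # 'Over' compositing from the fully opaque background toward the
        # eye: each sample's color is blended on top of everything behind.
        c = np.asarray(background, dtype=float)
        for color, a in zip(reversed(colors), reversed(alphas)):
            c = a * np.asarray(color, dtype=float) + (1.0 - a) * c
        return c  # final pixel color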

Two rendering techniques for displaying volumetric data, known together as the V-Buffer method, were developed by Upson and Keeler [73]. The first method for visualizing the scalar field is an image-order ray-casting technique, in which rays are cast from each pixel on the image plane into the volume. For each cell in the volume along the path of a ray, the scalar value is determined at the point where the ray first intersects the cell. The ray is then stepped through the cell, with calculations for scalar value, shading, opacity, texture mapping, and depth cuing performed at each step. This process is repeated for each cell along the ray, accumulating color and opacity, until the ray exits the volume or the accumulated opacity reaches unity. At that point, the accumulated color and opacity for the pixel are stored, and the next ray is cast.
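The accumulation loop with its termination test might be sketched as follows; the per-step sampling, shading, and texture calculations are assumed to have produced the color and opacity sequences already (an illustrative sketch, not the original V-Buffer code):

    def accumulate_front_to_back(colors, alphas, threshold=0.999):
        # Front-to-back accumulation along one ray; stops early once the
        # accumulated opacity reaches (effectively) unity.
        acc_color, acc_alpha = 0.0, 0.0
        for c, a in zip(colors, alphas):
            acc_color += (1.0 - acc_alpha) * a * c
            acc_alpha += (1.0 - acc_alpha) * a
            if acc_alpha >= threshold:
                break
        return acc_color, acc_alpha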

The goal of this method is not to produce a realistic image, but rather to provide a representation of the volumetric data that can be interpreted by a scientist or an engineer. For this purpose, the user is given the ability to modify certain parameters in the shading equations, which leads to an informative, rather than physically accurate, shaded image. A simplified shading equation is used, where the perceived intensity as a function of wavelength, I(λ), is defined as

I(λ) = Ka(λ) Ia + Kd(λ) Σj (N · Lj) Ij.   (8)

In this equation, Ka is the ambient coefficient, Ia is the ambient intensity, Kd is the diffuse coefficient, N is the normal approximated by the local gradient, Lj is the vector to the jth light source, and Ij is the intensity of the jth light source. In order to highlight certain features in the final image, the diffuse coefficient can be defined as a function of not only wavelength, but also scalar value and solid texture:

Kd(λ, S, M) = K(λ) Td(λ, S(x, y, z)) M(λ, x, y, z).   (9)

Here, K is the actual diffuse coefficient, Td is the color transfer function, S is the sample array, and M is the solid texture map. The color transfer function is defined for red, green, and blue and maps scalar value to intensity. In this method the following intensity integral is approximated when accumulating along the ray:

I(λ) = ∫ [ τ(d) O(S) ( Ka(λ) Ia + Kd(λ, S, M) Σj (N · Lj) Ij ) + (1 − τ(d)) bc(λ) ] du.   (10)


Here, τ(d) represents atmospheric attenuation as a function of distance d, O(S) is the opacity transfer function, bc is the background color, and u is a vector in the direction of the view ray. The opacity transfer function is similar to the color transfer function in that it defines opacity as a function of scalar value. Different color and opacity transfer functions can be defined to highlight different features in the volume. However, selecting the desired transfer function is very difficult [19].
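The role of these transfer functions can be illustrated with simple lookup tables standing in for Td and O; the ramps below are arbitrary choices made for the sake of the example, not functions from the original method:

    import numpy as np

    # Hypothetical 256-entry transfer functions for 8-bit scalar values:
    # a color ramp toward orange and a clamped linear opacity ramp.
    Td = np.linspace(0.0, 1.0, 256)[:, None] * np.array([1.0, 0.5, 0.2])
    O = np.clip(np.linspace(-0.5, 1.5, 256), 0.0, 1.0)

    def classify(sample):
        # Map a scalar sample to (RGB color, opacity) via the lookup tables.
        s = int(np.clip(sample, 0, 255))
        return Td[s], O[s]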

The second method for visualizing the scalar field is a cell-by-cell processing technique [73] in which an image-order ray-casting technique is used within each cell, making this a hybrid technique. Each cell in the volume is processed in front-to-back order. Processing begins on the plane closest to the viewpoint and progresses plane by plane. Within each plane, processing begins with the cell closest to the viewpoint and then continues in order of increasing distance from the viewpoint. Each cell is processed by first determining, for each scan line in the image plane, which pixels are affected by the cell. Then, for each pixel, an integration volume is determined, and within its bounds an intensity calculation similar to Eq. (10) is performed.

This process continues in a front-to-back order until all cells have been processed, with intensity accumulated into pixel values. Once a pixel opacity reaches unity, a flag is set and this pixel is not processed further. Because of the front-to-back nature of this algorithm, incremental display of the image is possible.

In order to simulate light coming from translucent objects, volumetric data with data samples representing density values can be considered as a field of density emitters [58]. A density emitter is a tiny particle that both emits and scatters light. The number of density emitters in any small region within the volume is proportional to the scalar value in that region. These density emitters are used to correctly model the occlusion of deeper parts of the volume by closer parts, yet both shadowing and color variation due to differences in scattering at different wavelengths are ignored. These effects are ignored because it is believed that they would complicate the image, detracting from the perception of density variation. As in the V-Buffer method, rays are cast from the eye point through each pixel on the image plane and into the volume. The intensity I of light for a given pixel is calculated according to

I = ∫_{t1}^{t2} e^{−τ ∫_{t1}^{t} ρ^γ(λ) dλ} ρ^γ(t) dt.   (12)

In this equation, the ray is traversed from t1 to t2, accumulating at each location t the density ρ^γ(t), attenuated by the probability e^{−τ ∫_{t1}^{t} ρ^γ(λ) dλ} that the light emitted at t is scattered before reaching the eye. The parameter τ is modifiable and controls the attenuation, with higher values of τ specifying a medium that darkens more rapidly. The parameter γ is also modifiable and controls the spread of density values: low values of γ produce a diffuse, cloud-like appearance, while higher values highlight the dense portions of the data. For each ray, three values may be computed in addition to I: the maximum value encountered along the ray, the distance at which that maximum occurred, and the center of gravity of density emitters along the ray. By mapping these values to different color parameters (such as hue, saturation, and lightness), interesting effects can be achieved.
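A numerical sketch of this intensity integral, assuming evenly spaced density samples along the ray and a simple Riemann-sum discretization (the names are illustrative):

    import numpy as np

    def density_emitter_intensity(rho, dt, tau, gamma):
        # Approximate the integral over [t1, t2] of
        #   exp(-tau * integral of rho^gamma up to t) * rho^gamma(t) dt
        # for one ray, given density samples rho spaced dt apart.
        r = rho.astype(float) ** gamma
        # Optical depth accumulated from t1 up to each sample location.
        depth = tau * np.concatenate(([0.0], np.cumsum(r[:-1] * dt)))
        return float(np.sum(np.exp(-depth) * r * dt))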

Krueger [35] showed that the various existing volume rendering models can be described as special cases of an underlying transport theory model of the transfer of particles in nonhomogeneous media. The basic idea is that a beam of "virtual" particles is sent through the volume, with the user selecting the particle properties and laws of interaction between the particles and data. The image plane then contains the "scattered" virtual particles, and information about the data is obtained from the scattering pattern. If, for example, the virtual particles are chosen to have the properties of photons, and the laws of interaction are governed by optical laws, then this model essentially becomes a generalized ray tracer. Other virtual particles and interaction laws can be used, for example, to identify periodicities and similar hidden symmetries of the data.

Using Krueger's transport theory model, the intensity of light I at a pixel can be described as a path integral along the view ray:

I = ∫ Q(p) e^{−τ(p)} dp,   (13)

where the integral is taken along the view ray and τ(p) denotes the optical depth between the point p and the eye.

The emission at each point p along the ray is attenuated by the optical depth to the eye to produce the final intensity value for a pixel. The optical depth is a function of the total extinction coefficient σt, which is composed of the absorption coefficient σa and the scattering coefficient σsc. The generalized source Q(p) is defined as

Q(p) = q(p) + σsc ∫_{4π} ρsc(ω, ω′) I(p, ω′) dω′.   (14)

This generalized source consists of the emission q(p) at the given point plus the incoming intensity along all directions ω′, scaled by the scattering phase function ρsc. Typically, a low-albedo approximation is used to simplify the calculations, reducing the integral in Eq. (14) to a sum over all light sources.
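Under the low-albedo approximation, the directional integral in Eq. (14) reduces to a sum over the light sources, which might be sketched as follows (the phase function and the light-source representation are assumptions made for illustration):

    import numpy as np

    def generalized_source_low_albedo(q, sigma_sc, phase, lights, view_dir):
        # Low-albedo form of the generalized source Q(p): emission at the
        # point plus single-scattered light summed over point light sources.
        Q = q
        for light_dir, intensity in lights:
            cos_theta = float(np.dot(view_dir, light_dir))
            Q += sigma_sc * phase(cos_theta) * intensity
        return Q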

