Surface Rendering

FIGURE 2 Interactive orthogonal sectioning of a 3D volume image. The left panel shows intersecting orthogonal planes and the right panel shows cubic volume dissection. Various sliders are used to control the interactive orthogonal sectioning.

Surface rendering techniques characteristically require the extraction of contours (edges) that define the surface of the structure to be visualized, and the representation of that surface by a mosaic of connected polygons. A tiling algorithm places surface patches (or tiles) at each contour point, and with hidden surface removal and shading the surface is rendered visible. The advantage of this technique lies in the relatively small amount of contour data, resulting in fast rendering speeds. In addition, standard computer graphics techniques can be applied, including shading models (Phong, Gouraud), and the technique can take advantage of graphics hardware to speed the geometric transformation and rendering processes. The polygon-based surface descriptions can be transformed into analytical descriptions, which permits their use with other geometric visualization packages (e.g., CAD/CAM software), and the contours can be used to drive machinery that creates physical models of the structure. Other analytically defined structures can easily be superposed on the surface-rendered structures. The disadvantages of this technique stem largely from the need to discretely extract the contours that define the surface to be visualized. Other volume image information is lost in this process, which may be important for slice generation or value measurement. The approach also prohibits any interactive, dynamic determination of the surface to be rendered, because the decision as to which surface will be visualized has already been made during contour extraction. Finally, because of the discrete nature of the surface polygon placement, the technique is prone to sampling and aliasing artifacts on the rendered surface [23].
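As an illustration of this contour-extraction-and-tiling pipeline, the sketch below (Python, assuming NumPy and scikit-image are available) uses a marching-cubes routine as a stand-in for the tiling algorithm and a simple diffuse term in place of a full Phong or Gouraud shading model; the threshold and the synthetic test volume are hypothetical.

import numpy as np
from skimage import measure  # assumed available; supplies a marching-cubes tiler


def extract_surface(volume, threshold):
    """Tile the iso-surface of `volume` at `threshold` with triangles.

    Returns vertex coordinates, triangle indices, and per-vertex normals,
    i.e., the compact polygonal surface description discussed above.
    """
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces, normals


def diffuse_shade(normals, light_dir=(0.0, 0.0, 1.0)):
    """Per-vertex Lambertian shading (a simple stand-in for Phong/Gouraud)."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Clamp so back-facing vertices render dark rather than negative.
    return np.clip(normals @ light, 0.0, 1.0)


# Example: a synthetic sphere stands in for a thresholded anatomical volume.
zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(xx**2 + yy**2 + zz**2) < 20).astype(float)

verts, faces, normals = extract_surface(volume, threshold=0.5)
shade = diffuse_shade(normals)
print(f"{len(faces)} triangles, mean vertex intensity {shade.mean():.2f}")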

FIGURE 3 An arbitrarily oriented section (bottom center) cut obliquely through the 3D volume image can be interactively selected and computed. Interactive orientation is controlled by aeronautical maneuvers (pitch, roll, yaw, and elevate options seen at right), with a graphical indication of the intersection of the oblique plane with the orthogonal images (top row) used as an orientation reference.

Shaded surface displays are useful when there is a need or desire to visualize specific 3D surfaces, which is the case in many situations of practical importance. A shaded surface display is a 2D representation of a 3D surface; the 3D nature of the surface is conveyed with the aid of visual cues such as perspective, shading, texture, shadowing, and stereopsis. Generally, shaded surface displays are not well suited to "immediate" full visualization of the 3D volume; that is, they require some preprocessing of the 3D data to extract the desired surfaces. Shaded surface displays have proven popular in many applications since, once the surfaces of interest have been determined, images can be computed quickly for display. In most algorithms the surfaces are described in terms of polygon patches that join to form the complete surface. Modern computational systems can process tens of thousands of polygon patches per second. This speed permits satisfactory interactive capabilities in computer-aided design applications but does not always satisfy the requirements of interactive display of biological/medical images, because some computed 3D surfaces of anatomic structures may contain hundreds of thousands of polygons. Unless the number of polygon faces can be greatly reduced, even state-of-the-art display systems may not provide acceptable user interaction. Special-purpose graphics hardware can achieve the necessary speed but increases the cost of the computer system.
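The polygon-reduction step alluded to above can be sketched as follows. Vertex clustering on a coarse grid is only one simple strategy, shown here under the assumption that the mesh is given as NumPy vertex and face arrays (such as those produced by the previous sketch); it is not the decimation method of any particular display system.

import numpy as np


def cluster_decimate(verts, faces, cell_size):
    """Reduce polygon count by snapping vertices to a coarse grid and
    merging all vertices that fall into the same grid cell."""
    # Assign each vertex to a grid cell.
    cells = np.floor(verts / cell_size).astype(np.int64)
    # Keep one representative vertex per occupied cell.
    _, representative, inverse = np.unique(
        cells, axis=0, return_index=True, return_inverse=True)
    inverse = inverse.reshape(-1)  # guard against NumPy versions returning a 2D inverse
    new_verts = verts[representative]
    # Re-index the faces onto the representatives and drop degenerate
    # triangles whose corners collapsed into fewer than three cells.
    new_faces = inverse[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 2] != new_faces[:, 0]))
    return new_verts, new_faces[keep]


# Usage with the mesh from the previous sketch:
# small_verts, small_faces = cluster_decimate(verts, faces, cell_size=2.0)
# print(len(faces), "->", len(small_faces), "triangles")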

Several investigators have implemented methods for 3D shaded surface displays of anatomic structures [7,20,21,34,54]. Some follow an approach similar to the techniques used in solid modeling. The main difficulty in displaying biological structures is fitting tiles to a complex 3D surface. This problem can be circumvented by defining the surface elements as the borders of the 3D pixels (voxels), which makes the algorithm easy to apply to "real" data. However, producing a smooth-looking surface from such elements is a problem. One approach is to employ a contextual shading scheme in which the shading of a displayed face depends on the orientation of its neighbors, its distance from the observer, and the incident angle of the light [57]. Implementations of this type of algorithm have produced useful displays of 3D images but have also proven cumbersome, especially when the algorithm is used to detect and display soft tissues. The time required to segment the volume, isolate the desired surfaces (as is required by any 3D surface display), and convert the faces to the program's internal description must be added to the display time. Even though these steps are required only once per volume, when several volumes must be analyzed (as is almost always the case) they are too slow for efficient analysis.
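A much simplified rendering in the spirit of such a contextual shading scheme is sketched below: for each pixel the first occupied voxel along the viewing ray is found, and its shade combines a normal estimated from the occupancy of neighboring voxels with its distance from the observer. This is not an implementation of the algorithm in [57]; the viewing axis, light direction, and depth weighting are arbitrary choices made for illustration.

import numpy as np


def contextual_shade(binary_vol, light_dir=(0.3, 0.3, 1.0), depth_weight=0.4):
    """Render a binary volume viewed along the +z axis using neighbor-based
    normals and depth attenuation."""
    vol = binary_vol.astype(float)
    nz = vol.shape[2]

    # Depth of the first occupied voxel along z for every (x, y) ray;
    # rays that hit nothing keep depth == nz and stay background.
    hit = vol > 0.5
    depth = np.where(hit.any(axis=2), hit.argmax(axis=2), nz)

    # Normals estimated from the local neighborhood (central differences of
    # the occupancy volume), i.e., the "context" of each displayed face.
    gx, gy, gz = np.gradient(vol)
    normals = np.stack([gx, gy, gz], axis=-1)
    norms = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = np.divide(normals, norms, out=np.zeros_like(normals), where=norms > 0)

    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)

    image = np.zeros(vol.shape[:2])
    xs, ys = np.nonzero(depth < nz)
    zs = depth[xs, ys]
    diffuse = np.clip(np.abs(normals[xs, ys, zs] @ light), 0.0, 1.0)
    image[xs, ys] = (1.0 - depth_weight) * diffuse + depth_weight * (1.0 - zs / nz)
    return image


# Example: image = contextual_shade(volume > 0.5), using the synthetic
# sphere volume from the first sketch.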

FIGURE 4 Tracing along an arbitrary path on an orthogonal image (top left) interactively generates and displays a curved planar image (right) sampled through the 3D volume image along that trace.

The structure of surface rendering algorithms generally permits the incorporation of special display effects, such as dissection of the volume. This is implemented by inserting a preprocessing step that manipulates the binary volume prior to contour extraction. For example, "volume dissection" is accomplished by producing new binary volumes in which the voxels in a region of interest are isolated and processed separately. The region of interest can be related back to the entire volume at subsequent points in the analysis. Figure 5 illustrates rendering of the skull and brain from a thresholded CT scan of the head. The polygonal representation of each of these surfaces is also shown as a wireframe model.
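A minimal sketch of this kind of binary-volume dissection is given below; the Hounsfield-style thresholds and the synthetic "CT" block are purely illustrative, and each resulting binary volume could be passed to the surface extractor sketched earlier.

import numpy as np


def dissect(binary_vol, roi):
    """Produce a new binary volume that keeps only the voxels inside a
    region of interest (a boolean mask of the same shape)."""
    return binary_vol & roi


# Hypothetical example: a synthetic volume in Hounsfield-like units.
rng = np.random.default_rng(0)
ct = rng.normal(40.0, 30.0, size=(64, 64, 64))
ct[16:48, 16:48, 16:48] += 400.0          # a block of "bone"

skull = ct > 300                          # illustrative threshold, not calibrated
roi = np.zeros_like(skull)
roi[:, :, :32] = True                     # e.g., one half of the volume

half_skull = dissect(skull, roi)
print(skull.sum(), "bone voxels ->", half_skull.sum(), "in the region of interest")
# Each binary volume (skull, half_skull, ...) can be tiled and shaded
# independently and related back to the full volume by shared coordinates.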
