Comparison of Volume Rendering and Volume Modeling

Volume modeling is a process of depicting multiple interfaces or surfaces throughout a volume. The individual surfaces are generally displayed using surface rendering, but volume rendering and surface rendering can also be combined in a volume modeling process [26,56]. The general sequence of steps involved in volume modeling [7,26,34,55] is illustrated in Fig. 11. The process includes explicit segmentation; identification of the segmented surface, with or without feature extraction to help drive the modeling process; and tiling to represent the identified surface as a set of polygons connected and configured so as to reflect the extracted features (e.g., surface curvature).
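
To make this sequence concrete, the sketch below tiles a segmented volume into a polygonal surface, assuming a binary segmentation mask is already available. The use of scikit-image's marching cubes, the voxel spacing, and the toy spherical mask are illustrative assumptions; the references above describe the general approach rather than this particular implementation.

```python
# Minimal sketch of the segmentation -> identification -> tiling sequence:
# a binary segmentation mask is tiled into a triangle mesh, and per-vertex
# normals serve as a simple example of extracted surface features.
# scikit-image's marching cubes is used as one possible tiling algorithm;
# the chapter does not prescribe a particular one.
import numpy as np
from skimage import measure

def tile_segmented_surface(mask, spacing=(1.0, 1.0, 1.0)):
    """Tile the boundary of a binary segmentation mask with triangles.

    mask    : 3D array, nonzero inside the segmented object
    spacing : voxel size along each axis (e.g., slice thickness, pixel size)
    """
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals

# Toy example: a segmented sphere in a 64^3 volume (illustrative data only).
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 20 ** 2
verts, faces, normals = tile_segmented_surface(mask, spacing=(1.0, 0.5, 0.5))
print(f"{len(verts)} vertices, {len(faces)} triangles")
```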

FIGURE 8 Diagram of ray geometry used to render selected voxels transparent (or opaque).

Figure 12 shows a comparison of volume rendering and volume modeling of the segmented brain obtained from a 3D MRI scan of the head. The volume-rendered image shows intricate detail in the convolutions of the brain parenchyma, and this level of detail can be displayed throughout the volume. The surface or volume model is less detailed, but can be rendered more rapidly. The surface model can also be made transparent to reveal underlying surfaces that have been segmented and tiled within the brain volume; in particular, the interior ventricular system is shown in Figure 12. The reduced level of detail in the volume modeling approach allows the images to be manipulated in real time or near real time, and the models convey accurate size and shape useful in applications such as surgical planning. Figure 13 indicates the level of detail that can be rendered in volume models if high-resolution scans are obtained. This image was rendered from a careful segmentation and surface modeling of the high-spatial-resolution (1/3 mm slices) cryosection data from the Visible Human Female Dataset from the National Library of Medicine [1]. The ocular orbits, cornea, ocular muscles, and optic nerves can be seen, along with structural elements of the inner ear, including the semicircular canals, cochlea, and vestibule.
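
The sketch below illustrates, under stated assumptions, the kind of surface-model display described above: the outer surface is decimated to reduce polygon count for interactive manipulation and rendered semi-transparently over an opaque inner surface. VTK is used only as a representative toolkit, and the mesh file names (cortex.vtk, ventricles.vtk) are hypothetical placeholders for previously segmented and tiled surfaces.

```python
# Sketch of the surface-model display described above: a decimated,
# semi-transparent outer surface (e.g., cerebral cortex) rendered over an
# opaque inner surface (e.g., the ventricular system). VTK is used here as
# one representative toolkit; the file names are hypothetical placeholders.
import vtk

def make_surface_actor(path, opacity=1.0, reduction=0.0):
    reader = vtk.vtkPolyDataReader()
    reader.SetFileName(path)

    # Optional decimation: fewer polygons permit real-time manipulation
    # at the cost of surface detail.
    decimate = vtk.vtkDecimatePro()
    decimate.SetInputConnection(reader.GetOutputPort())
    decimate.SetTargetReduction(reduction)
    decimate.PreserveTopologyOn()

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(decimate.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.GetProperty().SetOpacity(opacity)  # < 1.0 reveals interior surfaces
    return actor

renderer = vtk.vtkRenderer()
renderer.AddActor(make_surface_actor("cortex.vtk", opacity=0.3, reduction=0.8))
renderer.AddActor(make_surface_actor("ventricles.vtk"))

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
interactor.Initialize()
window.Render()
interactor.Start()
```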

Figure 14 shows similar high-resolution volume renderings of registered multimodality data, in particular CT and MR image data. The rendering of the skull in the upper left-hand corner shows the image quality that can be obtained with current volume rendering algorithms. The rendering can be made transparent, as shown in the upper right-hand corner, to reveal inner hard-tissue interfaces such as the sinuses. Transparency can also be used to reveal soft tissue underlying hard tissue, as indicated in the lower left-hand panel, where the cerebral cortex is revealed through the skull bone. Using perspective in the volume rendering permits accurate close-up views from within the rendered structures, as shown in the lower right-hand corner. Here a posterior-to-anterior view from inside the head is constructed, revealing the mandible and frontal sinuses from a point of view near the back of the jaw.

Finally, Fig. 15 shows an example of texture mapping onto a deformable volume model [6,32,70,71]. A portion of the torso of the Visible Human Male [1] is segmented and modeled with a mesh of 40,000 polygons, each of which is mapped with color values from corresponding points in the raw cryosection data. Cutting can be simulated by deforming the polygon mesh and applying the new textures in near real time in response to forces applied locally to the volume model [32].
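
As a rough illustration of the texture-mapping step behind Fig. 15, the sketch below assigns each mesh vertex a color sampled from the cryosection volume at the corresponding point. The array names, voxel spacing, and use of SciPy interpolation are assumptions for illustration; the deformation and cutting machinery described in [32] is not reproduced here.

```python
# Sketch of the texture-mapping idea behind Fig. 15: each vertex of the
# polygon mesh is assigned a color sampled from the cryosection volume at
# the corresponding location. Array names and spacing are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_vertex_colors(color_volume, verts, spacing):
    """Sample RGB values from a cryosection volume at mesh vertex positions.

    color_volume : (Z, Y, X, 3) array of cryosection RGB values
    verts        : (N, 3) vertex coordinates in physical (z, y, x) units
    spacing      : voxel size per axis, used to convert to voxel indices
    """
    idx = (verts / np.asarray(spacing)).T  # fractional voxel indices, (3, N)
    return np.stack(
        [map_coordinates(color_volume[..., c], idx, order=1) for c in range(3)],
        axis=1,
    )  # (N, 3) colors, one per vertex

# Surfaces newly exposed by a simulated cut can be textured the same way,
# by sampling the cryosection volume at the positions of the new vertices.
```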
