
FIGURE 4 A two-dimensional scenario.

FIGURE 5 The three step algorithm.

as balancing the work between the hardware and the software. We employ a three-step approach, depicted in Fig. 5. First, we augment Wilhelms and Van Gelder's algorithm [4] by traversing down the octree in a front-to-back order in addition to pruning empty subtrees based on the min-max values stored at the octree nodes. The second step employs coarse software visibility tests for each [meta-] cell that intersects the isosurface. The aim of these tests is to determine whether the [meta-] cell is hidden from the viewpoint by previously extracted sections of the isosurface (hence the requirement for a front-to-back traversal). Finally, the triangulation of the visible cells is forwarded to the graphics accelerator for rendering by the hardware. It is at this stage that the final and exact [partial] visibility of the triangles is resolved. A dataflow diagram is depicted in Fig. 6.
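As a concrete illustration, the three steps can be sketched on a toy min-max tree. The `Node` class and the `occluded` predicate below are our own simplifications, not the paper's data structures; in the actual algorithm the occlusion test is the virtual-screen projection described next.

```python
# Illustrative sketch of the three-step traversal on a toy min-max tree.
# Node, its fields, and the occluded predicate are hypothetical stand-ins.
class Node:
    def __init__(self, vmin, vmax, children=None, triangles=None):
        self.min, self.max = vmin, vmax
        self.children = children or []   # assumed ordered front-to-back
        self.triangles = triangles or []

def traverse(node, isovalue, occluded, out):
    # Step 1: prune subtrees whose min-max range excludes the isovalue.
    if not (node.min <= isovalue <= node.max):
        return
    # Step 2: coarse visibility test (a simple predicate stands in for
    # the virtual-screen occlusion mask).
    if occluded(node):
        return
    if node.children:
        # Front-to-back order keeps the occlusion mask conservative.
        for child in node.children:
            traverse(child, isovalue, occluded, out)
    else:
        # Step 3: forward the cell's triangles to the graphics hardware.
        out.extend(node.triangles)
```

With `occluded` always false, only cells whose min-max range spans the isovalue contribute triangles.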

Quickly determining whether a meta-cell is hidden, and thus can be pruned, is fundamental to this algorithm. This is implemented by creating a virtual screen with one bit per pixel. We then project the triangles, as they are extracted, onto this screen and set those bits that are covered, providing an occlusion mask.
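A minimal sketch of such a one-bit-per-pixel virtual screen (the class and method names are illustrative, not the paper's API):

```python
# A minimal one-bit-per-pixel virtual screen for occlusion masking.
# Names and resolution are our own; the paper's implementation is tiled.
class VirtualScreen:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.bits = bytearray((width * height + 7) // 8)  # 1 bit per pixel

    def set_pixel(self, x, y):
        i = y * self.width + x
        self.bits[i >> 3] |= 1 << (i & 7)

    def is_set(self, x, y):
        i = y * self.width + x
        return bool(self.bits[i >> 3] & (1 << (i & 7)))

    def any_unset(self, x0, y0, x1, y1):
        # A meta-cell's projected bounding box is still (partially) visible
        # if any pixel inside it remains unset.
        return any(not self.is_set(x, y)
                   for y in range(y0, y1)
                   for x in range(x0, x1))
```

Scanning every pixel of a projection, as `any_unset` does here, is exactly the cost the hierarchical tiles of the next section avoid.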

Additional pruning of the octree nodes is accomplished by projecting the meta-cell onto the virtual screen and checking whether any part of it is visible, i.e., whether any of the pixels it covers are not set. If the entire projection of the meta-cell is not visible, none of its children can be visible.

FIGURE 6 The algorithm data flow.

We note that it is important to quickly and efficiently classify a cell as visible. A hidden cell, along with all of its children, will not be traversed further, which justifies the time invested in its classification. A visible cell, on the other hand, gains no benefit from the test, and the cost of the visibility test is added to the total cost of extracting the isosurface. As such, the cell visibility test should not depend heavily on the projected screen area; otherwise, the cost would prohibit the use of the test for meta-cells at high levels of the octree, exactly those meta-cells that can potentially save the most.

Two components influence the visibility cost: the cost of projecting a point, triangle, or meta-cell onto the screen, and the cost of either scan-converting triangles or determining whether a meta-cell's projected area contains any unset pixels.

In the next sections, we address these costs in two ways. First, we employ a hierarchical tiling for the virtual screen. Second, to reduce the cost of the projection, we use a variation of the shear-warp factorization.

We employ hierarchical tiles [17] as a means of quickly classifying meta-cells and determining the coverage of extracted triangles. The hierarchical nature of the algorithm ensures that the cost of either of these two operations does not depend strongly on the projected area.

FIGURE 8 A triangle tile coverage map.

A coverage map (a tile) is a rectangular bitmap (we use 8x8) in which each bit represents a pixel in the final image. The algorithm is based on the premise that all the possible coverages of a single edge crossing a tile can be precomputed and tabulated based on the points where the edge intersects the tile border (Fig. 7). The coverage pattern of a convex polygon for a particular tile of the image is computed by combining the coverage maps of the polygon edges. The coverage map of a triangle can thus be computed from three precomputed tiles with no dependency on the number of pixels the triangle actually covers (Fig. 8). We refer the reader to the work by Greene [17] for a detailed explanation of how the three states (Covered, Partially covered, and Not-covered) can be represented by two tile masks, and for the rules for combining coverage maps.
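To illustrate how edge coverage masks combine, the sketch below works on an 8x8 tile stored as a 64-bit integer. For clarity it evaluates the edge equation at every pixel center rather than looking the mask up in Greene's precomputed table; the function names and winding convention are our own.

```python
# Sketch: coverage masks on an 8x8 tile, one bit per pixel (bit i = y*8 + x).
# Greene's method looks these masks up in a table indexed by the edge's
# border crossings; here we evaluate the edge equation directly for clarity.
def edge_mask(x0, y0, x1, y1):
    # Set bit i when pixel center (x+0.5, y+0.5) lies on the non-negative
    # side of the directed edge (x0,y0)->(x1,y1).
    mask = 0
    for y in range(8):
        for x in range(8):
            cx, cy = x + 0.5, y + 0.5
            if (x1 - x0) * (cy - y0) - (y1 - y0) * (cx - x0) >= 0:
                mask |= 1 << (y * 8 + x)
    return mask

def triangle_mask(v0, v1, v2):
    # A convex polygon's coverage is the AND of its edge masks; vertices
    # must be wound so interior pixels fall on each edge's non-negative side.
    return (edge_mask(*v0, *v1) &
            edge_mask(*v1, *v2) &
            edge_mask(*v2, *v0))
```

Once masks are available, combining them is a handful of bitwise operations per tile, independent of the triangle's pixel area, which is the point of the technique.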

Rendering a polygon amounts to computing the coverage map of the polygon for each tile in the image and isolating only those pixels that are covered by the polygon but were not already covered. To accelerate the rendering, the tiles are organized in a hierarchical structure in which each meta-tile represents a block of [meta-] tiles. Under this structure, a polygon is projected onto the top meta-tile, and only those subtiles in which the polygon might be visible are checked recursively, leading to a logarithmic search.

Our implementation differs from the one proposed by Greene in that we do not actually render the visible portion of a visible triangle. Rather, we mark the triangle as visible and forward it to the graphics hardware. It is then left to the graphics


accelerator to determine which pieces of the triangle are actually visible and correctly render them.

One should note that it is not possible to determine a priori the front-to-back relations between the triangles inside a single cell. It is therefore mandatory to accept all or none of the triangles, even though they need to be projected on the hierarchical tiles one triangle at a time. Figure 9 shows the classification of the cells as well as the portions of the isolines that are extracted. Note that the entire isoline section in a visible cell (shown in light gray) is extracted. The nonvisible portions will be later removed by the graphics accelerator.

An additional feature we employ limits recursion down the octree once the size of a meta-cell is approximately the size of a single pixel. Instead, we forward a single point with an associated normal to the graphics hardware, similar to the dividing cubes method [18]. The normal is estimated by the gradient of the field. The advantage of this method is that the single point potentially represents a large number of polygons since the meta-cell that projects to a pixel may still be high in the octree.
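The gradient-based normal estimate mentioned above might be sketched with central differences; the `field` callable and the step size `h` are our own assumptions, standing in for samples of the scalar field.

```python
# Sketch: estimating a point normal from the field gradient by central
# differences. `field` is any scalar function of (x, y, z); in the actual
# algorithm the samples come from the volume grid.
def gradient_normal(field, x, y, z, h=1.0):
    gx = (field(x + h, y, z) - field(x - h, y, z)) / (2 * h)
    gy = (field(x, y + h, z) - field(x, y - h, z)) / (2 * h)
    gz = (field(x, y, z + h) - field(x, y, z - h)) / (2 * h)
    n = (gx * gx + gy * gy + gz * gz) ** 0.5 or 1.0  # guard zero gradient
    return (gx / n, gy / n, gz / n)
```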

3.3 Warped IsoSurface Extraction (WISE)

A key component in the visibility test is the projection of a point, a triangle, or a meta-cell onto the screen. In general, the perspective projection of a point is a 4x4 transformation followed by two divide operations, for a total of 16 multiplications, 12 additions, and 2 divisions per vertex. Clearly, the cost of performing such transformations for each and every vertex of the projected meta-cells and triangles is too high. In addition, the nonlinearity of the perspective transformation prohibits the use of precomputed transformation tables. To accelerate this critical step, we take advantage of the shear-warp factorization of the viewing transformation.

FIGURE 9 Cells and isolines visibility.

FIGURE 7 An edge tile.

In 1994, Lacroute [19,20] presented a volume rendering method that was based on the shear-warp factorization of the viewing transformation. The underlying idea is to factor the viewing transformation into a shear followed by a warp transformation. The data is first projected into a sheared object space that is used to create an intermediate, albeit warped, image. Once this image is complete, a warping transformation is applied to create the correct final image. Figure 10 illustrates the shear-warp transformation for both orthographic and perspective projections.

The advantage of this method is that the intermediate image is aligned with one of the data set faces. This alignment enables the use of a parallel projection of the 3D data set. The warp stage is then applied to a 2D image rather than to each data point.

We now note that the visibility on the image plane and on the warped projection plane is the same (see Fig. 11). In other words, any point in the data set that is visible on the image plane is also visible on the warped projection plane, and similarly, points that are occluded on the image plane are also occluded on the warped plane. It is therefore sufficient to perform the visibility tests on the warped projection plane. The advantage of this approach is twofold. First, the perspective projection is removed. Second, since the shear and scale factors are constant for each slice with respect to the current viewpoint, we can precompute them once for each new viewpoint.

Let [X, Y, Z] be the coordinate system of the data set and let [sx, sy, sz] be the scaling vector of the data with respect to this coordinate system. Let us assume, without loss of generality, that the current warped projection plane is Z = 0. We first transform the current eye location onto the [X, Y, Z] coordinate system and then precompute the shear and scale coefficients:

foreach Z:
    s         = Z * sz / (Z * sz - eyez)
    scalex[Z] = (1 - s) * sx
    scaley[Z] = (1 - s) * sy
    shearx[Z] = s * eyex
    sheary[Z] = s * eyey

The projection of any grid point p(x, y, z) can now be computed as

x' = x * scalex[z] + shearx[z]
y' = y * scaley[z] + sheary[z]

for a total of two multiplications and two additions per vertex.
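Under the factorization above, the per-slice tables and the grid-point projection can be sketched as follows. Variable names follow the pseudocode; we assume the eye does not lie in any slice plane, so the denominator is nonzero.

```python
# Sketch of the per-slice precomputation and grid-point projection.
# (sx, sy, sz) is the data scaling vector; (eyex, eyey, eyez) is the eye
# position in the data coordinate system, with eyez != z*sz for all slices.
def precompute_tables(nz, sx, sy, sz, eyex, eyey, eyez):
    scalex, scaley, shearx, sheary = [], [], [], []
    for z in range(nz):
        s = z * sz / (z * sz - eyez)
        scalex.append((1 - s) * sx)
        scaley.append((1 - s) * sy)
        shearx.append(s * eyex)
        sheary.append(s * eyey)
    return scalex, scaley, shearx, sheary

def project_grid_point(p, tables):
    # Two multiplications and two additions per grid vertex.
    scalex, scaley, shearx, sheary = tables
    x, y, z = p
    return (x * scalex[z] + shearx[z], y * scaley[z] + sheary[z])
```

Note that the division appears only once per slice, in the precomputation; the per-vertex work is just the two multiply-adds.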

While the Z coordinate of every grid point is known in advance and thus the shear and scale factor can be precomputed for each new viewpoint, the same does not hold true for the vertices of the isosurface triangles. However, since the projection onto the warped projection plane is orthographic, it can be shown that a vertex projection is

x = px + s * (eyex - px)
y = py + s * (eyey - py)

for a total of two multiplications, five additions, and one division.
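A sketch of this per-vertex projection, assuming s is computed from the vertex's z coordinate exactly as in the per-slice formula (the function name is ours):

```python
# Sketch: projecting an isosurface vertex (px, py, pz) onto the warped
# plane Z = 0. Unlike grid points, pz is not a slice index, so s is
# computed per vertex (assumed here to follow the per-slice formula).
def project_vertex(px, py, pz, sz, eyex, eyey, eyez):
    s = pz * sz / (pz * sz - eyez)   # one division per vertex
    x = px + s * (eyex - px)
    y = py + s * (eyey - py)
    return x, y
```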

Many applications, including most medical imaging techniques, generate scalar fields p(x, y, z) that can be viewed by displaying isosurfaces where p(x, y, z) = piso. Ideally, the value for piso is interactively controlled by the user. When the scalar field is stored as a structured set of point samples, the most common technique for generating a given isosurface is to create an explicit polygonal representation for the surface using a technique such as marching cubes [1,2]. This surface is subsequently rendered with attached graphics hardware accelerators such as the SGI Infinite Reality. Marching cubes can generate an extraordinary number of polygons, which take time to construct and to render. For very large (i.e., greater than several million polygons) surfaces the isosurface extraction and rendering times limit the interactivity. In this paper, we generate images of isosurfaces directly with no intermediate surface representation through the use of ray tracing. Ray tracing for isosurfaces has been used in the past (e.g., [21-23]), but we apply it to very large data sets in an interactive setting for the first time.

The basic ray-isosurface intersection method used in this paper is shown in Fig. 12. Conventional wisdom holds that ray tracing is too slow to be competitive with hardware z-buffers. However, when rendering a surface from a sufficiently large data set, ray tracing should become competitive as its low time complexity overcomes its large time constant [24]. The same arguments apply to the isosurfacing problem. Suppose we have an n x n x n rectilinear volume that for a

FIGURE 10 The shear-warp transformation for orthographic and perspective projections.
