Data Sets

Yarden Livnat, Steven G. Parker, and Christopher R. Johnson
University of Utah

1 Introduction
2 Accelerated Search
2.1 The Span Space • 2.2 The NOISE Algorithm • 2.3 Optimization • 2.4 Other Span Space Algorithms
3 View Dependent Algorithm
3.1 Visibility • 3.2 Image Space Culling • 3.3 Warped IsoSurface Extraction (WISE)
4 Real-Time Ray Tracing
4.1 Ray-Isosurface Intersection • 4.2 Optimizations • 4.3 Real-Time Ray Tracing Results
5 Example Applications
References

1 Introduction

Isosurface extraction is a powerful tool for investigating volumetric scalar fields and has been used extensively in medical imaging ever since the seminal paper by Lorensen and Cline on marching cubes [1,2]. In medical imaging applications, isosurfaces permit the extraction of anatomical structures and tissues.

Since the inception of medical imaging, scanners have continually increased in resolution. This increased image resolution has been instrumental in the use of 3D images for diagnosis, surgical planning, and, with the advent of the GE Open MRI system, for surgery itself. Such increased resolution, however, has caused researchers to look beyond marching cubes in order to obtain near-interactive rates for such large-scale imaging data sets. As such, there has been a renewed interest in creating isosurface algorithms that have optimal complexity and can take advantage of advanced computer architectures.

In this chapter, we discuss three techniques developed by the authors (and colleagues) for fast isosurface extraction for large-scale imaging data sets. The first technique is the near optimal isosurface extraction (NOISE) algorithm for rapidly extracting isosurfaces. Using a new representation, termed the span space, of the underlying domain, we develop an isosurface extraction algorithm with a worst-case complexity of O(√n + k) for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n), while the preprocessing step is O(n log n). We note that we can utilize the span space representation as a tool for comparing other isosurface extraction methods on structured (and unstructured) grids.

While algorithms such as NOISE effectively have eliminated the search phase bottleneck, the cost of constructing and rendering the isosurface remains high. Many of today's large imaging data sets contain very large and complex isosurfaces that can easily overwhelm even state-of-the-art graphics hardware. As such, we discuss an output-sensitive algorithm that is based on extracting only the visible portion of the isosurface. The visibility tests are done in two phases. First, coarse visibility tests are performed in software to determine the visible cells. These tests are based on hierarchical tiles and shear-warp factorization. The second phase resolves the visible portions of the extracted triangles and is accomplished by the graphics hardware.

When an isosurface is extracted from a large imaging data set by the previous two techniques (or by other marching cube-like methods), an explicit polygonal representation of the surface is created. This surface is subsequently rendered with attached graphics hardware accelerators. Such explicit geometry-based isosurface extraction methods can generate an extraordinary number of polygons, which take time to construct and to render. For very large (i.e., greater than several million polygons) surfaces, the isosurface extraction and rendering times limit the interactivity.

In the third technique we describe, we generate images of isosurfaces directly, with no intermediate surface representation, through the use of ray tracing. Using parallel processing and incorporating simple optimizations enables interactive rendering (i.e., 10 frames per second) of the 1-Gbyte, full-resolution Visible Woman CT data set on an SGI Origin 2000.

2 Accelerated Search

2.1 The Span Space

Let $\varphi : G \rightarrow V$ be a given field and let $\mathcal{D}$ be a sample set over $\varphi$ such that

$\mathcal{D} = \{d_i\}$; $d_i \in D = G \times V$,

where $G \subseteq R^p$ is a geometric space and $V \subseteq R^q$, for some $p, q \in \mathbb{Z}$, is the associated value space. Also, let $d = \|\mathcal{D}\|$ be the size of the data set.

Definition 1 (Isosurface Extraction): Given a set of samples $\mathcal{D}$ over a field $\varphi : G \rightarrow V$, and given a single value $v \in V$, find

$S = \{g_i\}$, $g_i \in G$, such that $\varphi(g_i) = v$.   (1)

Note that S, the isosurface, need not be topologically simple.

Approximating an isosurface, S, as a global solution to Eq. (1) can be a difficult task because of the sheer size, d, of a large imaging data set.

In thinking about imaging data sets, one can decompose the geometric space, G, into a set of polyhedral cells (voxels), C, where the data points define the vertices. While n = ||C||, the number of cells, is typically an order of magnitude larger than d, the approximation of the isosurface over C becomes a manageable task. Rather than finding a global solution one can seek a local approximation within each cell. Hence, isosurface extraction becomes a two-stage process: locating the cells that intersect the isosurface and then, locally, approximating the isosurface inside each such cell [1,2]. We focus our attention on the problem of finding those cells that intersect an isosurface of a specified isovalue.

On structured grids, the position of a cell can be represented in the geometric space G. Because this representation does not require explicit adjacency information between cells, isosurface extraction methods on structured grids conduct searches over the geometric space, G. The problem as stated by these methods [3-6] is defined as follows:

Approach 1 (Geometric Search): Given a value $v \in V$ and given a set $C$ of cells in $G$ space, where each cell is associated with a set of values $\{v_j\} \in V$ space, find the subset of $C$ that an isosurface, of value $v$, intersects.

Another approach is to forgo the geometric location of a cell and examine only the values at the cell vertices. The advantage of this approach is that one needs only to examine the minimum and maximum values of a cell to determine if an isosurface intersects that cell. Hence, the dimensionality of the problem reduces to two for scalar fields.

Current methods for isosurface extraction that are based on this value space approach [7-9] view the isosurface extraction problem in the following way:

Approach 2 (Interval Search): Given a value $v \in V$ and given a set of cells represented as intervals

$I = \{[a_i, b_i]\}$ such that $a_i \leq b_i \ \forall [a_i, b_i] \in I$,

find the subset $I_s$ such that

$I_s \subseteq I$ and $a_i \leq v \leq b_i \ \forall [a_i, b_i] \in I_s$,

where a norm should be used when the dimensionality of $V$ is greater than 1.

The method presented in this section addresses the search over the value space. Our approach is not to view the problem as a search over intervals in $V$ but rather as a search over points in $V^2$. We start with an augmented definition of the search space.

Definition 2 (The Span Space): Let $C$ be a given set of cells. Define a set of points $P = \{p_i\}$ over $V^2$ such that

$\forall c_i \in C$ associate $p_i = (a_i, b_i)$,

where $a_i = \min_j \{v_j\}_i$ and $b_i = \max_j \{v_j\}_i$, and $\{v_j\}_i$ are the values of the vertices of cell $i$.
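To make Definition 2 concrete, the following sketch computes one span point per cell from its vertex values. The Cell and SpanPoint types, and the function name, are illustrative assumptions rather than a data layout prescribed by this chapter.

#include <algorithm>
#include <vector>

// Illustrative types: a cell carries the scalar values at its vertices, and a
// span point records the cell's minimum and maximum values plus a back index.
struct Cell {
    std::vector<float> vertexValues;   // the values {v_j} at the cell's vertices
};

struct SpanPoint {
    float min;   // a_i = min_j {v_j}_i
    float max;   // b_i = max_j {v_j}_i
    int   cell;  // index of the originating cell c_i
};

// Map every cell c_i to its span point p_i = (a_i, b_i).
// Each cell is assumed to have at least one vertex value.
std::vector<SpanPoint> buildSpanSpace(const std::vector<Cell>& cells) {
    std::vector<SpanPoint> points;
    points.reserve(cells.size());
    for (int i = 0; i < static_cast<int>(cells.size()); ++i) {
        const std::vector<float>& v = cells[i].vertexValues;
        auto [mnIt, mxIt] = std::minmax_element(v.begin(), v.end());
        points.push_back({*mnIt, *mxIt, i});
    }
    return points;
}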

Though conceptually not much different from the interval space, the span space will, nevertheless, lead to a simple and near-optimal search algorithm.

A key point is that points in two dimensions exhibit no explicit relations between themselves, while intervals tend to be viewed as stacked on top of each other, so that overlapping intervals exhibit merely coincidental links. Points do not exhibit such arbitrary ties and in this respect lend themselves to many different organizations. However, as we shall show later, previous methods grouped these points in very similar ways, because they looked at them from an interval perspective.

Using our augmented definition, the isosurface extraction problem can be stated as follows.

Approach 3 (The Span Search): Given a set of cells, $C$, and its associated set of points, $P$, in the span space, and given a value $v \in V$, find the subset $P_s \subseteq P$ such that

$\forall (x_i, y_i) \in P_s$, $x_i < v$ and $y_i > v$.

We note that $\forall (x_i, y_i) \in P_s$, $x_i < y_i$, and thus the associated points will lie on or above the line $y_i = x_i$. A geometric perspective of the span search is given in Fig. 1.
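Before introducing an accelerated search, it is worth spelling out Approach 3 as a brute-force scan: keep every span point that lies strictly to the left of the vertical line at v and strictly above the horizontal line at v. This O(n) reference version is only a sketch, reusing the illustrative SpanPoint type from the previous listing.

#include <vector>

struct SpanPoint { float min; float max; int cell; };   // as in the previous sketch

// Brute-force span search: test every span point against x_i < v < y_i.
std::vector<int> spanSearchLinear(const std::vector<SpanPoint>& points, float isovalue) {
    std::vector<int> active;
    for (const SpanPoint& p : points)
        if (p.min < isovalue && p.max > isovalue)
            active.push_back(p.cell);    // cell p.cell intersects the isosurface
    return active;
}

The remainder of this section replaces this linear scan with a kd-tree search whose worst case is O(√n + k).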

2.2 The NOISE Algorithm

A common obstacle for all the interval methods was that the intervals were ordered according to either their maximum or their minimum value. The sweeping simplices algorithm [9] attempted to tackle this issue by maintaining two lists of the intervals, ordered by the maximum and minimum values. What was missing, however, was a way to combine these two lists into a single list.

In the following, we present a solution to this obstacle. Using the span space as our underlying domain, we employ a kd-tree as a means for simultaneously ordering the cells according to their maximum and minimum values.

Kd-Trees

Kd-trees were designed by Bentley in 1975 [10] as a data structure for efficient associative searching. In essence, kd-trees are a multidimensional version of binary search trees. Each node in the tree holds one of the data values and has two subtrees as children. The subtrees are constructed so that all the nodes in one subtree, the left one for example, hold values that are less than the parent node's value, while the values in the right subtree are greater than the parent node's value.

Binary trees partition data according to only one dimension. Kd-trees, on the other hand, utilize multidimensional data and partition the data by alternating between each of the dimensions of the data at each level of the tree.

Search over the Span Space Using the Kd-Tree

Given a data set, a kd-tree that contains pointers to the data cells is constructed. Using this kd-tree as an index to the data set, the algorithm can now rapidly answer isosurface queries. Figure 2 depicts a typical decomposition of a span space by a kd-tree.

Construction

The construction of the kd-tree can be done recursively in optimal time O(n log n). The approach is to find the median of the data values along one dimension and store it at the root node. The data is then partitioned according to the median and recursively stored in the two subtrees. The partition at each level alternates between the min and max coordinates.

An efficient way to achieve O(n log n) time is to recursively find the median in O(n), using the method described by Blum et al. [11], and partition the data within the same time bound.

A simpler approach is to sort the data into two lists according to the maximum and minimum coordinates, respectively, in O(n log n) time. The first partition accesses the median of the first list, the min coordinate, in constant time, and marks all the data points with values less than the median. We then use these marks to construct the two subgroups, in O(n), and continue recursively.

FIGURE 1 Search over the span space.

FIGURE 2 Kd-tree.

Though the preceding methods have complexity of O(n log n), they do have weaknesses. Finding the median in optimal time of O(n) is theoretically possible, yet difficult to program. The second algorithm requires sorting two lists and maintaining a total of four lists of pointers. Although it is still linear with respect to its memory requirement, it nevertheless poses a problem for very large data sets.

A simple (and we think elegant) solution is to use a Quicksort-based selection [12]. Although this method has a worst case of O(n²), the average case is only O(n). Furthermore, this selection algorithm requires no additional memory and operates directly on the tree.
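As one concrete reading of this approach, the sketch below builds the balanced tree in place over a flat array of span points using std::nth_element, a Quicksort-style selection with O(n) expected cost, alternating the partitioning coordinate at each level. The flat layout (root at the middle of each block) anticipates the pointerless representation discussed in Section 2.3; the SpanPoint type and function names are illustrative, not the authors' code.

#include <algorithm>
#include <vector>

struct SpanPoint { float min; float max; int cell; };   // one node per cell

// Recursively build a balanced kd-tree over points[lo, hi) in place.
// Even levels partition by the min coordinate, odd levels by the max coordinate.
void buildKdTree(std::vector<SpanPoint>& points, int lo, int hi, int level = 0) {
    if (hi - lo <= 1) return;                        // empty or single-node block
    int mid = lo + (hi - lo) / 2;                    // the median element becomes the block's root
    auto byMin = [](const SpanPoint& a, const SpanPoint& b) { return a.min < b.min; };
    auto byMax = [](const SpanPoint& a, const SpanPoint& b) { return a.max < b.max; };
    if (level % 2 == 0)
        std::nth_element(points.begin() + lo, points.begin() + mid, points.begin() + hi, byMin);
    else
        std::nth_element(points.begin() + lo, points.begin() + mid, points.begin() + hi, byMax);
    buildKdTree(points, lo, mid, level + 1);         // left subtree occupies the first half
    buildKdTree(points, mid + 1, hi, level + 1);     // right subtree occupies the second half
}

Invoking buildKdTree(points, 0, static_cast<int>(points.size())) on the span points computed earlier yields the expected O(n log n) construction; a median-of-medians selection [11] could be substituted if a guaranteed worst case is required.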

It is clear that the kd-tree has one node per cell, or span point, and thus the memory requirement of the kd-tree is O(n).

Query

Given an isovalue, v, we seek to locate all the points in Fig. 1 that are to the left of the vertical line at v and are above the horizontal line at v. We note that we do not need to locate points that are on these horizontal or vertical lines if we assume nondegenerate cells, that is, cells whose minimum and maximum values are each defined by a single vertex. We will remove this restriction later.

The kd-tree is traversed recursively, comparing the isovalue to the value stored at the current node and alternating between the minimum and maximum values at each level. If the node is to the left (above) of the isovalue line, then only the left (right) subtree should be traversed. Otherwise, both subtrees should be traversed recursively. For efficiency we define two search routines, SearchMinMax and SearchMaxMin. The dimension named first is the one currently being checked, while the dimension named second is the one we still need to search. The importance of naming the second dimension will become evident in the next section, when we consider optimizing the algorithm.

Following is a short pseudocode for the min-max routine.

SearchMinMax (isovalue, node)
{
    if (node.min < isovalue) {
        if (node.max > isovalue)
            construct a polygon(s) from node
        SearchMaxMin (isovalue, node.right);
    }
    SearchMaxMin (isovalue, node.left);
}
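A possible concrete version of both routines is sketched below, written over the same flat span-point array as the construction sketch above: the root of each block sits at its middle index, the left subtree occupies the first half, and the right subtree the second half, matching the pointerless layout described in Section 2.3. Collecting the qualifying cell indices into a vector stands in for constructing the polygons; the names and layout are illustrative rather than the authors' implementation.

#include <vector>

struct SpanPoint { float min; float max; int cell; };   // flat-array node, as before

void searchMaxMin(const std::vector<SpanPoint>& tree, int lo, int hi,
                  float isovalue, std::vector<int>& active);

// Level partitioned by the min coordinate: the right half of the block can be
// trimmed whenever the root's min is not below the isovalue.
void searchMinMax(const std::vector<SpanPoint>& tree, int lo, int hi,
                  float isovalue, std::vector<int>& active) {
    if (lo >= hi) return;                                 // empty block
    int mid = lo + (hi - lo) / 2;
    const SpanPoint& node = tree[mid];
    if (node.min < isovalue) {
        if (node.max > isovalue)
            active.push_back(node.cell);                  // "construct a polygon(s) from node"
        searchMaxMin(tree, mid + 1, hi, isovalue, active);    // right subtree
    }
    searchMaxMin(tree, lo, mid, isovalue, active);        // left subtree is always examined
}

// Symmetric routine for levels partitioned by the max coordinate: the left half
// can be trimmed whenever the root's max does not exceed the isovalue.
void searchMaxMin(const std::vector<SpanPoint>& tree, int lo, int hi,
                  float isovalue, std::vector<int>& active) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2;
    const SpanPoint& node = tree[mid];
    if (node.max > isovalue) {
        if (node.min < isovalue)
            active.push_back(node.cell);
        searchMinMax(tree, lo, mid, isovalue, active);    // left subtree
    }
    searchMinMax(tree, mid + 1, hi, isovalue, active);    // right subtree is always examined
}

A full query starts with searchMinMax over the entire array, since the root level is partitioned by the min coordinate.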

Estimating the complexity of the query is not straightforward. Indeed, the analysis of the worst case was developed by Lee and Wong [13] only several years after Bentley introduced kd-trees. Clearly, the query time is proportional to the number of nodes visited. Lee and Wong analyzed the worst case by constructing a situation where all the visited nodes are not part of the final result. Their analysis showed that the worst-case time complexity is O(√n + k). The average-case analysis of a region query is still an open problem, though observations suggest it is much faster than O(√n + k) [12,14]. In almost all typical applications k ~ n^(2/3) > √n, which suggests a complexity of only O(k). On the other hand, the complexity of the isosurface extraction problem is Ω(k), because it is bounded from below by the size of the output. Hence, the proposed algorithm, NOISE, is optimal, Θ(k), for almost all cases and is near optimal in the general case.

Degenerate Cells

A degenerate cell is defined as a cell having more than one vertex with a minimum or maximum value. When a given isovalue is equal to the extremum value of a cell, the isosurface will not intersect the cell. Rather, the isosurface will touch the cell at a vertex, an edge, or a face, depending on how many vertices share that extremum value. In the first two cases, vertex or edge, the cell can be ignored. The last case is more problematic, as ignoring it will lead to a hole in the isosurface. Furthermore, if the face is not ignored, it will be drawn twice.

One solution is to perturb the isovalue by a small amount, so that the isosurface will intersect the inside of only one of those cells. Another solution is to check both sides of the kd-tree when such a case occurs. While the direct cost of such an approach is not too high, as this can happen at most twice, there is a higher cost in performing an equality test at each level. We note that in all the data sets we tested there was not a single case of such a degeneracy.
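One way to realize the perturbation idea, purely as an illustration and not the chapter's prescription, is to nudge the isovalue to the next representable floating-point value, so the query no longer compares equal to the offending extremum.

#include <cmath>
#include <limits>

// Nudge the isovalue upward by one unit in the last place so it no longer
// compares equal to the cell extremum that triggered the degenerate case.
// The direction (and whether to perturb at all) is an application choice.
inline float perturbIsovalue(float isovalue) {
    return std::nextafter(isovalue, std::numeric_limits<float>::infinity());
}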

2.3 Optimization

The algorithm presented in the previous section is not optimal with regard to the memory requirement or search time. We now present several strategies to optimize the algorithm.

Pointerless Kd-Tree

A kd-tree node, as presented previously, must maintain links to its two subtrees. This introduces a high cost in terms of memory requirements. To overcome this, we note that in our case the kd-tree is completely balanced. At each level, one data point is stored at the node and the rest are equally divided between the two subtrees. We can therefore represent a pointerless kd-tree as a one-dimensional array of the nodes. The root node is placed at the middle of the array, while the first n/2 nodes represent the left subtree and the last (n - 1)/2 nodes the right subtree.

When we use a pointerless kd-tree, the memory requirements for our kd-tree, per node, reduce to two real numbers, for the minimum and maximum values, and one pointer back to the original cell for later usage. Considering that each cell, for a 3D application with tetrahedral cells, has pointers to four vertices, the kd-tree memory overhead is even less than the size of the set of cells.

The use of a pointerless kd-tree enables one to compute the tree as an off-line preprocess and load the tree using a single read in time complexity of only O(n). Data acquisition via CT/MRI scans or scientific simulations is generally very time-consuming. The ability to build the kd-tree as a separate preprocess allows one to shift the cost of computing the tree to the data acquisition stage, hence reducing the impact of the initialization stage on the extraction of isosurfaces for large data sets.
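Because the pointerless tree is just a contiguous array of fixed-size records, saving it during preprocessing and reloading it later amounts to one block write and one block read. The sketch below uses a plain binary format of our own devising (the chapter does not specify one) and assumes the file is read back on a machine with the same endianness and struct layout.

#include <cstddef>
#include <cstdio>
#include <vector>

struct SpanPoint { float min; float max; int cell; };   // flat-array node, as before

// Write the pointerless kd-tree as one binary block during preprocessing.
bool saveKdTree(const char* path, const std::vector<SpanPoint>& tree) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::size_t n = tree.size();
    bool ok = std::fwrite(&n, sizeof n, 1, f) == 1 &&
              std::fwrite(tree.data(), sizeof(SpanPoint), n, f) == n;
    std::fclose(f);
    return ok;
}

// Load the tree with a single bulk read; no pointers need fixing up afterward.
bool loadKdTree(const char* path, std::vector<SpanPoint>& tree) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::size_t n = 0;
    bool ok = std::fread(&n, sizeof n, 1, f) == 1;
    if (ok) {
        tree.resize(n);
        ok = std::fread(tree.data(), sizeof(SpanPoint), n, f) == n;
    }
    std::fclose(f);
    return ok;
}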

Optimized Search

The search algorithm can be further enhanced. Let us consider, again, the min-max (max-min) routine. In the original algorithm, if the isovalue is less than the minimum value of the node, then we know we can trim the right subtree. Consider the case where the isovalue is greater than the node's minimum coordinate. In this case, we need to traverse both subtrees. We have no new information with respect to the search in the right subtree, but, for the search in the left subtree we know that the minimum condition is satisfied. We can take advantage of this fact by skipping over the odd levels from that point on. To achieve this, we define two new routines, search-min and search-max. Adhering to our previous notation, the name search-min states that we are only looking for a minimum value.

Examining the search-min routine, we note that the maximum requirement is already satisfied. If the isovalue is less than the current node's minimum, we gain no new information and, as before, only trim off the right subtree. If the isovalue is greater than the node's minimum, we recursively traverse the right subtree, but with regard to the left subtree, we now know that all of its points are in the query's domain. We therefore need only to collect them. Using the pointerless kd-tree proposed in the previous subsection, any subtree is represented as a contiguous block of the tree's nodes. Collecting all the nodes of a subtree requires only sequentially traversing this contiguous block.
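The sketch below gives one possible form of search-min over the flat array used earlier. It is called on a block whose points are already known to satisfy the maximum criterion; at levels partitioned by the min coordinate it trims or collects as described above, while at levels partitioned by the max coordinate (which tell us nothing about the minimum) it simply tests the node and descends into both halves. This max-level handling and all names are our illustrative reading, not the authors' code; the search-max routine is the mirror image.

#include <vector>

struct SpanPoint { float min; float max; int cell; };   // flat-array node, as before

// Gather every cell in the contiguous block [lo, hi): all of them are already
// known to satisfy both the min and the max criteria.
static void collect(const std::vector<SpanPoint>& tree, int lo, int hi,
                    std::vector<int>& active) {
    for (int i = lo; i < hi; ++i)
        active.push_back(tree[i].cell);
}

// search-min over tree[lo, hi): the max criterion already holds for the whole
// block, so only node.min is tested. minLevel tells whether this level was
// partitioned by the min coordinate.
static void searchMin(const std::vector<SpanPoint>& tree, int lo, int hi,
                      float isovalue, bool minLevel, std::vector<int>& active) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2;                         // root of this block
    const SpanPoint& node = tree[mid];
    if (minLevel) {
        if (node.min < isovalue) {
            active.push_back(node.cell);
            collect(tree, lo, mid, active);               // entire left block qualifies
            searchMin(tree, mid + 1, hi, isovalue, false, active);
        } else {
            searchMin(tree, lo, mid, isovalue, false, active);   // trim the right block
        }
    } else {
        // Max-partitioned level: no pruning is possible on the min coordinate,
        // so test the node and descend into both halves.
        if (node.min < isovalue)
            active.push_back(node.cell);
        searchMin(tree, lo, mid, isovalue, true, active);
        searchMin(tree, mid + 1, hi, isovalue, true, active);
    }
}

In the optimized traversal, SearchMinMax would hand its left block to search-max once the minimum condition is known to hold there, and SearchMaxMin would symmetrically hand its right block to search-min.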

We remark that with the current performance of the algorithm and current available hardware, the bottleneck is no longer in finding the isosurface or even in computing it, but rather in the actual time it takes to display it. As such, we look next at a new view-dependent algorithm that constructs and displays only the part of the isosurface that is visible to the user.

2.4 Other Span Space Algorithms

The span space representation has been used by Cignoni et al. [15] to reduce the complexity of the search phase to O(log n + k) at the expense of higher memory requirements.

Shen et al. [16] used a lattice decomposition of the span space for a parallel version on a massively parallel machine.

3 View Dependent Algorithm

The proposed method is based on the observation that isosurfaces extracted from very large data sets often exhibit high depth complexity for two reasons. First, since the data sets are very large, the projection of individual cells tends to be subpixel. This leads to a large number of polygons, possibly nonoverlapping, projecting onto individual pixels. Second, for some data sets, large sections of an isosurface are internal and thus are occluded by other sections of the isosurface, as illustrated in Fig. 3. These internal sections, common in medical data sets, cannot be seen from any direction unless the external isosurface is peeled away or cut off. Therefore, if one can extract just the visible portions of the isosurface, the number of rendered polygons will be reduced, resulting in a faster algorithm. Figure 4 depicts a two-dimensional scenario. In view-dependent methods only the solid lines are extracted, whereas in non-view-dependent isocontouring both the solid and dotted lines are extracted.

The proposed algorithm, which is based on a hierarchical traversal of the data and a marching cubes triangulation, exploits coherency in the object, value, and image spaces, as well

FIGURE 3 A slice through an isosurface reveals the internal sections, which cannot contribute to the final image.