We apply our integrated boundary finding and region-based segmentation to the problem of locating structure in images for which there is prior knowledge of shape that can be represented in a global shape model. We want to determine the surface (or, in 2D, curve) parameters that correspond to the structure that best matches both the boundary strength in the image and the region homogeneity properties.

Integration can be achieved in a sequential manner, where the region-based segmentation is determined first, and then that information is used to optimize the boundary:

min_X F1(X) = min_X [f1(X)],    min_p F2(p, X) = min_p [f2(p) + β f12(p, X)].    (3)

FIGURE 1 Results from our local operator compared to the image gradient. (a) Axial slice from a 3D T1-weighted MR brain image; (b) result from the gradient operator; (c) result from our local operator pBC(θ*), B = gray matter, C = white matter; (d) pAB(θ*), A = CSF, B = gray matter. Our local operator responds selectively to the designated gray-level transition.


Here, X represents the output of the region-based process and p represents the output of the boundary finding module. In the first equation, f1 (X) represents the region-based information and it is optimized to agree with the gray-level models for the tissue types and smoothness as defined by the MRF [7]. In the second, f2(p) represents the shape prior-driven Bayesian contribution of the boundary finding module. The contribution of the boundary module is essentially the integral of the boundary strength at each point on the current surface (or curve), thus enforcing agreement of the determined boundary to the boundary strength in the image. f12(p, X) represents the interaction term that uses the output X of the region process and is essentially an integral over the current region of the classification to the desired tissue type, thus encouraging uniform classification within the region. Both modules operate on the image, which forms the common input; however, the region module operates on the gray-level values directly, while the boundary module makes its decision based on boundary strength derived from the image.
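The sequential scheme of Eq. (3) can be sketched with toy one-dimensional stand-ins for the cost terms. The quadratic costs, the coefficient β = 0.5, and the grid search below are illustrative assumptions, not the actual MRF or Bayesian terms:

```python
# Minimal sketch of the sequential scheme in Eq. (3): the region output X
# is optimized first, then held fixed while the boundary p is optimized
# against f2(p) + beta * f12(p, X).  All cost functions here are toy 1-D
# stand-ins for the actual region, boundary, and interaction terms.

def argmin(f, candidates):
    """Exhaustive minimization over a discrete candidate set."""
    return min(candidates, key=f)

def f1(X):          # toy region (MRF) cost, minimized at X = 2
    return (X - 2.0) ** 2

def f2(p):          # toy boundary (shape-prior) cost, minimized at p = 5
    return (p - 5.0) ** 2

def f12(p, X):      # toy interaction: boundary should agree with region
    return (p - X) ** 2

beta = 0.5
grid = [i * 0.01 for i in range(1001)]   # search over [0, 10]

# Stage 1: region module, independent of the boundary.
X_star = argmin(f1, grid)

# Stage 2: boundary module, using the fixed region output X_star.
p_star = argmin(lambda p: f2(p) + beta * f12(p, X_star), grid)
print(X_star, p_star)
```

Note that the information flows only one way: the boundary estimate p_star is pulled toward the region output, but X_star never benefits from the boundary estimate, which is the limitation the game-theoretic formulation below removes.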

A more powerful approach, however, is the game theoretic formulation [29], where two interacting modules are present, one related primarily to boundary finding and the other primarily related to region growing, but each containing coupling terms that feed information related to the other module:

min_X F1(X, p) = min_X [f1(X) + α f21(X, p)],    min_p F2(p, X) = min_p [f2(p) + β f12(p, X)].

For each different fixed value of p̄1, a different unique optimal response can be found for D2, and the collection of all these points forms l2, the reaction curve of D2. The reaction curve l1 of the first player, D1, is similarly constructed. By definition, the Nash solution must lie on both reaction curves; thus, if these curves meet at only one point, the Nash solution exists and is unique.
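The reaction-curve construction can be made concrete with a small quadratic game. All costs and constants below are invented for illustration and are not the segmentation objectives; the point is that each reaction curve is a straight line, so the Nash solution is their unique intersection:

```python
# Illustrative sketch of reaction curves for a quadratic two-player game
# (toy costs, not the actual segmentation terms).  D1 minimizes F1 over x,
# D2 minimizes F2 over p, each with the other's variable held fixed.

def F1(x, p):
    return (x - 1.0) ** 2 + 0.5 * (x - p) ** 2    # player D1's cost

def F2(p, x):
    return (p - 3.0) ** 2 + 0.5 * (p - x) ** 2    # player D2's cost

# Setting dF1/dx = 0 gives the reaction curve l1; dF2/dp = 0 gives l2.
def l1(p):                  # D1's unique optimal response to a fixed p
    return (2.0 + p) / 3.0

def l2(x):                  # D2's unique optimal response to a fixed x
    return (6.0 + x) / 3.0

# The Nash solution lies on both curves: x = l1(p) and p = l2(x).
# Substituting l2 into l1:  9x = 12 + x  =>  x = 1.5,  p = l2(1.5) = 2.5.
x_nash = 12.0 / 8.0
p_nash = l2(x_nash)

# Sanity check: at the Nash point neither player can improve
# unilaterally (small one-sided deviations only increase each cost).
for d in (-0.1, 0.1):
    assert F1(x_nash + d, p_nash) > F1(x_nash, p_nash)
    assert F2(p_nash + d, x_nash) > F2(p_nash, x_nash)
print(x_nash, p_nash)   # -> 1.5 2.5
```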

The Nash equilibrium solution for game theoretic integration (which can be thought of as the most "rational" decision under the circumstances) is the natural counterpart of the global optimum obtained using the sequential objective optimization approach, although we have found it to be more robust to noise and initialization [7]. This technique has been applied to a variety of 2D [7] and 3D [33] problems.

In order to compute the contribution of the region-based information, it is necessary to compute a volume integral over the region (in two dimensions, an area integral). We can save considerable computation, especially during an iterative optimization procedure, by using Gauss' divergence theorem [2] to convert the volume integral into an integral over the bounding surface (in 2D, a line integral over the bounding curve). Since an integral over the boundary must already be computed for the boundary information, the order of the computational complexity is not increased. We therefore construct a function whose divergence is the function we wish to integrate, by integrating in each of the coordinate directions, and then simply compute the boundary integral of this function during the optimization process (see [33] for details), greatly reducing the necessary computation.
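A minimal 2D illustration of this conversion, assuming a polygonal boundary and the toy integrand f(x, y) = x (both choices are ours, not from [33]): with G = (x²/2, 0) we have div G = x, so the area integral of x over the region equals the line integral of (x²/2) dy around its boundary.

```python
# 2D illustration of the trick described above (Green's theorem, the 2D
# form of the divergence theorem): the area integral of f(x, y) = x over
# a polygonal region is converted to a line integral over its boundary.
# With G = (x**2 / 2, 0), div G = x, so
#     integral_R x dA  =  closed-integral (x**2 / 2) dy.

def region_integral_of_x(polygon):
    """Integrate f(x, y) = x over a polygon using only its boundary.

    polygon: counter-clockwise list of (x, y) vertices.
    Along a straight edge, x is linear in the edge parameter, so the
    edge contribution of (x**2 / 2) dy has a closed form.
    """
    total = 0.0
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        total += (y1 - y0) * (x0 * x0 + x0 * x1 + x1 * x1) / 6.0
    return total

# Unit square [0,1] x [0,1]: the integral of x over it is 1/2.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(region_integral_of_x(square))   # -> 0.5
```

Since the integrand restricted to each straight edge is a polynomial in the edge parameter, the edge contributions have closed forms, and the region term costs no more per iteration than the boundary term already being computed.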

In the first equation, in addition to the MRF term, there is an additional interaction term, f21(X, p), that uses the latest available output p of the boundary module and represents the agreement of the voxels within the current boundary with the assumed gray-level distribution for the indicated tissue type. In the second, the terms are as before, except that the interaction term f12(p, X) now uses the latest available output X of the region process. Here, instead of sequential optimization, the modules assume the roles of players in a two-player game and are optimized in parallel, the cost function of each module including the output that comes as feedback from the other module. The game continues until the players find it impossible to improve their positions unilaterally, that is, without cooperation from the other player. This natural stopping point of the parallel decision-making process constitutes the Nash equilibrium solution.
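The parallel play described here can be sketched as follows. The quadratic costs and the weights α = β = 0.5 are illustrative stand-ins for the actual module objectives, not the formulation of [29]:

```python
# Minimal sketch of the parallel, game-theoretic scheme: two toy
# quadratic costs are minimized in parallel, each player responding to
# the other's latest available output, until neither can improve
# unilaterally -- the Nash equilibrium.

def f1(X):       return (X - 2.0) ** 2     # toy region (MRF) term
def f2(p):       return (p - 5.0) ** 2     # toy boundary term
def f21(X, p):   return (X - p) ** 2       # feedback: boundary -> region
def f12(p, X):   return (p - X) ** 2       # feedback: region -> boundary

alpha, beta = 0.5, 0.5

def F1(X, p): return f1(X) + alpha * f21(X, p)
def F2(p, X): return f2(p) + beta * f12(p, X)

def best_response(cost, lo=-10.0, hi=10.0, steps=2001):
    """Grid search for the minimizer of a 1-D cost on [lo, hi]."""
    h = (hi - lo) / (steps - 1)
    return min((lo + i * h for i in range(steps)), key=cost)

X, p = 0.0, 0.0
for _ in range(50):
    # Parallel play: each player optimizes its own cost against the
    # other's latest available output.
    X_new = best_response(lambda X: F1(X, p))
    p_new = best_response(lambda p: F2(p, X))
    if abs(X_new - X) < 1e-3 and abs(p_new - p) < 1e-3:
        break               # no unilateral improvement: Nash point
    X, p = X_new, p_new

print(X, p)
```

For these toy costs the equilibrium can be checked by hand (set dF1/dX = 0 and dF2/dp = 0 and solve the resulting linear system, giving X = 2.75, p = 4.25), so the iteration visibly settles where neither module can improve its own cost alone.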

To visualize the Nash equilibrium [1], Fig. 2 shows constant-level (isocost) curves of F1(·) and F2(·) for a simple two-dimensional situation. For fixed p1, say p1 = p̄1, the best D2 can do is to minimize F2 along the line p1 = p̄1. Assuming that this minimization problem admits a unique solution, the optimal response of the second player (decision maker D2) is determined in the figure as the point where the line p1 = p̄1 is tangent to an isocost curve F2(·) = constant. Repeating this for each different value of p̄1 traces out the reaction curve of D2, as described above.

Testing of the integrated method, based on the use of shape prior information and game-theoretic reasoning, has been performed on a range of synthetic 3D shapes, as shown in Fig. 3, as well as on a variety of human data (as in Fig. 4) [33]. Plots in Fig. 3 quantify the robustness of the approach to noise and initialization, showing the improvement obtained with the integrated method.

In the brain, some subcortical structures often have poor contrast between gray and white matter. Some gray-matter structures have striate regions, which appear with intermediate intensity. These structures, however, are less variable, in terms of shape, than the cortex. Thus, prior shape information is often necessary here in order to identify boundaries. In Fig. 4, we demonstrate the performance of the 3D integrated method on subcortical structure examples: the head of the right caudate nucleus and the left thalamus. Using the integrated method with a prior shape model, along with region and boundary information, the proper boundaries are found.

Although this method is fairly robust to initialization, the use of a prior shape model sometimes requires manual initialization of position and pose. We have developed a 3D

FIGURE 3 (Left) Synthetic shape used to generate noisy 3D test image. (Middle) Noise performance of the surface finder with and without region information. The combined method has a lower average error for this example, especially at low SNR. (Right) Performance of the surface finder with and without region information under different starting positions varied by shifting the initialization vertically. Clearly, the combined method is superior.


FIGURE 4 Results of surface finding for the head of the left caudate nucleus (top row) and the right thalamus (bottom row) in an MR image. (Left) Three perpendicular slices through the 3D image (1.2 mm³ voxels) are shown with the surface obtained using both the boundary and the region information, and the wireframe. (Right) Manual delineation of the same structures showing good agreement.


visualization tool [31] that allows control of the translation, rotation and scale of the initial model, with reference to the 3D dataset, as shown in Fig. 5.
