Introduction

In this chapter our aim is twofold. First, we point out some limitations of deformable models for medical images and analyze recent works that address these limitations. Next, we offer new perspectives in the area, which are part of our current research in this field.

Deformable models, which include the popular snake models [42] and deformable surfaces [19,48], are well-known techniques for tracking, boundary extraction, and segmentation in 2D/3D images.

Broadly, these models fall into three categories: parametric models, geodesic snakes, and implicit models. The relationships between these formulations have been demonstrated in several works in the literature [57,75].

Parametric deformable models consist of a curve (or surface) which can dynamically conform to object shapes in response to internal (elastic) forces and external forces (image and constraint ones) [6].
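As a concrete illustration of these dynamics, the following sketch (our own minimal example in Python/NumPy, not a cited implementation) advances a closed polygonal snake one explicit Euler step under discrete elastic and bending forces; with a null external field, the internal forces alone make the contour contract slightly.

```python
import numpy as np

def internal_forces(snake, alpha=0.1, beta=0.01):
    """Discrete elastic (second-difference) and bending (fourth-difference)
    forces for a closed snake, using wrap-around finite differences."""
    elastic = alpha * (np.roll(snake, -1, axis=0) - 2 * snake
                       + np.roll(snake, 1, axis=0))
    bending = -beta * (np.roll(snake, -2, axis=0)
                       - 4 * np.roll(snake, -1, axis=0)
                       + 6 * snake
                       - 4 * np.roll(snake, 1, axis=0)
                       + np.roll(snake, 2, axis=0))
    return elastic + bending

# explicit Euler step toward equilibrium: x <- x + dt * (F_int + F_ext)
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
snake = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit circle
f_ext = np.zeros_like(snake)   # external (image) force, null here
snake = snake + 0.5 * (internal_forces(snake) + f_ext)
```

Even this single step shrinks the unit circle a little, anticipating the contraction-force problem discussed later in this chapter.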

For geodesic snakes, the key idea is to cast the evolution of a contour as a geodesic computation. A special metric is proposed (based on the gradient of the image field) so that the minimal-length curve corresponds to the desired boundary. This approach addresses the parameterization dependence of parametric snake models and can be extended to 3D through the theory of minimal surfaces [11,57].

1 National Laboratory for Scientific Computing, Brazil
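A typical edge-stopping metric in this setting weights arc length by a decreasing function of the image gradient; the sketch below (our illustration with hypothetical names, assuming the common conformal weight g = 1/(1 + |grad I|^p), not the exact formulation of [11,57]) computes such a weight on a 2D image:

```python
import numpy as np

def edge_stopping_metric(image, p=2):
    """Conformal weight g = 1 / (1 + |grad I|^p): near strong edges the
    metric shrinks, so the minimal-length (geodesic) curve hugs the
    object boundary."""
    gy, gx = np.gradient(image.astype(float))
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) ** (p / 2))
```

On a step image, g stays at 1 in flat regions and drops below 1 along the edge, which is exactly where a geodesic prefers to run.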

Implicit models, such as the formulation used in [46], consist of embedding the snake as the zero level set of a higher dimensional function and solving the corresponding equation of motion. Such methodologies are best suited to the recovery of objects with unknown topologies.

Parametric deformable models are more intuitive than the implicit and geodesic ones. Their mathematical formulation makes it easier to integrate image data, initial estimates, desired contour properties, and knowledge-based constraints in a single extraction process [6].

However, parametric models also have their limitations. First, most of these methods can only handle objects with simple topology: the topology of the structures of interest must be known in advance, since the mathematical model cannot deal with topological changes without extra machinery [21-47]. Second, parametric snakes are too sensitive to their initial conditions, owing to the nonconvexity of the energy functional and to the contraction force that arises from the internal energy term [37,79]. Several works have addressed these limitations.

Topological restrictions can be addressed through a two-step approach: first, a method identifies the need for a topological operation (split or merge); second, a procedure performs it. In [21] we find such a methodology, which can split a closed snake into two closed parts. This is accomplished by first constructing a histogram of the image force norm along the snake to identify the appropriate region to cut it (the region with the weakest image field). Next, the method identifies two points in this region to be the end points of the segment that will cut the curve into two parts. The criterion is based on the direction of an area force used to make the contour fit concave parts. This methodology has the disadvantage of not handling contour merges, and its extension to the 3D case is very difficult.

In [65] another approach is presented. It seeds particles on the surface of an object until their density on the surface is within some threshold value. Its components are a dynamical particle system and an efficient triangulation scheme that connects the particles into a continuous polygonal surface model consistent with the particle configuration. Particles are oriented; that is, each one has an associated position and normal vector. Interparticle forces encourage neighboring oriented particles to lie in each other's tangent planes, and therefore favor smooth surfaces. This technique has the advantage of dealing easily with open and closed surfaces, and the topology of the particle-based surface can be modified during the triangulation step. However, it has the disadvantages of being expensive (O(N log N), where N is the number of particles) and of making it difficult or cumbersome to find good initial seed particle sites, especially automatically [50].

A more general approach to incorporate topological changes in the parametric snake models is the T-snakes model [47-50]. The method embeds the snake model within a framework defined by a simplicial domain decomposition, using classical results in the field of numerical continuation methods [1]. The resulting model has the power of an implicit one without the need for a higher dimensional formulation [46]. Besides, it can be efficiently extended to 3D, generating the T-surfaces model [49].

The sensitivity to initialization is a very common problem for deformable models. The use of simulated annealing for minimization was proposed in [62]. Despite its global optimization properties, this technique is limited by both its computational complexity and its memory requirements.

Levine et al. [44] applied hierarchical filtering methods, as well as a continuation method based on a discrete scale-space representation. At first, a scale-space scheme is used at a coarse scale to get closer to the global energy minimum represented by the desired contour. In further steps, the optimal valley or contour is sought at increasingly finer scales.

These methods address the nonconvexity problem but not the adverse effects of the internal normal force. This force is a contraction force which makes the curve collapse into a point if the external field is not strong enough. In Cohen [18] and Gang et al. [79] this problem is addressed by adding another internal force term to reduce the adverse effects of the contraction force. In both works the number of parameters increases compared with the original model, and there are trade-offs between efficiency and performance.

Another way to remove the undesired contraction force of the original snake model is to use the concept of invariance, which is well known in the field of computer vision [26, 36]. This concept has been applied to closed contours, and consists in designing an internal smoothing energy, biased toward some prior shape, which has the property of being invariant to scale, rotation, and translation. In these models, the snake has no tendency to expand or contract, but it tends to acquire a natural shape.

An example of a technique that applies invariance concepts is the dual active contour (dual ACM) [37]. This approach basically consists of one contour which expands from inside the target feature and another which contracts from the outside. The two contours are interlinked to provide a driving force that carries the contours out of local minima, which makes the solution less sensitive to the initial position.

The sensitivity to initialization of snakes can also be addressed by a two-stage approach: (1) the region of interest is limited; and (2) a global minimization technique is used to find the object boundary. Bamford and Lovell [4] describe such a method to segment cell nuclei, based on a dynamic programming algorithm (the Viterbi algorithm) to find the solution.

The use of dynamic programming (DP) for solving variational problems is discussed by Amini et al. [2]. Unlike the variational approach, DP ensures global optimality of the solution and does not require estimates of higher order derivatives, which improves numerical stability. However, these techniques are limited by their storage requirements of O(NM²) and computational complexity of O(NM³), where N is the number of snaxels and M is the size of the neighborhood around each snaxel (given a discrete search space with NM points). These performance difficulties can be mitigated by reducing the search space, which is the main point addressed in [32,34].
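The Viterbi-style recursion behind DP snakes can be sketched as follows. This is a first-order version with only a quadratic smoothness term, hence O(NM²) time rather than the O(NM³) of the full formulation with curvature terms; the open-snake simplification and all names are ours.

```python
import numpy as np

def dp_snake(candidates, unary, alpha=1.0):
    """Globally optimal open snake by dynamic programming.

    candidates : (N, M, 2) array, M candidate positions per snaxel.
    unary      : (N, M) external (image) energy at each candidate.
    Returns, for each snaxel, the index of the candidate minimizing
        sum_i unary[i] + alpha * ||x_i - x_{i-1}||^2.
    """
    N, M, _ = candidates.shape
    cost = unary[0].copy()                 # best cost ending at snaxel 0
    back = np.zeros((N, M), dtype=int)     # back-pointers for the path
    for i in range(1, N):
        # pairwise smoothness between every candidate pair (M x M)
        d = candidates[i][None, :, :] - candidates[i - 1][:, None, :]
        pair = alpha * np.sum(d * d, axis=2)
        total = cost[:, None] + pair       # total[k, j]: from k to j
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], np.arange(M)] + unary[i]
    # backtrack the globally optimal path
    path = np.empty(N, dtype=int)
    path[-1] = int(np.argmin(cost))
    for i in range(N - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return path
```

With alpha = 0 each snaxel independently takes its cheapest candidate; as alpha grows, the smoothness term forces neighboring snaxels to agree, which is the global trade-off the variational snake only finds locally.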

In those works, we propose to reduce the search space through the dual-T-snakes model [30], exploiting its ability to get closer to the desired boundaries. The result is two contours close to the border, bounding the search space. Hence, a DP algorithm [2,4,38] can be used more efficiently.

The sensitivity to the initial contour position can also be addressed by a method which automatically initializes the snake close to the boundaries [43]. An efficient methodology in this field would be worthwhile, not only to save computation time, but also to facilitate the specification of parameters, a known problem for snake models [31].

In [29,33] we propose a method to initialize deformable models which is based on properties related to the topology and spatial scale of the objects in 2D or 3D scenes. Assuming some topological and scale properties for the objects of interest, the method first defines a triangulation of the image domain. After that, we take a subsampling of the image field over the grid nodes. This field is thresholded, generating a binary field, an "object characteristic function," from which a rough approximation of the boundary geometry is extracted. This method was extended to 3D in [63].
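A rough sketch of this kind of initialization follows; for simplicity it uses a plain rectangular subsampling rather than the triangulation of [29,33], and all names are ours:

```python
import numpy as np

def object_characteristic_function(image, step, threshold):
    """Subsample the image on a coarse grid and threshold it, yielding
    a binary 'object characteristic function' (1 inside the object)."""
    grid = image[::step, ::step]
    return (grid > threshold).astype(np.uint8)

def boundary_cells(chi):
    """Grid cells whose corners mix 0s and 1s cross the object boundary
    and give a rough approximation of its geometry, suitable for
    seeding a deformable model."""
    c = chi[:-1, :-1] + chi[:-1, 1:] + chi[1:, :-1] + chi[1:, 1:]
    return np.argwhere((c > 0) & (c < 4))

# synthetic image: bright disk of radius 20 centered in a 100x100 field
y, x = np.mgrid[0:100, 0:100]
img = ((x - 50) ** 2 + (y - 50) ** 2 < 400).astype(float)
chi = object_characteristic_function(img, step=4, threshold=0.5)
cells = boundary_cells(chi)   # coarse polygonal approximation of the circle
```

The grid step plays the role of the spatial-scale assumption: objects narrower than one cell can be missed, which is exactly the scale constraint the method imposes.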

Neural networks and Hough transforms have also been applied for initialization of deformable models [14,74].

Another possibility for addressing the sensitivity to initialization is the gradient vector flow, a scheme based on a vector diffusion-reaction equation. It was introduced in [77] and can be used to obtain a more efficient image force field [78].
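The GVF field is obtained by iterating a diffusion-reaction PDE on an edge map f, v_t = mu * Laplacian(v) - |grad f|^2 (v - grad f), which extends edge gradients into homogeneous regions. The sketch below is a minimal explicit-update version; the periodic boundaries via np.roll and the parameter values are our simplifications.

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, iters=100, dt=0.5):
    """Explicit iteration of the GVF diffusion-reaction equation
    on an edge map f; returns the (u, v) vector field components."""
    fy, fx = np.gradient(f)
    mag2 = fx ** 2 + fy ** 2            # |grad f|^2, the reaction weight
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        for w, g in ((u, fx), (v, fy)):
            # 5-point Laplacian with periodic (wrap-around) boundaries
            lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0)
                   + np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4 * w)
            w += dt * (mu * lap - mag2 * (w - g))  # in-place update
    return u, v
```

The practical payoff is that the resulting force field is nonzero far from the edges, so a snake initialized several pixels away is still pulled toward the boundary, where the raw gradient would give it no guidance at all.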

Deformable models can be extended to 3D, generating deformable surface models. Besides the described problems, a new one arises when considering these models: memory utilization.

In general, deformable surface models use only the data information along the surface when evolving the model toward the object boundary [48,49]. However, state-of-the-art implementations of these models generally do not account for this fact and fetch the whole volume from disk at initialization. Such a procedure brings limitations for large image volumes, especially considering that deformable models in general need not only the image intensity but also the image gradient [42,49].

Nowadays, image volumes with 512³ sampling points can be acquired by CT scanners. Besides, other scanning techniques have been developed that allow the acquisition of huge 3D color image volumes (www.nlm.nih.gov/research/visible/visible_human.html). In these cases, the data set information (image intensity and gradient) can be too large to fit in main memory, even under the usual cut policy: in a first stage, select a subvolume (a bounding box) that contains the structure of interest, and then segment it. When the size of the data that must be accessed exceeds the size of main memory, some form of virtual memory is required, which leads to performance problems [20].
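For a sense of scale, the following back-of-the-envelope estimate assumes a hypothetical layout of a 2-byte intensity plus three 4-byte float gradient components per voxel:

```python
# Rough resident-memory footprint for a 512^3 volume when both the
# intensity and the gradient must be kept per voxel (assumed layout:
# 2-byte intensity + three 4-byte float gradient components).
voxels = 512 ** 3                     # 134,217,728 sample points
bytes_per_voxel = 2 + 3 * 4           # intensity + gradient
total_gib = voxels * bytes_per_voxel / 2 ** 30
print(f"{total_gib:.2f} GiB")         # -> 1.75 GiB
```

Nearly 2 GiB for a single grayscale volume, before any color channels or auxiliary buffers, which makes the case for out-of-core handling concrete.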

The analysis of large data sets is a known problem in the context of scientific visualization [15,24,71]. Out-of-core techniques have been developed for scalar and vector fields visualization and new proposals are still in progress. Among these methods, out-of-core isosurface extraction techniques are closely related with our work, as we shall see next.

These methods partition the data set into clusters that are stored in disk blocks, and build a data structure to index the blocks for information retrieval (preprocessing step). At run time, the data structure is read into main memory and traversed to determine the data blocks that must be read into main memory to perform the isosurface generation. The most commonly used data structures for scientific visualization applications are octrees [64,71] and a k-d-tree-based technique called meta-cell [15].
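The indexing idea can be mocked up in-core as below; a real out-of-core implementation keeps only the small min/max index resident and stores each cell as a disk block, and all names here are ours, not from [15]:

```python
import numpy as np

def build_meta_cells(volume, block=32):
    """Partition a volume into block^3 'meta-cells' and record the
    min/max intensity of each one -- the small index kept in memory
    (preprocessing step)."""
    nz, ny, nx = (s // block for s in volume.shape)
    index = {}
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                cell = volume[k*block:(k+1)*block,
                              j*block:(j+1)*block,
                              i*block:(i+1)*block]
                index[(k, j, i)] = (cell.min(), cell.max())
    return index

def cells_to_fetch(index, isovalue):
    """Only meta-cells whose [min, max] range brackets the isovalue can
    contribute to the isosurface and need to be read from disk."""
    return [key for key, (lo, hi) in index.items() if lo <= isovalue <= hi]
```

The same pruning argument carries over to segmentation: a deformable surface only needs the cells its front currently crosses, which is what the out-of-core T-surfaces method exploits.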

In [27, 28] we show that the meta-cell technique is the most suitable data structure to perform out-of-core implementations of segmentation methods. We take advantage of the meta-cell method to present an out-of-core implementation of the segmentation approach proposed in [63]. This method is a straightforward extension of the initialization method that we proposed in [26,29].

The core of the algorithm is an out-of-core T-surfaces method based on the meta-cell structure. To our knowledge, it is the first out-of-core algorithm for a deformable surface model reported in the literature. Besides, other parametric deformable models, as well as implicit models (level sets) and region growing methods, can be implemented out-of-core using the same meta-cell structure (see Section 7.10). It is important to highlight that the proposed structure is useful not only to efficiently swap data between memory and disk, but also to accelerate the segmentation process, as we shall demonstrate (Section 7.9).

To make this text self-contained, some background is offered in Section 7.2, where we describe the deformable model methods that will be used in this chapter.

Next, the initialization techniques of interest are described (Section 7.3). We survey the most important works on this subject and show that their basic limitation is that the obtained contour may suffer self-intersections during its evolution. Thus, a deformable model that can deal with such a problem is necessary. T-snakes (or T-surfaces) is a possibility.

Thus, in Section 7.4 we describe an efficient method to initialize the T-surfaces model, which encompasses the basic elements of the segmentation approach presented in Section 7.5. Despite the capabilities of our segmentation approach, we may face memory limitations for large datasets and poor convergence for noisy images. These problems are considered in Sections 7.6 and 7.8, respectively.

Finally, discussions and perspectives for deformable models in medical images are offered (Section 7.10). Conclusions are given in Section 7.11.
