Convergence of Deformable Models and Diffusion Methods

Despite the capabilities of the segmentation approach in Section 7.5, the projection of T-surfaces onto the grid can lower the precision of the final result. Following [49], once the T-surfaces model stops, we can discard the grid and evolve the model without it, avoiding errors due to the projections.

However, for noisy images the convergence of deformable models to the boundaries is poor due to the nonconvexity of the image energy. This problem can be addressed through diffusion techniques [18,44,52].

In image processing, the use of diffusion schemes is common practice. Gaussian blurring is the most widely known; other approaches include anisotropic diffusion [52] and the gradient vector flow [77].

From the viewpoint of deformable models, these methods can be used to improve the convergence to the desired boundary. In the following, we summarize these methods and conjecture their unification.

Anisotropic diffusion is defined by the following general equation:

$$\frac{\partial I}{\partial t} = \operatorname{div}\left( c(x, y, t)\, \nabla I \right) = c(x, y, t)\, \Delta I + \nabla c \cdot \nabla I, \qquad (7.29)$$

where I is a gray-level image and c is the diffusion coefficient [52].
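As a concrete illustration, the following is a minimal explicit-scheme sketch of anisotropic diffusion, assuming the common exponential conductance g(s) = exp(−(s/κ)²); the parameter κ, the time step, and the iteration count are illustrative choices, not taken from the chapter.

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=20, kappa=15.0, dt=0.2):
    """Explicit anisotropic diffusion: I_t = div(c grad I), with
    c = g(|grad I|) and an assumed exponential conductance g.

    kappa and dt are illustrative; c is small across strong edges,
    so they are barely blurred."""
    I = image.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance function
    for _ in range(n_iter):
        # differences toward the four neighbors (periodic boundaries via roll)
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # conductance-weighted update: strong edges (large |d|) barely diffuse
        I += dt * (g(np.abs(dN)) * dN + g(np.abs(dS)) * dS
                   + g(np.abs(dE)) * dE + g(np.abs(dW)) * dW)
    return I
```

Applied to a noisy step image, this scheme smooths the flat regions while leaving the step itself nearly untouched, which is exactly the selectivity discussed below.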

In this method, the blurring on parts with high gradient can be made much smaller than in the rest of the image. To show this property, we follow Perona et al. [52]. Firstly, we suppose that the edge points are oriented in the x direction. Thus, Eq. (7.29) becomes:

$$\frac{\partial I}{\partial t} = \frac{\partial}{\partial x}\left( c(x, y, t)\, I_x \right).$$
If c is a function of the image gradient, c(x, y, t) = g(I_x(x, y, t)), we can define the flux Φ(I_x) = g(I_x) · I_x and then rewrite Eq. (7.29) as:

$$\frac{\partial I}{\partial t} = \frac{\partial}{\partial x}\, \Phi(I_x).$$
We are interested in the time variation of the slope, ∂I_x/∂t. If c(x, y, t) > 0, we can change the order of differentiation and, with simple algebra, demonstrate that:

$$\frac{\partial I_x}{\partial t} = \frac{\partial}{\partial x}\left( \frac{\partial I}{\partial t} \right) = \Phi''(I_x)\, I_{xx}^{2} + \Phi'(I_x)\, I_{xxx}.$$
At edge points we have I_xx = 0 and I_xxx ≪ 0, as these points are local maxima of the image gradient intensity. Thus, there is a neighborhood of the edge point in which the derivative ∂I_x/∂t has sign opposite to Φ'(I_x). If Φ'(I_x) > 0, the slope of the edge point decreases in time; otherwise it increases, that is, the border becomes sharper. So, the diffusion scheme given by Eq. (7.29) blurs small discontinuities and enhances the stronger ones. In this work, we have used g as follows:

as we shall see next.
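The sign argument above can be checked numerically. The sketch below assumes the exponential conductance g(s) = exp(−(s/K)²), a common Perona–Malik choice (the chapter's exact g is not shown here): the flux derivative Φ'(s) is positive for weak slopes, which are blurred, and negative for slopes above K/√2, which are sharpened.

```python
import numpy as np

K = 10.0
g = lambda s: np.exp(-(s / K) ** 2)   # assumed Perona-Malik conductance
phi = lambda s: g(s) * s              # flux Phi(I_x) = g(I_x) * I_x

def dphi(s, h=1e-5):
    """Numerical derivative Phi'(s) by central differences."""
    return (phi(s + h) - phi(s - h)) / (2.0 * h)

# For this g, Phi'(s) = exp(-(s/K)^2) * (1 - 2 s^2 / K^2),
# which changes sign at s = K / sqrt(2):
# weak edges are smoothed, strong edges are enhanced.
```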

In the above scheme, I is a scalar field. For vector fields, a useful diffusion scheme is the gradient vector flow (GVF). It was introduced in [77] and can be defined through the following equation [78]:

$$\frac{\partial \mathbf{u}}{\partial t} = g\left( \left\| \nabla f \right\| \right) \nabla^{2} \mathbf{u} - h\left( \left\| \nabla f \right\| \right) \left( \mathbf{u} - \nabla f \right),$$
where f is a function of the image gradient (for example, P in Eq. (7.13)), and g(x), h(x) are non-negative functions defined on the image domain.

The field obtained by solving the above equation is a smooth version of the original one that tends to extend very far away from the object boundaries. When used as an external force for deformable models, it makes the methods less sensitive to initialization [77] and improves their convergence to the object boundaries.

As the result of steps (1)-(6) in Section 7.5 is in general close to the target, we could apply this method to push the model toward the boundary once the grid is turned off. However, for noisy images, some kind of diffusion (smoothing) must be applied before computing the GVF. Gaussian diffusion has been used [77], but precision may be lost due to its nonselective blurring [52].

The anisotropic diffusion scheme presented above is an alternative smoothing method that can be used. This observation points to the possibility of integrating anisotropic diffusion and the GVF in a unified framework. A straightforward way of doing this is to allow g and h to depend on the vector field u. The key idea would be to combine the selective smoothing of anisotropic diffusion with the diffusion of the initial field performed by the GVF. Besides, we expect to obtain a more stable numerical scheme for noisy images.

Diffusion methods can be extended to color images. Such a theory is developed in [56, 57]. In what follows we summarize some results on this subject.

Firstly, the definition of edges for multivalued images is presented [57]. Let Φ(u_1, u_2, u_3) : D ⊂ ℝ³ → ℝ^m be a multivalued image. The difference of image values at two points P = (u_1, u_2, u_3) and Q = (u_1 + du_1, u_2 + du_2, u_3 + du_3) is

$$d\Phi = \sum_{i=1}^{3} \frac{\partial \Phi}{\partial u_i}\, du_i \;\Longrightarrow\; d\Phi^{2} = \sum_{i=1}^{3} \sum_{j=1}^{3} \frac{\partial \Phi}{\partial u_i} \cdot \frac{\partial \Phi}{\partial u_j}\, du_i\, du_j, \qquad (7.34)$$

where dΦ² is the square Euclidean norm of dΦ. The matrix composed of the coefficients g_ij = (∂Φ/∂u_i · ∂Φ/∂u_j) is symmetric; the extremes of the quadratic form dΦ² are obtained in the directions of the eigenvectors (θ_+, θ_−) of the metric tensor [g_ij], and the values attained there are the corresponding maximum/minimum eigenvalues (λ_+, λ_−). Hence, a potential function can be defined as [57]:

which recovers the usual edge definition for gray-level images (λ_+ = ‖∇I‖², λ_− = 0 if m = 1).
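As a sketch of this edge indicator, the code below computes the eigenvalue gap λ_+ − λ_− of the metric tensor at each pixel; using the gap itself as the potential is an assumption for illustration (the chapter's exact potential function may differ). It handles 2D or 3D multivalued images with channels in the last axis.

```python
import numpy as np

def multivalued_edge_strength(image):
    """Eigenvalue gap lambda+ - lambda- of the metric tensor
    g_ij = dPhi/du_i . dPhi/du_j (dot product taken over channels).

    image: float array of shape (spatial dims) + (m,) channels."""
    img = image.astype(float)
    ndim = img.ndim - 1
    # list of partial derivatives dPhi/du_i, one per spatial axis
    grads = np.gradient(img, axis=tuple(range(ndim)))
    # metric tensor per pixel: shape (spatial dims) + (ndim, ndim)
    G = np.stack([np.stack([(gi * gj).sum(-1) for gj in grads], -1)
                  for gi in grads], -2)
    lam = np.linalg.eigvalsh(G)            # ascending eigenvalues
    return lam[..., -1] - lam[..., 0]      # lambda+ - lambda-
```

For a single-channel image (m = 1) the tensor has rank one, so the gap reduces to λ_+ = ‖∇I‖², matching the gray-level case stated above.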

Similarly to the gray-level case, noise should be removed before the edge-map computation. This can be done as follows [56, 57]. Given the directions θ_±, we can derive the corresponding anisotropic diffusion by observing that diffusion occurs normal to the direction of maximal change θ_+, that is, along θ_−. Thus, we obtain:

$$\frac{\partial \Phi}{\partial t} = \frac{\partial^{2} \Phi}{\partial \theta_-^{2}},$$

which means:

$$\frac{\partial \Phi_1}{\partial t} = \frac{\partial^{2} \Phi_1}{\partial \theta_-^{2}}, \quad \ldots, \quad \frac{\partial \Phi_m}{\partial t} = \frac{\partial^{2} \Phi_m}{\partial \theta_-^{2}}.$$

In order to obtain control over the local diffusion, a factor g_color is added:

$$\frac{\partial \Phi}{\partial t} = g_{color}\, \frac{\partial^{2} \Phi}{\partial \theta_-^{2}},$$

where g_color can be a decreasing function of the difference (λ_+ − λ_−).
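One explicit step of this g_color-weighted diffusion along θ_− can be sketched as follows for a 2D color image. The exponential form of g_color and the constant k are illustrative assumptions, not the chapter's specific choices.

```python
import numpy as np

def color_diffusion_step(phi, dt=0.1, k=1.0):
    """One explicit step of Phi_t = g_color * d^2(Phi)/d(theta_-)^2 for a
    2D image phi of shape (H, W, m); g_color = exp(-(lambda+ - lambda-)/k)
    is an assumed decreasing function of the eigenvalue gap."""
    phi = phi.astype(float)
    dy = np.gradient(phi, axis=0)
    dx = np.gradient(phi, axis=1)
    # metric tensor g_ij, summed over channels (index 0 <-> x, 1 <-> y)
    g11 = (dx * dx).sum(-1); g12 = (dx * dy).sum(-1); g22 = (dy * dy).sum(-1)
    G = np.stack([np.stack([g11, g12], -1), np.stack([g12, g22], -1)], -2)
    lam, vec = np.linalg.eigh(G)             # ascending eigenvalues
    gap = lam[..., 1] - lam[..., 0]          # lambda+ - lambda-
    gcolor = np.exp(-gap / k)                # weak diffusion across strong edges
    tx, ty = vec[..., 0, 0], vec[..., 1, 0]  # theta_-: smallest-eigenvalue dir.
    # second directional derivative of each channel along theta_-
    Hxx = np.gradient(dx, axis=1)
    Hxy = np.gradient(dy, axis=1)
    Hyy = np.gradient(dy, axis=0)
    d2 = ((tx ** 2)[..., None] * Hxx + 2.0 * (tx * ty)[..., None] * Hxy
          + (ty ** 2)[..., None] * Hyy)
    return phi + dt * gcolor[..., None] * d2
```

A constant image is a fixed point of the step, and iterating the step on pure noise reduces its variance, as expected of a diffusion.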

This approach does not separate the image vector field into its direction (chromaticity) and magnitude (brightness).

In [67], Tang et al. pointed out that, although good results have been reported, chromaticity is not always well preserved and color artifacts are frequently observed when using such a method. They proposed another diffusion scheme to address this problem. The method is based on separating the color image Φ into chromaticity and brightness, and then processing each of these components with a proper diffusion flow. By doing this, the following multiscale representation is proposed for 2D images, which can be straightforwardly extended to 3D. Let B : D × ℝ⁺ → ℝ and C : D × ℝ⁺ → S^{m−1}, with D ⊂ ℝ², be the image brightness and chromaticity, respectively (S^{m−1} being the (m − 1)-dimensional unit sphere), such that:

$$\Phi = B\, C.$$
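A minimal sketch of this brightness/chromaticity split follows, taking B as the Euclidean norm of the color vector and C as its direction on S^{m−1}; the small eps guard at black pixels is an implementation choice, not from the chapter.

```python
import numpy as np

def brightness_chromaticity(phi, eps=1e-12):
    """Split a color image phi (channels in the last axis) into
    brightness B = |phi| and chromaticity C = phi / |phi|, a point on
    the unit sphere S^(m-1), so that phi = B * C."""
    phi = phi.astype(float)
    B = np.linalg.norm(phi, axis=-1)          # per-pixel magnitude
    C = phi / (B[..., None] + eps)            # per-pixel unit direction
    return B, C
```

Each component can then be diffused separately, with the chromaticity flow constrained to the sphere, which is the motivation for the decomposition above.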
