Markovian-Based Segmentation Method

The algorithm consisted of running the pixel classification approach using a Markov random field with mean field (see Zhang [149]). Here, image segmentation was posed as a classification problem in which each pixel is assigned to one of $K$ image classes. Suppose the input image is $y = \{y_{i,j}, (i,j) \in L\}$, where $y_{i,j}$ is a pixel, i.e., a 3-D vector, and $L$ is a square lattice. Denote the segmentation as $z = \{z_{i,j}, (i,j) \in L\}$. Here, $z_{i,j}$ is a binary indicator vector of dimension $K$, with only one component being 1 and the others being 0. For example, when $K = 3$, $z_{i,j} = [0, 1, 0]^T$ means we assign the pixel at $(i, j)$ to class 2.

Figure 9.9: Normal ground truth overlays. The top row in each row pair is the left carotid artery overlayed with the ground truth tracing of the inner lumen wall, and the bottom row is the corresponding image of the right carotid artery. Some lumens have a circular shape, while others have an elliptical shape.

Using the notation introduced above, the segmentation problem can be formulated as the following MAP (maximum a posteriori) inference problem:

$$(\hat{z}, \hat{\Phi}) = \arg\max_{z, \Phi} \left[ \log p(y \mid z, \Phi) + \log p(z \mid \Theta) \right], \qquad (9.1)$$

where $\Phi$ and $\Theta$ were model parameters. In this work, we assume that the pixels in $y$ are conditionally independent given $z$, i.e.,

$$\log p(y \mid z, \Phi) = \sum_{(i,j) \in L} \log p(y_{i,j} \mid z_{i,j}, \Phi). \qquad (9.2)$$

Furthermore, we assume that conditioned on $z_{i,j}$, the pixel $y_{i,j}$ has a multivariate Gaussian density, i.e., for $k = 1, 2, \ldots, K$,

$$p(y_{i,j} \mid z_{i,j} = e_k, \Phi) = \frac{\exp\left(-\frac{1}{2}(y_{i,j} - m_k)^T C_k^{-1} (y_{i,j} - m_k)\right)}{(2\pi)^{3/2} |C_k|^{1/2}}, \qquad (9.3)$$
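The class-conditional density of Eq. (9.3) can be evaluated directly. The following is a minimal NumPy sketch; the function name and the test values are illustrative, not taken from the chapter:

```python
import numpy as np

def class_likelihood(y, m_k, C_k):
    """Multivariate Gaussian density p(y | z = e_k) for a 3-D pixel y,
    with class mean m_k and covariance C_k, as in Eq. (9.3)."""
    d = y - m_k
    quad = d @ np.linalg.solve(C_k, d)           # (y - m_k)^T C_k^{-1} (y - m_k)
    norm = (2 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(C_k))
    return np.exp(-0.5 * quad) / norm

# Usage: likelihood of one pixel under one hypothetical class model
y = np.array([0.4, 0.5, 0.6])
m = np.array([0.5, 0.5, 0.5])
C = 0.01 * np.eye(3)
p = class_likelihood(y, m, C)
```

Using `np.linalg.solve` instead of explicitly inverting $C_k$ is the standard numerically safer choice for the quadratic form.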

where $e_k$ is a $K$-dimensional binary indicator vector with the $k$th component being 1. Thus $\Phi = \{m_k, C_k\}_{k=1}^{K}$ contained the mean vectors and covariance matrices for the $K$ image classes. For $z$, we adopted an MRF model with a Gibbs distribution [149]:

$$p(z \mid \Theta) = \frac{1}{Z} \exp(-E(z)), \qquad (9.4)$$

where

$$E(z) = -\beta \sum_{(i,j) \in L} \sum_{(k,l) \in N_{i,j}} z_{i,j}^T z_{k,l}$$

is the energy function, which decreased (causing $p(z \mid \Theta)$ to increase) when neighboring pixels were classified into the same class (the set of neighbors of $(i, j)$ is denoted $N_{i,j}$, and $\Theta$ comprises the interaction parameter $\beta$).
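The behavior of this energy can be sketched for a 2-D label image. The snippet below assumes a 4-neighborhood and a single interaction parameter `beta`; the chapter's exact clique definition may differ:

```python
import numpy as np

def gibbs_energy(labels, beta=1.0):
    """Potts-style energy in the spirit of Eq. (9.4): every pair of
    4-neighbors carrying the same class label lowers the energy by beta,
    so homogeneous segmentations receive higher prior probability."""
    same_h = labels[:, :-1] == labels[:, 1:]   # horizontal neighbor pairs
    same_v = labels[:-1, :] == labels[1:, :]   # vertical neighbor pairs
    return -beta * (same_h.sum() + same_v.sum())

# A uniform 4x4 image has 12 horizontal + 12 vertical like pairs
z_uniform = np.zeros((4, 4), dtype=int)
e_uniform = gibbs_energy(z_uniform)
```

A checkerboard labeling, where no neighbors agree, gives the maximal (zero) energy, illustrating why the prior penalizes fragmented segmentations.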

Since $\Theta$ was generally not sensitive to particular images, it was set manually here. $\Phi$, on the other hand, depended directly on the input image and hence had to be estimated during the segmentation process. In this work, as in [149], this was achieved using the EM algorithm [177], which amounts to iterating between the following two steps:

1. E step: Compute

$$Q(\Phi \mid \Phi^{(p)}) = \left\langle \log p(y \mid z, \Phi) + \log p(z \mid \Theta) \;\middle|\; y, \Phi^{(p)} \right\rangle.$$

2. M step: Update the parameter estimate

$$\Phi^{(p+1)} = \arg\max_{\Phi} Q(\Phi \mid \Phi^{(p)}).$$

Here $\langle \cdot \rangle$ represents the expectation, or mean, and the superscript $p$ denotes the $p$th iteration. This translated into the following formulas for updating the parameter estimates:

$$m_k^{(p+1)} = \frac{\sum_{(i,j)} f(z_{i,j}^k)\, y_{i,j}}{\sum_{(i,j)} f(z_{i,j}^k)}, \qquad C_k^{(p+1)} = \frac{\sum_{(i,j)} f(z_{i,j}^k)\,\bigl(y_{i,j} - m_k^{(p+1)}\bigr)\bigl(y_{i,j} - m_k^{(p+1)}\bigr)^T}{\sum_{(i,j)} f(z_{i,j}^k)},$$

where $k = 1, 2, \ldots, K$, $z_{i,j}^k$ is the $k$th component of $z_{i,j}$, and $f(z_{i,j}^k)$ is a "mean field" probability distribution (see Zhang [149]).
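These weighted-average updates can be written compactly in NumPy. The sketch below flattens the lattice into an `(N, 3)` pixel array and takes the mean-field posteriors `f` as given; array shapes and names are assumptions, not the chapter's code:

```python
import numpy as np

def m_step(y, f):
    """Mean-field-weighted parameter updates for the M step.
    y: (N, 3) array of pixels; f: (N, K) posteriors f(z^k_{i,j}).
    Returns per-class mean vectors m_k and covariance matrices C_k."""
    N, K = f.shape
    weights = f.sum(axis=0)                    # sum over pixels of f(z^k_{i,j})
    means = (f.T @ y) / weights[:, None]       # weighted pixel averages
    covs = np.empty((K, y.shape[1], y.shape[1]))
    for k in range(K):
        d = y - means[k]
        covs[k] = (f[:, k, None] * d).T @ d / weights[k]
    return means, covs

# Usage: with hard (one-hot) posteriors this reduces to per-class statistics
y = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0], [10.0, 10.0, 10.0]])
f = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
means, covs = m_step(y, f)
```

When `f` is soft (fractional responsibilities), each pixel contributes to every class in proportion to its posterior, which is exactly what the update formulas above prescribe.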

These formulas, in addition to providing the estimate of $\Phi$, also produced a segmentation. Specifically, at each iteration, $\langle z_{i,j}^k \rangle$ was interpreted as the probability that $y_{i,j}$ is assigned to class $k$. Hence, after a sufficient number of iterations, we can obtain the segmentation $z$ for each $(i, j) \in L$ by

$$\hat{z}_{i,j} = e_{k^*}, \qquad k^* = \arg\max_k \langle z_{i,j}^k \rangle.$$

In this way, the EM procedure described above generated the segmentation as a by-product, and therefore provided an alternative to the MAP solution of Eq. (9.1).

For those interested in the application of MRF in medical imaging, see Kapur [150], who recently developed a brain segmentation technique that extended the EM work of Wells et al. [174] by adding a Gibbs model of the spatial structure of the tissues, in conjunction with a mean-field (MF) solution technique based on Markov random fields (for details on MRF, see Li [151] and Geman and Geman [179]). The technique was therefore named the expectation maximization-mean field (EM-MF) technique. Gibbs modeling of tissue homogeneity provided resistance to thermal noise in the images. The image data and intensity correction were coupled by an external field to an Ising-like tissue model, and the MF equations were used to obtain posterior estimates of tissue probabilities. This method was more general than the EM-based method while remaining computationally simple, with an inexpensive relaxation-like update. For other work in the area of MRF for brain segmentation, see Held et al. [152].

Pros and Cons of MRF with Scale Space. The major advantage of MRF-based classification is (1) the addition of the Gibbs model: Kapur et al. model the a priori assumptions about the hidden variables as a Gibbs random field. Thus the prior probability is modeled using the following physics-based Gibbs equation:

$$P(f) = \frac{1}{Z} \exp\left(-\frac{E(f)}{kT}\right), \qquad Z = \sum_{f'} \exp\left(-\frac{E(f')}{kT}\right),$$

where $P(f)$ is the probability of the configuration $f$, $E(f)$ is its energy, $T$ is the temperature of the system, $k$ is the Boltzmann constant, and $Z$ is the normalizing constant. The major disadvantages of MRF-based classification include the following: (1) The computation time becomes large when the number of classes is large; in such cases, multiresolution techniques are needed to speed up the computation. (2) The positions of the initial clusters are critical for the convergence of the MRF model. Here, the initial clusters were computed using the K-means algorithm, which provided a good starting point; however, a more robust method is desirable.
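Since the initial clusters were seeded with K-means, a minimal Lloyd's-algorithm sketch is given below. This is a generic stand-in for the chapter's initializer, with assumed names and defaults:

```python
import numpy as np

def kmeans_init(pixels, K, iters=10, seed=0):
    """Plain K-means (Lloyd's algorithm) used only to seed the MRF/EM
    cluster centers. pixels: (N, 3) array; returns (K, 3) centers."""
    rng = np.random.default_rng(seed)
    # start from K distinct pixels chosen at random
    centers = pixels[rng.choice(len(pixels), K, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for k in range(K):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return centers

# Usage: two well-separated pixel populations recover their two centers
pix = np.vstack([np.zeros((10, 3)), 10.0 * np.ones((10, 3))])
centers = kmeans_init(pix, 2)
```

As the text notes, this initializer is sensitive to the starting draw; in practice one would restart from several seeds and keep the lowest-distortion result.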

9.3.1.1 Implementation of MRF System

Figure 9.10 shows the implementation of the main MRF classification system. The input is a perturbed gray scale image with left and right lumens. The number of classes, the initial means, and the error threshold are given to the system as inputs. The result is a classified image with multiple class regions in it, including multiple classes in the lumen region. Figure 9.11 shows the MRF system in more detail. Given the initial center, the number of classes K, the Markov parameters of mean, variance, and covariance, and the perturbed image, the current cluster center is calculated. Using the EM algorithm, new parameters are estimated and new cluster

Figure 9.10: Markov random fields (MRF) classification process. Input is a gray scale perturbed image with left and right lumens. The number of classes, the initial mean, and the error threshold are inputs given to the system. The result is a classified image with multiple class regions in it.

Figure 9.11: The Markov random fields (MRF) segmentation system. Given the initial center, the number of classes K, the Markov parameters of mean, variance, and covariance, and the perturbed image, the current cluster center is calculated. Using the expectation-maximization (EM) algorithm, new parameters are estimated and new cluster centers are computed. The error between the previous cluster center and the newly calculated cluster center is compared against the error threshold, and the process is repeated if the error is not below the threshold. After the iterative process is finished, the output is a segmented image.


centers are computed. The error between the previous cluster center and the newly calculated cluster center is compared against the error threshold, and the process is repeated if the error is not below the threshold. After the iterative process is finished, the output is a segmented image.
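The outer loop of Fig. 9.11 can be sketched as follows. Here `update_fn` stands in for one EM pass over the image and is an assumed interface, not the chapter's actual routine:

```python
import numpy as np

def run_until_converged(centers, update_fn, err_threshold=1e-3, max_iters=100):
    """Iterate the cluster-center update until the centers move less than
    err_threshold between passes, mirroring the loop in Fig. 9.11."""
    for _ in range(max_iters):
        new_centers = update_fn(centers)
        err = np.linalg.norm(new_centers - centers)  # change between iterations
        centers = new_centers
        if err < err_threshold:
            break
    return centers

# Usage with a toy update that halves the distance to a fixed point,
# so the loop must terminate on the error-threshold test
target = np.array([5.0, 5.0, 5.0])
final = run_until_converged(np.zeros(3), lambda c: (c + target) / 2)
```

The `max_iters` cap is a safeguard against a threshold that is never reached; the chapter's system relies on the error test alone.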

