## Model Based Initialization

The previously described model-based approaches employing statistical encoding of large organ populations can also be applied to the efficient initialization of interactive methods [61]. The underlying idea is to use statistical shape analysis to examine the variability of shape that remains after interactive point-wise subtraction of variation. The key element is the optimal selection of principal landmarks: points that carry as much shape information as possible and thus have a maximal potential for reducing the remaining variation. The overall process is described below for the previously mentioned population of 71 hand-segmented corpus callosi.

Similar to the automatic approach, the first step is the generation of a compact statistical shape description of all object instances in the database. First, we calculate the mean shape $\bar{p}$ and the instance-specific difference vectors $\Delta p_i = p_i - \bar{p}$.

To find the eigensystem of our data, the difference vectors are projected into a lower-dimensional space whose basis $M$ is constructed by Gram-Schmidt orthonormalization:

$$M = [m_1, \dots, m_{N-1}] = \operatorname{GS}\!\left(\Delta p_1, \dots, \Delta p_{N-1}\right), \qquad \Delta\tilde{p}_i = M^T \Delta p_i \tag{14.28}$$

The covariance matrix $\Sigma$ and the resulting PCA given by the eigensystem of $\Sigma$ can subsequently be calculated according to:

$$\Sigma = \frac{1}{N-1} \sum_i \Delta\tilde{p}_i\, \Delta\tilde{p}_i^T = U \Lambda U^T, \qquad \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_{N-1}) \tag{14.29}$$

The principal components defining the eigenmodes in shape space are then given by back-projecting the eigenvectors $U$:

$$\tilde{U} = M U \tag{14.30}$$
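As a concrete illustration, the model construction above can be sketched in NumPy. The data here are random stand-ins, and the array names and sizes are illustrative, not the chapter's; `np.linalg.qr` is used as the standard numerical realization of Gram-Schmidt orthonormalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: N shape instances, each a stacked (x1, y1, ..., xm, ym) vector.
N, m = 71, 64                      # instances and contour points (illustrative sizes)
P = rng.normal(size=(N, 2 * m))    # rows are shape vectors p_i

p_mean = P.mean(axis=0)            # mean shape
D = (P - p_mean).T                 # difference vectors Δp_i as columns, shape (2m, N)

# Orthonormalize the first N-1 difference vectors to obtain the reduced basis M;
# QR decomposition performs the Gram-Schmidt-style orthonormalization (Eq. 14.28).
M, _ = np.linalg.qr(D[:, : N - 1])            # M has shape (2m, N-1)
D_tilde = M.T @ D                             # projected differences Δp̃_i

# Covariance in the reduced space and its eigensystem (Eq. 14.29).
Sigma = (D_tilde @ D_tilde.T) / (N - 1)
lam, U = np.linalg.eigh(Sigma)                # eigenvalues λ and eigenvectors U (ascending)
lam, U = lam[::-1], U[:, ::-1]                # reorder to descending variance

# Back-project the eigenvectors to obtain the eigenmodes in shape space (Eq. 14.30).
modes = M @ U                                 # columns are the principal shape modes
```

On real contour data the rows of `P` would come from the corresponding landmark points of the 71 segmented shapes rather than from a random generator.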

### 14.5.2.1 Point-wise Subtraction of Variation

After the statistical analysis of the anatomical shape, this information can be used to progressively eliminate variation by point-wise fixation of control points. After defining the coordinate system with the AC-PC line, the initialization starts with the average model $\bar{p}$ (Fig. 14.13(a)). Additional boundary conditions are then introduced by moving control vertices to approximately correct positions on the object border. In the next step, given the a priori shape knowledge and these constraints, the most natural initialization outline should be chosen. In the context of PCA, this means choosing the model with minimal Mahalanobis distance $D_m$.

Figure 14.13: (a) Boundary conditions for an initial outline are established by prescribing a position for each coarse control vertex. (b) Shape variations caused by adding the basis vectors defining the x- and y-translation of one point to the average model. The various shapes are obtained by evaluating $\bar{p} + n \tilde{U} r_k$ with $n \in \{-2, \dots, 2\}$ and $k \in \{x_j, y_j\}$.

The solution to this task is to find two vectors in variation space describing decoupled x- and y-translations of a given point $j$ in object space with minimal overall variation. Once these vectors are found, all possible boundary conditions can be satisfied by adding appropriately weighted copies of them to the mean shape.

Let $r_{x_j}$ and $r_{y_j}$ denote the two unknown basis vectors causing unit x- and y-translation of the point $j$, respectively. The Mahalanobis distance $D_m$ of these two vectors is then given by

$$D_m^2(r_k) = \sum_{i=1}^{N-1} \frac{r_{k,i}^2}{\lambda_i} = r_k^T \Lambda^{-1} r_k, \qquad k \in \{x_j, y_j\} \tag{14.31}$$

Taking into account that $x_j$ and $y_j$ depend only on two rows of $\tilde{U}$, we define the submatrix $\tilde{U}_j$ according to the following expression:

$$\begin{bmatrix} x_j \\ y_j \end{bmatrix} = \begin{bmatrix} \bar{x}_j \\ \bar{y}_j \end{bmatrix} + \underbrace{\begin{bmatrix} \tilde{U}_{2j-1,\circ} \\ \tilde{U}_{2j,\circ} \end{bmatrix}}_{\tilde{U}_j} b \tag{14.32}$$
In order to minimize $D_m$ subject to the constraint of a separate x- or y-translation by one unit, we establish the Lagrange function $L$:

$$L(r_k, l_k) = D_m^2(r_k) - l_k^T\!\left[\tilde{U}_j r_k - e_k\right], \qquad k \in \{x_j, y_j\}, \quad e_{x_j} = \begin{bmatrix}1\\0\end{bmatrix},\; e_{y_j} = \begin{bmatrix}0\\1\end{bmatrix} \tag{14.33}$$

The vectors $l_{x_j}$ and $l_{y_j}$ denote the required Lagrange multipliers. To find the optimum of $L(r_k, l_k)$, we calculate the derivatives with respect to all elements of $r_{x_j}$, $r_{y_j}$, $l_{x_j}$, and $l_{y_j}$ and set them equal to zero:

$$\frac{\partial L}{\partial r_k} = 2\Lambda^{-1} r_k - \tilde{U}_j^T l_k = 0, \qquad \frac{\partial L}{\partial l_k} = -\left(\tilde{U}_j r_k - e_k\right) = 0 \tag{14.34}$$

If the basis vectors and the Lagrange multipliers are combined according to $R_j = [r_{x_j}\; r_{y_j}]$ and $L_j = [l_{x_j}\; l_{y_j}]$, Eq. (14.34) can be rewritten as two linear matrix equations:

$$2\Lambda^{-1} R_j = \tilde{U}_j^T L_j \tag{14.35}$$

$$\tilde{U}_j R_j = I_2 \tag{14.36}$$

The two basis vectors $r_{x_j}$ and $r_{y_j}$ (resulting from simple algebraic operations on Eqs. (14.35) and (14.36)) are then given by

$$R_j = [r_{x_j}\; r_{y_j}] = \Lambda \tilde{U}_j^T \left(\tilde{U}_j \Lambda \tilde{U}_j^T\right)^{-1} \tag{14.37}$$

While $r_{x_j}$ describes the translation of $x_j$ by one unit with constant $y_j$ and minimal shape variation, $r_{y_j}$ alters $y_j$ correspondingly. The resulting effect caused by adding these shape-based basis vectors to the average model is illustrated in Fig. 14.13(b). The most probable shape $p$ given the displacement $[\Delta x_j, \Delta y_j]^T$ for the control vertex $j$ is consequently determined by

$$p = \bar{p} + \tilde{U} R_j \begin{bmatrix}\Delta x_j \\ \Delta y_j\end{bmatrix} \tag{14.38}$$
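A minimal numerical sketch of this constrained-minimization step, using a small random stand-in model rather than the corpus callosum data (all variable names and sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Small random stand-in model: λ are eigenvalues, U is the eigenmode matrix
# mapping parameter vectors to stacked (x, y) point coordinates.
n_modes, n_pts = 10, 16
lam = np.linspace(2.0, 0.2, n_modes)                         # eigenvalues, descending
U = np.linalg.qr(rng.normal(size=(2 * n_pts, n_modes)))[0]   # orthonormal eigenmodes
p_mean = rng.normal(size=2 * n_pts)                          # mean shape

j = 3                                          # 0-based index of the fixed control vertex
U_j = U[2 * j : 2 * j + 2, :]                  # the two rows governing (x_j, y_j)
Lam = np.diag(lam)

# Basis vectors for unit x/y translations of point j with minimal
# Mahalanobis distance:  R_j = Λ U_jᵀ (U_j Λ U_jᵀ)⁻¹
R_j = Lam @ U_j.T @ np.linalg.inv(U_j @ Lam @ U_j.T)

# Most probable shape given a prescribed displacement of point j.
d = np.array([0.5, -0.25])                     # [Δx_j, Δy_j]
p = p_mean + U @ (R_j @ d)
```

By construction `U_j @ R_j` is the 2×2 identity, so each column of `R_j` moves point $j$ by exactly one unit along x or y while perturbing the rest of the shape as little as possible.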

After obtaining the most probable shape for a given control vertex, we now have to ensure that subsequent modifications do not alter the adjusted vertex. We therefore remove from the statistic the components that cause a displacement of the point. The first step is to subtract the basis vectors $R_j$, weighted by the instance-specific displacement $d_{i,j} = [\Delta x_{i,j}, \Delta y_{i,j}]^T$, from the parameter representation $b_i$ of each instance $i$:

$$b_i^{\,j} = b_i - R_j\, d_{i,j} \tag{14.39}$$

Doing so for all instances, we obtain a new description of our population $b_i^{\,j}$ which is invariant with respect to point $j$. An example of the removal of the variation is visualized in Fig. 14.14.
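The subtraction step can be sketched as follows, again with a random stand-in model (names and sizes are illustrative assumptions, not the chapter's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in model: eigenvalues λ, eigenmode matrix U, and a population of
# parameter vectors b_i drawn with the model's variances.
n_modes, n_pts, N = 10, 16, 71
lam = np.linspace(2.0, 0.2, n_modes)
U = np.linalg.qr(rng.normal(size=(2 * n_pts, n_modes)))[0]
B = rng.normal(size=(n_modes, N)) * np.sqrt(lam)[:, None]    # columns are the b_i

j = 3                                                        # the adjusted control vertex
U_j = U[2 * j : 2 * j + 2, :]                                # rows governing (x_j, y_j)
R_j = np.diag(lam) @ U_j.T @ np.linalg.inv(U_j @ np.diag(lam) @ U_j.T)

# Instance-specific displacement of point j implied by each b_i, subtracted via
# the weighted basis vectors:  b_i' = b_i - R_j [Δx_ij, Δy_ij]ᵀ
D_j = U_j @ B                  # per-instance displacements [Δx_j, Δy_j]
B_new = B - R_j @ D_j          # point-j-invariant parameter representation
```

Since `U_j @ R_j` is the identity, `U_j @ B_new` vanishes: every instance in the new population leaves point $j$ exactly at its mean position.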

In order to further improve the point-wise elimination process, the control point selection strategy has to be optimized. This can be done by choosing control vertices, or principal landmarks, which carry as much shape information as possible.

We define the reduction potential of a vertex $j_k$, a candidate to serve as the $k$th principal landmark, by

$$P(j_k) = -\sum_{i=1}^{N-1} \left(\sigma_i^2\right)^{s_k} = -\sum_{i=1}^{N-1} \lambda_i^{s_k} = -\operatorname{tr}\!\left(\Lambda^{s_k}\right) \tag{14.40}$$

with the sequence $s_k = (j_1, \dots, j_k)$ denoting the $k$ point indices of the principal landmarks that have been removed from the statistic in the given order, and the superscript $s_k$ indicating the value of a quantity after the principal landmarks $s_k$ have been removed.

In order to remove as much variation as possible, we consequently choose as the first principal landmark the point with the largest reduction potential: $j_1 = \arg\max_j P(j)$. This selection strategy was applied to obtain the eigenmodes shown in Fig. 14.14. Further application of the selection strategy to the example yields the optimal second and third principal landmarks (Fig. 14.15).

Figure 14.15: Remaining variability after vertex elimination of (a) two and (b) three principal landmarks.
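The greedy selection of principal landmarks can be sketched as below. This is an illustrative implementation under stated assumptions: the model is a random stand-in, and since the statistic is not re-diagonalized after each removal, the population covariance $\Sigma$ is used in place of the diagonal $\Lambda$ in the basis-vector formula.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in shape model (not the chapter's data): eigenmode matrix U mapping
# parameter vectors b to point coordinates, and a population B of parameters.
n_modes, n_pts, N = 10, 16, 71
U = np.linalg.qr(rng.normal(size=(2 * n_pts, n_modes)))[0]
B0 = rng.normal(size=(n_modes, N)) * np.linspace(2.0, 0.2, n_modes)[:, None]


def remove_point(B, j):
    """Subtract the variation of vertex j from the parameter population B."""
    Sigma = B @ B.T / (N - 1)                  # current covariance of the parameters
    U_j = U[2 * j : 2 * j + 2, :]              # rows governing (x_j, y_j)
    R_j = Sigma @ U_j.T @ np.linalg.inv(U_j @ Sigma @ U_j.T)
    return B - R_j @ (U_j @ B)


def total_variance(B):
    """Remaining total variance (trace of the covariance) of the population."""
    return np.trace(B @ B.T) / (N - 1)


# Greedy selection: at each step pick the vertex whose removal leaves the least
# remaining variance, i.e. the vertex with the largest reduction potential.
landmarks, B = [], B0
for _ in range(3):
    candidates = [j for j in range(n_pts) if j not in landmarks]
    best = min(candidates, key=lambda j: total_variance(remove_point(B, j)))
    landmarks.append(best)
    B = remove_point(B, best)
```

Each removal also preserves the invariance of previously fixed vertices, since the subtracted components are orthogonal (under $\Sigma$) to the already-eliminated ones.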

### 14.5.2.2 Initialization Process

The described framework can now be used for the efficient initialization of deformable models. Examples of the initialization process are shown in Fig. 14.16. The left image shows how the initial average model converges toward a sound approximation as control vertices are added. The right image depicts four additional examples with adjusted principal landmarks. Generally speaking, selecting three to four landmarks has proven sufficient for a reasonably good initialization.

Figure 14.16: (a) Generation of an initial outline for segmentation. Shape instance in black and fitted initializations in gray with an increasing number of fitted principal landmarks. (b) Initial shapes with four adjusted principal landmarks for the segmentation of four randomly chosen instances.
