## Expert Combination Framework and Nomenclature

The image to be segmented can be represented as a 1-D array $X = \{x_1, \ldots, x_N\}$, where $x_n$ is the input feature for pixel $n$ and $N$ is the total number of pixels in the image. Let the estimate of the segmentation be denoted by the array $Y = \{y_1, \ldots, y_N\}$. It is assumed that the number of classes is predetermined from a set of known class labels $\omega_l$, $l \in \{1, \ldots, L\}$, and therefore the estimated class label of pixel $n$ is written as $y_n = \omega_l$.

We assume that there are $R$ image segmentation experts, where the $r$th expert provides a segmentation decision for a given pixel feature $x_n$ from a learnt parameter vector $\theta_r$. For a WGMM expert, the parameter vector $\theta_r$ is defined as the set of component mixing coefficients $p_l(m)$, means $\mu_{lm}$, and covariances $\Sigma_{lm}$ of the $M$ component Gaussians, $m \in \{1, \ldots, M\}$, for each class $\omega_l$, $l \in \{1, \ldots, L\}$. On segmentation of an image, the $r$th expert provides an estimate of the a posteriori probability that the feature vector associated with pixel $n$ belongs to a given class $\omega_l$, namely $p(y_n = \omega_l \mid x_n, \theta_r)$, for all $n \in \{1, \ldots, N\}$. In order to combine the decisions of the different experts, the joint probability of all segmentation decisions is required. Using Bayes' rule, the combined a posteriori probability for class $\omega_l$ can be computed from the segmentation experts as follows:

$$
p(y_n = \omega_l \mid x_n, \theta_1, \ldots, \theta_R) = \frac{\prod_{r=1}^{R} p(y_n = \omega_l \mid x_n, \theta_r)\, p(\omega_l)}{p(x_n, \theta_1, \ldots, \theta_R)} \qquad (11.26)
$$
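The per-class posterior that each WGMM expert supplies can be sketched in code. This is a minimal illustration under simplifying assumptions, not the chapter's implementation: it assumes a scalar pixel feature, 1-D component Gaussians (scalar variances in place of full covariances), and explicit class priors; the function and variable names are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """1-D Gaussian density N(x; mean, var)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def expert_posteriors(x_n, mix, means, variances, priors):
    """Posterior p(y_n = omega_l | x_n, theta_r) for one WGMM expert.

    mix, means, variances : arrays of shape (L, M), the per-class GMM
        parameters (mixing coefficients, component means, variances).
    priors : array of shape (L,), the class priors p(omega_l).
    Returns an array of shape (L,) that sums to 1.
    """
    # Class-conditional likelihood p(x_n | omega_l): mixture of M Gaussians.
    likelihood = (mix * gaussian_pdf(x_n, means, variances)).sum(axis=1)
    joint = likelihood * priors          # p(x_n | omega_l) p(omega_l)
    return joint / joint.sum()           # normalise over the L classes
```

For example, with two classes whose single mixture components are centred at 0 and 2, a feature $x_n = 0$ yields a higher posterior for the first class.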

where $p(\omega_l)$ is the prior probability for each class $\omega_l$ (assumed equal for all classes, i.e. $1/L$), and $p(x_n, \theta_1, \ldots, \theta_R)$ is the unconditional joint probability defined as

$$
p(x_n, \theta_1, \ldots, \theta_R) = \sum_{k=1}^{L} \prod_{r=1}^{R} p(y_n = \omega_k \mid x_n, \theta_r)\, p(\omega_k) \qquad (11.27)
$$
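The product-rule combination of the $R$ expert posteriors for one pixel can be sketched as follows. This is an assumption-laden illustration rather than the chapter's code: `combine_experts` and the array shapes are hypothetical, and each expert is assumed to return a normalised posterior over the $L$ classes.

```python
import numpy as np

def combine_experts(posteriors, priors):
    """Combine R expert decisions for one pixel via the product rule.

    posteriors : array of shape (R, L), where row r holds
        p(y_n = omega_l | x_n, theta_r) for expert r.
    priors : array of shape (L,), the class priors p(omega_l).
    Returns the combined posterior over the L classes.
    """
    # Numerator: prod_r p(y_n = omega_l | x_n, theta_r) * p(omega_l)
    numerator = np.prod(posteriors, axis=0) * priors
    # Denominator: the unconditional joint probability (sum over classes)
    return numerator / numerator.sum()
```

The final label for the pixel is then the class with the largest combined posterior, e.g. `np.argmax(combine_experts(posteriors, priors))`.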

On the basis of this nomenclature and equal priors for each class, the following two sections detail the "ensemble-based combination rules" (Section 4.4.2) and then propose a novel strategy for combining results, called the "adaptive weighted model (AWM)" (Section 4.4.3).
