f(y_j; θ) = Σ_{k=1}^{K} π_k f_k(y_j; μ_k, σ_k²),   (1)

where y_j is the intensity of voxel j, and f_k is the Gaussian distribution parameterized by a mean μ_k and variance σ_k². The variables π_k are mixing coefficients that weight the contribution of each density function. Gaussian classifiers require training data, similar to the kNN classifier implementation. Using the training data, the parameters μ_k, σ_k², and π_k are estimated for each class. Gaussian clustering, by contrast, is an unsupervised technique in which no interactive training of the data is performed; it is typically implemented using the expectation-maximization (EM) algorithm. A voxel is then classified into the class that yields the highest posterior probability. A voxel may be left unclassified if it lies farther than a predetermined number of standard deviations from all the class centers.
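The unsupervised clustering and classification steps described above can be sketched as follows. This is a minimal illustration, not the article's implementation: it uses scikit-learn's GaussianMixture (which fits μ_k, σ_k², and π_k by EM) on synthetic one-dimensional voxel intensities, assigns each voxel to the class with the highest posterior probability, and leaves a voxel unclassified (label -1) if it lies more than a chosen number of standard deviations from every class mean. The intensity values, the number of classes, and the `max_std` threshold are all assumed for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic voxel intensities from three hypothetical tissue classes
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(30, 5, 500),
                    rng.normal(80, 8, 500),
                    rng.normal(140, 10, 500)]).reshape(-1, 1)

# Unsupervised Gaussian clustering: EM estimates mu_k, sigma_k^2, pi_k
gmm = GaussianMixture(n_components=3, random_state=0).fit(y)

# Classify each voxel into the class with the highest posterior probability
labels = gmm.predict(y)

# Leave a voxel unclassified (-1) if it is farther than `max_std`
# standard deviations from all class centers
max_std = 3.0
means = gmm.means_.ravel()                 # (n_classes,)
stds = np.sqrt(gmm.covariances_.ravel())   # (n_classes,)
dist = np.abs(y - means) / stds            # (n_voxels, n_classes)
unclassified = np.all(dist > max_std, axis=1)
labels = np.where(unclassified, -1, labels)
```

In practice the mixture would be fitted to the full image histogram, and spatial regularization (e.g. a Markov random field prior) is often added on top of this purely intensity-based scheme.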