## Partial Volume Classification Approach Using Voxel Histograms

Bayesian probability theory can be used to estimate the highest-probability combination of materials within each voxel-sized region. The estimation is based on the histogram of data values within the region. The posterior probability, which is maximized, is based on conditional and prior probabilities derived from assumptions about what is being measured and how the measurement process works [3]. With this information the materials contained within each voxel can be identified based on the sample values for the voxel and its neighbors. Each voxel is treated as a region (see Fig. 3), not as a single point. The sampling theorem [4] allows the reconstruction of a continuous function, p(x), from the samples. All of the values that p(x) takes on within a voxel are then represented by a histogram of p(x) taken over the voxel. Figure 4a shows samples; Fig. 4b shows the function p(x) reconstructed from the samples; and Fig. 4c shows a continuous histogram calculated from p(x).
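The reconstruction-then-histogram step can be sketched in a few lines. This is a simplified illustration, not the chapter's implementation: it reconstructs a band-limited p(x) from 1D samples by sinc interpolation (per the sampling theorem) and approximates the continuous histogram over a voxel by densely sampling p(x) inside the voxel. The function names and the dense-sampling approximation are assumptions made for this sketch.

```python
import numpy as np

def sinc_reconstruct(samples, xs, dx=1.0):
    # Band-limited reconstruction from uniform samples:
    # p(x) = sum_k samples[k] * sinc((x - k*dx) / dx)
    # (np.sinc is the normalized sinc, sin(pi t)/(pi t))
    k = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc((x - k * dx) / dx)) for x in xs])

def voxel_histogram(samples, voxel_lo, voxel_hi, bins=64, n=2000):
    # Approximate the continuous histogram of p(x) over the voxel
    # [voxel_lo, voxel_hi] by evaluating p(x) at n dense points and
    # binning the resulting values.
    xs = np.linspace(voxel_lo, voxel_hi, n)
    values = sinc_reconstruct(samples, xs)
    hist, edges = np.histogram(values, bins=bins, density=True)
    return hist, edges
```

With `density=True` the histogram integrates to one, so it can be treated as an estimate of the probability density of values within the voxel, which is what the basis-function fit below operates on.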

Each voxel is assumed to be a mixture of materials, with mixtures occurring where partial-volume effects occur, i.e., where the band-limiting process blurs measurements of pure materials together. From this assumption, basis functions are derived that model histograms of voxels containing a pure material and of voxels containing a mixture of two materials. Linear combinations of these basis histograms are fit to each voxel, and the most likely combination of materials is chosen probabilistically.

FIGURE 3 Definitions: A sample is a scalar- or vector-valued element of a 2D or 3D dataset; a voxel is the region surrounding a sample.
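The fitting step above can be illustrated with a deliberately simplified sketch. Here a pure-material basis is a Gaussian in feature space (measurement value plus noise), the two-material mixture basis is approximated as a boxcar between the two material values smoothed by the noise kernel, and the fit is an unconstrained least squares with negative coefficients clipped to zero. All function names are hypothetical, and the real method uses basis functions derived from the band-limiting model and a Bayesian fit rather than plain least squares.

```python
import numpy as np

def pure_basis(v, center, sigma):
    # Histogram basis for a voxel of one pure material:
    # a normalized Gaussian peak at that material's value.
    g = np.exp(-0.5 * ((v - center) / sigma) ** 2)
    return g / g.sum()

def mixture_basis(v, cA, cB, sigma):
    # Simplified two-material basis: the blurred boundary sweeps
    # values between cA and cB (a boxcar), then measurement noise
    # smooths the result.
    box = ((v >= min(cA, cB)) & (v <= max(cA, cB))).astype(float)
    kernel = np.exp(-0.5 * ((v - v.mean()) / sigma) ** 2)
    m = np.convolve(box, kernel, mode="same")
    return m / m.sum()

def fit_voxel(hist, bases):
    # Fit a linear combination of basis histograms to the voxel
    # histogram (nonnegativity handled here by simple clipping).
    A = np.column_stack(bases)
    coeffs, *_ = np.linalg.lstsq(A, hist, rcond=None)
    return np.clip(coeffs, 0.0, None)
```

For example, a voxel histogram that is 70% material A and 30% material B should recover coefficients near 0.7 and 0.3 for the two pure bases and near zero for the mixture basis.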

The regions that are classified could be smaller or larger than voxels. Smaller regions would include less information, and so the context for the classification would be reduced and accuracy would suffer. Larger regions would contain more complicated geometry because the features that could be represented would be smaller than the region. Again, accuracy would suffer. Because the spacing of sample values is intrinsically related to the minimum feature size that the reconstructed continuous function, p(x), can represent, that spacing is a natural choice for the size of regions to be classified.


FIGURE 4 Continuous histograms. The scalar data in (a) and (b) represent measurements from a dataset containing two materials, A & B, as shown in Fig. 6. One material has measurement values near vA and the other near vB. These values correspond to the Gaussian-shaped peaks centered around vA and vB in the histograms, which are shown on their sides to emphasize the axis that they share. This shared axis is "feature space".


FIGURE 2 One slice of data from a human brain. (a) The original two-valued MRI data. (b) Four of the identified materials, white matter, gray matter, cerebrospinal fluid, and muscle, separated out into separate images. (c) Overlaid results of the new classification mapped to different colors. Note the smooth boundaries where materials meet and the much lower incidence of misclassified samples than in Fig. 5. See also Plate 16.

