M_fcn = {U ∈ M_pcn : U_k ∈ N_fc ∀k};  (5)

M_hcn = {U ∈ M_fcn : U_k ∈ N_hc ∀k},  (6)

where U_k denotes the kth column of U.

FIGURE 4 Label vectors for c = 3 classes; the fuzzy labels satisfy N_f3 = conv(N_h3).

Equations (4), (5), and (6) define, respectively, the possibilistic, fuzzy (or probabilistic, in a statistical context), and crisp c-partitions of X, with M_hcn ⊂ M_fcn ⊂ M_pcn. Crisp partitions have an equivalent set-theoretic characterization: {X_1, ..., X_c} partitions X when X_i ∩ X_j = ∅ for all i ≠ j and X = ∪X_i. Soft partitions are noncrisp ones. Since definite class assignments (tissue types in medical images) for each pixel or window are the usual goal in image segmentation, soft labels y are often transformed into crisp labels H(y) using the hardening function
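The constraints in (4)-(6) are easy to check mechanically. A minimal sketch (the helper name is ours, not from the text) that classifies a c × n membership matrix, given as a list of rows, according to the nesting M_hcn ⊂ M_fcn ⊂ M_pcn:

```python
def partition_type(U, tol=1e-9):
    """Classify a c-by-n membership matrix as 'crisp', 'fuzzy', or
    'possibilistic'.  U is a list of c rows (classes), each of length n."""
    c, n = len(U), len(U[0])
    # Possibilistic requirement: every membership lies in [0, 1] ...
    if any(u < -tol or u > 1 + tol for row in U for u in row):
        raise ValueError("memberships must lie in [0, 1]")
    col_sums = [sum(U[i][k] for i in range(c)) for k in range(n)]
    # ... and every object has positive membership in at least one class.
    if any(s <= tol for s in col_sums):
        raise ValueError("each object needs some positive membership")
    # Fuzzy adds the constraint that each column sums to 1 (column in N_fc);
    # crisp further restricts every entry to {0, 1} (column in N_hc).
    sums_to_one = all(abs(s - 1.0) < tol for s in col_sums)
    binary = all(min(abs(u), abs(u - 1.0)) < tol for row in U for u in row)
    if sums_to_one and binary:
        return "crisp"
    if sums_to_one:
        return "fuzzy"
    return "possibilistic"
```

For instance, a 2 × 3 matrix whose columns are the crisp labels (1,0)^T, (0,1)^T, (0,1)^T is reported as "crisp", while a matrix whose columns do not all sum to 1 falls through to "possibilistic".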

H(y) = e_i ⇔ ‖y − e_i‖ ≤ ‖y − e_j‖ ⇔ y_i ≥ y_j, j ≠ i,  (7)

where ties are broken arbitrarily. H finds the crisp label vector e_i closest to y by finding the maximum coordinate of y, and assigning the corresponding crisp label to the object z that y labels. When y is a noncrisp label, H(y) is called the hardening of y by the maximum membership principle. For example, for the fuzzy label y = (0.37, 0.44, 0.10, 0.09)^T, H(y) = (0, 1, 0, 0)^T. If the four tissue classes represented by the crisp labels are 1 = bone, 2 = fat, 3 = gray matter, and 4 = white matter, the vector y indicates that whatever it labels (here, an image pixel) is slightly more similar to fat than to the other three tissues, and if we had to assign a definite tissue to this pixel, it would be fat. However, this is an example of a very "fuzzy" label; we would have much more confidence that the correct tissue was chosen with a label vector such as y = (0.03, 0.84, 0.05, 0.08)^T. The function in (7) is not the only way to harden soft labels. This is an important operation when images are segmented with soft models, because clinicians want to see "this region is white matter," not "here is a fuzzy distribution of labels over the region." Table 1 contains a crisp, a fuzzy, and a possibilistic partition of n = 3 objects into c = 2 classes. The nectarine, x3, is labeled by the last column of each partition, and in the crisp case it must be (erroneously) given full membership in one of the two crisp subsets partitioning this data. In U1, x3 is labeled "plum."
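Because the crisp labels e_i are the unit vectors, the hardening in (7) reduces to taking the index of the maximum coordinate of y. A brief sketch (the function name is ours, not from the text):

```python
def harden(y):
    """Maximum membership hardening H(y): return the crisp label e_i
    corresponding to the largest coordinate of y.  Ties are broken
    arbitrarily; here the first maximal index wins."""
    i = max(range(len(y)), key=lambda j: y[j])
    return tuple(1 if j == i else 0 for j in range(len(y)))

print(harden((0.37, 0.44, 0.10, 0.09)))  # -> (0, 1, 0, 0), i.e., "fat"
```

Applied to the more confident label (0.03, 0.84, 0.05, 0.08)^T, the same crisp label (0, 1, 0, 0)^T results; (7) discards how peaked the soft label was.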

Noncrisp partitions enable models to (sometimes!) avoid such mistakes. The last column of U2 allocates most (0.6) of the membership of x3 to the plums class, but also assigns a lesser membership (0.4) to x3 as a peach. U3 illustrates possibilistic label assignments for the objects in each class.

To see the relevance of this example to medical images, imagine the classes to be peaches = "people with disease A" and plums = "people with disease B," that x1, x2, and x3 are (images of) patients, and that the columns beneath them represent the extent to which each patient is similar to people with diseases A and B. From U1-U3 we infer that patient 1 definitely has disease A. On the other hand, only U1 asserts that patients 2 and 3 do not; their labels in the second and third partitions leave room for doubt; in other words, more tests are needed. All clinicians know about

TABLE 1 Typical 2-partitions of X = {x1 = peach, x2 = plum, x3 = nectarine}

         U1 ∈ Mh23 (crisp)     U2 ∈ Mf23 (fuzzy)     U3 ∈ Mp23 (possibilistic)
Object   x1    x2    x3        x1    x2    x3         x1    x2    x3
Peaches  1     0     0         1.0   0.2   0.4        1.0   0.2   0.5
Plums    0     1     1         0.0   0.8   0.6        0.0   0.8   0.6
