4.3 Mutual Information

The minimization of joint entropy H(A, B) has been used for image registration [17,18], but it has been found to be unreliable. The use of this measure involves the implicit assumption that large regions in the two images being aligned should increase their degree of overlap as the images approach registration. If the overlap of large regions of the images is not maximized at registration, then the joint entropy will not be a minimum at registration. As has already been stated, intermodality image registration involves aligning images with very different fields of view. It is likely that the correct alignment will involve only part of each image, i.e., only a proportion of the information in each image. A solution to this problem is to consider the information contributed to the overlapping volume by each image being registered as well as the joint information. The information contributed by the images is simply the entropy of the portion of the image that overlaps with the other image volume:

$$H(A) = -\sum_{a} p_A(a) \log p_A(a), \qquad H(B) = -\sum_{b} p_B(b) \log p_B(b)$$
Here, p_A and p_B are the marginal probability distributions, which can be thought of as the projections of the joint PDF onto the axes corresponding to intensities in images A and B, respectively. It is important to remember that the marginal entropies are not constant during the registration process. Although the information content of the images being registered is constant (subject to slight changes caused by interpolation during transformation), the information content of the portion of each image that overlaps with the other image will change with each change in the estimated registration transformation. Communication theory provides a technique for measuring the joint entropy with respect to the marginal entropies. This measure, known as mutual information I(A, B), was independently and simultaneously proposed for intermodality medical image registration by researchers in Leuven, Belgium [18,19], and at MIT in the United States [1,20]:

$$I(A, B) = H(A) + H(B) - H(A, B) = \sum_{a,b} p_{AB}(a, b) \log \frac{p_{AB}(a, b)}{p_A(a)\, p_B(b)}$$
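To make the computation concrete, the following sketch (not from the original text; the function names and the 64-bin histogram are assumptions made here) estimates the marginal entropies and I(A, B) from a single joint histogram of corresponding voxel pairs, in Python with NumPy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy -sum p log p of a probability array (0 log 0 = 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(a, b, bins=64):
    """Estimate I(A, B) = H(A) + H(B) - H(A, B) from intensity arrays
    a and b sampled over the overlapping volume (same shape,
    voxel-for-voxel correspondence)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    # Marginal PDFs are projections of the joint PDF onto each axis;
    # they must be recomputed at every transformation estimate because
    # the overlapping portion of each image changes with the transform.
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    return entropy(p_a) + entropy(p_b) - entropy(p_ab)
```

Note that the marginals are obtained here by projecting the joint PDF, so both the joint and the marginal entropies vary with the overlap, exactly as described above.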
Mutual information can qualitatively be thought of as a measure of how well one image explains the other; it is maximized at the optimal alignment. The measure can be described more rigorously in statistical terms [19]. Considering images A and B once again to be random variables with a joint probability distribution p_AB and marginal probability distributions p_A and p_B, these variables are statistically independent if p_AB(a, b) = p_A(a) p_B(b), whereas they are maximally dependent if they are related by a one-to-one mapping T, in which case p_A(a) = p_B(T(a)) = p_AB(a, T(a)). Mutual information measures the distance between the joint distribution p_AB(a, b) and the distribution associated with complete independence, p_A(a) p_B(b): it is zero when the images are independent and reaches its maximum, H(A) = H(B), under a one-to-one mapping.
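These two extreme cases can be checked numerically with the mutual_information sketch above (synthetic data, illustrative only):

```python
rng = np.random.default_rng(0)
a = rng.integers(0, 64, size=(64, 64)).astype(float)
b = rng.integers(0, 64, size=(64, 64)).astype(float)

print(mutual_information(a, a))  # one-to-one mapping: approaches H(A), about log(64)
print(mutual_information(a, b))  # independent images: near zero
                                 # (histogram estimation bias keeps it slightly positive)
```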

4.4 Normalized Mutual Information

Maximizing mutual information is an appealing voxel similarity measure for intermodality registration, both because of its success across several application areas (see Section 6) and because the information theory principles underlying it lend it scientific credibility. There is certainly more scientific notation in papers describing mutual information registration techniques than is used to describe other approaches. It is, therefore, tempting to think of mutual information as being rigorously arrived at, in contrast with the heuristic approaches of others. It is probably a mistake to think in these terms. There are some shortcomings of mutual information for intermodality image registration, which have led investigators to propose modified "information theoretic" measures that are essentially heuristic, as they do not exist in the communication theory literature and have not yet been shown to have a strong theoretical basis. In communication theory, mutual information is a direct measure of the amount of information passed between transmitter and receiver. During image registration, however, different transformation estimates are evaluated, and these estimates result in varying degrees of overlap between the two images. In communication theory terms, each transformation estimate results in a different amount of information being transmitted (and received). As a consequence, mutual information as a registration criterion is not invariant to the overlap between the images, though it is better in this respect than joint entropy.

This problem has been addressed by proposing various normalized forms of mutual information that are more overlap independent. Three normalization schemes have so far been proposed in journal articles. The following equations were mentioned in passing in the discussion section of Maes et al. [19]:

$$\tilde{I}_1(A, B) = \frac{2\, I(A, B)}{H(A) + H(B)}$$

$$\tilde{I}_2(A, B) = H(A, B) - I(A, B)$$
Studholme has proposed an alternative normalization devised to overcome the sensitivity of mutual information to changes in image overlap [21]:

$$\tilde{I}_3(A, B) = \frac{H(A) + H(B)}{H(A, B)}$$
This version of normalized mutual information has been shown to be considerably more robust than standard mutual information.
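As a sketch under the same assumptions as the earlier snippets (and reusing the entropy helper defined there), all three normalized measures can be computed from one joint histogram:

```python
def normalized_measures(a, b, bins=64):
    """Return (I1, I2, I3) for the overlapping intensity arrays a, b:
    I1 = 2 I(A,B) / (H(A) + H(B)),  I2 = H(A,B) - I(A,B),
    I3 = (H(A) + H(B)) / H(A,B)  (Studholme's measure)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    h_a = entropy(p_ab.sum(axis=1))   # marginal entropy H(A) over the overlap
    h_b = entropy(p_ab.sum(axis=0))   # marginal entropy H(B) over the overlap
    h_ab = entropy(p_ab)              # joint entropy H(A, B)
    mi = h_a + h_b - h_ab             # mutual information I(A, B)
    return 2.0 * mi / (h_a + h_b), h_ab - mi, (h_a + h_b) / h_ab
```

Note that, unlike the other two measures, Ĩ2 decreases as the images become more dependent, so it behaves as a distance to be minimized rather than a similarity to be maximized.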
