
According to the hypothesis of Donald Hebb [12], it rewards correlated pre- and postsynaptic activation by a growth term $\Delta s_{\lambda j} \sim y_\lambda\, a_j(x)$. The decay term $\Delta s_{\lambda j} \sim -s_{\lambda j}\, a_j(x)$ acts as a simple means to avoid explosive weight growth.
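Combining the growth and decay terms, a plausible form of the local update is (the exact constants of (34) may differ; the learning rate $\epsilon > 0$ is an assumption):

$$
\Delta s_{\lambda j} \;=\; \epsilon\, a_j(x)\,\bigl(y_\lambda - s_{\lambda j}\bigr),
$$

so each weight is pulled toward the postsynaptic target $y_\lambda$ at a rate proportional to the presynaptic activation $a_j(x)$.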

In contrast to (31), the local update rule (34) requires only information from neurons that are directly coupled to each other by the corresponding synaptic weight. Thus, (34) is inspired by the idea of localized biological information processing.
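As a concrete illustration of this locality, the following sketch (hypothetical variable names; the learning rate eps and the exact constants of (34) are assumptions) updates each output weight using only the presynaptic activation a_j(x), the postsynaptic target y_lambda, and the weight itself:

```python
import numpy as np

def local_update(s, a, y, eps=0.01):
    """One local Hebbian update step for the output weights.

    s   : (L, M) array, output weights s[lam, j]
    a   : (M,)   array, normalized hidden activations a_j(x) for one input x
    y   : (L,)   array, postsynaptic targets, e.g. y[lam] = 1 for the class of x, else 0
    eps : assumed learning rate (not specified in the text)
    """
    growth = np.outer(y, a)          # growth term  ~  y_lam * a_j(x)
    decay  = s * a[np.newaxis, :]    # decay term   ~  s_lam_j * a_j(x)
    return s + eps * (growth - decay)
```

Each entry s[lam, j] is changed using only quantities available at the two neurons it connects; no global error signal, as in the gradient-based rule (31), is needed.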

If we again assume the convergence of the procedure, i.e., $\Delta s_{\lambda j} = 0$, summation over all $x \in X$ yields the stationary output weights.
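Given the growth and decay terms above, this stationarity condition plausibly takes the form (a reconstruction under the assumed update rule)

$$
\sum_{x \in X} \bigl(y_\lambda(x) - s_{\lambda j}\bigr)\, a_j(x) \;=\; 0
\qquad\Longrightarrow\qquad
s_{\lambda j} \;=\; \frac{\sum_{x \in X} y_\lambda(x)\, a_j(x)}{\sum_{x \in X} a_j(x)} ,
$$

i.e., each output weight converges to an activity-weighted average of its postsynaptic target over the training set.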

If we introduce the class label information by $y_\lambda = \delta_{\lambda \lambda(x)}$, the final weights can be calculated in closed form.
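Substituting $y_\lambda = \delta_{\lambda \lambda(x)}$ into the stationary solution above gives, as a plausible reconstruction,

$$
s_{\lambda j} \;=\; \frac{\sum_{x \in X_\lambda} a_j(x)}{\sum_{x \in X} a_j(x)} ,
$$

where $X_\lambda \subset X$ denotes the subset of training vectors belonging to class $\lambda$ (notation assumed here). The weight $s_{\lambda j}$ thus measures the relative share of the activation of hidden neuron $j$ caused by class $\lambda$, which motivates its interpretation as $p(\lambda|j)$ in (37).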

Together with the normalization of the hidden layer activations according to (19), this also results in a normalization of the output activations $y_\lambda(x)$, and thus (also because of $a_j(x) > 0$ and $s_{\lambda j} > 0$) enables a probabilistic interpretation of the classification properties of the GRBF network. This can be explained as follows: According to (19), we can interpret the hidden layer activations $a_j(x)$ as conditional assignment probabilities $p(j|x)$ of the feature vector $x$ to the codebook vector $j$, and, with regard to (37), the $s_{\lambda j}$ as the conditional assignment probability $p(\lambda|j)$ of class $\lambda$ to the codebook vector $j$. Linear signal propagation then yields the a posteriori probability $p(\lambda|x)$ of class $\lambda$ for a given feature vector $x \in X$.
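With these interpretations, $a_j(x) = p(j|x)$ and $s_{\lambda j} = p(\lambda|j)$, the propagation through the linear output layer presumably yields (assuming the class depends on $x$ only through the codebook assignment $j$)

$$
y_\lambda(x) \;=\; \sum_j s_{\lambda j}\, a_j(x) \;=\; \sum_j p(\lambda \mid j)\, p(j \mid x) \;=\; p(\lambda \mid x) .
$$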

Thus, the GRBF perceptron with the local learning rule (34) for the output weights represents a Bayes classifier that does not require the computation of the corresponding a posteriori probabilities from known a priori probabilities by Bayes' rule (22).

Unfortunately, the local training procedure (34) is inferior to the global training procedure (31) with respect to the classification performance of the overall network (see, e.g., [2]). For this reason, the global training procedure is recommended. Its application to tissue classification is presented in Section 8.2.

