is typically used as an activation function for each of the processing neurons in the ANN, where Opj is the jth element of the output pattern produced by the input pattern Opi. Then, using the back-propagation concept and the Widrow-Hoff learning rule, the weights between pairs of neurons are iteratively adjusted so that the difference between the actual output values and the desired output values is minimized. Initially, the weights are randomly assigned. The adjusted weights are then calculated as
\Delta w_{ji}(k+1) = \eta\,\delta_{pj}\,O_{pi} + \alpha\,\Delta w_{ji}(k), \qquad w_{ji}(k+1) = w_{ji}(k) + \Delta w_{ji}(k+1)
where η is the learning rate, α is the momentum term that determines the effect of past weight changes on the current change, k is the iteration number, and δpj is the error between the desired and actual ANN output values. All the final weights in the ANN can be determined when either the error δpj falls below a predetermined value (e.g., 0.001) or the training iteration number k reaches a predetermined limit (e.g., 3000). However, setting a fixed threshold for the error δpj or the iteration number k cannot guarantee that the ANN is optimally trained. A safer approach based on available training and
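The iterative update described above can be sketched for a single sigmoid neuron. This is a minimal illustration, not the authors' implementation: the learning rate, momentum value, toy patterns, and random weight initialization are all assumptions chosen for demonstration.

```python
import math
import random

def train(patterns, eta=0.5, alpha=0.9, err_tol=1e-3, max_iter=3000):
    """Train one sigmoid neuron with the momentum update
    dw(k+1) = eta * delta_pj * O_pi + alpha * dw(k).
    Stops when the sum-squared error falls below err_tol or
    max_iter iterations are reached (the two criteria in the text)."""
    random.seed(0)
    n = len(patterns[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]  # random initial weights (+1 bias)
    dw = [0.0] * (n + 1)                                   # previous weight changes (momentum)
    sse, k = float("inf"), 0
    for k in range(max_iter):
        sse = 0.0
        for x, t in patterns:
            xs = x + [1.0]                                 # append bias input
            o = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, xs))))
            delta = (t - o) * o * (1.0 - o)                # sigmoid error term delta_pj
            sse += (t - o) ** 2
            for i in range(n + 1):
                dw[i] = eta * delta * xs[i] + alpha * dw[i]
                w[i] += dw[i]
        if sse < err_tol:                                  # error threshold reached
            break
    return w, sse, k

# Hypothetical AND-like training patterns, for illustration only
patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, sse, k = train(patterns)
```

Note that the stopping test mirrors the text's two criteria: it halts as soon as the error is small enough, or after a fixed number of iterations otherwise.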
