K-Nearest Neighbors

The voting k-nearest neighbors classification procedure is a very popular classification scheme that does not rely on any assumption about the structure of the underlying density function.

As with any nonparametric technique, the resulting classification error is the smallest achievable error for a given set of data. This is because the technique implicitly estimates the density function of the data; the classifier therefore becomes the Bayes classifier if the density estimates converge to the true densities as the number of samples tends to infinity [38].

In order to classify a test sample $X$, the $k$ nearest neighbors to the test sample are selected from the overall training data, and the number of neighbors from each class $\omega_i$ among the $k$ selected samples is counted. The test sample is then assigned to the class represented by the majority of the $k$ nearest neighbors. That is,

$$k_j = \max\{k_1, \ldots, k_L\} \;\Rightarrow\; X \in \omega_j, \qquad k_1 + \cdots + k_L = k,$$

where $k_j$ is the number of neighbors from class $\omega_j$, $j = 1, \ldots, L$, among the selected neighbors. Usually, the same metric is used to measure the distance to samples of each class.
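As a concrete illustration, the following Python sketch implements this voting rule under the assumption of a Euclidean metric; the function name knn_classify and the array-based interface are illustrative choices, not taken from the source.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k):
    """Classify a single test sample x by a majority vote
    among its k nearest training samples (Euclidean metric assumed)."""
    # Distance from x to every training sample.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training samples.
    nearest = np.argsort(dists)[:k]
    # Per-class neighbor counts k_1, ..., k_L with k_1 + ... + k_L = k.
    votes = Counter(y_train[nearest])
    # Assign x to the class w_j with the largest count k_j.
    return votes.most_common(1)[0][0]
```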

Figure 2.13 shows an example of a 5-nearest neighbors decision. Sample X is classified as a member of the black class, since three of its five nearest neighbors belong to the black class while only two belong to the white class.
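A small usage example of the sketch above can reproduce this 3-versus-2 vote; the coordinates below are hypothetical and merely stand in for the layout of Figure 2.13.

```python
# Hypothetical 2-D data: three "black" samples near x, two "white"
# samples somewhat farther, and one "white" sample far away.
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],   # black
                    [1.0, 1.0], [1.1, 0.9],               # white (nearby)
                    [3.0, 3.0]])                          # white (far away)
y_train = np.array(["black", "black", "black",
                    "white", "white", "white"])
x = np.array([0.4, 0.4])
print(knn_classify(X_train, y_train, x, k=5))  # -> "black" (3 black vs. 2 white)
```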

Figure 2.13: A 5-nearest neighbors example.
