Clustering Methods

Clustering is a natural approach to image segmentation, since partitions of similar intensity or texture can be viewed as distinct clusters, much as human beings perceive objects. Let x_i, i = 1, ..., N, be a sample of the input space, and let C_j ⊂ C, j = 1, ..., M, be one class of a total of M classes. A clustering algorithm determines the classes C_j and assigns every sample x_i to one of them. In hard clustering, a sample belongs to exactly one class, i.e., C_k ∩ C_j = ∅ for all k ≠ j. In fuzzy clustering, a sample can be classified into more than one class with different membership values (degrees of similarity) [11]; the membership values of one sample sum to unity. Categorizations and summaries of most clustering techniques can be found in [12, 18, 19].
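The distinction between hard and fuzzy assignment can be sketched in a few lines. The snippet below (an illustrative sketch, not from the text; the 1-D intensity centers and function names are hypothetical) assigns a single sample either to one winning class or to all classes via the standard FCM membership formula, whose values sum to unity as stated above.

```python
import numpy as np

def hard_assign(x, centers):
    """Hard clustering: the sample belongs to exactly one class."""
    d = np.abs(centers - x)          # distance to each cluster center
    return int(np.argmin(d))         # index of the single winning class

def fuzzy_memberships(x, centers, m=2.0):
    """Fuzzy clustering: a membership value for every class.

    Uses the standard FCM membership formula with fuzzifier m;
    the returned values sum to unity.
    """
    d = np.abs(centers - x) + 1e-12  # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()           # normalize so memberships sum to 1

centers = np.array([10.0, 50.0, 200.0])  # hypothetical intensity centers
print(hard_assign(42.0, centers))        # -> 1 (closest to center 50)
print(fuzzy_memberships(42.0, centers))  # largest membership for class 1
```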

k-means (or c-means) and its fuzzy version FCM are two well-known classical clustering algorithms used for image segmentation. A comparative study of k-means and FCM is presented in [20]. Applications of these algorithms and their variations to image segmentation can be found in [21-25].
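As a concrete illustration of k-means applied to intensity-based segmentation, the following minimal Lloyd-iteration sketch clusters 1-D pixel intensities into k classes. The synthetic two-level "image" and the function name are assumptions for the example, not part of the text.

```python
import numpy as np

def kmeans_1d(pixels, k, n_iter=20, seed=0):
    """Minimal Lloyd's k-means on pixel intensities (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from distinct intensity values
    centers = rng.choice(np.unique(pixels), size=k, replace=False).astype(float)
    for _ in range(n_iter):
        # Assign each pixel to its nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return centers, labels

# Synthetic "image": dark background and a bright object
pixels = np.concatenate([np.full(50, 20.0), np.full(50, 220.0)])
centers, labels = kmeans_1d(pixels, k=2)
```

On such well-separated data the centers converge to the two intensity levels, segmenting background from object.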

The clustering techniques discussed below, namely AFLC, DA, k-means, and FCM, can be regarded as optimization processes that seek to reduce misclassification by minimizing specific cost functions or system energy functions. Unlike the classical Bayesian classifier, which requires training, these clustering techniques are unsupervised. The complexity of these algorithms, however, varies. k-means and FCM are relatively simple and easier to implement, but not as effective as DA and AFLC, as will be demonstrated in the next section. The main problem inherent in both k-means and FCM is that the initial guess of the actual number of clusters present in a dataset is crucial to the convergence of the algorithms.
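To make the cost-minimization view concrete, the sketch below (illustrative only; 1-D data and function names are assumptions) computes the objective functions that k-means and FCM minimize: a sum of squared distances to assigned centers, and its membership-weighted fuzzy counterpart.

```python
import numpy as np

def kmeans_cost(X, centers, labels):
    """k-means objective: sum of squared distances to assigned centers."""
    return float(np.sum((X - centers[labels]) ** 2))

def fcm_cost(X, centers, U, m=2.0):
    """FCM objective: membership-weighted sum of squared distances.

    U[i, j] is the membership of sample i in class j, raised to the
    fuzzifier m before weighting each squared distance.
    """
    d2 = (X[:, None] - centers[None, :]) ** 2
    return float(np.sum((U ** m) * d2))

# A perfectly clustered toy dataset drives both costs to zero
X = np.array([0.0, 0.0, 10.0, 10.0])
centers = np.array([0.0, 10.0])
labels = np.array([0, 0, 1, 1])
U = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(kmeans_cost(X, centers, labels))  # -> 0.0
print(fcm_cost(X, centers, U))          # -> 0.0
```

Each iteration of the respective algorithm lowers (or leaves unchanged) its cost, which is why a poor initial choice of the number of clusters can trap the process in a bad local minimum.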
