Extensions to the k-AMH algorithm for numerical clustering

Bibliographic Details
Main Authors: Seman, Ali; Mohd Sapawi, Azizian
Format: Article
Language: English
Published: Universiti Utara Malaysia, 2018
Online Access:http://repo.uum.edu.my/24940/1/JICT%2017%204%202018%20587%20599.pdf
http://repo.uum.edu.my/24940/
http://jict.uum.edu.my/index.php/current-issues-1#a
Institution: Universiti Utara Malaysia
Description
Summary: The k-AMH algorithm has been proven efficient in clustering categorical datasets. It can also be used to cluster numerical values with minimal modification to the original algorithm. In this paper, we present two algorithms that extend the k-AMH algorithm to the clustering of numerical values. The original k-AMH algorithm for categorical values uses a simple matching dissimilarity measure, but for numerical values it uses Euclidean distance. The first extension, denoted k-AMH Numeric I, enables the algorithm to cluster numerical values in a fashion similar to k-AMH for categorical data. The second extension, k-AMH Numeric II, adopts the cost function of the fuzzy k-Means algorithm together with Euclidean distance, and has demonstrated performance similar to that of k-AMH Numeric I. The clustering performance of the two algorithms was evaluated on six real-world datasets against a benchmark algorithm, the fuzzy k-Means algorithm. The results indicate that the two algorithms are as efficient as the fuzzy k-Means algorithm when clustering numerical values. Further, on an ANOVA test, k-AMH Numeric I obtained the highest accuracy score of 0.69 for the six datasets combined, with a p-value of less than 0.01, indicating a 95% confidence level. The experimental results show that the k-AMH Numeric I and k-AMH Numeric II algorithms can be used effectively for numerical clustering. The significance of this study is that the k-AMH Numeric algorithms have been demonstrated to be potential solutions for clustering numerical objects.
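
For readers new to the measures named in the summary, the sketch below illustrates, in Python, the simple matching dissimilarity that the original k-AMH uses for categorical data, the Euclidean distance that replaces it in the numeric extensions, and the standard fuzzy k-Means cost function that k-AMH Numeric II is described as adopting. This is a minimal sketch based on the textbook definitions of these quantities, not the authors' implementation; the function names, array shapes, and the fuzziness exponent m are illustrative assumptions.

    import numpy as np

    def simple_matching_dissimilarity(x, y):
        # Number of attribute positions where two categorical objects differ
        # (the dissimilarity measure the original k-AMH uses for categorical data).
        return int(np.sum(np.asarray(x) != np.asarray(y)))

    def euclidean_distance(x, y):
        # Euclidean distance between two numerical objects, the measure the
        # k-AMH Numeric extensions substitute for simple matching.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        return float(np.sqrt(np.sum((x - y) ** 2)))

    def fuzzy_kmeans_cost(X, centers, U, m=2.0):
        # Standard fuzzy k-Means objective: sum over objects i and clusters j of
        # u_ij^m * d(x_i, c_j)^2, the cost function k-AMH Numeric II adopts.
        # X is (n, d) data, centers is (k, d), U is an (n, k) membership matrix,
        # and m is the fuzziness exponent (assumed here to be 2).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return float(np.sum((U ** m) * d2))

In this standard formulation, a lower cost corresponds to memberships and centers that fit the data more tightly, which is the sense in which the summary compares the extensions against the fuzzy k-Means benchmark.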