Dynamic unsupervised feedforward neural network clustering / Roya Asadi

Bibliographic Details
Main Author: Asadi, Roya
Format: Thesis
Published: 2016
Online Access:http://studentsrepo.um.edu.my/9295/1/Roya_Asadi.pdf
http://studentsrepo.um.edu.my/9295/6/roya.pdf
http://studentsrepo.um.edu.my/9295/
Institution: Universiti Malaya
Description
Summary: Artificial neural networks are computational models inspired by neurobiology for enhancing and testing computational analogues of neurons. In a feedforward neural network (FFNN), data processing occurs in only one forward direction, from the input layer to the output layer, without any backward loop. Unsupervised FFNN (UFFNN) clustering has great capabilities, such as an inherently distributed parallel processing architecture; adjusting the interconnection weights to learn and divide data into meaningful groups with specific goals; classifying related data into similar groups without using any class label; controlling noisy data; and learning the types of input data values based on their weights and properties. In real environments, dynamic data is generally of high volume and high dimensionality; therefore, online dynamic UFFNN (ODUFFNN) clustering methods should be developed with online incremental learning capability. Incremental learning refers to the ability to repeatedly train a network using new data, or to delete unnecessary data, without destroying previously learned prototype patterns. An ODUFFNN method should also adapt to the changes that occur in continuous data and should be able to control noisy data. We reviewed current ODUFFNN clustering methods and identified their limitations and main problems, such as high training time, low accuracy, and high time and memory complexity of clustering, as well as some reasons for these problems. To overcome these problems, we developed a dynamic UFFNN (DUFFNN) clustering model with only one epoch of training. After each arrival of online input data, the DUFFNN dynamically learns and stores important information about the current data, such as the non-random weights, and thereby completes a codebook of the weights. A unique, standard weight vector is then extracted and updated from the codebook. Subsequently, a single-layer DUFFNN calculates the exclusive distance threshold of each data point based on the standard weight vector and clusters the data according to these thresholds. Following the literature, after learning, the model assigns a class label to the input data through the training data in order to improve the quality of the DUFFNN clustering result. The class label of each initially unlabeled input data point is predicted by considering a linear activation function and the exclusive distance threshold. Finally, the number of clusters and the density of each cluster are updated. For evaluation, the clustering performance of the DUFFNN was compared with several related clustering methods on various datasets from the University of California at Irvine Machine Learning Repository, with strong results. For example, the accuracy of the proposed model was measured through the number of clusters, the quantity of correctly classified nodes, and an F-measure, which was 97.71% for the Breast Cancer dataset, 97.24% for Iris, 73.41% for Spam, 90.52% for SPECT Heart, 86.62% for SPECTF Heart, 52.57% for Musk1, 84.31% for Musk2, 66.07% for Arcene, and 27.25% for Yeast. In addition, superior F-measure results between 98.14% and 100% were achieved on the breast cancer dataset from the University of Malaya Medical Center in predicting the survival time of the patients.
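
To make the clustering flow described in the summary more concrete, the Python sketch below mirrors its main steps: a single pass (one epoch) over incoming data, a codebook of weight information, a standard weight vector extracted from the codebook, and an exclusive distance threshold computed per data point. The weighting scheme (inverse of the running spread), the threshold formula, and the grouping of samples by rounded threshold value are illustrative assumptions only; the abstract does not give the thesis's actual formulas.

```python
import numpy as np

def duffnn_cluster_sketch(stream, decimals=2):
    """One-pass clustering sketch loosely following the steps in the abstract.

    Assumed details (not specified in the abstract): attribute weights are the
    inverse of the running spread, the exclusive distance threshold of a sample
    is its weighted distance to the running mean, and samples whose rounded
    thresholds coincide are placed in the same cluster.
    """
    codebook = []   # per-sample information gathered during the single epoch
    clusters = {}   # rounded threshold value -> indices of member samples

    for i, x in enumerate(stream):
        x = np.asarray(x, dtype=float)
        codebook.append(x)
        data = np.vstack(codebook)

        # "standard weight vector" extracted and updated from the codebook
        spread = data.std(axis=0)
        weights = 1.0 / np.where(spread > 0.0, spread, 1.0)

        # "exclusive distance threshold" of the current sample
        centre = data.mean(axis=0)
        threshold = round(float(np.sum(weights * np.abs(x - centre))), decimals)

        clusters.setdefault(threshold, []).append(i)

    return clusters

# Example: four 2-D samples clustered in a single online pass
print(duffnn_cluster_sketch([[5.1, 3.5], [4.9, 3.0], [6.7, 3.1], [6.3, 2.9]]))
```

The one-pass structure reflects the "only one epoch of training" claim: every arriving sample updates the codebook and the weight vector once and is immediately assigned to a cluster, with no revisiting of earlier samples.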
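The evaluation reported above uses an F-measure over the clustering result. The sketch below computes a commonly used class-weighted cluster F-measure from true class labels and predicted cluster labels; the exact variant used in the thesis is not stated in the abstract, so this formulation is an assumption.

```python
import numpy as np

def clustering_f_measure(labels_true, labels_pred):
    """Class-weighted cluster F-measure.

    For every true class, the best F-score over all predicted clusters is
    taken, and the scores are averaged weighted by class size. This is a
    common variant, not necessarily the exact measure used in the thesis.
    """
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = len(labels_true)
    total = 0.0
    for cls in np.unique(labels_true):
        in_cls = labels_true == cls
        best = 0.0
        for clu in np.unique(labels_pred):
            in_clu = labels_pred == clu
            overlap = float(np.sum(in_cls & in_clu))
            if overlap == 0.0:
                continue
            precision = overlap / np.sum(in_clu)
            recall = overlap / np.sum(in_cls)
            best = max(best, 2.0 * precision * recall / (precision + recall))
        total += np.sum(in_cls) / n * best
    return total

# Example: two true classes recovered by three predicted clusters
print(clustering_f_measure([0, 0, 1, 1, 1], [2, 2, 0, 0, 1]))  # about 0.88
```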