Hybrid meta-heuristic algorithm based parameter optimization for extreme learning machines classification
Main Author:
Format: Thesis
Language: English
Published: 2021
Subjects:
Online Access: http://eprints.utm.my/id/eprint/101549/1/OyekaleAbelAladePSC2021.pdf
               http://eprints.utm.my/id/eprint/101549/
               http://dms.library.utm.my:8080/vital/access/manager/Repository/vital:150572
Institution: Universiti Teknologi Malaysia
Summary: Most classification algorithms require manual parameter tuning, which affects both training time and accuracy. Extreme Learning Machines (ELM) emerged as a fast-training machine learning algorithm that eliminates parameter tuning by randomly assigning the input weights and biases and analytically determining the output weights with the Moore-Penrose generalized inverse. However, the random assignment does not guarantee an optimal set of input weights and biases for the hidden neurons, which leads to ELM instability and convergence to local minima. ELM performance is also affected by the network structure, especially the number of hidden nodes: too many hidden neurons increase structural complexity and computational time, while too few impair generalization and reduce accuracy. In this study, a heuristic-based ELM (HELM) scheme was first designed to secure an optimal ELM structure. The results of HELM were validated against five rule-based hidden neuron selection schemes, and HELM was compared with Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Classification and Regression Tree (CART) to assess its relative competitiveness. Secondly, to improve the stability of ELM, the Moth-Flame Optimization algorithm was hybridized with ELM as MFO-ELM. MFO generates moths and optimizes their positions in the search space with a logarithmic spiral model to obtain optimal values of the input weights and biases, which are then passed into the ELM input space. However, this did not completely solve the problem of becoming stuck in local extrema, since MFO alone could not ensure a good balance between exploration and exploitation of the search space. Thirdly, a co-evolutionary hybrid Cross-Entropy Moth-Flame Optimization Extreme Learning Machine (CEMFO-ELM) scheme was proposed. The hybrid of the CE and MFO meta-heuristic algorithms balanced exploration and exploitation in the search space and reduced the possibility of being trapped in local minima. The schemes were evaluated on selected medical datasets from the University of California, Irvine (UCI) machine learning repository and compared with standard ELM, PSO-ELM, and CSO-ELM. The hybrid MFO-ELM algorithm enhanced the selection of optimal weights and biases for ELM, improving its classification accuracy by 0.4914% to 6.0762%, and by up to 8.9390% over the other meta-heuristic-optimized ELM algorithms compared. The convergence curves show that the proposed hybrid CEMFO meta-heuristic algorithm balanced exploration and exploitation in the search space, improving stability by up to 53.75%. Overall, the proposed CEMFO-ELM provided better generalization performance on the classification of medical datasets. CEMFO-ELM is therefore a suitable tool not only for medical classification problems but potentially for other real-world problems as well.
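As a rough illustration of the ELM training step the abstract describes (random input weights and biases, output weights computed analytically with the Moore-Penrose generalized inverse), the sketch below shows a minimal single-hidden-layer ELM classifier. The function names, sigmoid activation, and one-hot target encoding are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a single-hidden-layer ELM classifier (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, y, n_hidden, seed=None):
    """Train an ELM: random input weights/biases, analytic output weights.

    X : (n_samples, n_features) training inputs
    y : (n_samples,) integer class labels
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    n_classes = int(y.max()) + 1

    # Randomly assign input weights and hidden biases (no iterative tuning).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)

    # Hidden-layer output matrix H.
    H = sigmoid(X @ W + b)

    # One-hot encode the targets and solve for the output weights with the
    # Moore-Penrose generalized inverse: beta = pinv(H) @ T.
    T = np.eye(n_classes)[y]
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict class labels with a trained ELM."""
    H = sigmoid(X @ W + b)
    return np.argmax(H @ beta, axis=1)
```

The abstract also describes MFO's logarithmic spiral model for moving candidate solutions (moths) toward the best solutions found so far (flames). A hedged sketch of that position update follows, using the standard MFO formulation with spiral constant b and a random path coefficient t; in an MFO-ELM scheme, each moth would encode a candidate set of ELM input weights and biases, and its fitness would be the ELM classification error.

```python
# Hedged sketch of the Moth-Flame Optimization spiral position update.
import numpy as np

def mfo_spiral_update(moth, flame, b=1.0, rng=None):
    """Move one moth toward its flame along a logarithmic spiral."""
    rng = np.random.default_rng(rng)
    distance = np.abs(flame - moth)                # distance between moth and flame
    t = rng.uniform(-1.0, 1.0, size=moth.shape)    # random path coefficient in [-1, 1]
    # Logarithmic spiral: the moth circles inward toward the flame's position.
    return distance * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flame
```

In such a loop, each moth (a flattened weight/bias vector) would be scored by training an ELM with its values and then updated toward the best-scoring flames at every iteration.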