A novel reformed reduced kernel extreme learning machine with RELIEF-F for classification
Main Authors:
Format: Article
Published: Hindawi, 2022
Subjects:
Online Access: http://eprints.um.edu.my/42962/
Institution: Universiti Malaya
Summary: With the exponential growth of the Internet population, scientists and researchers face ever larger volumes of data to process. Traditional algorithms play a vital role in large-scale classification and regression, but their computational complexity makes them unsuitable for large-scale data. One variant of the Extreme Learning Machine, the Reduced Kernel Extreme Learning Machine (Reduced-KELM), is widely used for classification and has attracted attention from researchers because of its superior performance. It still has limitations, however, such as unstable predictions caused by its random sample selection, and redundant training samples and features arising from large-scale input data. This study proposes a novel model, the Reformed Reduced Kernel Extreme Learning Machine with RELIEF-F (R-RKELM), for human activity recognition. RELIEF-F is applied to discard attributes whose weights are negative. A new sample selection approach further reduces the training samples and replaces the random selection step of Reduced-KELM, addressing both the unstable classification and the computational complexity of the conventional Reduced-KELM. According to the experimental results and statistical analysis, the proposed model achieves better classification performance on the human activity data sets than the baseline models, with accuracies of 92.87% for HAPT, 92.81% for HARUS, and 86.92% for Smartphone, respectively.
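The summary outlines two algorithmic ingredients: RELIEF-F, which drops attributes with negative weights, and a Reduced-KELM classifier whose kernel is computed against a reduced subset of training samples. The record gives only the abstract, so the Python sketch below illustrates the general idea under stated assumptions rather than the authors' implementation: `relieff_weights` is a simplified RELIEF-F scorer, `ReducedKELM` assumes an RBF kernel with illustrative `C` and `gamma` values, and the `landmarks` argument stands in for whatever subset the paper's new sample-selection step would choose.

```python
# Hedged sketch only: not the authors' R-RKELM implementation.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel


def relieff_weights(X, y, n_neighbors=10):
    """Simplified RELIEF-F scoring: features that separate nearest same-class
    neighbours (hits) from nearest other-class neighbours (misses) receive
    positive weights; negative weights mark features to discard."""
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12          # normalise per-feature ranges
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    w = np.zeros(d)
    for i in range(n):
        diff = np.abs(X - X[i]) / span                    # per-feature distance to sample i
        dist = diff.sum(axis=1)
        dist[i] = np.inf                                  # exclude the sample itself
        p_self = priors[classes == y[i]][0]
        for c, p in zip(classes, priors):
            idx = np.where(y == c)[0]
            idx = idx[np.argsort(dist[idx])[:n_neighbors]]
            contrib = diff[idx].mean(axis=0)
            if c == y[i]:
                w -= contrib                              # nearest hits lower the weight
            else:
                w += (p / (1.0 - p_self)) * contrib       # nearest misses raise it
    return w / n


class ReducedKELM:
    """Reduced kernel ELM: the kernel block is computed against a small set of
    landmark samples instead of the full training set."""

    def __init__(self, landmarks, C=100.0, gamma=0.1):
        self.landmarks = landmarks                        # the "reduced" sample subset
        self.C = C                                        # regularisation constant (assumed)
        self.gamma = gamma                                # RBF kernel width (assumed)

    def fit(self, X, y):
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]             # one-hot targets
        K = rbf_kernel(X, self.landmarks, gamma=self.gamma)   # N x Ntilde kernel block
        A = K.T @ K + np.eye(K.shape[1]) / self.C
        self.beta_ = np.linalg.solve(A, K.T @ T)          # ridge-style output weights
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.landmarks, gamma=self.gamma)
        return self.classes_[np.argmax(K @ self.beta_, axis=1)]
```

Under these assumptions, one would keep only the features with positive RELIEF-F weights (e.g. `X[:, relieff_weights(X, y) > 0]`) and pass the subset produced by the paper's sample-selection step as `landmarks`; any simple stratified subset used in its place is only a stand-in for that step.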