A two-stage feature selection algorithm based on redundancy and relevance

Bibliographic Details
Main Authors: Antioquia, Arren Matthew C., Azcarraga, Arnulfo P.
Format: text
Published: Animo Repository 2018
Online Access:https://animorepository.dlsu.edu.ph/faculty_research/4428
Institution: De La Salle University
Description
Summary: Resulting from technological advancements, it is now possible to regularly collect large volumes of data and to use these data for different applications. However, this results in very large numbers of samples as well as features. Dealing with high-volume and high-dimensional data is a major challenge for machine learning algorithms, especially in terms of memory requirements and model training time. Fortunately, many of the features in the collected data are usually correlated, and some can even be completely irrelevant for specific classification or pattern recognition tasks. By the nature of high-dimensional data, the large set of features can be reduced by removing redundant and irrelevant features. A two-stage feature selection algorithm based on feature redundancy and feature relevance is proposed in this paper. The proposed feature selection algorithm employs a hybrid model that combines filter and wrapper schemes to select the optimal feature subset. Five datasets from different domains are used to test the performance of the proposed feature selection algorithm based on three well-known machine learning algorithms, namely, k-Nearest Neighbor, Decision Trees, and Multilayer Perceptrons. Despite reducing the number of features, the classification performance of the selected feature subsets is on par with, or even significantly higher than, that of the original feature set. Compared with other state-of-the-art feature selection algorithms, the proposed method achieves higher classification accuracy with an even smaller number of features. © 2018 IEEE.
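
The hybrid filter-plus-wrapper design described in the abstract can be illustrated with a short sketch. The following is not the authors' exact algorithm, only a minimal Python example assuming a mutual-information relevance filter with a correlation-based redundancy cutoff (the 0.9 threshold is an arbitrary assumption) for the first stage, and scikit-learn's SequentialFeatureSelector wrapped around a k-Nearest Neighbor classifier for the second stage; the dataset is a stand-in, not one of the five used in the paper.

```python
# Hypothetical two-stage (filter + wrapper) feature selection sketch.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# Stage 1 (filter): rank features by relevance to the class label, then
# drop any feature highly correlated with an already-kept one (redundancy).
relevance = mutual_info_classif(X, y, random_state=0)
candidates = np.argsort(relevance)[::-1]  # most relevant first
kept = []
for f in candidates:
    if relevance[f] <= 0:  # estimated irrelevant feature: skip it
        continue
    corr = [abs(np.corrcoef(X[:, f], X[:, k])[0, 1]) for k in kept]
    if all(c < 0.9 for c in corr):  # 0.9 is an assumed redundancy cutoff
        kept.append(f)
X_filtered = X[:, kept]

# Stage 2 (wrapper): search the filtered subset with a classifier in the
# loop -- here, sequential forward selection evaluated by 5-fold CV k-NN.
sfs = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=5),
    n_features_to_select=min(10, X_filtered.shape[1] - 1),
    direction="forward",
    cv=5,
)
sfs.fit(X_filtered, y)
selected = [kept[i] for i in np.flatnonzero(sfs.get_support())]
print("Selected original feature indices:", selected)
```

The cheap filter stage shrinks the search space before the expensive wrapper stage runs, which is the usual motivation for hybrid schemes of this kind; swapping the k-NN evaluator for a Decision Tree or Multilayer Perceptron, as the paper's experiments do, only requires changing the estimator passed to the wrapper.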