A novel ensemble ELM for human activity recognition using smartphone sensors
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/10356/150995
Institution: Nanyang Technological University
Summary: Human activity recognition plays an important role in many applications, including ubiquitous computing, health-care services, and smart buildings. Because smartphones are nonintrusive, their sensors are widely used to identify human activities. Since the signals of smartphone sensors are quite noisy, feature engineering is performed to extract more discriminative representations, after which various machine learning algorithms can be employed to recognize different human activities. The extreme learning machine (ELM) has been shown to be effective in classification tasks, with an extremely fast learning speed, and its inherent randomness makes it naturally suited to ensemble learning. In this paper, we propose a novel ensemble ELM algorithm for human activity recognition using smartphone sensors. Gaussian random projection is employed to initialize the input weights of the base ELMs, generating greater diversity among them and thereby boosting the performance of the ensemble. Real experimental data are used to evaluate the proposed approach, and we compare it with several state-of-the-art approaches from the literature. The experimental results indicate that our ensemble ELM approach outperforms these approaches, achieving recognition accuracies of 97.35% and 98.88% on two datasets.
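The abstract describes the method only at a high level. As a rough illustration of the general idea, the sketch below trains an ensemble of basic ELMs whose input weights are drawn independently from a Gaussian (standing in for the Gaussian random projection mentioned above) and averages their output scores. The class and function names, the 1/sqrt(d) weight scale, the sigmoid activation, and the score-averaging combination rule are all illustrative assumptions, not details taken from the paper.

    import numpy as np

    class ELM:
        """A basic single-hidden-layer extreme learning machine classifier.

        Input weights are Gaussian-random, as a stand-in for the Gaussian
        random projection the paper describes; the paper's exact
        initialization scheme may differ.
        """

        def __init__(self, n_hidden=256, rng=None):
            self.n_hidden = n_hidden
            self.rng = rng or np.random.default_rng()

        def _hidden(self, X):
            # Random projection followed by a nonlinearity (sigmoid here,
            # an assumed choice).
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

        def fit(self, X, y, n_classes):
            d = X.shape[1]
            # Gaussian random input weights; the 1/sqrt(d) scale is a common
            # random-projection convention, not taken from the paper.
            self.W = self.rng.normal(0.0, 1.0 / np.sqrt(d),
                                     size=(d, self.n_hidden))
            self.b = self.rng.normal(0.0, 1.0, size=self.n_hidden)
            H = self._hidden(X)
            T = np.eye(n_classes)[y]  # one-hot targets
            # Output weights via the Moore-Penrose pseudoinverse, the
            # standard ELM training step.
            self.beta = np.linalg.pinv(H) @ T
            return self

        def decision(self, X):
            return self._hidden(X) @ self.beta

    def ensemble_elm_predict(X_train, y_train, X_test, n_classes,
                             n_models=25, n_hidden=256, seed=0):
        """Train n_models ELMs with independent Gaussian input weights and
        average their scores (one plausible combination rule; the paper may
        combine its base learners differently)."""
        rng = np.random.default_rng(seed)
        scores = np.zeros((X_test.shape[0], n_classes))
        for _ in range(n_models):
            elm = ELM(n_hidden=n_hidden, rng=rng)
            scores += elm.fit(X_train, y_train, n_classes).decision(X_test)
        return scores.argmax(axis=1)

Because each base ELM draws its own random input weights, the members of the ensemble disagree in useful ways, which is the diversity the abstract credits for the performance boost.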