HUMAN ACTIVITY RECOGNITION USING WEARABLE DEVICES AND DEEP LEARNING
Human Activity Recognition (HAR) based on motion sensor data, such as accelerometer, gyroscope, and magnetometer readings from wearable devices, provides benefits in the healthcare sector, particularly for patient monitoring in indoor environments. This research aims to develop an optimal algorithm for human activity recognition based on motion sensors, such as the Emotibit.
Main Author: Parluhutan Hutabarat, James
Format: Theses
Language: Indonesia
Online Access: https://digilib.itb.ac.id/gdl/view/86059
Institution: Institut Teknologi Bandung
Record ID: id-itb.:86059
Record updated: 2024-09-13T08:41:26Z
Keywords: human activity recognition, motion sensors, HAR, MFNN, CNN, LSTM, quantization, optimization, edge devices
Building: Institut Teknologi Bandung Library
Country: Indonesia
Collection: Digital ITB
Description:
Human Activity Recognition (HAR) based on motion sensor data, such as accelerometer, gyroscope, and magnetometer readings from wearable devices, provides benefits in the healthcare sector, particularly for patient monitoring in indoor environments. This research aims to develop an optimal algorithm for human activity recognition based on motion sensors, such as the Emotibit. Three deep learning models were developed: a Multi-Layer Feedforward Neural Network (MFNN), a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network. The models were tuned with the Optuna framework for hyperparameter optimization, yielding accuracies of 94.62% for the MFNN, 92.90% for the CNN, and 97.52% for the LSTM. A larger number of sensor input channels led to more accurate predictions. Additionally, this research deploys the deep learning models on edge devices, using model weight quantization as the optimization step. The quantization was applied to the LSTM model, which had the highest accuracy, and the experiments showed that the model size was reduced by up to 90.98% of its original size. This reduction allows the model to run on resource-constrained edge devices such as the Raspberry Pi 4B. After deployment, the model achieved an accuracy of 94.61% with an inference time of approximately 96 milliseconds. Finally, the end-to-end time from data transmission to the activity prediction appearing on the user interface was measured at 114 milliseconds.
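
The abstract states that the MFNN, CNN, and LSTM models were tuned with the Optuna framework. The thesis's actual architectures and search spaces are not reproduced in this record, so the sketch below is only an illustrative Optuna search for an LSTM-style HAR classifier; the Keras layers, the search ranges, and the placeholder arrays standing in for windowed motion-sensor data are all assumptions.

```python
# Hypothetical sketch: Optuna hyperparameter search for an LSTM-based HAR
# classifier. Data shapes, layer sizes, and search ranges are assumptions,
# not the thesis's actual configuration.
import numpy as np
import optuna
import tensorflow as tf

# Placeholder windowed sensor data: (samples, timesteps, channels) and labels.
# In the thesis these would come from accelerometer, gyroscope, and
# magnetometer streams; here they are random stand-ins with 6 activity classes.
X_train = np.random.rand(512, 128, 9).astype("float32")
y_train = np.random.randint(0, 6, size=512)
X_val = np.random.rand(128, 128, 9).astype("float32")
y_val = np.random.randint(0, 6, size=128)

def objective(trial):
    # Search over a few common LSTM hyperparameters.
    units = trial.suggest_int("lstm_units", 32, 256)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=X_train.shape[1:]),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(6, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)

    # Optuna maximizes the returned validation accuracy across trials.
    _, val_acc = model.evaluate(X_val, y_val, verbose=0)
    return val_acc

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("Best trial:", study.best_trial.params)
```

In practice the parameters of the best trial would then be used to retrain the final model on the full training set before deployment.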
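For the edge-deployment step, the abstract reports that weight quantization reduced the LSTM model's size by up to 90.98% before it was run on a Raspberry Pi 4B with roughly 96 ms inference time. The thesis record does not state which toolchain was used; the following is a minimal sketch assuming TensorFlow Lite post-training dynamic-range quantization, with placeholder file names.

```python
# Hypothetical sketch: post-training dynamic-range quantization of a trained
# Keras LSTM with TensorFlow Lite, then a rough inference-time measurement.
# File names and the specific quantization mode are assumptions.
import time
import numpy as np
import tensorflow as tf

# Load the trained HAR model (placeholder path).
model = tf.keras.models.load_model("har_lstm.keras")

# Convert to TFLite with weight quantization to shrink the model file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# LSTM ops may need the TF-ops fallback during conversion.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
with open("har_lstm_quant.tflite", "wb") as f:
    f.write(tflite_model)

# Run one inference on the device and time it.
interpreter = tf.lite.Interpreter(model_path="har_lstm_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

window = np.random.rand(*inp["shape"]).astype(np.float32)  # dummy sensor window
start = time.perf_counter()
interpreter.set_tensor(inp["index"], window)
interpreter.invoke()
prediction = interpreter.get_tensor(out["index"])
print(f"inference: {(time.perf_counter() - start) * 1e3:.1f} ms, "
      f"predicted class: {int(np.argmax(prediction))}")
```

On a Raspberry Pi, the lighter tflite_runtime package is often used in place of the full TensorFlow installation for running the converted model.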