Egocentric activities of daily living recognition application using Android platform

Human action recognition (HAR) systems use a variety of algorithms to determine the action a person is performing. Recent applications in the domain have leveraged the compactness of smartphones along with their sensing and processing capabilities. These studies have depended primarily on motion inputs captured by either a camera or an array of sensors; rarely in the literature have both camera and mechanical sensor signals been used simultaneously in an HAR system. Taking all of this into account, this study developed an HAR application that uses both first-person-perspective camera and sensor inputs on an Android device, aiming to improve upon existing egocentric HAR systems in terms of efficiency, portability, and accuracy. Four input streams were considered: camera, accelerometer, gyroscope, and magnetometer. Each stream was fed into a combination of one- and two-dimensional Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) Recurrent Neural Networks. All of the streams and networks ran in parallel, and their individual classifications were fed into a fully connected late-fusion network. The system's accuracy and other metrics were evaluated on a selection of actions from both a reference dataset and a new dataset generated from paired video and sensor data. Results showed that each network was effective at recognizing the actions considered within the scope of this study, and even more so when their outputs were fused.
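
The abstract only outlines the architecture, so the following PyTorch sketch is included purely as an illustration of the multi-stream, late-fusion design it describes: one 2-D CNN + LSTM branch for camera frames, one 1-D CNN + LSTM branch per motion sensor, and a fully connected layer that fuses the branches' individual class scores. All layer sizes, the number of classes, and names such as SensorBranch and LateFusionHAR are assumptions made for the sketch; the thesis's actual implementation details are not given in this record.

import torch
import torch.nn as nn

class SensorBranch(nn.Module):
    # 1-D CNN + LSTM over a windowed 3-axis sensor stream (e.g. accelerometer).
    def __init__(self, in_channels=3, num_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.conv(x).permute(0, 2, 1)  # -> (batch, time, 32)
        _, (h_n, _) = self.lstm(feats)         # last hidden state summarizes the window
        return self.head(h_n[-1])              # per-branch class scores

class CameraBranch(nn.Module):
    # 2-D CNN applied per frame, then an LSTM over the frame features.
    def __init__(self, num_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (batch*time, 32)
        _, (h_n, _) = self.lstm(feats.view(b, t, -1))
        return self.head(h_n[-1])

class LateFusionHAR(nn.Module):
    # Four parallel branches; their score vectors are concatenated and passed
    # through a fully connected late-fusion layer, as the abstract describes.
    def __init__(self, num_classes=5):
        super().__init__()
        self.camera = CameraBranch(num_classes)
        self.accel = SensorBranch(3, num_classes)
        self.gyro = SensorBranch(3, num_classes)
        self.mag = SensorBranch(3, num_classes)
        self.fusion = nn.Linear(4 * num_classes, num_classes)

    def forward(self, frames, accel, gyro, mag):
        scores = torch.cat([self.camera(frames), self.accel(accel),
                            self.gyro(gyro), self.mag(mag)], dim=1)
        return self.fusion(scores)             # fused class logits

if __name__ == "__main__":
    model = LateFusionHAR(num_classes=5)
    frames = torch.randn(2, 8, 3, 64, 64)      # 2 clips of 8 RGB frames, 64x64
    window = torch.randn(2, 3, 100)            # 100-sample 3-axis sensor windows
    print(model(frames, window, window, window).shape)  # torch.Size([2, 5])

Running the script on dummy tensors checks only that the parallel branches and the fusion layer wire together; training, preprocessing, and on-device deployment are beyond what this record documents.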

Bibliographic Details
Main Author: Canlas, Reich Rechner D.
Format: text
Language: English
Published: Animo Repository 2018
Subjects: Human activity recognition; Neural networks (Computer science); Algorithms
Online Access:https://animorepository.dlsu.edu.ph/etd_masteral/5632
Institution: De La Salle University
id oai:animorepository.dlsu.edu.ph:etd_masteral-12470
record_format eprints
spelling oai:animorepository.dlsu.edu.ph:etd_masteral-12470 2021-01-30T01:33:26Z 2018-01-01T08:00:00Z text https://animorepository.dlsu.edu.ph/etd_masteral/5632 Master's Theses English Animo Repository
institution De La Salle University
building De La Salle University Library
continent Asia
country Philippines
content_provider De La Salle University Library
collection DLSU Institutional Repository
language English
topic Human activity recognition
Neural networks (Computer science)
Algorithms
description Human action recognition (HAR) systems use a variety of algorithms to determine the action a person is performing. Recent applications in the domain have leveraged the compactness of smartphones along with their sensing and processing capabilities. These studies have depended primarily on motion inputs captured by either a camera or an array of sensors; rarely in the literature have both camera and mechanical sensor signals been used simultaneously in an HAR system. Taking all of this into account, this study developed an HAR application that uses both first-person-perspective camera and sensor inputs on an Android device, aiming to improve upon existing egocentric HAR systems in terms of efficiency, portability, and accuracy. Four input streams were considered: camera, accelerometer, gyroscope, and magnetometer. Each stream was fed into a combination of one- and two-dimensional Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) Recurrent Neural Networks. All of the streams and networks ran in parallel, and their individual classifications were fed into a fully connected late-fusion network. The system's accuracy and other metrics were evaluated on a selection of actions from both a reference dataset and a new dataset generated from paired video and sensor data. Results showed that each network was effective at recognizing the actions considered within the scope of this study, and even more so when their outputs were fused.
format text
author Canlas, Reich Rechner D.
title Egocentric activities of daily living recognition application using Android platform
publisher Animo Repository
publishDate 2018
url https://animorepository.dlsu.edu.ph/etd_masteral/5632
_version_ 1772835953884266496