Recognising activities using motion history

Human Activity Recognition (HAR) is a challenging task in computer vision. Recognizing performed actions typically requires complex algorithms that extract spatial-temporal information from video sequences, and substantial computing power to process it. A Motion History Image (MHI) can represent this information in a single image, so MHIs can be used to reduce the complexity and hardware demands of implementing HAR. The objective of this project was to implement HAR using MHI. The implementation first involved acquiring video sequences. Frames from these sequences were then pre-processed and converted into MHIs, which were annotated to create a dataset. A convolutional neural network (CNN) model was trained on the dataset, then validated and tested to evaluate its effectiveness before being integrated into the HAR program. While the model performed very well on the validation set, results on the testing set were mixed. The poorer results were due to insufficient intraclass variation in some classes of the training set, which made the model respond less well to actions performed slightly differently. However, the better results demonstrate that certain actions can be recognized well in a generalized setting. When integrated, the HAR program was unable to run in real time due to hardware constraints, but real-time speeds are attainable with better computing hardware. Future work on this project includes varying the conditions under which actions are recorded so that the model generalizes better, and using better computing hardware so that the HAR program can run in real time and, in turn, be deployed in real-life applications.
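
For illustration only, the following minimal sketch shows one common way to build an MHI from a video file, assuming OpenCV and NumPy are available; the frame-differencing threshold and fade duration below are illustrative assumptions, not the parameters used in this project.

# Minimal Motion History Image (MHI) sketch using OpenCV and NumPy.
# The threshold and fade duration are illustrative assumptions only.
import cv2
import numpy as np

DIFF_THRESHOLD = 32   # assumed: minimum per-pixel change counted as motion
MHI_DURATION = 30     # assumed: number of frames over which motion fades out

def video_to_mhi(path):
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {path}")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, dtype=np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Motion mask from thresholded frame differencing.
        motion = cv2.absdiff(gray, prev) >= DIFF_THRESHOLD
        prev = gray
        # Moving pixels are set to the maximum value; all other
        # pixels decay linearly towards zero over MHI_DURATION frames.
        mhi = np.where(motion, float(MHI_DURATION),
                       np.maximum(mhi - 1.0, 0.0))
    cap.release()
    # Scale to an 8-bit image so it can be saved and annotated like any picture.
    return (mhi * 255.0 / MHI_DURATION).astype(np.uint8)

In the resulting image, recently moving pixels are bright and older motion fades towards black, which is what lets a single image summarise the spatial-temporal content of a short clip.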

Bibliographic Details
Main Author: Xu, Wilson Weixuan
Other Authors: Chua Chin Seng
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Subjects: Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Online Access: https://hdl.handle.net/10356/138402
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-138402
record_format dspace
spelling sg-ntu-dr.10356-138402 2023-07-07T18:10:37Z Recognising activities using motion history Xu, Wilson Weixuan Chua Chin Seng School of Electrical and Electronic Engineering ecschua@ntu.edu.sg Engineering::Electrical and electronic engineering::Computer hardware, software and systems Human Activity Recognition (HAR) is a challenging task in computer vision. Recognizing performed actions typically requires complex algorithms that extract spatial-temporal information from video sequences, and substantial computing power to process it. A Motion History Image (MHI) can represent this information in a single image, so MHIs can be used to reduce the complexity and hardware demands of implementing HAR. The objective of this project was to implement HAR using MHI. The implementation first involved acquiring video sequences. Frames from these sequences were then pre-processed and converted into MHIs, which were annotated to create a dataset. A convolutional neural network (CNN) model was trained on the dataset, then validated and tested to evaluate its effectiveness before being integrated into the HAR program. While the model performed very well on the validation set, results on the testing set were mixed. The poorer results were due to insufficient intraclass variation in some classes of the training set, which made the model respond less well to actions performed slightly differently. However, the better results demonstrate that certain actions can be recognized well in a generalized setting. When integrated, the HAR program was unable to run in real time due to hardware constraints, but real-time speeds are attainable with better computing hardware. Future work on this project includes varying the conditions under which actions are recorded so that the model generalizes better, and using better computing hardware so that the HAR program can run in real time and, in turn, be deployed in real-life applications. Bachelor of Engineering (Electrical and Electronic Engineering) 2020-05-05T13:34:12Z 2020-05-05T13:34:12Z 2020 Final Year Project (FYP) https://hdl.handle.net/10356/138402 en A1048-191 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering::Computer hardware, software and systems
spellingShingle Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Xu, Wilson Weixuan
Recognising activities using motion history
description Human Activity Recognition (HAR) is a challenging task in computer vision. Recognizing performed actions typically requires complex algorithms that extract spatial-temporal information from video sequences, and substantial computing power to process it. A Motion History Image (MHI) can represent this information in a single image, so MHIs can be used to reduce the complexity and hardware demands of implementing HAR. The objective of this project was to implement HAR using MHI. The implementation first involved acquiring video sequences. Frames from these sequences were then pre-processed and converted into MHIs, which were annotated to create a dataset. A convolutional neural network (CNN) model was trained on the dataset, then validated and tested to evaluate its effectiveness before being integrated into the HAR program. While the model performed very well on the validation set, results on the testing set were mixed. The poorer results were due to insufficient intraclass variation in some classes of the training set, which made the model respond less well to actions performed slightly differently. However, the better results demonstrate that certain actions can be recognized well in a generalized setting. When integrated, the HAR program was unable to run in real time due to hardware constraints, but real-time speeds are attainable with better computing hardware. Future work on this project includes varying the conditions under which actions are recorded so that the model generalizes better, and using better computing hardware so that the HAR program can run in real time and, in turn, be deployed in real-life applications.
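
For illustration, the sketch below shows a small CNN classifier over single-channel MHI images written with the Keras API; the input resolution, layer widths and number of action classes are assumptions made for the sketch, not the architecture or class set reported in this project.

# Minimal sketch of a CNN that classifies single-channel MHI images.
# Input size, layer sizes and class count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6              # assumed number of action classes
INPUT_SHAPE = (128, 128, 1)  # assumed MHI resolution, single channel

def build_mhi_classifier():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Rescaling(1.0 / 255.0),           # MHIs stored as 8-bit images
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Typical usage: train on the annotated MHIs with a held-out validation
# split, then evaluate on a separate test set before integrating the
# model into the HAR program.
# model = build_mhi_classifier()
# model.fit(train_mhis, train_labels, validation_split=0.2, epochs=20)
# model.evaluate(test_mhis, test_labels)

A softmax output over the action classes matches the single-label classification setting described in the abstract, where each MHI corresponds to one performed action.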
author2 Chua Chin Seng
author_facet Chua Chin Seng
Xu, Wilson Weixuan
format Final Year Project
author Xu, Wilson Weixuan
author_sort Xu, Wilson Weixuan
title Recognising activities using motion history
title_short Recognising activities using motion history
title_full Recognising activities using motion history
title_fullStr Recognising activities using motion history
title_full_unstemmed Recognising activities using motion history
title_sort recognising activities using motion history
publisher Nanyang Technological University
publishDate 2020
url https://hdl.handle.net/10356/138402
_version_ 1772826817553498112