Mutually reinforcing motion-pose framework for pose invariant action recognition
Action recognition from videos has many potential applications. However, several challenges remain unresolved, such as pose-invariant recognition and robustness to occlusion. In this paper, we propose to combine the motion of body parts with pose hypothesis generation, validated against specific canonical poses, in a novel mutually reinforcing framework to achieve pose-invariant action recognition. To capture the temporal dynamics of an action, we introduce temporal stick features computed from the obtained stick poses. The combination of pose-invariant kinematic motion features, pose hypotheses and temporal stick features is used for action recognition, forming a mutually reinforcing framework that repeats until the action recognition result converges. The proposed mutual reinforcement framework is capable of handling changes in the person's posture and occlusion, and provides partial view invariance. We perform experiments on several benchmark datasets, which show the performance of the proposed algorithm and its ability to handle pose variation and occlusion.
Main Authors: Ramanathan, Manoj; Yau, Wei-Yun; Thalmann, Nadia Magnenat; Teoh, Eam Khwang
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariant; Motion Feature
Online Access: https://hdl.handle.net/10356/142072
Institution: Nanyang Technological University
id
sg-ntu-dr.10356-142072
record_format
dspace
spelling
sg-ntu-dr.10356-142072 2020-06-15T08:54:54Z
Title: Mutually reinforcing motion-pose framework for pose invariant action recognition
Authors: Ramanathan, Manoj; Yau, Wei-Yun; Thalmann, Nadia Magnenat; Teoh, Eam Khwang
Affiliations: School of Electrical and Electronic Engineering; Institute for Media Innovation (IMI); Research Techno Plaza
Subjects: Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariant; Motion Feature
Abstract: Action recognition from videos has many potential applications. However, several challenges remain unresolved, such as pose-invariant recognition and robustness to occlusion. In this paper, we propose to combine the motion of body parts with pose hypothesis generation, validated against specific canonical poses, in a novel mutually reinforcing framework to achieve pose-invariant action recognition. To capture the temporal dynamics of an action, we introduce temporal stick features computed from the obtained stick poses. The combination of pose-invariant kinematic motion features, pose hypotheses and temporal stick features is used for action recognition, forming a mutually reinforcing framework that repeats until the action recognition result converges. The proposed mutual reinforcement framework is capable of handling changes in the person's posture and occlusion, and provides partial view invariance. We perform experiments on several benchmark datasets, which show the performance of the proposed algorithm and its ability to handle pose variation and occlusion.
Funding: NRF (National Research Foundation, Singapore); A*STAR (Agency for Science, Technology and Research, Singapore)
Dates: 2020-06-15T07:55:52Z; 2020-06-15T07:55:52Z; 2019
Type: Journal Article
Citation: Ramanathan, M., Yau, W.-Y., Thalmann, N. M., & Teoh, E. K. (2019). Mutually reinforcing motion-pose framework for pose invariant action recognition. International Journal of Biometrics, 11(2), 113-147. doi:10.1504/IJBM.2019.099014
ISSN: 1755-8301
Handle: https://hdl.handle.net/10356/142072
DOI: 10.1504/IJBM.2019.099014
Issue: 2; Volume: 11; Pages: 113-147
Language: en
Journal: International Journal of Biometrics
Rights: © 2019 Inderscience Enterprises Ltd. All rights reserved.
institution
Nanyang Technological University
building
NTU Library
country
Singapore
collection
DR-NTU
language
English
topic
Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariant; Motion Feature
spellingShingle
Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariant; Motion Feature; Ramanathan, Manoj; Yau, Wei-Yun; Thalmann, Nadia Magnenat; Teoh, Eam Khwang; Mutually reinforcing motion-pose framework for pose invariant action recognition
description
Action recognition from videos has many potential applications. However, several challenges remain unresolved, such as pose-invariant recognition and robustness to occlusion. In this paper, we propose to combine the motion of body parts with pose hypothesis generation, validated against specific canonical poses, in a novel mutually reinforcing framework to achieve pose-invariant action recognition. To capture the temporal dynamics of an action, we introduce temporal stick features computed from the obtained stick poses. The combination of pose-invariant kinematic motion features, pose hypotheses and temporal stick features is used for action recognition, forming a mutually reinforcing framework that repeats until the action recognition result converges. The proposed mutual reinforcement framework is capable of handling changes in the person's posture and occlusion, and provides partial view invariance. We perform experiments on several benchmark datasets, which show the performance of the proposed algorithm and its ability to handle pose variation and occlusion.
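The abstract above describes an iterative control flow in which pose hypotheses validated against canonical poses and pose-invariant motion features reinforce each other until the recognised action stops changing. The Python sketch below only illustrates that loop structure under assumed interfaces; every function and data structure in it (kinematic_motion_features, validate_poses, temporal_stick_features, classify, refine_hypotheses) is a hypothetical stand-in, not the authors' published implementation.

```python
import random

# Illustrative stand-ins only: in the paper these stages are concrete feature
# extractors and a trained classifier; here they are stubs so the loop runs.
ACTIONS = ["walk", "wave", "sit"]

def kinematic_motion_features(frames):
    # Stand-in for pose-invariant kinematic features of body-part motion.
    return [len(f) for f in frames]

def validate_poses(hypotheses):
    # Keep only stick-pose hypotheses that match a canonical pose well enough.
    return [h for h in hypotheses if h["score"] > 0.5]

def temporal_stick_features(poses):
    # Stand-in for temporal stick features computed from the retained stick poses.
    return [p["score"] for p in poses]

def classify(motion, poses, stick):
    # Dummy classifier combining the three feature groups into an action label.
    return ACTIONS[(len(motion) + len(poses) + len(stick)) % len(ACTIONS)]

def refine_hypotheses(hypotheses, label):
    # The recognition result feeds back to strengthen compatible pose hypotheses.
    return [dict(h, score=min(1.0, h["score"] + 0.1)) for h in hypotheses]

def recognise_action(frames, hypotheses, max_iters=10):
    prev = None
    for _ in range(max_iters):
        motion = kinematic_motion_features(frames)
        poses = validate_poses(hypotheses)
        stick = temporal_stick_features(poses)
        label = classify(motion, poses, stick)
        if label == prev:  # stop once the recognition result converges
            break
        prev = label
        hypotheses = refine_hypotheses(hypotheses, label)
    return prev

# Toy usage with random stand-in data.
frames = [[0] * random.randint(5, 15) for _ in range(30)]
hypotheses = [{"score": random.random()} for _ in range(8)]
print(recognise_action(frames, hypotheses))
```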
author2
School of Electrical and Electronic Engineering
author_facet
School of Electrical and Electronic Engineering; Ramanathan, Manoj; Yau, Wei-Yun; Thalmann, Nadia Magnenat; Teoh, Eam Khwang
format
Article
author
Ramanathan, Manoj; Yau, Wei-Yun; Thalmann, Nadia Magnenat; Teoh, Eam Khwang
author_sort
Ramanathan, Manoj
title
Mutually reinforcing motion-pose framework for pose invariant action recognition
title_short
Mutually reinforcing motion-pose framework for pose invariant action recognition
title_full
Mutually reinforcing motion-pose framework for pose invariant action recognition
title_fullStr
Mutually reinforcing motion-pose framework for pose invariant action recognition
title_full_unstemmed
Mutually reinforcing motion-pose framework for pose invariant action recognition
title_sort
mutually reinforcing motion-pose framework for pose invariant action recognition
publishDate
2020
url
https://hdl.handle.net/10356/142072
_version_
1681057659304804352