Mutually reinforcing motion-pose framework for pose invariant action recognition

Bibliographic Details
Main Authors: Ramanathan, Manoj, Yau, Wei-Yun, Thalmann, Nadia Magnenat, Teoh, Eam Khwang
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/142072
Institution: Nanyang Technological University
Description
Summary: Action recognition from videos has many potential applications. However, many challenges remain unresolved, such as pose-invariant recognition and robustness to occlusion. In this paper, we propose to combine the motion of body parts with pose hypothesis generation, validated against specific canonical poses, in a novel mutually reinforcing framework to achieve pose-invariant action recognition. To capture the temporal dynamics of an action, we introduce temporal stick features computed from the stick poses obtained. The combination of pose-invariant kinematic features from motion, the pose hypotheses, and the temporal stick features is used for action recognition, forming a mutually reinforcing loop that repeats until the recognition result converges. The proposed framework is capable of handling changes in the person's posture and occlusion, and provides partial view invariance. Experiments on several benchmark datasets demonstrate the performance of the proposed algorithm and its ability to handle pose variation and occlusion.
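
The summary describes an iterative loop: motion features and validated pose hypotheses feed an action classifier, whose output in turn guides the next round of pose hypothesis validation, until the result converges. The Python sketch below is one possible reading of that loop under stated assumptions; every function body, feature dimension, and name (extract_kinematic_features, generate_pose_hypotheses, temporal_stick_features, classify_action) is a hypothetical placeholder, not the authors' implementation.

import numpy as np

# Minimal sketch of the mutually reinforcing loop. All function bodies are
# hypothetical stand-ins; only the control flow mirrors the summary above.

def extract_kinematic_features(frames):
    # Placeholder: pose-invariant kinematic features of body-part motion.
    return np.random.rand(len(frames), 16)          # assumed 16-dim per frame

def generate_pose_hypotheses(frames, action_prior):
    # Placeholder: stick-pose hypotheses, validated against canonical poses
    # of the currently hypothesised action (action_prior is None at first).
    return np.random.rand(len(frames), 2, 15)       # assumed 15 joints, (x, y)

def temporal_stick_features(stick_poses):
    # Placeholder: temporal dynamics from consecutive stick poses.
    return np.diff(stick_poses, axis=0).reshape(len(stick_poses) - 1, -1)

def classify_action(features):
    # Placeholder classifier returning (label, confidence).
    score = float(features.mean())
    return int(score * 10) % 5, score

def recognise(frames, max_iters=10, tol=1e-3):
    # Repeat pose hypothesis generation and recognition until convergence.
    label, prev_conf = None, float("-inf")
    for _ in range(max_iters):
        motion = extract_kinematic_features(frames)
        poses = generate_pose_hypotheses(frames, action_prior=label)
        sticks = temporal_stick_features(poses)
        combined = np.hstack([motion[1:], sticks])  # align frame counts
        label, conf = classify_action(combined)
        if abs(conf - prev_conf) < tol:             # recognition result converged
            break
        prev_conf = conf
    return label

print(recognise(frames=list(range(30))))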