Pose-invariant kinematic features for action recognition
Recognition of actions from videos is a difficult task due to several factors such as dynamic backgrounds, occlusion, and pose variations. To tackle the pose-variation problem, we propose a simple method based on a novel set of pose-invariant kinematic features which are encoded in a human body-centric space. The proposed framework begins with detection of the neck point, which serves as the origin of the body-centric space. We propose a deep-learning-based classifier to detect the neck point from the output of a fully connected network layer. With the help of the detected neck point, a propagation mechanism is proposed to divide the foreground region into head, torso and leg grids. The motion observed in each of these body-part grids is represented using a set of pose-invariant kinematic features. These features represent the motion of the foreground or body region with respect to the detected neck point's motion, and are encoded based on the view in a human body-centric space. Based on these features, pose-invariant action recognition can be achieved. Because a body-centric space is used, actions performed in non-upright human postures can also be handled easily. To test its effectiveness on non-upright postures, a new dataset is introduced with 8 non-upright actions performed by 35 subjects in 3 different views. Experiments have been conducted on benchmark datasets and the newly proposed non-upright action dataset to identify limitations and gain insights into the proposed framework.
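The abstract describes encoding the motion of body-part grids relative to a detected neck point, so that features live in a body-centric rather than image-centric frame. A rough illustrative sketch of that idea (not the paper's implementation — the function name, the vertical grid split, and the mean-velocity aggregation are all assumptions made here for illustration):

```python
import numpy as np

def body_centric_features(points, neck, n_grids=3):
    """Illustrative sketch: kinematic features relative to a neck point.

    points : (T, N, 2) tracked foreground point positions per frame
    neck   : (T, 2) detected neck position per frame
    Returns one mean relative-velocity vector per vertical body-part
    grid (e.g. head / torso / legs), concatenated into a feature vector.
    """
    # Express every point in a neck-centred coordinate system, so any
    # global translation of the body or camera cancels out.
    rel = points - neck[:, None, :]          # (T, N, 2)

    # Kinematics: frame-to-frame velocity of each point w.r.t. the neck.
    vel = np.diff(rel, axis=0)               # (T-1, N, 2)

    # Split the foreground into vertical grids using each point's mean
    # height relative to the neck (a stand-in for the paper's
    # propagation-based head/torso/leg division).
    height = rel[:-1, :, 1].mean(axis=0)     # mean y-offset per point
    grids = np.array_split(np.argsort(height), n_grids)

    # One aggregate motion descriptor per grid.
    feats = [vel[:, g, :].mean(axis=(0, 1)) for g in grids]
    return np.concatenate(feats)             # shape (n_grids * 2,)
```

Subtracting the neck trajectory before differencing is what buys the invariance: shifting every point and the neck by the same offset leaves the features unchanged.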
Main Authors: Ramanathan, Manoj; Yau, Wei-Yun; Teoh, Eam Khwang; Thalmann, Nadia Magnenat
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariance
Online Access: https://hdl.handle.net/10356/138068
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-138068
record_format: dspace
spelling: sg-ntu-dr.10356-138068 2020-09-26T21:52:59Z Pose-invariant kinematic features for action recognition Ramanathan, Manoj Yau, Wei-Yun Teoh, Eam Khwang Thalmann, Nadia Magnenat School of Electrical and Electronic Engineering 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) Institute for Media Innovation (IMI) Engineering::Computer science and engineering Engineering::Electrical and electronic engineering Action Recognition Pose-invariance Recognition of actions from videos is a difficult task due to several factors such as dynamic backgrounds, occlusion, and pose variations. To tackle the pose-variation problem, we propose a simple method based on a novel set of pose-invariant kinematic features which are encoded in a human body-centric space. The proposed framework begins with detection of the neck point, which serves as the origin of the body-centric space. We propose a deep-learning-based classifier to detect the neck point from the output of a fully connected network layer. With the help of the detected neck point, a propagation mechanism is proposed to divide the foreground region into head, torso and leg grids. The motion observed in each of these body-part grids is represented using a set of pose-invariant kinematic features. These features represent the motion of the foreground or body region with respect to the detected neck point's motion, and are encoded based on the view in a human body-centric space. Based on these features, pose-invariant action recognition can be achieved. Because a body-centric space is used, actions performed in non-upright human postures can also be handled easily. To test its effectiveness on non-upright postures, a new dataset is introduced with 8 non-upright actions performed by 35 subjects in 3 different views. Experiments have been conducted on benchmark datasets and the newly proposed non-upright action dataset to identify limitations and gain insights into the proposed framework. NRF (Natl Research Foundation, S’pore) ASTAR (Agency for Sci., Tech. and Research, S’pore) Accepted version 2020-04-23T04:31:11Z 2020-04-23T04:31:11Z 2018 Conference Paper Ramanathan, M., Yau, W.-Y., Teoh, E. K., & Thalmann, N. M. (2017). Pose-invariant kinematic features for action recognition. Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 292-299. doi:10.1109/APSIPA.2017.8282038 9781538615430 https://hdl.handle.net/10356/138068 10.1109/APSIPA.2017.8282038 292 299 en © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/APSIPA.2017.8282038 application/pdf
institution: Nanyang Technological University
building: NTU Library
country: Singapore
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariance
spellingShingle: Engineering::Computer science and engineering; Engineering::Electrical and electronic engineering; Action Recognition; Pose-invariance; Ramanathan, Manoj; Yau, Wei-Yun; Teoh, Eam Khwang; Thalmann, Nadia Magnenat; Pose-invariant kinematic features for action recognition
description: Recognition of actions from videos is a difficult task due to several factors such as dynamic backgrounds, occlusion, and pose variations. To tackle the pose-variation problem, we propose a simple method based on a novel set of pose-invariant kinematic features which are encoded in a human body-centric space. The proposed framework begins with detection of the neck point, which serves as the origin of the body-centric space. We propose a deep-learning-based classifier to detect the neck point from the output of a fully connected network layer. With the help of the detected neck point, a propagation mechanism is proposed to divide the foreground region into head, torso and leg grids. The motion observed in each of these body-part grids is represented using a set of pose-invariant kinematic features. These features represent the motion of the foreground or body region with respect to the detected neck point's motion, and are encoded based on the view in a human body-centric space. Based on these features, pose-invariant action recognition can be achieved. Because a body-centric space is used, actions performed in non-upright human postures can also be handled easily. To test its effectiveness on non-upright postures, a new dataset is introduced with 8 non-upright actions performed by 35 subjects in 3 different views. Experiments have been conducted on benchmark datasets and the newly proposed non-upright action dataset to identify limitations and gain insights into the proposed framework.
author2: School of Electrical and Electronic Engineering
author_facet: School of Electrical and Electronic Engineering; Ramanathan, Manoj; Yau, Wei-Yun; Teoh, Eam Khwang; Thalmann, Nadia Magnenat
format: Conference or Workshop Item
author: Ramanathan, Manoj; Yau, Wei-Yun; Teoh, Eam Khwang; Thalmann, Nadia Magnenat
author_sort: Ramanathan, Manoj
title: Pose-invariant kinematic features for action recognition
title_short: Pose-invariant kinematic features for action recognition
title_full: Pose-invariant kinematic features for action recognition
title_fullStr: Pose-invariant kinematic features for action recognition
title_full_unstemmed: Pose-invariant kinematic features for action recognition
title_sort: pose-invariant kinematic features for action recognition
publishDate: 2020
url: https://hdl.handle.net/10356/138068
_version_: 1681057673783541760