Transductive zero-shot action recognition via visually connected graph convolutional networks

With the explosive growth of action categories, zero-shot action recognition aims to extend a well-trained model to novel/unseen classes. To bridge the large knowledge gap between seen and unseen classes, in this brief, we visually associate unseen actions with seen categories in a visually connected graph, and the knowledge is then transferred from the visual feature space to the semantic space via grouped attention graph convolutional networks (GAGCNs). In particular, we extract visual features for all the actions, and a visually connected graph is built to attach seen actions to visually similar unseen categories. Moreover, the proposed grouped attention mechanism exploits the hierarchical knowledge in the graph, enabling the GAGCN to propagate visual-semantic connections from seen actions to unseen ones. We extensively evaluate the proposed method on three data sets: HMDB51, UCF101, and NTU RGB+D. Experimental results show that the GAGCN outperforms state-of-the-art methods.
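The abstract describes two concrete ingredients: a graph that attaches each unseen action class to its visually most similar seen classes, and a GCN that propagates knowledge from the visual feature space toward the semantic space. The sketch below illustrates that pipeline in NumPy under stated assumptions: cosine similarity with k-nearest-neighbor attachment stands in for the paper's graph construction, and a plain symmetric-normalized GCN layer stands in for the grouped attention mechanism, which the abstract does not specify. All function names, shapes, and parameters are illustrative, not the authors' code.

```python
# Hypothetical sketch of the "visually connected graph" idea from the
# abstract: unseen classes attach to visually similar seen classes, then a
# GCN-style update propagates information over the graph.
import numpy as np

def build_visual_graph(seen_feats, unseen_feats, k=5):
    """Adjacency over [seen; unseen] class prototypes.

    Each unseen class is connected to its k visually nearest seen classes
    by cosine similarity; every node keeps a self-loop.
    """
    feats = np.vstack([seen_feats, unseen_feats])
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T
    n_seen, n = len(seen_feats), len(feats)
    adj = np.eye(n)                                 # self-loops
    for u in range(n_seen, n):                      # each unseen class node
        nearest = np.argsort(sim[u, :n_seen])[-k:]  # top-k similar seen classes
        w_uv = np.clip(sim[u, nearest], 0.0, None)  # non-negative edge weights
        adj[u, nearest] = w_uv
        adj[nearest, u] = w_uv                      # keep the graph symmetric
    return adj

def gcn_layer(adj, h, w):
    """One symmetric-normalized GCN step: ReLU(D^-1/2 A D^-1/2 H W)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
    return np.maximum(a_norm @ h @ w, 0.0)

# Toy usage: 40 seen and 10 unseen classes with 512-D visual prototypes,
# projected toward an assumed 300-D semantic space.
rng = np.random.default_rng(0)
seen = rng.normal(size=(40, 512))
unseen = rng.normal(size=(10, 512))
adj = build_visual_graph(seen, unseen, k=5)
w = rng.normal(size=(512, 300)) * 0.01
out = gcn_layer(adj, np.vstack([seen, unseen]), w)
print(out.shape)  # (50, 300)
```

With k = 5, each unseen class aggregates features from five seen neighbors in a single propagation step; stacking such layers, as GCNs typically do, lets the visual-semantic signal reach unseen classes through longer paths.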


Bibliographic Details
Main Authors: XU, Yangyang; HAN, Chu; QIN, Jing; XU, Xuemiao; HAN, Guoqiang; HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Visualization; Feature extraction; Semantics; Correlation; Computational modeling; Learning systems; Explosives; Action recognition; graph convolutional network (GCN); zero-shot learning (ZSL); Information Security
Online Access:https://ink.library.smu.edu.sg/sis_research/7883
Institution: Singapore Management University
id sg-smu-ink.sis_research-8886
record_format dspace
last_indexed 2023-06-15T09:00:05Z
date 2021-08-01T07:00:00Z
doi info:doi/10.1109/TNNLS.2020.3015848
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Visualization
Feature extraction
Semantics
Correlation
Computational modeling
Learning systems
Explosives
Action recognition
graph convolutional network (GCN)
zero-shot learning (ZSL)
Information Security
description With the explosive growth of action categories, zero-shot action recognition aims to extend a well-trained model to novel/unseen classes. To bridge the large knowledge gap between seen and unseen classes, in this brief, we visually associate unseen actions with seen categories in a visually connected graph, and the knowledge is then transferred from the visual feature space to the semantic space via grouped attention graph convolutional networks (GAGCNs). In particular, we extract visual features for all the actions, and a visually connected graph is built to attach seen actions to visually similar unseen categories. Moreover, the proposed grouped attention mechanism exploits the hierarchical knowledge in the graph, enabling the GAGCN to propagate visual-semantic connections from seen actions to unseen ones. We extensively evaluate the proposed method on three data sets: HMDB51, UCF101, and NTU RGB+D. Experimental results show that the GAGCN outperforms state-of-the-art methods.
format text
author XU, Yangyang
HAN, Chu
QIN, Jing
XU, Xuemiao
HAN, Guoqiang
HE, Shengfeng
author_sort XU, Yangyang
title Transductive zero-shot action recognition via visually connected graph convolutional networks
title_sort transductive zero-shot action recognition via visually connected graph convolutional networks
publisher Institutional Knowledge at Singapore Management University
publishDate 2021
url https://ink.library.smu.edu.sg/sis_research/7883
_version_ 1770576575787433984