Video graph transformer for video question answering

This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT’s uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal r...


Bibliographic Details
Main Authors: XIAO, Junbin, ZHOU, Pan, CHUA, Tat-Seng, YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8994
https://ink.library.smu.edu.sg/context/sis_research/article/9997/viewcontent/2022_ECCV_VQA.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9997
record_format dspace
spelling sg-smu-ink.sis_research-9997 2024-07-25T08:25:09Z Video graph transformer for video question answering XIAO, Junbin ZHOU, Pan CHUA, Tat-Seng YAN, Shuicheng This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT’s uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers for relevance comparison between the video and text to perform QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With this more reasonable video encoding and QA solution, we show that VGT can achieve much better performance on VideoQA tasks that challenge dynamic relation reasoning than prior arts in the pretraining-free scenario. Its performance even surpasses those models that are pretrained with millions of external data. We further show that VGT can also benefit significantly from self-supervised cross-modal pretraining, yet with orders of magnitude smaller data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope that VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/VGT . 2022-10-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8994 info:doi/10.1007/978-3-031-20059-5_3 https://ink.library.smu.edu.sg/context/sis_research/article/9997/viewcontent/2022_ECCV_VQA.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Dynamic visual graph Transformer VideoQA Graphics and Human Computer Interfaces
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Dynamic visual graph
Transformer
VideoQA
Graphics and Human Computer Interfaces
spellingShingle Dynamic visual graph
Transformer
VideoQA
Graphics and Human Computer Interfaces
XIAO, Junbin
ZHOU, Pan
CHUA, Tat-Seng
YAN, Shuicheng
Video graph transformer for video question answering
description This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT’s uniqueness is two-fold: 1) it designs a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations, and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers for relevance comparison between the video and text to perform QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With this more reasonable video encoding and QA solution, we show that VGT can achieve much better performance on VideoQA tasks that challenge dynamic relation reasoning than prior arts in the pretraining-free scenario. Its performance even surpasses those models that are pretrained with millions of external data. We further show that VGT can also benefit significantly from self-supervised cross-modal pretraining, yet with orders of magnitude smaller data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope that VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/VGT .
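The abstract's second point, scoring candidate answers by video-text relevance with disentangled encoders rather than classifying with an entangled cross-modal head, can be illustrated with a minimal sketch. The module names, dimensions, and pooling choices below are assumptions for illustration only, not the released VGT code (see https://github.com/sail-sg/VGT for the original).

```python
# Hypothetical sketch of relevance-comparison QA with disentangled encoders.
# Plain Transformer encoders stand in for VGT's dynamic graph transformer
# (video side) and text Transformer (question/answer side).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelevanceComparisonQA(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8, layers: int = 2):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.video_encoder = nn.TransformerEncoder(make_layer(), num_layers=layers)
        self.text_encoder = nn.TransformerEncoder(make_layer(), num_layers=layers)

    def forward(self, video_tokens: torch.Tensor, answer_tokens: torch.Tensor):
        # video_tokens:  (B, T, dim)    visual object/frame features per clip
        # answer_tokens: (B, A, L, dim) token embeddings for A candidate answers
        v = self.video_encoder(video_tokens).mean(dim=1)              # (B, dim)
        B, A, L, D = answer_tokens.shape
        a = self.text_encoder(answer_tokens.reshape(B * A, L, D))
        a = a.mean(dim=1).reshape(B, A, D)                            # (B, A, dim)
        # Relevance comparison: cosine similarity between the video embedding
        # and each candidate answer; the highest-scoring candidate is chosen.
        scores = F.cosine_similarity(v.unsqueeze(1).expand_as(a), a, dim=-1)
        return scores.argmax(dim=-1), scores


if __name__ == "__main__":
    model = RelevanceComparisonQA()
    video = torch.randn(2, 16, 512)      # 2 clips, 16 visual tokens each
    answers = torch.randn(2, 5, 8, 512)  # 5 candidate answers, 8 tokens each
    pred, scores = model(video, answers)
    print(pred.shape, scores.shape)      # torch.Size([2]) torch.Size([2, 5])
```

In this formulation the video and text towers never attend to each other directly; any vision-text communication happens in separate cross-modal interaction modules, as the abstract notes, which keeps the two encoders reusable and makes pretraining with contrastive-style relevance objectives straightforward.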
format text
author XIAO, Junbin
ZHOU, Pan
CHUA, Tat-Seng
YAN, Shuicheng
author_facet XIAO, Junbin
ZHOU, Pan
CHUA, Tat-Seng
YAN, Shuicheng
author_sort XIAO, Junbin
title Video graph transformer for video question answering
title_short Video graph transformer for video question answering
title_full Video graph transformer for video question answering
title_fullStr Video graph transformer for video question answering
title_full_unstemmed Video graph transformer for video question answering
title_sort video graph transformer for video question answering
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/8994
https://ink.library.smu.edu.sg/context/sis_research/article/9997/viewcontent/2022_ECCV_VQA.pdf
_version_ 1814047703339892736