Multimodal transformer networks for end-to-end video-grounded dialogue systems

Developing Video-Grounded Dialogue Systems (VGDS), where a dialogue is conducted based on the visual and audio aspects of a given video, is significantly more challenging than traditional image- or text-grounded dialogue systems because (1) the feature space of videos spans multiple picture frames, making it difficult to obtain semantic information; and (2) a dialogue agent must perceive and process information from different modalities (audio, video, caption, etc.) to obtain a comprehensive understanding. Most existing work is based on RNNs and sequence-to-sequence architectures, which are not very effective at capturing complex long-term dependencies (as in videos). To overcome this, we propose Multimodal Transformer Networks (MTN) to encode videos and incorporate information from different modalities. We also propose query-aware attention through an auto-encoder to extract query-aware features from non-text modalities. We develop a training procedure that simulates token-level decoding to improve the quality of generated responses during inference. Our model achieves state-of-the-art performance on the Dialogue System Technology Challenge 7 (DSTC7) and also generalizes to another multimodal, visually grounded dialogue task with promising performance.
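
As an illustration of the query-aware attention idea mentioned in the abstract, the sketch below shows one plausible reading: the embedded dialogue query cross-attends over projected video-frame features, and an auto-encoder-style head reconstructs the query from the attended features so that they remain query-aware. This is a minimal PyTorch sketch under stated assumptions, not the authors' released implementation; the module name QueryAwareAttention, the dimensions, and the MSE reconstruction loss are illustrative choices only.

# Minimal sketch (assumed, not the authors' code): a dialogue query attends over
# video-frame features, and an auto-encoder-style reconstruction of the query
# from the attended features encourages those features to stay query-aware.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryAwareAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Cross-attention: query tokens (Q) attend over video features (K, V).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Auto-encoder head: reconstruct the query embedding from the attended features.
        self.reconstruct = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )

    def forward(self, query_emb, video_feats):
        # query_emb:   (batch, query_len, d_model) embedded dialogue query
        # video_feats: (batch, n_frames, d_model)  projected video features
        attended, _ = self.cross_attn(query_emb, video_feats, video_feats)
        attended = self.norm(attended + query_emb)          # residual + layer norm
        recon = self.reconstruct(attended)                  # decode back to query space
        recon_loss = F.mse_loss(recon, query_emb.detach())  # auxiliary reconstruction loss
        return attended, recon_loss


# Usage: add the auxiliary loss to the response decoder's cross-entropy loss during training.
layer = QueryAwareAttention()
q = torch.randn(2, 12, 512)   # 12 query tokens
v = torch.randn(2, 40, 512)   # 40 sampled video frames
out, aux = layer(q, v)
print(out.shape, aux.item())  # torch.Size([2, 12, 512]) and a scalar loss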

Bibliographic Details
Main Authors: LE, Hung, SAHOO, Doyen, CHEN, Nancy F., HOI, Steven C. H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects: Databases and Information Systems; Graphics and Human Computer Interfaces; OS and Networks
Online Access:https://ink.library.smu.edu.sg/sis_research/4428
https://ink.library.smu.edu.sg/context/sis_research/article/5431/viewcontent/P19_1564.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-5431
record_format dspace
spelling sg-smu-ink.sis_research-5431 2020-04-23T05:01:15Z 2019-08-01T07:00:00Z text application/pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Databases and Information Systems
Graphics and Human Computer Interfaces
OS and Networks
format text
author LE, Hung
SAHOO, Doyen
CHEN, Nancy F.
HOI, Steven C. H.
author_sort LE, Hung
title Multimodal transformer networks for end-to-end video-grounded dialogue systems
publisher Institutional Knowledge at Singapore Management University
publishDate 2019
url https://ink.library.smu.edu.sg/sis_research/4428
https://ink.library.smu.edu.sg/context/sis_research/article/5431/viewcontent/P19_1564.pdf
_version_ 1770574766834450432