Knowledge-aware multimodal dialogue systems

By offering a natural way to seek information, multimodal dialogue systems are attracting increasing attention in several domains such as retail and travel. However, most existing dialogue systems are limited to the textual modality, which cannot be easily extended to capture the rich semantics in...


Bibliographic Details
Main Authors: LIAO, Lizi, MA, Yunshan, HE, Xiangnan, HONG, Richang, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2018
Online Access:https://ink.library.smu.edu.sg/sis_research/7722
https://ink.library.smu.edu.sg/context/sis_research/article/8725/viewcontent/Knowledge_aware_multimodal_dialogue_systems.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8725
record_format dspace
spelling sg-smu-ink.sis_research-8725 2023-01-10T02:52:30Z Knowledge-aware multimodal dialogue systems LIAO, Lizi MA, Yunshan HE, Xiangnan HONG, Richang CHUA, Tat-Seng By offering a natural way to seek information, multimodal dialogue systems are attracting increasing attention in several domains such as retail and travel. However, most existing dialogue systems are limited to the textual modality, which cannot be easily extended to capture the rich semantics in the visual modality, such as product images. For example, in the fashion domain, the visual appearance of clothes and matching styles play a crucial role in understanding the user's intention. Without considering these, the dialogue agent may fail to generate desirable responses for users. In this paper, we present a Knowledge-aware Multimodal Dialogue (KMD) model to address the limitation of text-based dialogue systems. It gives special consideration to the semantics and domain knowledge revealed in visual content, and features three key components. First, we build a taxonomy-based learning module to capture the fine-grained semantics in images (e.g., the category and attributes of a product). Second, we propose an end-to-end neural conversational model that generates responses based on the conversation history, visual semantics, and domain knowledge. Lastly, to avoid inconsistent dialogues, we adopt a deep reinforcement learning method that accounts for future rewards to optimize the neural conversational model. We perform extensive evaluation on a multi-turn task-oriented dialogue dataset in the fashion domain. Experimental results show that our method significantly outperforms state-of-the-art methods, demonstrating the efficacy of modeling the visual modality and domain knowledge for dialogue systems.
2018-10-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7722 info:doi/10.1145/3240508.3240605 https://ink.library.smu.edu.sg/context/sis_research/article/8725/viewcontent/Knowledge_aware_multimodal_dialogue_systems.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Domain knowledge Fashion Multimodal dialogue Artificial Intelligence and Robotics Databases and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Domain knowledge
Fashion
Multimodal dialogue
Artificial Intelligence and Robotics
Databases and Information Systems
spellingShingle Domain knowledge
Fashion
Multimodal dialogue
Artificial Intelligence and Robotics
Databases and Information Systems
LIAO, Lizi
MA, Yunshan
HE, Xiangnan
HONG, Richang
CHUA, Tat-Seng
Knowledge-aware multimodal dialogue systems
description By offering a natural way to seek information, multimodal dialogue systems are attracting increasing attention in several domains such as retail and travel. However, most existing dialogue systems are limited to the textual modality, which cannot be easily extended to capture the rich semantics in the visual modality, such as product images. For example, in the fashion domain, the visual appearance of clothes and matching styles play a crucial role in understanding the user's intention. Without considering these, the dialogue agent may fail to generate desirable responses for users. In this paper, we present a Knowledge-aware Multimodal Dialogue (KMD) model to address the limitation of text-based dialogue systems. It gives special consideration to the semantics and domain knowledge revealed in visual content, and features three key components. First, we build a taxonomy-based learning module to capture the fine-grained semantics in images (e.g., the category and attributes of a product). Second, we propose an end-to-end neural conversational model that generates responses based on the conversation history, visual semantics, and domain knowledge. Lastly, to avoid inconsistent dialogues, we adopt a deep reinforcement learning method that accounts for future rewards to optimize the neural conversational model. We perform extensive evaluation on a multi-turn task-oriented dialogue dataset in the fashion domain. Experimental results show that our method significantly outperforms state-of-the-art methods, demonstrating the efficacy of modeling the visual modality and domain knowledge for dialogue systems.
format text
author LIAO, Lizi
MA, Yunshan
HE, Xiangnan
HONG, Richang
CHUA, Tat-Seng
author_facet LIAO, Lizi
MA, Yunshan
HE, Xiangnan
HONG, Richang
CHUA, Tat-Seng
author_sort LIAO, Lizi
title Knowledge-aware multimodal dialogue systems
title_short Knowledge-aware multimodal dialogue systems
title_full Knowledge-aware multimodal dialogue systems
title_fullStr Knowledge-aware multimodal dialogue systems
title_full_unstemmed Knowledge-aware multimodal dialogue systems
title_sort knowledge-aware multimodal dialogue systems
publisher Institutional Knowledge at Singapore Management University
publishDate 2018
url https://ink.library.smu.edu.sg/sis_research/7722
https://ink.library.smu.edu.sg/context/sis_research/article/8725/viewcontent/Knowledge_aware_multimodal_dialogue_systems.pdf
_version_ 1770576421302829056