Reflecting on experiences for response generation

Multimodal dialogue systems have attracted much attention recently, but they still fall short of key skills: 1) automatically generating context-specific responses instead of safe but generic ones; 2) naturally coordinating between different information modalities (e.g., text and image) in responses; and 3) intuitively explaining the reasons for generated responses and improving a specific response without re-training the whole model. To approach these goals, we propose a different angle on the task: Reflecting Experiences for Response Generation (RERG). This is supported by the fact that generating a response from scratch can be hard, but becomes much easier if we can access similar dialogue contexts and their corresponding responses. In particular, RERG first uses a multimodal, contrastive-learning-enhanced retrieval model to solicit similar dialogue instances. It then employs a cross-copy-based reuse model that simultaneously explores the current dialogue context (vertical) and the responses of similar dialogue instances (horizontal) for response generation. Experimental results demonstrate that our model outperforms state-of-the-art models on both automatic metrics and human evaluation. Moreover, RERG naturally provides supporting dialogue instances for better explainability. It also adapts readily to unseen dialogue settings: simply adding related samples to the retrieval datastore suffices, without re-training the whole model.
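The abstract describes a retrieve-then-reuse pipeline backed by an extensible datastore. A minimal sketch of that idea, using toy vectors and a plain nearest-neighbour lookup — the paper's actual system uses a contrastively trained multimodal neural encoder and a learned cross-copy generator, so every name, vector, and response below is purely hypothetical:

```python
# Hypothetical sketch of retrieve-then-reuse response generation.
# Embeddings here are toy 3-d vectors; in RERG they would come from a
# contrastive-learning-trained multimodal encoder.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Datastore of (context embedding, stored response) pairs.
DATASTORE = [
    ([1.0, 0.0, 0.0], "Sure, here is a photo of the red dress."),
    ([0.0, 1.0, 0.0], "The store opens at 9 am on weekdays."),
]

def retrieve(query_emb, k=1):
    """Return the k stored pairs whose contexts are most similar to the query."""
    ranked = sorted(DATASTORE, key=lambda e: cosine(query_emb, e[0]), reverse=True)
    return ranked[:k]

def generate(query_emb):
    """Reuse the best-matching stored response verbatim.
    RERG's cross-copy model would instead copy and combine tokens from both
    the current dialogue context and the retrieved responses."""
    _, response = retrieve(query_emb)[0]
    return response

# Adapting to an unseen setting = appending samples to the datastore,
# with no re-training of any model.
DATASTORE.append(([0.0, 0.0, 1.0], "Shipping takes 3-5 business days."))
```

The design point the sketch illustrates is that the generator's behaviour can be updated by editing the datastore alone, which is what gives the approach its explainability (the retrieved instance is the evidence) and its training-free adaptability.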


Bibliographic Details
Main Authors: YE, Chenchen, LIAO, Lizi, LIU, Suyu, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects: Case-based reasoning; Response generation; Contrastive learning; Computer Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/7579
https://ink.library.smu.edu.sg/context/sis_research/article/8582/viewcontent/Reflecting_on_experiences_for_response_generation.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-8582
record_format dspace
spelling sg-smu-ink.sis_research-8582 2022-12-12T08:08:20Z
date 2022-10-01T07:00:00Z
format text application/pdf
doi info:doi/10.1145/3503161.3548305
license http://creativecommons.org/licenses/by-nc-nd/4.0/
collection Research Collection School Of Computing and Information Systems
language eng
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Case-based reasoning
Response generation
Contrastive learning
Computer Engineering
description Multimodal dialogue systems have attracted much attention recently, but they still fall short of key skills: 1) automatically generating context-specific responses instead of safe but generic ones; 2) naturally coordinating between different information modalities (e.g., text and image) in responses; and 3) intuitively explaining the reasons for generated responses and improving a specific response without re-training the whole model. To approach these goals, we propose a different angle on the task: Reflecting Experiences for Response Generation (RERG). This is supported by the fact that generating a response from scratch can be hard, but becomes much easier if we can access similar dialogue contexts and their corresponding responses. In particular, RERG first uses a multimodal, contrastive-learning-enhanced retrieval model to solicit similar dialogue instances. It then employs a cross-copy-based reuse model that simultaneously explores the current dialogue context (vertical) and the responses of similar dialogue instances (horizontal) for response generation. Experimental results demonstrate that our model outperforms state-of-the-art models on both automatic metrics and human evaluation. Moreover, RERG naturally provides supporting dialogue instances for better explainability. It also adapts readily to unseen dialogue settings: simply adding related samples to the retrieval datastore suffices, without re-training the whole model.
format text
author YE, Chenchen
LIAO, Lizi
LIU, Suyu
CHUA, Tat-Seng
title Reflecting on experiences for response generation
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/7579
https://ink.library.smu.edu.sg/context/sis_research/article/8582/viewcontent/Reflecting_on_experiences_for_response_generation.pdf