PCQPR: Proactive conversational question planning with reflection

In the realm of multi-intent spoken language understanding, recent advances have leveraged prompt learning frameworks. However, these frameworks leave critical gaps: they lack explicit modeling of dual-task dependencies and overlook task-specific semantic differences among utterances. To address these shortcomings, we propose DC-Instruct, a novel generative framework based on Dual-task Inter-dependent Instructions (DII) and Supervised Contrastive Instructions (SCI). Specifically, DII guides large language models (LLMs) to generate labels for one task based on the other task's labels, thereby explicitly capturing dual-task inter-dependencies. Moreover, SCI exploits differences in utterance semantics by guiding LLMs to determine whether a pair of utterances share the same or similar labels, improving their ability to extract and discriminate task-specific semantics and thus enhancing their SLU reasoning. Extensive experiments on public benchmark datasets show that DC-Instruct markedly outperforms current generative models and state-of-the-art methods, demonstrating its effectiveness in enhancing dialogue language understanding and reasoning.
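The abstract describes two instruction styles: DII, which conditions one task's labels on the other's, and SCI, which asks the model whether two utterances share labels. A minimal sketch of what such prompts could look like is below; the template wording and function names (`dii_prompt`, `sci_prompt`) are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch of the two instruction styles the abstract describes.
# Template text and function names are assumptions for illustration only.

def dii_prompt(utterance: str, known_slots: list[str]) -> str:
    """Dual-task Inter-dependent Instruction: ask the LLM to predict the
    intents of an utterance conditioned on already-known slot labels,
    making the intent-slot dependency explicit in the prompt."""
    slots = ", ".join(known_slots) if known_slots else "none"
    return (
        f"Utterance: {utterance}\n"
        f"Slot labels: {slots}\n"
        "Given these slot labels, list every intent expressed in the utterance."
    )

def sci_prompt(utterance_a: str, utterance_b: str) -> str:
    """Supervised Contrastive Instruction: ask the LLM whether two
    utterances share the same or similar labels, pushing it to
    discriminate task-specific semantics."""
    return (
        f"Utterance A: {utterance_a}\n"
        f"Utterance B: {utterance_b}\n"
        "Do these two utterances share the same or similar intent and slot "
        "labels? Answer yes or no."
    )

if __name__ == "__main__":
    print(dii_prompt("book a flight to tokyo and play jazz",
                     ["toloc.city_name: tokyo", "music_genre: jazz"]))
    print(sci_prompt("play some jazz", "put on rock music"))
```

In a generative SLU setup, prompts like these would be paired with gold answers and used for supervised fine-tuning; the inter-dependent variant can also be mirrored (slots conditioned on intents) to capture the dependency in both directions.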

Bibliographic Details
Main Authors: GUO, Shasha, LIAO, Lizi, ZHANG, Jing, LI, Cuiping, CHENG, Hong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9692
https://ink.library.smu.edu.sg/context/sis_research/article/10692/viewcontent/2024.emnlp_main.631.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-10692
record_format dspace
spelling sg-smu-ink.sis_research-10692 2024-11-28T09:06:54Z PCQPR : Proactive conversational question planning with reflection GUO, Shasha; LIAO, Lizi; ZHANG, Jing; LI, Cuiping; CHENG, Hong 2024-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9692 https://ink.library.smu.edu.sg/context/sis_research/article/10692/viewcontent/2024.emnlp_main.631.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Artificial Intelligence and Robotics Computer Sciences
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Artificial Intelligence and Robotics
Computer Sciences
spellingShingle Artificial Intelligence and Robotics
Computer Sciences
GUO, Shasha
LIAO, Lizi
ZHANG, Jing
LI, Cuiping
CHENG, Hong
PCQPR : Proactive conversational question planning with reflection
description In the realm of multi-intent spoken language understanding, recent advancements have leveraged the potential of prompt learning frameworks. However, critical gaps exist in these frameworks: the lack of explicit modeling of dual-task dependencies and the oversight of task-specific semantic differences among utterances. To address these shortcomings, we propose DC-Instruct, a novel generative framework based on Dual-task Inter-dependent Instructions (DII) and Supervised Contrastive Instructions (SCI). Specifically, DII guides large language models (LLMs) to generate labels for one task based on the other task’s labels, thereby explicitly capturing dual-task inter-dependencies. Moreover, SCI leverages utterance semantics differences by guiding LLMs to determine whether a pair of utterances share the same or similar labels. This can improve LLMs on extracting and discriminating task-specific semantics, thus enhancing their SLU reasoning abilities. Extensive experiments on public benchmark datasets show that DC-Instruct markedly outperforms current generative models and state-of-the-art methods, demonstrating its effectiveness in enhancing dialogue language understanding and reasoning.
format text
author GUO, Shasha
LIAO, Lizi
ZHANG, Jing
LI, Cuiping
CHENG, Hong
author_facet GUO, Shasha
LIAO, Lizi
ZHANG, Jing
LI, Cuiping
CHENG, Hong
author_sort GUO, Shasha
title PCQPR : Proactive conversational question planning with reflection
title_short PCQPR : Proactive conversational question planning with reflection
title_full PCQPR : Proactive conversational question planning with reflection
title_fullStr PCQPR : Proactive conversational question planning with reflection
title_full_unstemmed PCQPR : Proactive conversational question planning with reflection
title_sort pcqpr : proactive conversational question planning with reflection
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9692
https://ink.library.smu.edu.sg/context/sis_research/article/10692/viewcontent/2024.emnlp_main.631.pdf
_version_ 1819113104467820544