Keyword-guided neural conversational model
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects:
Online Access: https://ojs.aaai.org/index.php/AAAI/issue/archive
https://hdl.handle.net/10356/152721
Institution: Nanyang Technological University
Summary: We study the problem of imposing conversational goals/keywords on open-domain conversational agents, where the agent is required to lead the conversation to a target keyword smoothly and quickly. Solving this problem enables the application of conversational agents in many real-world scenarios, e.g., recommendation and psychotherapy. The dominant paradigm for tackling this problem is to 1) train a next-turn keyword classifier, and 2) train a keyword-augmented response retrieval model. However, existing approaches in this paradigm have two limitations: 1) the training and evaluation datasets for next-turn keyword classification are extracted directly from conversations without human annotation, so they are noisy and correlate poorly with human judgements, and 2) during keyword transitions, the agents rely solely on similarities between word embeddings to move closer to the target keyword, which may not reflect how humans converse. In this paper, we assume that human conversations are grounded in commonsense and propose a keyword-guided neural conversational model that can leverage external commonsense knowledge graphs (CKG) for both keyword transition and response retrieval. Automatic evaluations suggest that commonsense improves the performance of both next-turn keyword prediction and keyword-augmented response retrieval. In addition, both self-play and human evaluations show that our model produces responses with smoother keyword transitions and reaches the target keyword faster than competitive baselines.
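The summary contrasts two keyword-transition strategies: the embedding-similarity baseline and the proposed CKG-guided approach. The sketch below is a minimal illustration of that contrast only, not the authors' implementation; the toy embeddings, the `CKG_NEIGHBORS` table, and the function names are assumptions chosen for the example.

```python
# Minimal sketch (illustrative, not the paper's code) of two keyword-transition
# strategies: (1) the embedding-similarity baseline, which picks the candidate
# keyword closest to the target in embedding space, and (2) a commonsense-
# constrained variant that first restricts candidates to neighbors of the
# current keyword in a commonsense knowledge graph (CKG).

import numpy as np

# Toy word embeddings (in practice these would come from e.g. GloVe/word2vec).
EMBEDDINGS = {
    "movie":   np.array([0.9, 0.1, 0.0]),
    "popcorn": np.array([0.8, 0.3, 0.1]),
    "ticket":  np.array([0.7, 0.2, 0.4]),
    "travel":  np.array([0.1, 0.9, 0.2]),
}

# Toy CKG adjacency: keyword -> commonsense-related keywords (ConceptNet-style).
CKG_NEIGHBORS = {
    "movie":  {"popcorn", "ticket"},
    "travel": {"ticket"},
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def next_keyword_by_embedding(candidates, target):
    """Baseline: choose the candidate most similar to the target keyword."""
    return max(candidates, key=lambda w: cosine(EMBEDDINGS[w], EMBEDDINGS[target]))

def next_keyword_with_ckg(current, candidates, target):
    """CKG-constrained variant: rank only candidates that are CKG neighbors
    of the current keyword, falling back to all candidates if none qualify."""
    related = [w for w in candidates if w in CKG_NEIGHBORS.get(current, set())]
    pool = related or candidates
    return next_keyword_by_embedding(pool, target)

if __name__ == "__main__":
    candidates = ["popcorn", "ticket", "travel"]
    # Baseline ignores the current keyword and jumps to whatever is closest
    # to the target ("popcorn" here).
    print(next_keyword_by_embedding(candidates, target="movie"))
    # The CKG-constrained choice from "travel" is "ticket", a keyword that is
    # both commonsense-related to the current topic and closer to the target.
    print(next_keyword_with_ckg("travel", candidates, target="movie"))
```

Under this reading of the abstract, restricting each transition to commonsense neighbors of the current keyword is what makes the path toward the target feel smoother, since every hop remains topically connected to the ongoing conversation.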