Modeling speech acts in asynchronous conversations : a neural-CRF approach
Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations....
Main Authors: Shafiq Joty; Tasnim Mohiuddin
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2019
Subjects: DRNTU::Engineering::Computer science and engineering; Asynchronous Conversations; Modeling Speech
Citation: Shafiq Joty, & Tasnim Mohiuddin. (2018). Modeling speech acts in asynchronous conversations : a neural-CRF approach. Computational Linguistics, 44(4), 859-894. doi:10.1162/coli_a_00339
ISSN: 0891-2017
Version: Published version
Physical Description: 36 p. (application/pdf)
Rights: © 2018 Association for Computational Linguistics. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits you to copy and redistribute in any medium or format, for non-commercial use only, provided that the original work is not remixed, transformed, or built upon, and that appropriate credit to the original source is given. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
Online Access: https://hdl.handle.net/10356/105468
http://hdl.handle.net/10220/48696
http://dx.doi.org/10.1162/coli_a_00339
Institution: Nanyang Technological University
id
sg-ntu-dr.10356-105468
record_format
dspace
institution
Nanyang Technological University
building
NTU Library
country
Singapore
collection
DR-NTU
language
English
topic
DRNTU::Engineering::Computer science and engineering; Asynchronous Conversations; Modeling Speech
description
Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations. Our approach works in two main steps: a long short-term memory recurrent neural network (LSTM-RNN) first encodes each sentence separately into a task-specific distributed representation, and this is then used in a conditional random field (CRF) model to capture the conversational dependencies between sentences. The LSTM-RNN model uses pretrained word embeddings learned from a large conversational corpus and is trained to classify sentences into speech act types. The CRF model can consider arbitrary graph structures to model conversational dependencies in an asynchronous conversation. In addition, to mitigate the problem of limited annotated data in the asynchronous domains, we adapt the LSTM-RNN model to learn from synchronous conversations (e.g., meetings), using domain adversarial training of neural networks. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTM-RNNs provide better task-specific representations, (ii) conversational word embeddings benefit the LSTM-RNNs more than the off-the-shelf ones, (iii) adversarial training gives better domain-invariant representations, and (iv) the global CRF model improves over local models.
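The abstract describes a concrete two-step architecture, so a rough sketch may help make it tangible. The PyTorch code below is a minimal illustration, not the authors' implementation: a bidirectional LSTM mean-pools word embeddings into one vector per sentence, a linear-chain CRF couples the per-sentence speech-act scores across the conversation, and a gradient-reversal layer feeds a domain discriminator for adversarial adaptation. All layer sizes, the mean-pooling step, the toy discriminator, and the restriction to a linear chain (the paper's CRF admits arbitrary graph structures) are simplifying assumptions.

```python
# Minimal sketch (not the authors' code) of the pipeline the abstract describes.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class SentenceEncoder(nn.Module):
    """Bi-LSTM sentence encoder producing one task-specific vector per sentence."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # initialized from pretrained conversational embeddings in practice
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_ids):                       # (num_sents, max_words)
        h, _ = self.lstm(self.emb(word_ids))           # (num_sents, max_words, 2*hidden)
        return h.mean(dim=1)                           # mean-pool over words -> (num_sents, 2*hidden)


class ChainCRF(nn.Module):
    """Linear-chain CRF over the sequence of sentences in one conversation."""
    def __init__(self, feat_dim, num_tags):
        super().__init__()
        self.unary = nn.Linear(feat_dim, num_tags)                    # per-sentence emission scores
        self.trans = nn.Parameter(torch.zeros(num_tags, num_tags))    # trans[i, j]: score of tag i followed by tag j

    def neg_log_likelihood(self, feats, tags):         # feats: (T, feat_dim), tags: (T,)
        emit = self.unary(feats)                       # (T, num_tags)
        # Score of the gold tag sequence: emissions plus pairwise transitions.
        gold = emit[torch.arange(len(tags)), tags].sum()
        gold = gold + self.trans[tags[:-1], tags[1:]].sum()
        # Log-partition function via the forward algorithm.
        alpha = emit[0]
        for t in range(1, emit.size(0)):
            alpha = torch.logsumexp(alpha.unsqueeze(1) + self.trans, dim=0) + emit[t]
        return torch.logsumexp(alpha, dim=0) - gold


# Toy usage: one conversation of 4 sentences, 6 word ids each, 5 speech-act tags.
torch.manual_seed(0)
enc = SentenceEncoder(vocab_size=1000)
crf = ChainCRF(feat_dim=256, num_tags=5)
dom_clf = nn.Linear(256, 2)                            # domain discriminator: synchronous vs. asynchronous

words = torch.randint(0, 1000, (4, 6))
acts = torch.tensor([0, 1, 1, 3])
domains = torch.tensor([1, 1, 1, 1])                   # 1 = asynchronous domain

feats = enc(words)
task_loss = crf.neg_log_likelihood(feats, acts)
adv_loss = nn.functional.cross_entropy(dom_clf(GradReverse.apply(feats, 0.1)), domains)
(task_loss + adv_loss).backward()                      # reversed gradients push feats toward domain invariance
```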
author2
School of Computer Science and Engineering
format
Article
author
Shafiq Joty; Tasnim Mohiuddin
author_sort
Shafiq Joty
title
Modeling speech acts in asynchronous conversations : a neural-CRF approach
publishDate
2019
url
https://hdl.handle.net/10356/105468
http://hdl.handle.net/10220/48696
http://dx.doi.org/10.1162/coli_a_00339