An LSTM model for cloze-style machine comprehension
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2018
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/4084
https://ink.library.smu.edu.sg/context/sis_research/article/5087/viewcontent/13._Jul042018___An_LSTM_Model_for_Cloze_Style_Machine_Comprehension__CICling2018_.pdf
Institution: Singapore Management University
Summary: Machine comprehension is concerned with teaching machines to answer reading comprehension questions. In this paper we adopt an LSTM-based model we designed earlier for textual entailment and propose two new models for cloze-style machine comprehension. In our first model, we treat the document as a premise and the question as a hypothesis, and use an LSTM with attention mechanisms to match the question with the document. This LSTM remembers the best answer token found in the document while processing the question. Furthermore, we observe some special properties of machine comprehension and propose a two-layer LSTM model. In this model, we treat the question as a premise and use LSTMs to match each sentence in the document with the question. We further chain up the final states of these LSTMs using another LSTM in order to aggregate the results. When evaluated on the commonly used CNN/Daily Mail dataset, both of our models are quite competitive compared with the state of the art, and the second two-layer model outperforms the first model.
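The two-layer design described in the summary (per-sentence LSTMs matched against the question, with a second LSTM chaining their final states) can be sketched as follows. This is a minimal illustration of the layered structure only, not the paper's actual model: the attention mechanism and answer-token memory are omitted, and all weight matrices, dimensions, and helper names here are hypothetical.

```python
import numpy as np

def lstm_step(x, h, c, W):
    # One LSTM step: W maps [x; h] to the four gate pre-activations
    # (input, forget, output, candidate), stacked along the first axis.
    z = W @ np.concatenate([x, h])
    d = h.size
    i, f, o = (1 / (1 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:4 * d])
    c = f * c + i * g          # update the cell state
    h = o * np.tanh(c)         # expose the gated hidden state
    return h, c

def run_lstm(inputs, W, d):
    # Run the cell over a sequence of input vectors; return the final state.
    h, c = np.zeros(d), np.zeros(d)
    for x in inputs:
        h, c = lstm_step(x, h, c, W)
    return h

def two_layer_reader(question, sentences, W1, W2, d):
    # Layer 1: match each sentence against the question (here simply by
    # concatenating the token sequences) and keep the final hidden state.
    finals = [run_lstm(np.vstack([question, s]), W1, d) for s in sentences]
    # Layer 2: chain the per-sentence final states with another LSTM
    # to aggregate evidence across the document.
    return run_lstm(finals, W2, d)

rng = np.random.default_rng(0)
d = dx = 8                                    # hidden and embedding sizes (toy)
W1 = rng.normal(scale=0.1, size=(4 * d, dx + d))
W2 = rng.normal(scale=0.1, size=(4 * d, d + d))
question = rng.normal(size=(3, dx))           # 3 question-token embeddings
sentences = [rng.normal(size=(n, dx)) for n in (4, 5)]  # two toy sentences
doc_repr = two_layer_reader(question, sentences, W1, W2, d)
print(doc_repr.shape)
```

In the paper the first-layer states would additionally feed attention over the document tokens before a softmax over candidate answers; the sketch stops at the aggregated document representation.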