Multi-level head-wise match and aggregation in transformer for textual sequence matching

Bibliographic Details
Main Authors: WANG, Shuohang, LAN, Yunshi, TAY, Yi, JIANG, Jing, LIU, Jingjing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Online Access: https://ink.library.smu.edu.sg/sis_research/5601
https://ink.library.smu.edu.sg/context/sis_research/article/6604/viewcontent/AAAI_2020b.pdf
Description
Summary: Transformer has been successfully applied to many natural language processing tasks. However, for textual sequence matching, simple matching between the representations of a pair of sequences might bring in unnecessary noise. In this paper, we propose a new approach to sequence pair matching with Transformer, by learning head-wise matching representations on multiple levels. Experiments show that our proposed approach can achieve new state-of-the-art performance on multiple tasks that rely only on pre-computed sequence-vector representations, such as SNLI, MNLI-match, MNLI-mismatch, QQP, and SQuAD-binary.
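To make the idea named in the summary concrete, the following is a minimal, illustrative PyTorch sketch of multi-level head-wise matching, not the authors' released code: all names, dimensions, and the choice of matching features and attention pooling are assumptions. It splits each transformer layer's pooled sequence vector into per-head sub-vectors, matches the two sequences head by head, and aggregates the match features across all heads and layers ("levels").

import torch
import torch.nn as nn


class HeadwiseMatcher(nn.Module):
    # Illustrative sketch only: per-layer sequence vectors for two
    # sequences are split into per-head sub-vectors, matched head by
    # head, and the resulting match features are aggregated across
    # all (layer, head) slots with a learned attention pooling.
    def __init__(self, hidden_size=768, num_heads=12, num_classes=3):
        super().__init__()
        assert hidden_size % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # Matching MLP shared across heads; input is [a; b; a-b; a*b].
        self.match = nn.Sequential(
            nn.Linear(4 * self.head_dim, self.head_dim),
            nn.ReLU(),
        )
        # Scores each (layer, head) match vector for attention pooling.
        self.attn = nn.Linear(self.head_dim, 1)
        self.classifier = nn.Linear(self.head_dim, num_classes)

    def forward(self, seq_a, seq_b):
        # seq_a, seq_b: (batch, num_layers, hidden_size), one pooled
        # vector per transformer layer for each sequence.
        batch, num_layers, _ = seq_a.shape
        # Split every layer vector into num_heads sub-vectors.
        a = seq_a.reshape(batch, num_layers * self.num_heads, self.head_dim)
        b = seq_b.reshape(batch, num_layers * self.num_heads, self.head_dim)
        # Head-wise matching features for every (layer, head) slot.
        m = self.match(torch.cat([a, b, a - b, a * b], dim=-1))
        # Attention-weighted aggregation over all slots.
        w = torch.softmax(self.attn(m).squeeze(-1), dim=-1)
        pooled = torch.einsum("bn,bnd->bd", w, m)
        return self.classifier(pooled)


# Example with assumed shapes: batch of 2, a 12-layer encoder, 768-d hidden.
model = HeadwiseMatcher()
logits = model(torch.randn(2, 12, 768), torch.randn(2, 12, 768))
print(logits.shape)  # torch.Size([2, 3])

The per-head split is what distinguishes this from matching the two full sequence vectors directly: each head contributes its own match signal, and the attention pooling can down-weight heads or layers that only add noise, which is the motivation stated in the abstract.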