Deep metric based feature engineering to improve document-level representation for document clustering

Bibliographic Details
Main Author: Xu, Liwen
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/163261
Institution: Nanyang Technological University
Description
Summary: Document-level representation has attracted increasing research attention. Recent Transformer-based pretrained language models (PLMs) such as BERT learn powerful textual representations. However, these models are originally and inherently designed for word-level tasks, which limits their maximum input length. Current document-level approaches accommodate this limitation in various ways. Some consider only the concatenation of the title and the abstract as the input to the PLM, which neglects the rich semantic information in the main body. Other approaches obtain document-level representations by encoding multiple sentences in a document and concatenating them directly; however, the resulting representation may be highly redundant, and the training and inference processes are computationally heavy for real-world applications. To alleviate these two drawbacks, we decompose the process from word-level to document-level representation into a two-stage feature engineering pipeline. In the first stage, the sentence-level representation of each sentence in a document is extracted by a PLM from word-level tokens; these representations are then concatenated into a document matrix. In the second stage, document matrices carrying the semantic information of all text within a document are fed into a CNN model to obtain document-level representations, with the dimensionality reduced by a factor of 24. The model is optimized with a deep metric representation learning objective. Extensive experiments are conducted for hyper-parameter tuning and model design, and for the comparison of different deep metric representation learning objectives.
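
To make the two-stage pipeline concrete, below is a minimal sketch in PyTorch with Hugging Face Transformers. The model name, the CNN layout (one convolution over the sentence axis plus max-pooling), the 16-sentence document length, the 32-dimensional output (768 / 32 = 24x reduction), and the triplet margin loss are all illustrative assumptions; the abstract does not specify the thesis's exact architecture or metric objective.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Stage 1: a PLM maps each sentence to one vector ([CLS] pooling assumed here).
def encode_sentences(sentences, tokenizer, plm):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        hidden = plm(**batch).last_hidden_state   # (num_sents, seq_len, 768)
    return hidden[:, 0, :]                        # (num_sents, 768): one row per sentence

# Stage 2: a CNN compresses the stacked sentence matrix into one document vector.
class DocumentCNN(nn.Module):
    def __init__(self, hidden=768, out_dim=32):   # 768 -> 32 gives the 24x reduction
        super().__init__()
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)       # pool across the sentence axis
        self.proj = nn.Linear(128, out_dim)

    def forward(self, doc_matrix):                # (batch, num_sents, hidden)
        x = doc_matrix.transpose(1, 2)            # (batch, hidden, num_sents)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)              # (batch, 128)
        return self.proj(x)                       # (batch, out_dim)

# A deep metric objective: triplet margin loss is one common choice, used here
# only as a stand-in for the objectives the thesis compares.
model = DocumentCNN()
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Random tensors stand in for batched document matrices (batch=4, 16 sentences,
# 768-dim sentence embeddings) so the sketch runs end-to-end without a download.
anchor, positive, negative = (torch.randn(4, 16, 768) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()

# Example stage-1 usage (downloads a PLM):
#   tok = AutoTokenizer.from_pretrained("bert-base-uncased")
#   plm = AutoModel.from_pretrained("bert-base-uncased")
#   doc_matrix = encode_sentences(list_of_sentences, tok, plm)

In a real run, each document's sentences would be encoded by stage 1 and padded or truncated to a fixed sentence count before being batched into the stage-2 CNN; only the compact CNN output is then used for clustering, which is what makes inference lighter than concatenating full sentence embeddings.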