Deep metric-based feature engineering to improve document-level representation for document clustering
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/163261
Institution: Nanyang Technological University
Summary: Document-level representation has attracted increasing research attention. Recent Transformer-based pretrained language models (PLMs) such as BERT learn powerful textual representations, but they are inherently designed for word-level tasks, which limits their maximum input length. Current document-level approaches accommodate this limitation in various ways. Some use only the concatenation of the title and the abstract as input to the PLM, neglecting the rich semantic information in the main body. Others obtain document-level representations by encoding multiple sentences in a document and concatenating the results directly; however, the resulting representation can be highly redundant, and training and inference are computationally heavy for real-world applications. To alleviate these two drawbacks, we decompose the path from word-level to document-level representation into two feature-engineering stages. In the first stage, a PLM extracts a sentence-level representation for each sentence in a document from its word-level tokens, and these representations are concatenated into a document matrix. In the second stage, document matrices carrying the semantic information of all text in the document are fed into a CNN model to obtain document-level representations with the dimensionality reduced by a factor of 24. The model is optimized with a deep metric representation learning objective. Extensive experiments are conducted for hyper-parameter tuning and model design, and to compare different deep metric representation learning objectives.
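The following is a minimal sketch of the two-stage pipeline the summary describes, assuming PyTorch and Hugging Face Transformers. The checkpoint (bert-base-uncased), the maximum of 32 sentences, the convolution sizes, and the triplet loss are illustrative assumptions, not the thesis's configuration; the thesis compares several deep metric objectives, and the 32-dimensional output is chosen here only so that 768 / 24 = 32 matches the stated 24x reduction.

```python
# Hypothetical sketch of the two-stage pipeline described in the summary.
# Stage 1: a PLM encodes each sentence of a document; the sentence vectors
# are stacked into a document matrix.
# Stage 2: a small CNN compresses the document matrix into a compact
# document embedding, trained with a deep metric objective (a triplet loss
# is used purely as an illustration).

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumed PLM checkpoint; the thesis only specifies a BERT-like PLM.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
plm = AutoModel.from_pretrained("bert-base-uncased")

def document_matrix(sentences, max_sents=32):
    """Stage 1: encode each sentence with the frozen PLM and stack the
    per-sentence [CLS] vectors into a (max_sents, hidden_size) matrix."""
    enc = tokenizer(sentences[:max_sents], padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        out = plm(**enc).last_hidden_state[:, 0]   # [CLS] vector per sentence
    pad = max_sents - out.size(0)                  # zero-pad short documents
    if pad > 0:
        out = torch.cat([out, out.new_zeros(pad, out.size(1))])
    return out                                     # (max_sents, 768)

class DocCNN(nn.Module):
    """Stage 2: 1-D convolution over the sentence axis, max-pooling, and a
    linear projection to a 32-dim embedding (768 / 24 = 32, matching the
    24x dimensionality reduction mentioned in the summary)."""
    def __init__(self, hidden=768, out_dim=32):
        super().__init__()
        self.conv = nn.Conv1d(hidden, 256, kernel_size=3, padding=1)
        self.proj = nn.Linear(256, out_dim)

    def forward(self, doc_matrix):                 # (batch, sents, hidden)
        x = self.conv(doc_matrix.transpose(1, 2))  # (batch, 256, sents)
        x = torch.relu(x).max(dim=2).values        # pool over sentences
        return self.proj(x)                        # (batch, out_dim)

# Deep metric objective: pull similar documents together, push dissimilar
# ones apart. Triplet loss stands in for the objectives the thesis compares.
model = DocCNN()
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor_s = ["Metric learning shapes the embedding space.",
            "Similar documents should lie close together."]
pos_s    = ["Deep metric objectives cluster related texts.",
            "Nearby embeddings indicate similar documents."]
neg_s    = ["The recipe calls for two cups of flour.",
            "Bake at 180 degrees for forty minutes."]

batch = lambda s: document_matrix(s).unsqueeze(0)  # (1, sents, hidden)
loss = loss_fn(model(batch(anchor_s)),
               model(batch(pos_s)),
               model(batch(neg_s)))
loss.backward()   # only the CNN head is trained; the PLM stays frozen
print(f"triplet loss: {loss.item():.4f}")
```

Keeping the PLM frozen in stage 1 is what makes the scheme light for real-world use: the expensive Transformer pass is run once per sentence, and only the small CNN is optimized against the metric objective.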