Attentive embedding for document representation

Bibliographic Details
Main Author: Tang, Kok Foon
Other Authors: Chen, Lihui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/149102
Institution: Nanyang Technological University
Description
Summary: With NLP reaching new heights in many real-world applications, researchers are still searching for better ways for a model to learn document representations. Most state-of-the-art NLP models use an encoder-decoder architecture, which resembles an autoencoder, and KATE is an autoencoder that introduces a competition layer between the encoder and the decoder. This project therefore applies KATE to more complicated models to determine whether it yields a more attentive representation of documents. To investigate whether KATE could improve document representation, training was carried out in two phases. The first phase trains the encoder-decoder models on a sentence-reconstruction task, enabling them to learn document representations. The second phase, a classification task, validates whether the encoder and the KATE layer from the first phase have learned a good document representation. The two models used for this test are a 2-layer LSTM and ALBERT; both were implemented and trained with and without KATE for comparison. The experimental results show that KATE helps document representation for the 2-layer LSTM but not for ALBERT. It is therefore concluded that KATE has the potential to improve document representation for a simpler model such as an LSTM at minimal implementation cost, but not for a more complicated model such as ALBERT.
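
For concreteness, the sketch below illustrates the kind of k-competitive step that KATE (Chen and Zaki, 2017) places between the encoder and the decoder: only the strongest positive and negative activations of the latent code survive, and the energy of the suppressed neurons is redistributed to the winners. This is a simplified PyTorch illustration under assumed hyperparameters (k and alpha are chosen here for demonstration), not the implementation used in this project.

import torch

def k_competitive(z: torch.Tensor, k: int, alpha: float = 6.26) -> torch.Tensor:
    # Split the latent code into positive and negative parts so that
    # winners are picked separately within each sign group.
    pos = torch.relu(z)
    neg = torch.relu(-z)

    half = max(k // 2, 1)
    out = torch.zeros_like(z)
    for part, sign in ((pos, 1.0), (neg, -1.0)):
        # Winners: the `half` largest-magnitude activations in this group.
        winner_vals, winner_idx = part.topk(half, dim=1)
        # Energy lost by the suppressed ("loser") neurons.
        lost = part.sum(dim=1, keepdim=True) - winner_vals.sum(dim=1, keepdim=True)
        # Amplify winners with the redistributed energy (alpha is an
        # illustrative hyperparameter); losers stay at zero.
        boosted = winner_vals + alpha * lost / half
        out.scatter_add_(1, winner_idx, sign * boosted)
    return out

# Toy usage: a batch of 2 latent codes of dimension 8, keeping k = 4 winners.
z = torch.randn(2, 8)
print(k_competitive(z, k=4))

In the two-phase setup described above, such a layer would sit on the encoder output during the phase-one reconstruction task, and the resulting sparse code would then be fed to a classifier in phase two to probe the quality of the learned representation.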