Attentive embedding for document representation

With NLP reaching ever greater heights in many real-world applications, researchers are still looking for better ways for a model to learn document representations. Most state-of-the-art NLP models use an encoder-decoder architecture, which resembles an autoencoder, and KATE is an autoencoder that introduces a competition layer between the encoder and the decoder. This project therefore applies KATE to more complex models to determine whether it yields a more attentive representation of documents. To investigate whether KATE could improve document representation, training was carried out in two phases. The first phase trains the encoder-decoder models on a sentence reconstruction task, which lets the model learn a document representation. The second phase, a classification task, validates whether the encoder and KATE from the first phase have learned a good document representation. The two models used for this test are a 2-layer LSTM and ALBERT; both were implemented and trained with and without KATE for comparison. The experimental results show that KATE helps document representation for the 2-layer LSTM but not for ALBERT. It is therefore concluded that KATE can help document representation for a simpler model such as an LSTM at minimal implementation cost, but not for a more complex model such as ALBERT.
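To make the competition layer concrete, below is a minimal PyTorch sketch of a k-competitive layer in the spirit of KATE. It is a simplified illustration, not the project's actual code: it pools positive and negative activations together (the original KATE formulation competes positive and negative neurons separately), and the winner count k, the amplification factor alpha, and the train-only application of competition are assumptions here.

import torch
import torch.nn as nn

class KCompetitiveLayer(nn.Module):
    """Sketch of a k-competitive layer: the k most active hidden units
    ("winners") stay on and absorb the amplified energy of the remaining
    units ("losers"), which are zeroed out. Simplified from KATE."""
    def __init__(self, k: int, alpha: float = 6.2):
        super().__init__()
        self.k = k          # number of winning neurons to keep (assumed value)
        self.alpha = alpha  # amplification factor for reallocated energy (assumed)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, hidden_dim) activations from the encoder
        if not self.training:        # assumption: competition only during training
            return z
        topk = torch.topk(z.abs(), self.k, dim=1)
        mask = torch.zeros_like(z).scatter_(1, topk.indices, 1.0)
        # total activation energy held by the losing neurons
        losers_energy = (z.abs() * (1 - mask)).sum(dim=1, keepdim=True)
        # winners keep their sign and share the amplified loser energy
        boost = self.alpha * losers_energy / self.k
        return (z + torch.sign(z) * boost) * mask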
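And a sketch of the two-phase setup the abstract describes, reusing the KCompetitiveLayer above. All module and size names (Encoder, VOCAB, HID, the linear decoder, and so on) are hypothetical stand-ins, not the project's implementation; in particular the toy decoder predicts a bag of words rather than reconstructing a full sentence.

VOCAB, EMB, HID, CLASSES = 5000, 128, 256, 4  # assumed sizes

class Encoder(nn.Module):
    """Toy 2-layer LSTM encoder producing one vector per document."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, num_layers=2, batch_first=True)
    def forward(self, x):                 # x: (batch, seq_len) token ids
        _, (h, _) = self.lstm(self.emb(x))
        return h[-1]                      # (batch, HID) document vector

encoder, kate = Encoder(), KCompetitiveLayer(k=32)
decoder = nn.Linear(HID, VOCAB)           # toy bag-of-words decoder

# Phase 1: reconstruction, so the encoder (+ KATE) learns a representation.
x = torch.randint(0, VOCAB, (8, 20))      # dummy batch of token ids
recon_logits = decoder(kate(encoder(x)))  # train with a reconstruction loss

# Phase 2: a classifier head on the pre-trained encoder + KATE checks
# whether the learned representation is useful for classification.
classifier = nn.Linear(HID, CLASSES)
class_logits = classifier(kate(encoder(x)))  # train with cross-entropy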


Bibliographic Details
Main Author: Tang, Kok Foon
Other Authors: Lihui Chen
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/149102
Institution: Nanyang Technological University
School: School of Electrical and Electronic Engineering
Supervisor: Lihui Chen (ELHCHEN@ntu.edu.sg)
Degree Awarded: Bachelor of Engineering (Information Engineering and Media)
Date Issued: 2021
Citation: Tang, K. F. (2021). Attentive embedding for document representation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149102