Motive classification using deep learning approaches
Automated implicit motive classification can be formulated as a natural language processing (NLP) task. With the rapid growth of computational resources, increasingly complex NLP models have been developed, improving classification accuracy. In this dissertation, we study several different deep le...
Saved in:
Main Author: | Xu, Jiahao |
---|---|
Other Authors: | Chen Lihui |
Format: | Theses and Dissertations |
Language: | English |
Published: | 2019 |
Subjects: | Engineering::Electrical and electronic engineering |
Online Access: | http://hdl.handle.net/10356/78825 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-78825 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-78825 (updated 2023-07-04T16:07:40Z) Motive classification using deep learning approaches Xu, Jiahao; Chen Lihui. School of Electrical and Electronic Engineering. Engineering::Electrical and electronic engineering. Master of Science (Signal Processing). Deposited 2019-07-02T01:50:51Z. 2019. Thesis. http://hdl.handle.net/10356/78825. en. 55 p. |
application/pdf |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
topic |
Engineering::Electrical and electronic engineering |
description |
Automated implicit motive classification can be formulated as a natural language processing (NLP) task. With the rapid growth of computational resources, increasingly complex NLP models have been developed, improving classification accuracy.
In this dissertation, we study several deep learning models for implicit motive classification: the Long Short-Term Memory (LSTM) model, the Gated Recurrent Unit (GRU) model, the Bidirectional GRU model, the Transformer model, and the Bidirectional Encoder Representations from Transformers (BERT) model. The architecture of each of these models is reviewed and illustrated, and several motive classification models are implemented based on them. Their performance is evaluated and compared on benchmark datasets, measured in Precision, Recall, and F1 score. From the experimental studies, we conclude that the BERT-base model demonstrates the best performance on the dataset. The BERT-large model achieves the second-best performance, but it has the largest number of trainable parameters among these models and requires the most training time. The Bidirectional GRU model ranks third while demanding considerably less computing power and server training time. The simple GRU model performs worst, which is in accordance with the theoretical analysis: it has the simplest structure, with only a single GRU layer and no use of reverse-time information.
This report presents the methodologies and implementation details used in the experiments, followed by discussion and analysis of the obtained results. |
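The abstract states that the models are compared in Precision, Recall and F1 score. As a minimal sketch of how those per-class metrics are computed (this is not code from the thesis, and the motive label names below are made up for illustration):

```python
def precision_recall_f1(y_true, y_pred, label):
    """Per-class precision, recall and F1 for one motive label."""
    # True positives: gold label and prediction both equal the target label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    # False positives: predicted the label where the gold label differs.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    # False negatives: missed an instance that carries the gold label.
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example with hypothetical motive labels (not the thesis dataset).
gold = ["achievement", "power", "affiliation", "power", "achievement"]
pred = ["achievement", "power", "power", "power", "affiliation"]
p, r, f1 = precision_recall_f1(gold, pred, "power")
# p = 2/3 (one spurious "power"), r = 1.0 (both gold "power" found), f1 = 0.8
```

F1 is the harmonic mean of precision and recall, so it rewards a classifier only when both quantities are high, which is why the thesis reports it alongside the two raw rates.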
author2 |
Chen Lihui |
author |
Xu, Jiahao |