Transformers acceleration on autoNLP document classification

Unsupervised pre-training has been widely used in Natural Language Processing: a large network is trained on unsupervised prediction tasks, with BERT as one representative model. BERT has achieved great success in a variety of NLP downstream tasks, reaching state-of-the-art results on major benchmarks. However, BERT has more than 110M parameters and therefore requires a large amount of training time and computing resources, so weight reduction is critical for training BERT efficiently. In this Final Year Project, we first explored BERT's performance on document classification. We then proposed a new method that reduces BERT's weights and training time using weight pruning; our experiments show that it cuts the required training time by about 20% while achieving higher performance than the original BERT. We also applied an ensemble method to the pruned networks to further improve performance, which raised the baseline by about 2% on the AAPD, Reuters and IMDB datasets.
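
The record does not detail the pruning criterion or the ensemble scheme, so the sketch below only illustrates the general idea with assumed choices: global magnitude (L1) pruning of BERT's linear layers via torch.nn.utils.prune, followed by simple logit averaging over several pruned classifiers. The checkpoint name bert-base-uncased, the pruning amount, and the two-label head are placeholder assumptions, not the thesis's actual configuration.

```python
# Minimal sketch, not the thesis's actual method: global magnitude pruning of a
# BERT classifier with torch.nn.utils.prune, then logit averaging over several
# pruned models. Checkpoint, pruning amount, and label count are placeholders.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def magnitude_prune_bert(model, amount=0.3):
    """Prune the smallest `amount` fraction of Linear weights globally by |w|."""
    params_to_prune = [
        (module, "weight")
        for module in model.modules()
        if isinstance(module, torch.nn.Linear)
    ]
    prune.global_unstructured(
        params_to_prune, pruning_method=prune.L1Unstructured, amount=amount
    )
    for module, name in params_to_prune:
        prune.remove(module, name)  # fold the pruning masks into the weights
    return model


def ensemble_predict(models, tokenizer, texts):
    """Average the logits of several (pruned) classifiers and take the argmax."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = torch.stack([m(**batch).logits for m in models]).mean(dim=0)
    return logits.argmax(dim=-1)


if __name__ == "__main__":
    name = "bert-base-uncased"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    # Fine-tuning on AAPD/Reuters/IMDB would precede evaluation in practice;
    # it is omitted here to keep the sketch short.
    models = [
        magnitude_prune_bert(
            AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        ).eval()
        for _ in range(3)
    ]
    print(ensemble_predict(models, tokenizer, ["an example document to classify"]))
```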

Bibliographic Details
Main Author: Cao, Hannan
Other Authors: Sinno Jialin Pan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/138506
Institution: Nanyang Technological University
School: School of Computer Science and Engineering (supervisor contact: sinnopan@ntu.edu.sg)
Degree: Bachelor of Engineering (Computer Science)
Project code: SCSE19-0274
Collection: DR-NTU, NTU Library, Singapore
File format: application/pdf
Record created: 2020-05-07