Transformers acceleration on autoNLP document classification

Bibliographic Details
Main Author: Cao, Hannan
Other Authors: Sinno Jialin Pan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access: https://hdl.handle.net/10356/138506
Institution: Nanyang Technological University
Description
Summary: Unsupervised pre-training has been widely used in Natural Language Processing: a large network is trained on unsupervised prediction tasks, and the BERT model is one of its best-known representatives. BERT has achieved great success in various NLP downstream tasks, reaching state-of-the-art results on the major benchmarks. However, BERT uses more than 110M parameters, which requires a huge amount of training time and computing resources, so weight reduction is becoming critical for training BERT efficiently. In this Final Year Project, we first explored BERT's performance on Document Classification. We then proposed a new method that reduces BERT's weights as well as its training time with the help of weight pruning; our experiments show that the new method reduces the required training time by about 20% while achieving higher performance than the original BERT. We also applied an ensemble method to the pruned networks to further increase the model's performance, improving over the baseline by about 2% on the AAPD, Reuters and IMDB datasets.
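For illustration only, the sketch below shows one common way to apply magnitude-based weight pruning to a BERT document classifier, using Hugging Face transformers and torch.nn.utils.prune. It is not the project's exact method; the 20% pruning ratio, the choice of Linear layers, and the label count are illustrative assumptions.

    # Minimal sketch: global magnitude pruning of a BERT classifier's encoder weights.
    import torch
    import torch.nn.utils.prune as prune
    from transformers import BertForSequenceClassification

    # Label count is illustrative; set it to the dataset's number of classes.
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=10
    )

    # Collect the weight matrices of all Linear layers inside the encoder.
    to_prune = [
        (module, "weight")
        for module in model.bert.encoder.modules()
        if isinstance(module, torch.nn.Linear)
    ]

    # Globally zero out the 20% of encoder weights with the smallest magnitude.
    prune.global_unstructured(
        to_prune, pruning_method=prune.L1Unstructured, amount=0.2
    )

    # Make the pruning masks permanent before fine-tuning or saving the model.
    for module, name in to_prune:
        prune.remove(module, name)

An ensemble along the lines described in the abstract could then be formed by training several such pruned networks independently and averaging their predicted class probabilities at inference time.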