Exploring the use of pre-trained transformer-based models and semi-supervised learning to build training sets for text classification

Bibliographic Details
Main Author: Te, Gian Marco I.
Format: text
Language: English
Published: Animo Repository 2022
Subjects:
Online Access: https://animorepository.dlsu.edu.ph/etdm_softtech/6
https://animorepository.dlsu.edu.ph/cgi/viewcontent.cgi?article=1005&context=etdm_softtech
Institution: De La Salle University
Description
Summary: Data annotation is the process of labeling text, images, or other types of content for machine learning tasks. With the rise in popularity of machine learning for classification tasks, large amounts of labeled data are typically needed to train effective models across different algorithms and architectures. Data annotation is a critical step in developing these models, and while an abundance of unlabeled data is generated every day, annotation is often a laborious and costly process. Furthermore, low-resource languages such as Filipino do not have as many readily available datasets as mainstream languages that can be leveraged to fine-tune models pre-trained on large amounts of data. In this study, we explored the use of BERT and semi-supervised learning for textual data to see how they might ease the burden of human annotation when building text classification training sets, while also reducing the amount of manually labeled data needed to fine-tune a pre-trained model for a specific downstream text classification task. We then analyzed relevant factors that may affect pseudo-labeling performance, and compared the accuracy scores of different non-BERT classifiers trained on samples containing solely human-labeled data versus counterparts composed of a mixture of human-labeled and pseudo-labeled data produced by semi-supervised learning.
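
The summary describes a pseudo-labeling workflow: a pre-trained BERT model, fine-tuned on a small human-labeled seed set, assigns labels to unlabeled text, and only high-confidence predictions are added to the training set. The Python sketch below illustrates that step under stated assumptions; it uses the Hugging Face Transformers library, and the model name, two-label setup, confidence threshold, and function name are illustrative placeholders rather than details taken from the thesis.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; the study targets Filipino text
THRESHOLD = 0.95                             # assumed confidence cutoff for accepting pseudo-labels

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# In practice this model would first be fine-tuned on the small human-labeled
# seed set; loading it fresh here only illustrates the prediction step.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def pseudo_label(unlabeled_texts):
    """Predict labels for unlabeled texts, keeping only high-confidence ones."""
    accepted = []
    with torch.no_grad():
        for text in unlabeled_texts:
            inputs = tokenizer(text, truncation=True, return_tensors="pt")
            probs = torch.softmax(model(**inputs).logits, dim=-1)
            confidence, label = probs.max(dim=-1)
            if confidence.item() >= THRESHOLD:
                accepted.append((text, label.item()))
    return accepted

# The accepted (text, label) pairs are merged with the human-labeled seed set,
# and the combined set can then train downstream (including non-BERT) classifiers.

Thresholding the softmax confidence is one common way to limit pseudo-label noise; the choice of threshold is among the kinds of factors affecting pseudo-labeling performance that the study analyzes.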