Data-efficient domain adaptation for pretrained language models
Recent advances in Natural Language Processing (NLP) are built on a range of large-scale pretrained language models (PLMs), which are based on deep transformer neural networks. These PLMs simultaneously learn contextualized word representations and language modeling by training the entire model on m...
Main Author: Guo, Xu
Other Authors: Yu Han
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/167965
Institution: Nanyang Technological University
Similar Items
- Extracting event knowledge from pretrained language models, by: Ong, Claudia Beth. Published: (2023)
- Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer, by: Kashyap, Abhinav Ramesh, et al. Published: (2021)
- Model-driven smart contract generation leveraging pretrained large language models, by: Jiang, Qinbo. Published: (2024)
- Code problem similarity detection using code clones and pretrained models, by: Yeo, Geremie Yun Siang. Published: (2023)
- Language model domain adaptation for automatic speech recognition systems, by: Khassanov, Yerbolat. Published: (2020)