Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer
Adapt-NLP 2021
Main Authors: Kashyap, Abhinav Ramesh; Mehnaz, Laiba; Malik, Bhavitvya; Waheed, Abdul; Hazarika, Devamanyu; Kan, Min-Yen; Shah, Rajiv Ratn
Other Authors: Department of Computer Science
Format: Conference or Workshop Item
Published: 2021
Online Access: https://scholarbank.nus.edu.sg/handle/10635/194772
Institution: National University of Singapore
Similar Items
- Domain Divergences: A Survey and Empirical Analysis
  by: Ramesh Kashyap, Abhinav, et al.
  Published: (2021)
- So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
  by: Ramesh Kashyap, Abhinav, et al.
  Published: (2022)
- Applications of Domain Divergences for Domain Adaptation in NLP
  by: Abhinav Ramesh Kashyap
  Published: (2023)
- Data-efficient Domain Adaptation for Pretrained Language Models
  by: Guo, Xu
  Published: (2023)
- SciWING – A Software Toolkit for Scientific Document Processing
  by: Ramesh Kashyap, Abhinav, et al.
  Published: (2021)