An empirical study of memorization in NLP
A recent study by Feldman (2020) proposed a long-tail theory to explain the memorization behavior of deep learning models. However, memorization has not been empirically verified in the context of NLP, a gap addressed by this work. In this paper, we use three different NLP tasks to check if the long...
| Main Authors: | ZHENG, Xiaosen; JIANG, Jing |
| --- | --- |
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2022 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/7705 https://ink.library.smu.edu.sg/context/sis_research/article/8708/viewcontent/2022.acl_long.434.pdf |
| Institution: | Singapore Management University |
Similar Items
- Enhancing contextual understanding in NLP: adapting state-of-the-art models for improved sentiment analysis of informal language
  by: Sneha Ravisankar
  Published: (2024)
- Leveraging linguistic knowledge to enhance low-resource NLP applications
  by: Zhu, Zixiao
  Published: (2025)
- Event extraction and beyond: from conventional NLP to large language models
  by: Zhou, Hanzhang
  Published: (2025)
- Automatically extracting templates from examples for NLP tasks
  by: Ong, Ethel, et al.
  Published: (2008)
- Text analytics, NLP, and accounting research
  by: CROWLEY, Richard M.
  Published: (2020)