Latent-optimized adversarial neural transfer for sarcasm detection

The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.

Bibliographic Details
Main Authors: Guo, Xu, Li, Boyang, Yu, Han, Miao, Chunyan
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects: Engineering::Computer science and engineering; Transfer Learning; Deep Learning Optimization; Sarcasm Detection
Online Access:https://aclanthology.org/volumes/2021.naacl-main/
https://hdl.handle.net/10356/153544
Institution: Nanyang Technological University
id sg-ntu-dr.10356-153544
record_format dspace
conference Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
research_centre Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
funding This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG2-RP-2020-019), NRF Investigatorship (NRF-NRFI05-2019-0002), and NRF Fellowship (NRF-NRFF13-2021-0006); the Joint NTU-WeBank Research Centre on Fintech (NWJ-2020-008); the Nanyang Assistant/Associate Professorships (NAP); the RIE 2020 Advanced Manufacturing and Engineering Programmatic Fund (A20G8b0102), Singapore; and NTU-SDU-CFAIR (NSC-2019-011).
citation Guo, X., Li, B., Yu, H. & Miao, C. (2021). Latent-optimized adversarial neural transfer for sarcasm detection. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5394-5407. https://aclanthology.org/volumes/2021.naacl-main/ https://hdl.handle.net/10356/153544
rights © 2021 Association for Computational Linguistics. This is an open-access article distributed under the terms of the Creative Commons Attribution License. Published version (application/pdf).
date_available 2021-12-12T07:12:13Z
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Transfer Learning
Deep Learning Optimization
Sarcasm Detection
description The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.
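The latent-optimization idea summarized in the abstract can be illustrated with a toy sketch. All names and loss forms below are illustrative assumptions, not the paper's implementation: the latent feature vector takes one analytic gradient step on a domain (adversarial) loss before any loss is evaluated, so the potentially conflicting objectives are computed at a latent point that has already accommodated the domain objective.

```python
# Toy sketch (pure Python) of latent optimization in a multi-loss
# transfer setting. Hypothetical losses: a quadratic "domain" loss
# pulling features toward a shared, domain-invariant center, and a
# squared-error "task" loss from a linear head.

def domain_loss(z, center):
    """0.5 * ||z - center||^2: encourages domain-invariant features."""
    return 0.5 * sum((zi - ci) ** 2 for zi, ci in zip(z, center))

def task_loss(z, w, y):
    """Squared error of a linear task head w on latent features z."""
    pred = sum(wi * zi for wi, zi in zip(w, z))
    return 0.5 * (pred - y) ** 2

def latent_step(z, center, eta=0.1):
    """One gradient step on the domain loss w.r.t. z itself.

    The gradient of 0.5*||z - center||^2 w.r.t. z is (z - center),
    so the update is computed analytically, without autograd.
    """
    return [zi - eta * (zi - ci) for zi, ci in zip(z, center)]

z = [1.0, -2.0]
center = [0.0, 0.0]
w = [0.5, 0.5]

z_adj = latent_step(z, center)
# Both losses are then evaluated at the adjusted latent point,
# rather than at the original z.
print(domain_loss(z_adj, center), task_loss(z_adj, w, y=1.0))
```

In a full model, the parameter update for the shared encoder would be backpropagated through this inner latent step; the sketch only shows the inner adjustment itself.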
author2 School of Computer Science and Engineering
format Conference or Workshop Item
author Guo, Xu
Li, Boyang
Yu, Han
Miao, Chunyan
title Latent-optimized adversarial neural transfer for sarcasm detection
publishDate 2021
url https://aclanthology.org/volumes/2021.naacl-main/
https://hdl.handle.net/10356/153544