Explainable natural language processing with matrix product states

Despite empirical successes of recurrent neural networks (RNNs) in natural language processing (NLP), theoretical understanding of RNNs is still limited due to their intrinsically complex non-linear computations. We systematically analyze RNNs' behaviors in a ubiquitous NLP task, the sentiment analysis of movie reviews, via the mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and a matrix product state (MPS). Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension beyond the EE saturation threshold does not increase model prediction accuracies, so a minimal model that best estimates the data statistics can be inferred. Although the saturated EE is smaller than the maximum EE allowed by the area law, our minimal model still achieves ~99% training accuracies on realistic sentiment analysis data sets. Thus, low EE is not a warrant against the adoption of single-layer RACs for NLP. Contrary to a common belief that long-range information propagation is the main source of RNNs' successes, we show that single-layer RACs harness high expressiveness from the subtle interplay between information propagation and the word vector embeddings. Our work sheds light on the phenomenology of learning in RACs and, more generally, on the explainability of RNNs for NLP, using tools from many-body quantum physics.
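
As background on the entanglement-entropy proxy mentioned in the abstract (a standard textbook definition, not quoted from the paper): for a bipartition of a matrix product state at a given bond, the squared Schmidt coefficients \lambda_i^2 are the eigenvalues of the reduced density matrix \rho_A, and the von Neumann EE is

    S(\rho_A) = -\mathrm{Tr}\,(\rho_A \ln \rho_A) = -\sum_{i=1}^{\chi} \lambda_i^2 \ln \lambda_i^2 \;\le\; \ln \chi,

where \chi is the bond dimension. The saturation of S below the area-law bound \ln\chi as \chi grows is the effect the abstract describes.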

Bibliographic Details
Main Author: Tangpanitanon J.
Other Authors: Mahidol University
Format: Article
Published: New Journal of Physics Vol.24 No.5 (2022)
DOI: 10.1088/1367-2630/ac6232
ISSN: 1367-2630
Subjects: Physics and Astronomy
Online Access:https://repository.li.mahidol.ac.th/handle/123456789/86920
Institution: Mahidol University