Stack operation of tensor networks



Bibliographic Details
Main Authors: Zhang, Tianning, Chen, Tianqi, Li, Erping, Yang, Bo, Ang, L. K.
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2022
Subjects:
Online Access: https://hdl.handle.net/10356/160356
Institution: Nanyang Technological University
Description
Summary: The tensor network, as a factorization of tensors, aims to support the operations that are common for ordinary tensors, such as addition, contraction, and stacking. However, due to its non-unique network structure, only tensor network contraction is so far well defined. In this paper, we propose a mathematically rigorous definition of the tensor network stack approach, which compresses a large number of tensor networks into a single one without changing their structures and configurations. We illustrate the main ideas with matrix-product-state-based machine learning as an example. Our results are compared with the for-loop and the efficient-coding methods on both CPU and GPU.
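The abstract compares against a for loop and an "efficient coding" (batched) evaluation. As a rough illustration of that baseline comparison only, and not of the paper's stack definition itself, the sketch below evaluates K matrix product states with identical structure two ways: one network at a time in a Python loop, and all at once after stacking the site tensors along a new batch axis. All names, shapes, and sizes here are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: K independent matrix product states (MPS), each a
# chain of L cores with shape (left bond, physical, right bond);
# boundary bond dimensions are 1.
K, L, d, D = 4, 5, 2, 3
rng = np.random.default_rng(0)

def random_mps(L, d, D, rng):
    shapes = [(1 if i == 0 else D, d, 1 if i == L - 1 else D)
              for i in range(L)]
    return [rng.normal(size=s) for s in shapes]

mps_list = [random_mps(L, d, D, rng) for _ in range(K)]
x = rng.normal(size=(L, d))  # one input vector per physical site

# Baseline 1: contract each network separately in a Python for loop.
def contract_loop(mps, x):
    v = np.ones(1)
    for core, xi in zip(mps, x):
        v = np.einsum('a,apb,p->b', v, core, xi)
    return v.item()

loop_out = np.array([contract_loop(m, x) for m in mps_list])

# Baseline 2: "efficient coding" — stack the K cores at each site into a
# batched tensor, then contract all K networks with one einsum per site.
stacked = [np.stack([mps_list[k][i] for k in range(K)]) for i in range(L)]
v = np.ones((K, 1))
for core, xi in zip(stacked, x):
    v = np.einsum('ka,kapb,p->kb', v, core, xi)
batched_out = v[:, 0]

# Both routes produce the same K scalar contraction results.
assert np.allclose(loop_out, batched_out)
```

The batched route works here only because every network shares the same structure; the paper's contribution, per the abstract, is defining such a stack operation rigorously at the tensor network level.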