On the compression of translation operator tensors in FMM-FFT-accelerated SIE simulators via tensor decompositions
Tensor decomposition methodologies are proposed to reduce the memory requirement of translation operator tensors arising in the fast multipole method-fast Fourier transform (FMM-FFT)-accelerated surface integral equation (SIE) simulators. These methodologies leverage Tucker, hierarchical Tucker...
Main Authors: | Qian, Cheng; Yucel, Abdulkadir C. |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Article |
Language: | English |
Published: | 2022 |
Subjects: | Engineering::Electrical and electronic engineering; Tensors; Memory Management |
Online Access: | https://hdl.handle.net/10356/159775 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-159775 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-159775 (record timestamp 2022-07-01T08:08:23Z)
Title: On the compression of translation operator tensors in FMM-FFT-accelerated SIE simulators via tensor decompositions
Authors: Qian, Cheng; Yucel, Abdulkadir C.
Affiliation: School of Electrical and Electronic Engineering
Subjects: Engineering::Electrical and electronic engineering; Tensors; Memory Management
Abstract: Tensor decomposition methodologies are proposed to reduce the memory requirement of translation operator tensors arising in FMM-FFT-accelerated SIE simulators (full text in the description field below).
Funders: Ministry of Education (MOE); Nanyang Technological University
Funding note: This work was supported in part by the Ministry of Education, Singapore, under Grant AcRF TIER 1-2018-T1-002-077 (RG 176/18) and in part by Nanyang Technological University under a Start-Up Grant.
Dates: 2022-07-01T08:08:23Z (accessioned); 2022-07-01T08:08:23Z (available); 2020 (issued)
Type: Journal Article
Citation: Qian, C. & Yucel, A. C. (2020). On the compression of translation operator tensors in FMM-FFT-accelerated SIE simulators via tensor decompositions. IEEE Transactions on Antennas and Propagation, 69(6), 3359-3370. https://dx.doi.org/10.1109/TAP.2020.3030981
ISSN: 0018-926X
Handle: https://hdl.handle.net/10356/159775
DOI: 10.1109/TAP.2020.3030981
Scopus ID: 2-s2.0-85107350364
Volume: 69; Issue: 6; Pages: 3359-3370
Language: en
Grant: 2018-T1-002-077 (RG 176/18)
Journal: IEEE Transactions on Antennas and Propagation
Rights: © 2020 IEEE. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Electrical and electronic engineering; Tensors; Memory Management |
description |
Tensor decomposition methodologies are proposed to reduce the memory
requirement of translation operator tensors arising in the fast multipole
method-fast Fourier transform (FMM-FFT)-accelerated surface integral equation
(SIE) simulators. These methodologies leverage Tucker, hierarchical Tucker
(H-Tucker), and tensor train (TT) decompositions to compress the FFT'ed
translation operator tensors stored in three-dimensional (3D) and
four-dimensional (4D) array formats. Extensive numerical tests are performed to
demonstrate the memory saving achieved by, and the computational overhead
introduced by, these methodologies for different simulation parameters. Numerical
results show that the H-Tucker-based methodology for the 4D array format yields
the maximum memory saving, while the Tucker-based methodology for the 3D array
format introduces the minimum computational overhead. For many practical
scenarios, all methodologies yield a significant reduction in the memory
requirement of translation operator tensors while imposing negligible or
acceptable computational overhead. |
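As context for the description above, the sketch below illustrates the kind of Tucker compression it refers to: a truncated higher-order SVD (HOSVD) applied to a dense 3D tensor. This is an illustrative sketch only, not the authors' implementation; the function name `truncated_hosvd`, the tolerance `eps`, and the smooth mock kernel standing in for an FFT'ed translation operator are assumptions made for the example.

```python
import numpy as np


def truncated_hosvd(T, eps=1e-3):
    """Tucker-compress a dense tensor via truncated HOSVD.

    Illustrative sketch only (not the paper's code): each factor matrix is
    obtained from an SVD of a mode unfolding of T, truncated so the discarded
    singular values keep the relative error near eps.
    """
    delta = eps * np.linalg.norm(T) / np.sqrt(T.ndim)  # per-mode error budget
    factors = []
    for mode in range(T.ndim):
        # Mode-n unfolding: move `mode` to the front and flatten the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, s, _ = np.linalg.svd(unfolding, full_matrices=False)
        # tail[r] = Frobenius norm of the singular values discarded at rank r.
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
        rank = max(1, int(np.searchsorted(-tail, -delta)))
        factors.append(U[:, :rank])
    # Core tensor: contract every original mode of T with its conjugated factor.
    core = T
    for U in factors:
        core = np.tensordot(core, U.conj(), axes=([0], [0]))
    return core, factors


# Smooth, oscillatory mock tensor standing in for an FFT'ed translation operator
# (real operators would come from the FMM-FFT simulator, not from this formula).
g = np.linspace(1.0, 2.0, 64)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)
T = np.exp(1j * 10.0 * R) / R

core, factors = truncated_hosvd(T, eps=1e-3)
stored = core.size + sum(U.size for U in factors)
print("Tucker ranks:", core.shape, "| compression:", round(T.size / stored, 1), "x")
```

On a smooth kernel such as this one, the per-mode ranks drop well below the grid size, which is the effect the paper exploits for translation operator tensors; the exact ranks and memory savings depend on the tolerance and on the operator being compressed.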
author2 |
School of Electrical and Electronic Engineering |
author_facet |
School of Electrical and Electronic Engineering; Qian, Cheng; Yucel, Abdulkadir C. |
format |
Article |
author |
Qian, Cheng; Yucel, Abdulkadir C. |
author_sort |
Qian, Cheng |
title |
On the compression of translation operator tensors in FMM-FFT-accelerated SIE simulators via tensor decompositions |
publishDate |
2022 |
url |
https://hdl.handle.net/10356/159775 |
_version_ |
1738844806482755584 |