Dimension reduction in recurrent networks by canonicalization
Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for causal and time-invariant input/output systems with semi-infinite inputs. Additionally, the notion of optimal reduction coming from the theory of symmetric Hamiltonian systems is implemented in our setup to construct canonical realizations out of input-forgetting but not necessarily canonical ones. These two procedures are studied in detail in the framework of linear fading-memory input/output systems. Finally, the notion of implicit reduction using reproducing kernel Hilbert spaces (RKHS) is introduced, which makes it possible, for systems with linear readouts, to achieve dimension reduction without the need to actually compute the reduced spaces introduced in the first part of the paper.
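For orientation, the display below is a minimal sketch of the kind of state-space system the abstract refers to; the notation is illustrative and is not taken from the paper.

% Generic causal, time-invariant state-space system with a linear readout
% (illustrative notation, not the paper's own): a state map F updates the
% state x_t using the input z_t, and a linear readout W produces the output.
\[
  x_t = F(x_{t-1}, z_t), \qquad y_t = W x_t, \qquad t \in \mathbb{Z}_-.
\]

Here the index set of negative integers encodes the semi-infinite (left-infinite) inputs mentioned in the abstract, and dimension reduction asks for another realization, with a lower-dimensional state space, that induces the same input/output map.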
Main Authors: Grigoryeva, Lyudmila; Ortega, Juan-Pablo
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2022
Subjects: Science::Mathematics; Recurrent Neural Network; Reservoir Computing
Online Access: https://hdl.handle.net/10356/161577
Institution: Nanyang Technological University
Citation: Grigoryeva, L. & Ortega, J. (2021). Dimension reduction in recurrent networks by canonicalization. Journal of Geometric Mechanics, 13(4), 647-677. https://dx.doi.org/10.3934/jgm.2021028
DOI: 10.3934/jgm.2021028
ISSN: 1941-4889
Scopus ID: 2-s2.0-85122382302
Collection: DR-NTU (NTU Library)
Version: Submitted/Accepted version
Funding: JPO acknowledges partial financial support from the Research Commission of the Universität Sankt Gallen and the Swiss National Science Foundation (grant number 200021 175801/1).
Rights: © 2022 American Institute of Mathematical Sciences. All rights reserved. This article has been published in a revised form in Journal of Geometric Mechanics (http://dx.doi.org/10.3934/jgm.2021028). This version is free to download for private research and study only. Not for redistribution, re-sale or use in derivative works.