FE-RNN: a fuzzy embedded recurrent neural network for improving interpretability of underlying neural network
Saved in:
Main Authors: Tan, James Chee Min; Cao, Qi; Quek, Chai
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Data driven implication; Deep neural networks
Online Access: https://hdl.handle.net/10356/178712
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-178712
record_format: dspace
publication details:
Citation: Tan, J. C. M., Cao, Q. & Quek, C. (2024). FE-RNN: a fuzzy embedded recurrent neural network for improving interpretability of underlying neural network. Information Sciences, 663, 120276. https://dx.doi.org/10.1016/j.ins.2024.120276
Journal: Information Sciences, vol. 663, article 120276
ISSN: 0020-0255
DOI: 10.1016/j.ins.2024.120276
Scopus ID: 2-s2.0-85184841583
Version: Published version (application/pdf)
Deposited in DR-NTU: 2024-07-03 (record last updated 2024-07-05)
Rights: © 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)
description:
Deep learning enables effective predictions, but deep structures face challenges in human interpretability compared with conventional techniques such as fuzzy inference systems. This motivates research into alleviating the black-box nature of deep structures while maintaining performance. This paper proposes a fuzzy-embedded recurrent neural network (FE-RNN) to improve the interpretability of the underlying neural network. It is a parallel deep structure comprising an RNN and a Pseudo Outer-Product based Fuzzy Neural Network (POPFNN) that share a common set of input and output linguistic concepts. The inference carried out by the RNN is associated with the fuzzy rules of the embedded POPFNN, and these fuzzy IF-THEN rules give better interpretability of the inference process of the hybrid network. This allows an effective realisation of data-driven implication, with the RNN modelling fuzzy entailment within a fuzzy neural network (FNN) structure. FE-RNN obtains more consistent results than other FNNs in experiments on the Mackey-Glass dataset, and it achieves about 99% correlation when forecasting the prices of market indexes; its interpretability is also discussed. FE-RNN then acts as the prediction tool in a financial trading system using forecast-assisted technical indicators optimised with genetic algorithms, where it outperforms the benchmark trading strategies in the trading experiments.
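The abstract describes FE-RNN as a parallel structure in which an RNN and an embedded POPFNN fuzzy rule base share one set of input and output linguistic concepts. The sketch below is not taken from the paper; it only illustrates that general idea in PyTorch. The Gaussian membership functions, layer sizes, the plain linear stand-in for the POPFNN rule branch, and the simple averaging of the two branches are all assumptions made for illustration, not the authors' design.

```python
# Illustrative sketch only: class names, layer sizes, and the way the two
# branches are combined are assumptions, not the FE-RNN architecture itself.
import torch
import torch.nn as nn


class GaussianFuzzifier(nn.Module):
    """Maps each crisp input feature to membership degrees over a set of
    linguistic terms (e.g. LOW / MEDIUM / HIGH) using Gaussian membership
    functions with learnable centres and widths."""

    def __init__(self, n_features: int, n_terms: int = 3):
        super().__init__()
        self.centres = nn.Parameter(
            torch.linspace(-1.0, 1.0, n_terms).repeat(n_features, 1))
        self.log_sigma = nn.Parameter(torch.zeros(n_features, n_terms))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, n_features) -> memberships: (batch, seq, n_features * n_terms)
        sigma = self.log_sigma.exp()
        mu = torch.exp(-((x.unsqueeze(-1) - self.centres) ** 2) / (2 * sigma ** 2))
        return mu.flatten(start_dim=-2)


class FuzzyEmbeddedRNNSketch(nn.Module):
    """Parallel structure: an RNN branch and a fuzzy-rule branch operate on the
    same linguistic (membership) representation of the input, and their outputs
    are combined into one prediction."""

    def __init__(self, n_features: int, n_terms: int = 3, hidden: int = 32):
        super().__init__()
        self.fuzzifier = GaussianFuzzifier(n_features, n_terms)
        self.rnn = nn.RNN(input_size=n_features * n_terms,
                          hidden_size=hidden, batch_first=True)
        # Stand-in for the embedded fuzzy rule base (POPFNN is not reproduced
        # here): a single linear layer over the last time step's memberships.
        self.rule_branch = nn.Linear(n_features * n_terms, 1)
        self.rnn_head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.fuzzifier(x)                    # shared linguistic representation
        rnn_out, _ = self.rnn(mu)
        y_rnn = self.rnn_head(rnn_out[:, -1, :])  # RNN branch prediction
        y_rule = self.rule_branch(mu[:, -1, :])   # fuzzy-rule branch prediction
        return 0.5 * (y_rnn + y_rule)             # illustrative averaging of branches


if __name__ == "__main__":
    model = FuzzyEmbeddedRNNSketch(n_features=1)
    x = torch.randn(8, 20, 1)                     # e.g. windows of a time series such as Mackey-Glass
    print(model(x).shape)                         # torch.Size([8, 1])
```

In the paper, the rule branch is the POPFNN with fuzzy IF-THEN rules and the combination of the branches is part of the learned model rather than a fixed average; the sketch is only meant to make the shared-linguistic-concept idea concrete.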