An embedded neuro-fuzzy architecture for explainable time series analysis
Main Author: Xie, Chen
Other Authors: Deepu Rajan
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/155536
Institution: Nanyang Technological University
Description:
Explainability in Artificial Intelligence (AI) refers to the knowledge and understanding of the internal representation in a machine learning model and of how it affects the performance of that model. In applications such as financial prediction, medical diagnosis, and detection of manufacturing defects, it is desirable to find out which features contribute to a particular decision and in what manner. Explainability in AI is generally achieved in two ways: 1) visualization of the learnt features, and 2) explanation of the learning in linguistic form. Visualization of the features learnt during training either displays them as heat maps or shows the image segments that the algorithm focuses on. However, such visualization only shows what the algorithm is looking at; it does not explain how the algorithm arrives at its decisions. This thesis presents three Mamdani-type neuro-fuzzy networks, namely NFHW, FNHW, and EcFNN, that take the second approach: explaining the learning in linguistic terms.
Neuro-fuzzy architectures have the ability to learn and reason using fuzzy linguistic rules. To achieve accurate predictions, these algorithms require sufficient training data to fully describe the system behaviour, from which linguistic rules are formed through implication and reasoning. The former is a realisation of the entailment in the data, while the latter is the inference determined by the implication and the observation. In real-world applications, the majority of the data available for training is very often derived from steady-state operation, while the test data may include transient scenarios that cause poor prediction performance. This can happen when detecting manufacturing defects in aircraft engines, leaks or explosions at a nuclear chemical plant, or financial crises that lead to tremendous losses.
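For concreteness, the roles of implication and reasoning can be written in the standard Mamdani/Zadeh form; this is a textbook formulation shown for reference only, and the thesis's actual implication operators, described below, embed learned models rather than the min operator.

```latex
% Standard Mamdani-type rule, implication, and inference (amsmath assumed).
% Shown for reference; the thesis replaces the implication operator with
% learned models (Hammerstein-Wiener or deep networks).
\begin{align}
  R_i &: \text{IF } x \text{ is } A_i \text{ THEN } y \text{ is } B_i, \\
  \mu_{R_i}(x, y) &= \min\!\big(\mu_{A_i}(x),\, \mu_{B_i}(y)\big)
      && \text{(implication: entailment in the data)} \\
  \mu_{B'}(y) &= \sup_{x} \min\!\big(\mu_{A'}(x),\, \max_i \mu_{R_i}(x, y)\big)
      && \text{(reasoning: inference from implication and observation)}
\end{align}
```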
In this thesis, a Mamdani-type neuro-fuzzy architecture that embeds deep learning algorithms as its implication operators is proposed for time series analysis. It directly addresses the explainability challenge in data science by using the induced fuzzy linguistic rules to explain, in semantic terms, the implications learnt by the embedded deep learning models. In addition, new fuzzy implication operators are built from deep learning models to address transient behaviours that are not present during training. This equips conventional neuro-fuzzy systems with new embedded implication strategies, such as using a Hammerstein-Wiener model or a deep learning model, to achieve both reasoning and precision.
To address the prediction issue of conventional neuro-fuzzy systems applied to dynamically changing data, a Deep Hybrid Fuzzy Neural Hammerstein-Wiener Network is proposed that exploits parallel computation: the steady-state and the dynamically changing data are handled separately by a neuro-fuzzy system and a Hammerstein-Wiener model to improve prediction accuracy. A multilayer perceptron is employed as the control unit that decides the contributions of the neuro-fuzzy system and the Hammerstein-Wiener model to the final prediction. This network allows hybrid neuro-fuzzy systems to make predictions on test data that differ substantially from the training data.
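A minimal numerical sketch of this parallel-hybrid idea is given below, assuming stand-in component predictors, a short input window, and a tiny untrained MLP gate. None of this is the thesis's actual code; the component functions, window size, and gate architecture are illustrative assumptions.

```python
import numpy as np

def nfs_predict(window):
    """Stand-in for the neuro-fuzzy system's steady-state prediction (hypothetical)."""
    return float(np.mean(window))

def hw_predict(window):
    """Stand-in for the Hammerstein-Wiener model's dynamic prediction (hypothetical)."""
    return float(window[-1] + (window[-1] - window[-2]))

def mlp_gate(window, W1, b1, w2, b2):
    """Tiny MLP control unit: maps the input window to a mixing weight in (0, 1)."""
    h = np.tanh(W1 @ window + b1)
    return float(1.0 / (1.0 + np.exp(-(w2 @ h + b2))))

rng = np.random.default_rng(0)
window = np.array([1.0, 1.1, 1.3])              # short window of past observations
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # untrained gate parameters (illustrative)
w2, b2 = rng.normal(size=4), 0.0

g = mlp_gate(window, W1, b1, w2, b2)
y_hat = g * nfs_predict(window) + (1.0 - g) * hw_predict(window)
print(f"gate weight = {g:.3f}, combined prediction = {y_hat:.3f}")
```

In a trained system the gate would learn to favour the neuro-fuzzy branch under steady-state conditions and the Hammerstein-Wiener branch during transients.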
With the aim of implementing the neuro-fuzzy system and the Hammerstein-Wiener model as an indivisible network rather than as parallel models, the Interpretable Neural Fuzzy Hammerstein-Wiener Network is proposed, which further modifies the block-oriented architecture of the Hammerstein-Wiener model. The input and output nonlinearities of the Hammerstein-Wiener model are realized by the fuzzification and defuzzification processes of the neuro-fuzzy system, while the linear dynamic block is retained as the implication operator of the neuro-fuzzy system. In addition, a novel knowledge encoding and decoding process is embedded in the network to adapt the fuzzy representations for the linear dynamic computation. The fuzzy linguistic rules induced from the linear dynamic computation are used to explain the inference process, which makes the gray-box Hammerstein-Wiener model explainable.
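The underlying block-oriented structure being modified is the standard Hammerstein-Wiener form, sketched below. The identification of the static nonlinearities with fuzzification and defuzzification follows the description above; the ARX-style orders n_a and n_b are illustrative, not taken from the thesis.

```latex
% Standard Hammerstein-Wiener structure: static input nonlinearity f,
% linear dynamic block, static output nonlinearity g (amsmath assumed).
\begin{align}
  v(t) &= f\big(u(t)\big)
      && \text{(input nonlinearity / fuzzification)} \\
  w(t) &= \sum_{i=1}^{n_a} a_i\, w(t-i) + \sum_{j=0}^{n_b} b_j\, v(t-j)
      && \text{(linear dynamic block / implication operator)} \\
  \hat{y}(t) &= g\big(w(t)\big)
      && \text{(output nonlinearity / defuzzification)}
\end{align}
```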
To further enhance the learning of the classes of changes in the fuzzy space, the Hybrid Embedded Deep Fuzzy Association Model for Learning and Explanation embeds a deep learning model as the fuzzy implication operator of a five-layer Mamdani-type neuro-fuzzy system, replacing the linear dynamic implication mechanism of the Interpretable Neural Fuzzy Hammerstein-Wiener Network. Embedding a deep learning model in the neuro-fuzzy system allows data-driven learning of the fuzzy implication, which corresponds closely to the real-world entailment in the data. In addition, employing the neuro-fuzzy architecture around the deep learning model induces fuzzy association rules that impart transparency to the deep learning structure and are amenable to human interpretation. This work provides a promising new solution for data-driven entailment in real-world applications by realizing the fuzzy implication with deep learning models.
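A rough sketch of the "deep model as implication operator" idea, assuming Gaussian membership functions, a toy two-layer network as the implication block, and centroid-style defuzzification. The layer sizes, membership parameters, and implication network here are illustrative assumptions, not the EcFNN architecture.

```python
import numpy as np

def gaussian_mf(x, centers, sigmas):
    """Layer 2: membership degrees of a crisp input in each antecedent fuzzy set."""
    return np.exp(-0.5 * ((x - centers) / sigmas) ** 2)

def neural_implication(firing, W1, b1, W2, b2):
    """Layers 3-4: a small network maps rule firing strengths to consequent activations."""
    h = np.tanh(W1 @ firing + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # activation of each consequent set

rng = np.random.default_rng(1)
centers = np.array([-1.0, 0.0, 1.0])              # antecedent sets: "low", "medium", "high"
sigmas = np.array([0.5, 0.5, 0.5])
consequent_centroids = np.array([0.0, 5.0, 10.0]) # centroids of the output fuzzy sets

x = 0.4                                           # layer 1: crisp input
firing = gaussian_mf(x, centers, sigmas)          # layer 2: fuzzification
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)     # untrained implication network (illustrative)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)
activation = neural_implication(firing, W1, b1, W2, b2)

# Layer 5: centroid-style defuzzification (weighted average of consequent centroids)
y_hat = float(activation @ consequent_centroids / activation.sum())
print(f"firing strengths = {np.round(firing, 3)}, prediction = {y_hat:.3f}")
```

Because the antecedent and consequent fuzzy sets carry linguistic labels, each input-output pair can still be traced to a rule of the form "IF x is medium THEN y is high", which is the source of the interpretability claimed above.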
The proposed networks are evaluated using time-series applications, such as financial stock price predictions and the forecasting of pH values in chemical plants.
School: School of Computer Science and Engineering
Citation: Xie, C. (2021). An embedded neuro-fuzzy architecture for explainable time series analysis. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/155536
DOI: 10.32657/10356/155536
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).