An incremental construction of deep neuro fuzzy system for continual learning of nonstationary data streams
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/161032
Institution: Nanyang Technological University
Summary: Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, which has lower generalization power than deep structures. This article proposes a novel self-organizing deep FNN, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method, which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both the feature space and the target space). DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, the generic classifier, drives the hidden layers. It is equipped with an automatic feature selection method, which controls the activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward, using the concept of hidden layer merging, to prevent uncontrolled growth of the input-space dimensionality caused by the feature augmentation approach to building a deep network structure. DEVFNN works in a samplewise fashion and is compatible with data stream applications. Its efficacy has been thoroughly evaluated on seven datasets with nonstationary properties under the prequential test-then-train protocol, and it has been compared with four popular continual learning algorithms and its shallow counterpart, against which DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the drift detection method is an effective tool for controlling the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the complexity of a deep network with negligible compromise of generalization performance.
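The depth-on-demand mechanism can be made concrete with a short sketch. The following Python code is a minimal illustration, not the paper's implementation: TinyOnlineClassifier is a hypothetical stand-in for the generic classifier, SimpleDriftDetector is a toy error-rate test standing in for the paper's drift detection method, and the window and margin values are arbitrary. What it does show is the stacked-generalization idea, where a drift signal appends a new layer whose input is the raw input augmented with the outputs of the layers below it.

```python
import numpy as np

class TinyOnlineClassifier:
    """Hypothetical stand-in for the paper's generic classifier:
    a per-class running-centroid learner with a fixed output width."""
    def __init__(self, n_classes):
        self.n_classes = n_classes
        self.centroids = {}  # label -> (running mean, sample count)

    def partial_fit(self, z, y):
        mean, n = self.centroids.get(y, (np.zeros_like(z, dtype=float), 0))
        self.centroids[y] = (mean + (z - mean) / (n + 1), n + 1)

    def predict_proba(self, z):
        if not self.centroids:
            return np.ones(self.n_classes) / self.n_classes
        scores = np.full(self.n_classes, -1e9)
        for c, (mean, _) in self.centroids.items():
            scores[c] = -np.linalg.norm(z - mean)
        e = np.exp(scores - scores.max())
        return e / e.sum()

    def predict(self, z):
        return int(np.argmax(self.predict_proba(z)))

class SimpleDriftDetector:
    """Toy stand-in for the paper's drift detector: flags drift when the
    recent error rate clearly exceeds the long-run error rate."""
    def __init__(self, window=200, margin=0.15):
        self.window, self.margin = window, margin
        self.errors = []

    def update(self, error):
        self.errors.append(error)
        if len(self.errors) < 2 * self.window:
            return False
        recent = np.mean(self.errors[-self.window:])
        longrun = np.mean(self.errors[:-self.window])
        if recent > longrun + self.margin:
            self.errors = []  # reset after signalling drift
            return True
        return False

class StackedModel:
    """Depth grows on demand: a drift signal stacks a new layer whose
    input is the raw input augmented with the lower layers' outputs."""
    def __init__(self, n_classes, max_depth=5):
        self.n_classes, self.max_depth = n_classes, max_depth
        self.layers = [TinyOnlineClassifier(n_classes)]
        self.detector = SimpleDriftDetector()

    def _augment(self, x):
        z = np.asarray(x, dtype=float)
        for layer in self.layers[:-1]:
            # feature augmentation: pass the lower layer's output upward
            z = np.concatenate([z, layer.predict_proba(z)])
        return z

    def predict(self, x):
        return self.layers[-1].predict(self._augment(x))

    def learn_one(self, x, y):
        z = self._augment(x)
        err = float(self.layers[-1].predict(z) != y)
        self.layers[-1].partial_fit(z, y)
        if self.detector.update(err) and len(self.layers) < self.max_depth:
            self.layers.append(TinyOnlineClassifier(self.n_classes))
```

Because each layer's input width is fixed at creation time, augmentation grows the input space with depth; this is precisely the growth that the paper's hidden layer merging procedure is designed to curb.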
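The prequential test-then-train protocol used for evaluation is also easy to state in code. Below is a generic sketch; the MajorityBaseline model and the synthetic drifting stream are illustrative placeholders only (the paper evaluates on seven nonstationary datasets against stronger baselines).

```python
import numpy as np

def prequential_evaluate(model, stream, report_every=2000):
    """Prequential test-then-train: each arriving sample is first used to
    test the model, then used to update it, so every prediction is made
    on data the model has never seen."""
    correct, t = 0, 0
    for t, (x, y) in enumerate(stream, start=1):
        correct += int(model.predict(x) == y)  # test first ...
        model.learn_one(x, y)                  # ... then train
        if t % report_every == 0:
            print(f"samples={t}  prequential accuracy={correct / t:.3f}")
    return correct / max(t, 1)

class MajorityBaseline:
    """Trivial reference model: predicts the most frequent label so far."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0
    def learn_one(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

# Synthetic nonstationary stream: the concept flips halfway through.
rng = np.random.default_rng(0)
stream = [(rng.normal(size=4), int(t >= 5000)) for t in range(10000)]
prequential_evaluate(MajorityBaseline(), stream)
```

The halfway concept flip drags the baseline's prequential accuracy toward 50%, which is exactly the failure mode a drift-aware learner such as DEVFNN is meant to avoid; swapping in the StackedModel from the previous sketch shows the drift-driven deepening in action.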