Self-reorganizing TSK fuzzy inference system with BCM theory of meta-plasticity
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access: https://hdl.handle.net/10356/98222 ; http://hdl.handle.net/10220/12416
Institution: Nanyang Technological University
Summary: The use of online learning techniques in neuro-fuzzy systems (NFSs) to address system variance has become increasingly common. Because many external factors influence time-variant datasets, their underlying patterns tend to change over time. While small changes ("drifts") can be handled by traditional self-organizing techniques, major changes ("shifts") cannot. There is therefore a growing need for such systems to self-reorganize their structures to adapt to major changes in data patterns.

Hebb's theory of learning, as applied in NFSs, proposes that synaptic strength is determined by a simple linear relation of the pre- and post-synaptic signals. However, this rule results in unidirectional growth of synaptic strengths and destabilizes the model. The Bienenstock-Cooper-Munro (BCM) theory of learning resolves this problem by incorporating both synaptic potentiation (association, or Hebbian) and depression (dissociation, or anti-Hebbian), which is useful for time-variant data computations.

There are two popular methods of fuzzy rule representation: the Mamdani and the Takagi-Sugeno-Kang (TSK) models. The Mamdani model emphasizes interpretability at the expense of accuracy; its rules associate an input fuzzy region with an output fuzzy region. The TSK model, in contrast, associates an input fuzzy region with a linear function (plane), making it more accurate than the Mamdani model. Current TSK models such as SAFIS, eTS, and DENFIS attempt to strike a balance between accuracy and interpretability. However, most of these models use offline learning algorithms and require multiple passes over the data samples, and the models that do learn online mainly employ Hebb's theory of incremental learning. This paper proposes a neuro-fuzzy architecture that uses the BCM theory of online learning with extensive self-reorganizing capabilities. It also uses a first-order TSK model for knowledge representation, which allows for accurate output computation.
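The learning mechanisms named in the summary can be sketched roughly as follows. This is an illustrative outline based on the standard textbook forms of the Hebbian rule, the BCM rule with a sliding modification threshold, and first-order TSK defuzzification; it is not the paper's implementation, and all function names, learning rates, and the threshold time constant are assumptions.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Plain Hebbian rule: dw = lr * x * y.
    Weights only grow when pre- and post-synaptic activity correlate,
    so strengths drift upward without bound (the instability the
    abstract attributes to Hebb's rule)."""
    return w + lr * x * y

def bcm_update(w, x, y, theta, lr=0.01, tau=0.1):
    """BCM rule: dw = lr * x * y * (y - theta).
    When the post-synaptic response y exceeds the sliding threshold
    theta, the synapse is potentiated (Hebbian); below it, the synapse
    is depressed (anti-Hebbian). The threshold itself adapts toward a
    running average of y**2 (meta-plasticity), which bounds growth."""
    w_new = w + lr * x * y * (y - theta)
    theta_new = theta + tau * (y ** 2 - theta)  # slide threshold toward E[y^2]
    return w_new, theta_new

def tsk_first_order_output(x, firing_strengths, consequents):
    """Standard first-order TSK output: each rule i maps the input
    region to a linear function f_i(x) = a_i0 + a_i . x, and the system
    output is the firing-strength-weighted average of the f_i values."""
    x_aug = np.concatenate(([1.0], x))   # prepend bias term for a_i0
    f = consequents @ x_aug              # f_i(x) for every rule, shape (R,)
    return np.dot(firing_strengths, f) / np.sum(firing_strengths)
```

In this sketch the `consequents` array has one row of linear coefficients per rule; the contrast between `hebbian_update` and `bcm_update` is only meant to show why a sliding threshold yields both potentiation and depression rather than unbounded growth.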