Intelligent commodities trading system


Bibliographic Details
Main Author: Joseph Jacob, Biju
Other Authors: Quek Hiok Chai
Format: Final Year Project
Language: English
Published: 2011
Subjects:
Online Access:http://hdl.handle.net/10356/46493
Institution: Nanyang Technological University
Description
Summary: The use of online learning techniques in neuro-fuzzy systems to address system variance has become more prevalent in recent times. Because many external factors affect time-variant datasets, the patterns in such datasets change over time. While small changes (“drifts”) can be handled by traditional self-organizing techniques, major changes (“shifts”) require systems with self-reorganizing abilities. Hebb’s theory of learning proposed that synaptic strengths are determined by a simple linear relation between the pre- and post-synaptic signals. This theory resulted in unidirectional growth of synaptic strengths, which caused the model to become unstable. The BCM theory of learning resolved this problem by incorporating both synaptic potentiation (association, or Hebbian learning) and depression (dissociation, or anti-Hebbian learning), which is useful for time-variant data computations. Rules in the Mamdani model, which focuses on interpretability, are created by associating an input membership label with an output membership label. The Takagi–Sugeno–Kang (TSK) model, in contrast, associates an input fuzzy region with a linear function. The tuning of the function’s parameters is data-driven, making the TSK model more accurate than the Mamdani model. Current TSK neuro-fuzzy systems such as SAFIS, eTS and DENFIS attempt to strike a balance between the accuracy and the interpretability of the model. However, most of them use offline learning algorithms and therefore read the input data multiple times. Furthermore, the models that do use online learning mainly employ Hebb’s theory of incremental learning. This report proposes a neuro-fuzzy architecture that uses the BCM theory of online learning with extensive self-reorganizing capabilities. It also uses a first-order TSK model for knowledge representation, which allows for accurate output calculation.
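The contrast between the two learning rules in the summary can be sketched numerically. The snippet below is an illustrative sketch, not code from the project: the learning rates, the product-form BCM update and the sliding threshold tracking a running average of the squared output are common textbook choices, shown only to make the potentiation/depression distinction concrete.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    # Plain Hebbian rule: the weight change is proportional to the product
    # of pre-synaptic input x and post-synaptic output y. With non-negative
    # signals the weights can only grow, which is the instability the
    # summary refers to.
    return w + eta * y * x

def bcm_update(w, x, y, theta, eta=0.01, tau=0.1):
    # BCM rule: the sliding threshold theta decides between potentiation
    # (y > theta) and depression (y < theta), so weights can also shrink.
    w = w + eta * y * (y - theta) * x
    # theta tracks a running average of y^2, so sustained high activity
    # raises the bar for further potentiation and bounds the growth.
    theta = theta + tau * (y ** 2 - theta)
    return w, theta

# Demo: present the same input repeatedly to both rules.
rng = np.random.default_rng(0)
x = rng.random(4)
w_hebb = np.full(4, 0.5)
w_bcm, theta = np.full(4, 0.5), 0.0
for _ in range(200):
    w_hebb = hebbian_update(w_hebb, x, w_hebb @ x)
    w_bcm, theta = bcm_update(w_bcm, x, w_bcm @ x, theta)

print(w_hebb.max())  # keeps growing under the Hebbian rule
print(w_bcm.max())   # stays bounded under the BCM rule
```

Under repeated presentation of the same input, the Hebbian weights grow monotonically while the BCM sliding threshold keeps the weights bounded, which is the property the summary exploits for time-variant data.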
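The first-order TSK knowledge representation mentioned in the summary can likewise be sketched. The Gaussian membership functions, the product t-norm and the two example rules below are illustrative assumptions (the project's actual rule base is not described here); the sketch only shows how each fuzzy antecedent is paired with a linear consequent and how the rule outputs are blended.

```python
import numpy as np

def gauss_mf(x, center, sigma):
    # Gaussian membership: degree to which x belongs to a fuzzy region.
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def tsk_first_order(x, rules):
    # First-order TSK: each rule pairs a fuzzy antecedent (center, sigma
    # per input) with a linear consequent y_i = a_i . x + b_i; the output
    # is the firing-strength-weighted average of the rule consequents.
    strengths, outputs = [], []
    for centers, sigmas, a, b in rules:
        # Rule firing strength: product t-norm over the input memberships.
        strengths.append(np.prod(gauss_mf(x, centers, sigmas)))
        outputs.append(a @ x + b)
    strengths = np.array(strengths)
    return strengths @ np.array(outputs) / strengths.sum()

# Two hypothetical rules over a 2-input space:
# (antecedent centers, antecedent widths, consequent slope a, offset b)
rules = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([1.0, -1.0]), 0.0),
    (np.array([2.0, 2.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5]), 1.0),
]
x = np.array([1.0, 1.0])
print(tsk_first_order(x, rules))  # -> 1.0 (both rules fire equally)
```

Because the consequents are tunable linear functions rather than fixed output labels, fitting `a` and `b` to data is what gives the TSK model its accuracy advantage over the Mamdani model described above.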