A projection based learning in Meta-cognitive Radial Basis Function Network for classification problems

Bibliographic Details
Main Authors: Sateesh Babu, Giduthuri; Suresh, Sundaram; Savitha, R.
Other Authors: School of Computer Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access:https://hdl.handle.net/10356/98312
http://hdl.handle.net/10220/12386
Institution: Nanyang Technological University
Description
Summary: In this paper, we propose a 'Meta-cognitive Radial Basis Function Network (McRBFN)' and its 'Projection Based Learning (PBL)' algorithm for classification problems. McRBFN emulates human-like meta-cognitive learning principles. As each sample is presented to the network, McRBFN uses the estimated class label, the maximum hinge error and the class-wise significance to address the self-regulating principles of what-to-learn, when-to-learn and how-to-learn in a meta-cognitive framework. McRBFN addresses what-to-learn by selecting samples to participate in the learning process and deleting samples whose information is similar to that already learnt by the network. Samples that satisfy neither criterion are pushed to the rear of the training data stack to be used later, thereby addressing when-to-learn. The how-to-learn component of meta-cognition is addressed by using the participating samples either to add a neuron or to update the output weights. McRBFN begins with zero hidden neurons and adds the number of neurons required to approximate the decision surface. When a neuron is added, its parameters are initialized based on the sample overlapping conditions. The output weights are updated using the PBL algorithm such that the network finds the minimum point of an energy function defined by the hinge-loss error. The use of human meta-cognitive principles ensures efficient learning. Moreover, because samples with similar information are deleted, overtraining is avoided. The PBL algorithm also reduces the computational effort needed for training. The performance of the PBL-McRBFN classifier is evaluated on a set of benchmark classification problems from the UCI machine learning repository. The evaluation study on these problems clearly indicates the superior performance of the PBL-McRBFN classifier over results reported in the literature.
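The summary describes a per-sample training loop: a meta-cognitive component decides whether to delete the sample, reserve it for later, grow a hidden neuron, or update the output weights, and the PBL step then solves for the output weights in closed form. The sketch below is only an illustration of that flow under stated assumptions: the fixed decision thresholds, the unit kernel widths (the paper initializes neuron parameters from sample overlapping conditions), and the plain least-squares projection standing in for the hinge-loss energy minimization are all assumptions, not the authors' exact algorithm.

```python
import numpy as np

def rbf_hidden_outputs(X, centers, widths):
    """Gaussian responses of each sample in X for every hidden neuron.
    X: (n, d) array; centers: (K, d); widths: (K,)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def pbl_output_weights(H, Y):
    """Projection Based Learning step: solve the linear system A W = B that
    minimises a quadratic energy function (a least-squares stand-in for the
    hinge-loss energy used in the paper)."""
    A = H.T @ H              # projection matrix
    B = H.T @ Y              # output matrix
    return np.linalg.lstsq(A, B, rcond=None)[0]

def train_pbl_mcrbfn(X, y, n_classes, delete_thr=0.1, grow_thr=0.5):
    """Meta-cognitive loop: per sample, delete it (what-to-learn), reserve it
    for later (when-to-learn), or grow a neuron / update the output weights
    via PBL (how-to-learn)."""
    Y = -np.ones((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0          # coded class labels in {-1, +1}
    centers, widths, reserve = [], [], []
    W = None
    for i, (x, t) in enumerate(zip(X, Y)):
        if not centers:                    # network starts with zero neurons
            centers.append(x); widths.append(1.0)
        else:
            h = rbf_hidden_outputs(x[None, :], np.array(centers), np.array(widths))
            hinge = np.maximum(0.0, 1.0 - t * (h @ W).ravel())
            err = hinge.max()              # maximum hinge error for this sample
            if err < delete_thr:           # similar information already learnt
                continue
            if err < grow_thr:             # not novel enough yet: learn later
                reserve.append(i)
                continue
            centers.append(x); widths.append(1.0)   # grow a hidden neuron
        H = rbf_hidden_outputs(X[: i + 1], np.array(centers), np.array(widths))
        W = pbl_output_weights(H, Y[: i + 1])        # PBL output-weight update
    return np.array(centers), np.array(widths), W, reserve
```

A call such as train_pbl_mcrbfn(X_train, y_train, n_classes=3) would return the grown centers and widths, the projected output weights, and the indices of reserved samples; per the summary, reserved samples are pushed to the rear of the training data stack and revisited later rather than discarded.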