Multi-learner based recursive supervised training

In this paper, we propose the multi-learner based recursive supervised training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: training on the entire dataset, picking out the best-learnt patterns, and then repeating the process with the remaining patterns. Instead of having a single learner classify all the data during each recursion, an appropriate learner is chosen from a set of three learners, based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic-algorithm learner used in previous approaches. In this way MLRT seeks to identify the inherent characteristics of the dataset and use them to train the data accurately and efficiently. Empirically, MLRT performs considerably well compared with RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets, the time taken by MLRT is considerably lower than that of the other systems with comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the system's efficiency and make it more scalable for future updates. Their performance is similar to that of the original MLRT system.
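The abstract describes a recursive decomposition loop: train on the current pattern set, set aside the best-learnt patterns, and recurse on the remainder with a learner chosen per subset. As a rough illustration only, the loop can be sketched as below; the toy threshold learners and the selection rule are invented placeholders, since this record does not specify MLRT's actual three learners or its selection heuristic.

```python
# Hypothetical sketch of recursive supervised training with per-subset
# learner selection. All components here are illustrative, not MLRT's.

class ThresholdLearner:
    """Toy learner: predicts label 1 when the feature exceeds a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def fit(self, patterns):
        pass  # a real learner would adapt its parameters to `patterns` here

    def correct(self, pattern):
        x, label = pattern
        return (1 if x > self.threshold else 0) == label

def select_learner(learners, subset):
    # Placeholder selection rule: pick the learner that already classifies
    # the most patterns in the current subset correctly.
    return max(learners, key=lambda l: sum(l.correct(p) for p in subset))

def recursive_supervised_training(patterns, learners, max_rounds=10):
    """Repeatedly train, remove well-learnt patterns, recurse on the rest."""
    stages, remaining = [], list(patterns)
    for _ in range(max_rounds):
        if not remaining:
            break
        learner = select_learner(learners, remaining)
        learner.fit(remaining)
        solved = [p for p in remaining if learner.correct(p)]
        if not solved:
            break  # no progress on this subset; stop recursing
        remaining = [p for p in remaining if not learner.correct(p)]
        stages.append((learner.threshold, len(solved)))
    return stages, remaining

# Tiny 1-D example: patterns are (feature, label) pairs.
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
pool = [ThresholdLearner(0.5), ThresholdLearner(0.8)]
stages, leftover = recursive_supervised_training(data, pool)
print(stages, leftover)  # the 0.5-threshold learner solves all four patterns
```

Here the first recursion already solves every pattern, so the loop terminates immediately; on harder data, each round would peel off a well-learnt subset and hand the remainder to a possibly different learner.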


Bibliographic Details
Main Authors: IYER, Laxmi R., RAMANATHAN, Kiruthika, GUAN, Sheng-Uei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2006
Subjects: Neural Networks; Supervised Learning; Probabilistic Neural Networks (PNN); Backpropagation; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing
Online Access:https://ink.library.smu.edu.sg/sis_research/9314
https://ink.library.smu.edu.sg/context/sis_research/article/10314/viewcontent/Multi_Learner_based_Recursive_Supervised_Training_av.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-10314
record_format: dspace
DOI: 10.1142/S1469026806001861
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Date: 2006-09-01
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
topic Neural Networks
Supervised Learning
Probabilistic Neural Networks (PNN)
Backpropagation
Artificial Intelligence and Robotics
Numerical Analysis and Scientific Computing
description In this paper, we propose the multi-learner based recursive supervised training (MLRT) algorithm, which uses the existing framework of recursive task decomposition, by training the entire dataset, picking out the best learnt patterns, and then repeating the process with the remaining patterns. Instead of having a single learner to classify all datasets during each recursion, an appropriate learner is chosen from a set of three learners, based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic algorithm learner utilized in previous approaches. In this way MLRT seeks to identify the inherent characteristics of the dataset, and utilize it to train the data accurately and efficiently. We observed that empirically MLRT performs considerably well as compared with RPHP and other systems on benchmark data with 11% improvement in accuracy on the SPAM dataset and comparable performances on the VOWEL and the TWO-SPIRAL problems. In addition, for most datasets, the time taken by MLRT is considerably lower than that of the other systems with comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency in the system, and to make it more scalable for future updates. The performance in these versions is similar to the original MLRT system.