Multi-learner based recursive supervised training

In this paper, we propose the multi-learner based recursive supervised training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: the entire dataset is trained, the best-learnt patterns are set aside, and the process is repeated on the remaining patterns. Instead of a single learner classifying all data during each recursion, an appropriate learner is chosen from a set of three, based on the subset of data being trained, thereby avoiding the time overhead of the genetic-algorithm learner used in previous approaches. In this way MLRT seeks to identify the inherent characteristics of the dataset and exploit them to train the data accurately and efficiently. Empirically, MLRT compares favourably with RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets, MLRT takes considerably less time than other systems of comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency of the system and make it more scalable for future updates; their performance is similar to that of the original MLRT.
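The recursion the abstract describes — train on the current subset, set aside the best-learnt patterns, choose a learner per remaining subset, and repeat — can be sketched as follows. This is a minimal illustration under a generic "learner" interface; the helper names (`majority_learner`, `choose_learner`, `is_well_learnt`) are hypothetical stand-ins, not the three learners or the selection heuristic from the paper.

```python
# Sketch of the recursive supervised training loop described in the
# abstract. Learners and the selection rule are illustrative assumptions.

def mlrt_train(patterns, learners, choose_learner, is_well_learnt, max_rounds=10):
    """Each round: pick a learner for the current subset, train it,
    set aside the best-learnt patterns, and recurse on the remainder."""
    stages, remaining = [], list(patterns)
    for _ in range(max_rounds):
        if not remaining:
            break
        learner = choose_learner(remaining, learners)  # learner chosen per subset
        model = learner(remaining)                     # train on this subset
        well = [p for p in remaining if is_well_learnt(model, p)]
        if not well:                                   # no progress: stop early
            stages.append((learner, model, remaining))
            remaining = []
            break
        stages.append((learner, model, well))          # record what was learnt
        remaining = [p for p in remaining if p not in well]
    return stages

def majority_learner(data):
    """Toy learner: always predict the majority label of its training subset."""
    labels = [y for _, y in data]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority
```

Each entry in `stages` records which learner handled which subset of patterns; a real MLRT run would substitute the paper's three learners and its dataset-based selection rule for the toy ones here.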


Bibliographic details
Main Authors: IYER, Laxmi R., RAMANATHAN, Kiruthika, GUAN, Sheng-Uei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2006
Online access: https://ink.library.smu.edu.sg/sis_research/7396
https://ink.library.smu.edu.sg/context/sis_research/article/8399/viewcontent/Multi_Learner_based_Recursive_Supervised_Training__1_.pdf
Institution: Singapore Management University
DOI: 10.1142/S1469026806001861
Date: 2006-09-01
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Subjects: Neural Networks; Supervised Learning; Probabilistic Neural Networks (PNN); Backpropagation; Databases and Information Systems; OS and Networks