Self-organizing cooperative neural network experts

Neural networks are generally regarded as function approximation models that map a set of input features to their target outputs. Their approximation capability can be improved through “ensemble learning”: an ensemble of neural networks decreases the error correlation of the group by having each network in the ensemble compensate for the others' errors.

Full description

Bibliographic Details
Main Author: Agarap, Abien Fred
Format: text
Language: English
Published: Animo Repository 2022
Subjects: Neural networks (Computer science); Computer Sciences
Online Access:https://animorepository.dlsu.edu.ph/etdm_comsci/16
https://animorepository.dlsu.edu.ph/cgi/viewcontent.cgi?article=1020&context=etdm_comsci
Institution: De La Salle University
Language: English
id oai:animorepository.dlsu.edu.ph:etdm_comsci-1020
record_format eprints
institution De La Salle University
building De La Salle University Library
continent Asia
country Philippines
content_provider De La Salle University Library
collection DLSU Institutional Repository
language English
topic Neural networks (Computer science)
Computer Sciences
description Neural networks are generally regarded as function approximation models that map a set of input features to their target outputs. Their approximation capability can be improved through “ensemble learning”: an ensemble of neural networks decreases the error correlation of the group by having each network in the ensemble compensate for the others' errors. One ensembling technique is the Mixture-of-Experts model, which consists of a set of independently trained expert neural networks, each specializing on its own subset of the dataset, and a gating network that manages that specialization. In this model, all the networks are trained concurrently, but each expert is trained only on the cases in which it performs well. The major components of the architecture proposed in this thesis are the Cooperative Ensemble, which trains its neural networks concurrently rather than independently, and the k-Winners-Take-All activation function, which drives each expert to specialize on a subset of the input features. This removes the need for a centralized gating network to manage the specialization of the experts. We further improve upon the k-Winners-Take-All ensemble by training an additional neural network whose designated task is to learn useful feature representations for the networks in the ensemble. To learn such representations, this network uses the Soft Nearest Neighbor Loss, which yields a simpler function approximation task for the ensemble members. We call the resulting architecture “Self-Organizing Cooperative Neural Network Experts” (SOCONNE), in which the expert networks earn the right to specialize on their own subsets of the dataset without a centralized gating network. Numerous experiments on a variety of test datasets show that the architecture (1) takes advantage of the learned representations by capturing the underlying structure of the input features, and (2) uses these representations to simplify the task of the neural networks in a cooperative ensemble setup. (A minimal illustrative code sketch of these mechanisms appears at the end of this record.)
format text
author Agarap, Abien Fred
author_facet Agarap, Abien Fred
author_sort Agarap, Abien Fred
title Self-organizing cooperative neural network experts
title_short Self-organizing cooperative neural network experts
title_full Self-organizing cooperative neural network experts
title_fullStr Self-organizing cooperative neural network experts
title_full_unstemmed Self-organizing cooperative neural network experts
title_sort self-organizing cooperative neural network experts
publisher Animo Repository
publishDate 2022
url https://animorepository.dlsu.edu.ph/etdm_comsci/16
https://animorepository.dlsu.edu.ph/cgi/viewcontent.cgi?article=1020&context=etdm_comsci
_version_ 1740844655898001408
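
The description above names three mechanisms: a cooperative ensemble whose experts are trained concurrently, a k-Winners-Take-All step that drives expert specialization without a centralized gating network, and the Soft Nearest Neighbor Loss used to learn feature representations for the ensemble. The following is a minimal PyTorch sketch of those ideas rather than the thesis's actual implementation; the layer sizes, number of experts, confidence-based expert scoring, temperature value, and the toy encoder are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def k_winners_take_all(scores: torch.Tensor, k: int) -> torch.Tensor:
    # Keep the k largest entries along the last dimension and zero out the rest.
    topk = torch.topk(scores, k, dim=-1)
    mask = torch.zeros_like(scores).scatter_(-1, topk.indices, 1.0)
    return scores * mask


def soft_nearest_neighbor_loss(features, labels, temperature=100.0):
    # Soft Nearest Neighbor Loss (Frosst et al., 2019): small when points that
    # share a label lie close together relative to the rest of the batch.
    # Assumes every class present in the batch has at least two samples.
    sq_dists = (features.unsqueeze(1) - features.unsqueeze(0)).pow(2).sum(dim=-1)
    logits = -sq_dists / temperature
    eye = torch.eye(features.size(0), dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(eye, float("-inf"))             # exclude self-pairs
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye   # same-label pairs
    log_same = torch.logsumexp(logits.masked_fill(~same, float("-inf")), dim=1)
    log_all = torch.logsumexp(logits, dim=1)
    return -(log_same - log_all).mean()


class CooperativeKWTAEnsemble(nn.Module):
    # Experts are trained concurrently; per example, only the k most confident
    # experts contribute to the combined output, so specialization can emerge
    # without a centralized gating network.
    def __init__(self, in_dim, n_classes, n_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):
        outs = torch.stack([expert(x) for expert in self.experts], dim=1)  # (batch, experts, classes)
        confidence = outs.softmax(dim=-1).amax(dim=-1)                     # (batch, experts)
        weights = k_winners_take_all(confidence, self.k)
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-12)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)                   # mixture of the k winners


# Toy usage: an assumed encoder learns representations with the soft nearest
# neighbor loss while the ensemble classifies from those representations.
encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
ensemble = CooperativeKWTAEnsemble(in_dim=32, n_classes=10)
x = torch.randn(16, 784)
y = torch.randint(0, 10, (8,)).repeat(2)   # every label occurs at least twice
z = encoder(x)
loss = F.cross_entropy(ensemble(z), y) + soft_nearest_neighbor_loss(z, y)
loss.backward()

Driving expert selection with k-Winners-Take-All over per-expert confidences is one plausible reading of the abstract; the thesis may score and combine the expert outputs differently.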