Online and continual learning using randomization based deep neural networks
Main Author:
Other Authors:
Format: Thesis - Master by Research
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165774
Institution: Nanyang Technological University
Summary: Deep neural networks have achieved state-of-the-art results in recent years, yet they suffer from issues such as a time-consuming training process and catastrophic forgetting. In this work we aim to overcome these issues by combining the advantages of an online learning process, which updates the model as new data arrives, with a system that learns quickly and effectively: the Random Vector Functional Link (RVFL) network, a randomization-based deep neural network. Our approach allows the model to grow incrementally as new data becomes available, so that training more closely resembles real-life learning scenarios. Although the RVFL network was originally proposed as a single-hidden-layer feedforward neural network (SLFN), deep variants have recently been developed. Unlike conventional neural networks, which adjust their weights iteratively, the RVFL network uses a simple learning method without iterative parameter tuning: the hidden-layer weights are assigned randomly and only the output weights are learned.
Keywords: RVFL, Online Learning, Continual Learning.
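The non-iterative learning scheme described in the summary can be sketched as follows. This is an illustrative NumPy sketch of a standard single-layer RVFL, not the thesis's actual implementation; the function names, hidden size, and regularization constant are assumptions. Hidden weights are drawn randomly and kept fixed, inputs are concatenated with the hidden activations (the RVFL's direct links), and only the output weights are solved in closed form via ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, Y, n_hidden=64, reg=1e-3):
    """Fit an RVFL: random fixed hidden layer + direct input links,
    with a closed-form (ridge regression) solve for the output weights."""
    d = X.shape[1]
    W = rng.normal(size=(d, n_hidden))   # random input-to-hidden weights (never trained)
    b = rng.normal(size=n_hidden)        # random hidden biases (never trained)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    D = np.hstack([X, H])                # direct links: raw inputs alongside features
    # The only learned parameters: regularized least-squares output weights
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Tiny usage example on a synthetic regression task
X = rng.normal(size=(200, 5))
Y = X[:, :1] ** 2 + X[:, 1:2]            # nonlinear + linear target components
W, b, beta = train_rvfl(X, Y)
pred = predict_rvfl(X, W, b, beta)
print(pred.shape)                        # (200, 1)
```

Because no gradient iterations are involved, training reduces to one random projection and one linear solve, which is what makes the RVFL attractive as a fast learner in online and continual settings.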