Unsupervised generative variational continual learning
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/164770
Institution: Nanyang Technological University
Summary: Continual learning aims at learning a sequence of tasks without forgetting any of them. There are three main categories of methods in this field: replay methods, regularization-based methods, and parameter isolation methods. Recent research in continual learning generally combines two of these approaches to obtain better performance. This dissertation combines regularization-based and parameter isolation methods to ensure that the parameters important for each task do not change drastically, while freeing up unimportant parameters so that the network can learn new knowledge.
While most of the existing literature on continual learning targets class-incremental learning in a supervised setting, there is enormous potential for unsupervised continual learning using generative models. This dissertation proposes a combination of architectural pruning and network expansion in generative variational models toward unsupervised generative continual learning (UGCL). Evaluations on standard benchmark data sets demonstrate the superior generative ability of the proposed method.
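The summary describes two complementary mechanisms: protecting parameters that matter for earlier tasks while pruning away unimportant ones, and expanding the network to add capacity for new tasks. The sketch below only illustrates these two generic operations on a toy variational autoencoder in PyTorch; it is not the dissertation's implementation, and all names (`SmallVAE`, `prune_and_freeze`, `expand_layer`) and the magnitude-based importance proxy are assumptions made for illustration.

```python
# A minimal sketch, assuming PyTorch, of the two mechanisms the abstract
# describes: pruning to free unimportant parameters while keeping important
# ones fixed, and expanding a layer to add capacity for new tasks.
# Hypothetical names throughout; not the thesis implementation.
import torch
import torch.nn as nn


class SmallVAE(nn.Module):
    """Toy variational autoencoder used only to illustrate the mechanisms."""

    def __init__(self, in_dim=784, hidden=128, latent=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar


def prune_and_freeze(layer: nn.Linear, keep_ratio=0.5):
    """Magnitude-based proxy for parameter importance: treat the largest
    weights as important for past tasks and zero (free) the rest so the
    freed capacity can be reused for the next task."""
    w = layer.weight.data.abs()
    k = int(w.numel() * (1 - keep_ratio))  # number of weights to prune
    threshold = w.flatten().kthvalue(k).values
    important = w > threshold              # mask of weights kept for past tasks
    layer.weight.data[~important] = 0.0    # freed parameters for the new task
    return important
    # During training on the new task, the mask would be applied to the
    # gradients so important weights stay fixed, e.g.:
    #   layer.weight.grad[important] = 0.0


def expand_layer(layer: nn.Linear, extra_units=16) -> nn.Linear:
    """Grow a layer's output width, copying old weights so behaviour on
    past tasks is preserved while the new units add capacity."""
    new = nn.Linear(layer.in_features, layer.out_features + extra_units)
    with torch.no_grad():
        new.weight[: layer.out_features] = layer.weight
        new.bias[: layer.out_features] = layer.bias
    return new


vae = SmallVAE()
recon, mu, logvar = vae(torch.rand(8, 784))        # works before modification
mask = prune_and_freeze(vae.enc, keep_ratio=0.5)   # free half the encoder weights
vae.enc = expand_layer(vae.enc, extra_units=16)    # note: downstream layers
                                                   # would need matching resizing
```

In a full system, the pruning mask and the expansion step would be applied per task, with the expanded units reserved for the new task's distribution; the magnitude criterion here stands in for whatever importance measure the regularization-based component actually supplies.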