Variational inference based unsupervised continual learning
Main Author: | |
---|---|
Other Authors: | |
Format: | Thesis-Master by Coursework |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/155840 |
Institution: | Nanyang Technological University |
Summary: | This research investigates a variational-inference-based deep learning approach to generative continual learning. Continual learning aims to learn a sequence of tasks in scenarios where data from past tasks are unavailable; the emphasis is therefore on learning each new task without catastrophically forgetting earlier ones. Bayesian deep neural networks support inherent online variational inference and are thus natural candidates for continual learning. Studies in the literature have demonstrated the continual learning ability of Bayesian neural networks on sequences of classification tasks.
In this thesis, we develop an unsupervised variational-inference-based continual learning algorithm with generative replay, using variational autoencoders. We demonstrate the performance of the proposed approach on the split MNIST and split CIFAR-10 datasets. Performance studies against state-of-the-art continual learning approaches show that the proposed approach with unsupervised generative replay achieves improved performance. We also define criteria for evaluating the performance of an unsupervised continual learning model. |
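The generative-replay idea in the summary can be sketched in a toy form: train a generative model on the current task, and when the next task arrives, mix its real data with pseudo-samples drawn from the previous generator so that old tasks are not forgotten. This is only a minimal illustration of the replay loop; `GaussianGenerator` below is a hypothetical stand-in for the variational autoencoder the thesis actually uses, and `continual_fit` is not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Toy stand-in for a VAE: fits a diagonal Gaussian to data and samples from it.
    (Hypothetical; the thesis trains a variational autoencoder instead.)"""
    def fit(self, x):
        self.mu, self.sigma = x.mean(axis=0), x.std(axis=0) + 1e-6
        return self

    def sample(self, n):
        # draw pseudo-samples representing previously seen data
        return rng.normal(self.mu, self.sigma, size=(n, self.mu.size))

def continual_fit(task_streams, replay_size=200):
    """Learn tasks sequentially; past tasks are only available via replay."""
    generator = None
    for data in task_streams:                      # tasks arrive one at a time
        if generator is not None:
            # augment the new task's data with replayed pseudo-samples
            data = np.vstack([data, generator.sample(replay_size)])
        generator = GaussianGenerator().fit(data)  # retrain on the mixture
    return generator

# two toy "tasks": 2-D clusters centred at -3 and +3
tasks = [rng.normal(-3, 1, size=(500, 2)), rng.normal(3, 1, size=(500, 2))]
g = continual_fit(tasks)
print(g.mu)
```

Without the replay branch, the final generator would model only the last task (mean near +3); with replay, its mean is pulled back toward the first task's cluster, which is the catastrophic-forgetting mitigation the summary describes.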