Variational inference based unsupervised continual learning


Saved in:
Bibliographic Details
Main Author: Gao, Zhaoqi
Other Authors: Ponnuthurai Nagaratnam Suganthan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/155840
Institution: Nanyang Technological University
Description
Abstract: This research investigates a variational inference-based deep learning approach for generative continual learning. Continual learning aims to learn a sequence of tasks in scenarios where data from past tasks are unavailable; the emphasis is therefore on learning each new task without catastrophically forgetting earlier ones. Bayesian deep neural networks support inherent online variational inference and are thus suitable candidates for continual learning. Studies in the literature have demonstrated the continual learning ability of Bayesian neural networks on sequences of classification tasks. In this thesis, we develop an unsupervised variational inference-based continual learning algorithm with generative replay, using variational autoencoders. We demonstrate the performance of the proposed approach on the split MNIST and split CIFAR-10 data sets. Performance studies in comparison with state-of-the-art continual learning approaches show improved performance for the proposed approach with unsupervised generative replay. We also define criteria for evaluating the performance of an unsupervised continual learning model.
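The generative-replay scheme mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: `VAEStub` is a hypothetical stand-in for a trained variational autoencoder, and the training schedule (snapshot the generator before each new task, mix replayed pseudo-samples with the new task's data) is the general generative-replay pattern rather than the author's exact algorithm.

```python
import copy
import random

class VAEStub:
    """Hypothetical stand-in for a VAE generator.

    It 'learns' by memorising samples and 'generates' by resampling them;
    in the thesis a real variational autoencoder fills this role, decoding
    draws from its latent prior instead.
    """
    def __init__(self):
        self.data = []

    def fit(self, batch):
        # A real VAE would run gradient steps on the ELBO here.
        self.data.extend(batch)

    def sample(self, n):
        # A real VAE would decode n latent samples here.
        return random.choices(self.data, k=n)

def train_with_generative_replay(tasks, replay_per_task=100):
    """Train one generative model over a task sequence with replay."""
    model = VAEStub()
    for t, task_data in enumerate(tasks):
        batch = list(task_data)
        if t > 0:
            # Snapshot the generator trained on tasks 0..t-1, then replay
            # pseudo-samples of the old tasks so they are not forgotten
            # when the model trains on the new task's data.
            frozen = copy.deepcopy(model)
            batch += frozen.sample(replay_per_task)
        model.fit(batch)
    return model
```

The key design point is that the frozen snapshot, not the live model, generates the replay data: the live model is being updated on the new task and would otherwise replay a drifting distribution.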