Lifelong learning with Bayesian neural network
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/161441
Institution: Nanyang Technological University
Summary: Continual learning aims to address catastrophic forgetting during the learning process. Catastrophic forgetting is especially challenging when the model has limited capacity or when data from previous tasks cannot be accessed. Rehearsal-based continual learning methods can be used to address this problem, but most rehearsal-based algorithms require extra computation to select the replay samples. In contrast, we propose a Rehearsal method based on a Continual Bayesian Neural Network (RCB), in which samples are selected for replay based on the uncertainty produced by the output of the Bayesian Neural Network. We compare our approach with other state-of-the-art continual learning methods and explain why replaying samples with different output variances leads to distinct performance. Our method is flexible across datasets, as different strategies can be used to choose which samples are placed in the rehearsal buffer.
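The selection step described in the summary can be illustrated with a minimal sketch. The code below is not the thesis's RCB implementation; it only shows the general idea of ranking candidate samples by the predictive variance of a stochastic network's output and keeping the top-ranked ones for replay. The function and parameter names (`mc_predictive_variance`, `select_replay_samples`, `n_mc`, `buffer_size`) are hypothetical, and MC dropout is used here as a stand-in for sampling from a Bayesian Neural Network's posterior.

```python
# Illustrative sketch only: uncertainty-based selection of replay samples.
# All names below are assumptions for illustration, not taken from the thesis.
import torch
import torch.nn as nn


def mc_predictive_variance(model: nn.Module, x: torch.Tensor, n_mc: int = 20) -> torch.Tensor:
    """Estimate per-sample predictive variance from n_mc stochastic forward passes
    (here MC dropout; a Bayesian layer would instead sample network weights)."""
    model.train()  # keep stochastic layers (dropout / weight sampling) active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_mc)], dim=0
        )  # shape: (n_mc, batch, classes)
    # Variance of class probabilities across MC samples, summed over classes.
    return probs.var(dim=0).sum(dim=-1)  # shape: (batch,)


def select_replay_samples(model, x, y, buffer_size: int, highest: bool = True):
    """Keep the buffer_size samples with the highest (or lowest) predictive
    variance for the rehearsal buffer used on later tasks."""
    var = mc_predictive_variance(model, x)
    idx = torch.argsort(var, descending=highest)[:buffer_size]
    return x[idx], y[idx]


if __name__ == "__main__":
    # Toy model with dropout so the MC forward passes are stochastic.
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 5))
    x, y = torch.randn(128, 10), torch.randint(0, 5, (128,))
    buf_x, buf_y = select_replay_samples(model, x, y, buffer_size=16)
    print(buf_x.shape, buf_y.shape)  # torch.Size([16, 10]) torch.Size([16])
```

Whether high-variance or low-variance samples are preferred is controlled by the `highest` flag, reflecting the summary's point that different selection strategies can be plugged into the rehearsal step.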