Overcoming catastrophic forgetting through replay in continual learning
Main Author: Qiao, Zhongzheng
Other Authors: Ponnuthurai Nagaratnam Suganthan; School of Electrical and Electronic Engineering; Institute for Infocomm Research, A*STAR
Format: Thesis-Master by Coursework
Degree: Master of Science (Computer Control and Automation)
Language: English
Published: Nanyang Technological University, 2021
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/150091
Institution: Nanyang Technological University
Description:
Continual Learning (CL) allows artificial neural networks to learn a sequence of tasks without catastrophic forgetting of preceding tasks. This dissertation presents two separate works on CL: one investigates the performance of CL on classification problems, and the other focuses on regression problems. Two papers based on these works were submitted to the ICIP and IROS conferences, respectively.
For classification, a novel task-agnostic approach is proposed and compared with various state-of-the-art regularization and rehearsal CL algorithms in the Task-IL and Class-IL scenarios. The task-agnostic approach combines regularization, replay, and task-specific architectures in a base-child hybrid setup. Multiple base classifiers, guided by reference points, learn new tasks, and this information is distilled via a Latent Space-induced sampling strategy. A central child classifier consolidates information across tasks and infers the task identifier automatically. Experimental results on standard datasets show that the proposed approach outperforms the other CL algorithms in the Class-IL scenario. When the task-ID is provided, replay methods generally achieve better performance on heterogeneous tasks, whereas regularization methods are more suitable for homogeneous tasks.
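As an illustration of the generic rehearsal (replay) strategy referred to above, the sketch below shows a reservoir-sampled memory buffer mixed into training of a PyTorch classifier. The buffer capacity, sampling scheme, and loss mixing are illustrative assumptions for a minimal example; they are not the dissertation's base-child hybrid approach or its Latent Space-induced sampling.

```python
# Minimal sketch of a generic rehearsal (replay) strategy for continual
# classification. All hyperparameters and names here are illustrative.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-sampled memory of (input, label) pairs from past tasks."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps an unbiased sample of the whole stream.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, optimizer, x_new, y_new, buffer, replay_batch=32):
    """One training step on the current task, mixed with replayed samples."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    # Store current examples so later tasks can replay them.
    for x, y in zip(x_new, y_new):
        buffer.add(x.detach(), y.detach())
    return loss.item()
```

In this sketch, `replay_step` would be called once per mini-batch of the current task; `model` and `optimizer` stand for any PyTorch classifier and optimizer.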
In the regression part, continual learning approaches are applied to predict the speed and steering angle of a vehicle, given an image sequence of the environment. Coreset 100% Sampling and EWC are used with a modified training loss. A novel metric, DST, is proposed to reflect stability during incremental learning. Experimental validation on a standard driving behavior dataset demonstrates that the CL algorithms outperform Sequential Fine-tuning on both regression outputs and even surpass Joint Training on steering angle.
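As an illustration of the EWC regularization referred to above, the sketch below adds a quadratic penalty to an MSE regression loss in PyTorch. The diagonal (empirical) Fisher estimate, the penalty weight `lam`, and the plain MSE task loss are assumptions made for this minimal example, not the exact modified training loss used in the dissertation.

```python
# Illustrative sketch of an EWC-style penalty for a regression model
# (e.g. predicting speed and steering angle). Names are assumptions.
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader, device="cpu"):
    """Empirical diagonal Fisher estimated from squared gradients of the
    task loss on the previous task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        loss = F.mse_loss(model(x.to(device)), y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_loss(model, pred, target, fisher, old_params, lam=100.0):
    """Task loss (MSE on speed / steering angle) plus a quadratic penalty
    that anchors parameters important to earlier tasks."""
    task_loss = F.mse_loss(pred, target)
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return task_loss + (lam / 2.0) * penalty
```

Here `old_params` would be a snapshot of the parameters taken after training on the previous task, e.g. `{n: p.detach().clone() for n, p in model.named_parameters()}`.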