Incremental learning technologies for semantic segmentation



Bibliographic Details
Main Author: Yang, Yizhuo
Other Authors: Xie, Lihua
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157338
Description
Summary: Semantic segmentation models based on deep learning technologies have achieved remarkable results in recent years. However, many models encounter the problem of catastrophic forgetting: when the model is required to learn a new task without labels for old objects, its performance on previous tasks drops significantly. This property greatly limits the application of semantic segmentation models in the practical world. To solve this problem, an incremental learning method, Combination of Old Prediction and Modified Label (COPML), is developed in this dissertation project. The proposed method utilizes the prediction results of the old model and the modified labels of the new task to create pseudo labels that are close to the ground truth. By using these pseudo labels for training, the model is expected to preserve the knowledge of old tasks. In addition, other incremental learning technologies, namely knowledge distillation, replay, and parameter freezing, are also applied to the proposed method to further assist the model in overcoming catastrophic forgetting. The effectiveness of the proposed method is validated on two semantic segmentation models, Unet and Deeplab3, using the Pascal-VOC 2012 dataset and a self-made dataset containing images taken in NTU and its surroundings. The experimental results demonstrate that COPML enables the model to maintain most of the old knowledge while achieving excellent performance on a new task.
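The pseudo-label fusion described in the summary can be illustrated with a minimal sketch. The function name and the exact fusion rule below are assumptions inferred from the abstract (the thesis itself is not reproduced here): pixels annotated with a new-task class keep the new label, while background pixels inherit the frozen old model's prediction, preserving old-class knowledge.

```python
import numpy as np

def make_pseudo_labels(old_pred, new_label, background_id=0):
    """COPML-style pseudo-label fusion (illustrative sketch, not the thesis code).

    old_pred:  (H, W) int array of class IDs predicted by the frozen old model
    new_label: (H, W) int array of new-task labels, where pixels that do not
               belong to a new class are marked as background
    Returns a pseudo-label map combining old predictions with new annotations.
    """
    # Keep new-class annotations; fill background pixels with old predictions.
    return np.where(new_label == background_id, old_pred, new_label)

# Toy example: the old model recognizes class 1; the new task adds class 2.
old_pred = np.array([[1, 1], [0, 0]])
new_label = np.array([[0, 2], [2, 0]])
pseudo = make_pseudo_labels(old_pred, new_label)
# pseudo is [[1, 2], [2, 0]]: old class 1 survives, new class 2 is adopted.
```

Training the new model against such fused pseudo labels lets it learn the new class while still receiving supervision for the old classes, which is the mechanism the abstract credits for mitigating catastrophic forgetting.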