Distillation and self-training in lane detection
Main Author: Ngo, Jia Wei
Other Authors: Chen Change Loy (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Degree: Bachelor of Engineering (Computer Science)
Language: English
Published: Nanyang Technological University, 2020
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/144600
Institution: Nanyang Technological University
Description:
Techniques such as knowledge distillation and self-training have seen much research in recent years. These techniques are generalisable and provide performance improvements when applied to most models. Distillation allows a student network, usually one of smaller capacity, to perform similarly to its larger teacher network while retaining its lightweight and fast properties. Self-training allows us to utilize unlabeled images at scale to improve a network's performance. Existing research has experimented mainly on classification tasks, with some recent papers exploring distillation and self-training in the semantic segmentation domain, but, to the best of our knowledge, never both simultaneously. In this paper, we set out to explore the performance gains these techniques can achieve in the domain of lane detection for self-driving cars. Our results show that knowledge distillation with dark knowledge from an ensemble of same-architecture models provides performance gains similar to those of ensembling, while retaining the low evaluation time of a single model, an important factor for lane detection in self-driving cars. Preliminary results from self-training, which has shown positive results when used in conjunction with pre-training, suggest that large amounts of unlabeled data may yield additional performance gains on top of ensemble distillation for lane detection.
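The description above distills the averaged soft outputs ("dark knowledge") of a same-architecture ensemble into a single student. The report's own implementation is not reproduced in this record; the sketch below is a minimal, hypothetical illustration of such an ensemble distillation loss in PyTorch, assuming segmentation-style per-pixel logits. The function name, temperature, and weighting are illustrative assumptions, not the author's code.

```python
import torch
import torch.nn.functional as F

def ensemble_distillation_loss(student_logits, teacher_logits_list, targets,
                               temperature=4.0, alpha=0.5):
    """Distillation loss against the averaged soft targets ("dark
    knowledge") of an ensemble, mixed with the usual hard-label loss.

    student_logits:       (N, C, H, W) raw scores from the student
    teacher_logits_list:  list of (N, C, H, W) tensors, one per teacher
    targets:              (N, H, W) integer per-pixel lane labels
    """
    # Average the teachers' temperature-softened probabilities to form
    # the ensemble's soft targets.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)

    # KL divergence from the ensemble soft targets to the student,
    # rescaled by T^2 as in the standard distillation formulation.
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_student, teacher_probs,
                         reduction="batchmean") * temperature ** 2

    # Standard cross-entropy on the ground-truth lane labels.
    hard_loss = F.cross_entropy(student_logits, targets)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

A student trained with this loss keeps the inference cost of one network while learning from the ensemble, which is the evaluation-time advantage the description emphasises for self-driving deployments.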
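The description also reports preliminary self-training results using large amounts of unlabeled data. One common way to realise this, sketched below under the assumption of a PyTorch segmentation model with per-pixel class logits, is confidence-thresholded pseudo-labelling; the threshold and ignore index here are illustrative choices, not values taken from the report.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_images, threshold=0.9):
    """Produce per-pixel pseudo-labels from a trained model's predictions
    on unlabeled road images, ignoring low-confidence pixels.

    unlabeled_images: (N, 3, H, W) batch of road images
    returns:          (N, H, W) integer labels, 255 where uncertain
    """
    model.eval()
    probs = F.softmax(model(unlabeled_images), dim=1)  # (N, C, H, W)
    confidence, pseudo_labels = probs.max(dim=1)       # both (N, H, W)
    # Mask out pixels the model is unsure about so they are skipped
    # by a loss configured with ignore_index=255.
    pseudo_labels[confidence < threshold] = 255
    return pseudo_labels
```

A student retrained on these pseudo-labels would then use a per-pixel loss such as F.cross_entropy(student_logits, pseudo_labels, ignore_index=255) alongside the supervised loss on the labeled set.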