Training-free neural active learning with initialization robustness guarantees
Neural active learning techniques so far have focused on enhancing the predictive capabilities of the networks. However, safety-critical applications necessitate not only good predictive performance but also robustness to randomness in the model-fitting process. To address this, we present the Expected Variance with Gaussian Processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to choose data points that result in neural networks exhibiting both (a) good generalization capabilities and (b) robustness to initialization. Notably, our EV-GP criterion is training-free, i.e., it does not require network training during data selection, making it computationally efficient. We empirically show that our EV-GP criterion strongly correlates with initialization robustness and generalization performance. Additionally, we demonstrate that it consistently surpasses baseline methods in achieving both objectives, particularly in cases with limited initially labeled data or large batch sizes for active learning.
Saved in:
Main Author: Singh, Jasraj
Other Authors: Tong Ping
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/166498
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-166498
record_format: dspace
spelling: sg-ntu-dr.10356-166498 2023-05-08T15:38:36Z Training-free neural active learning with initialization robustness guarantees Singh, Jasraj Tong Ping School of Physical and Mathematical Sciences Bryan Kian Hsiang Low tongping@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Neural active learning techniques so far have focused on enhancing the predictive capabilities of the networks. However, safety-critical applications necessitate not only good predictive performance but also robustness to randomness in the model-fitting process. To address this, we present the Expected Variance with Gaussian Processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to choose data points that result in neural networks exhibiting both (a) good generalization capabilities and (b) robustness to initialization. Notably, our EV-GP criterion is training-free, i.e., it does not require network training during data selection, making it computationally efficient. We empirically show that our EV-GP criterion strongly correlates with initialization robustness and generalization performance. Additionally, we demonstrate that it consistently surpasses baseline methods in achieving both objectives, particularly in cases with limited initially labeled data or large batch sizes for active learning. Bachelor of Science in Mathematical and Computer Sciences 2023-05-02T04:59:46Z 2023-05-02T04:59:46Z 2023 Final Year Project (FYP) Singh, J. (2023). Training-free neural active learning with initialization robustness guarantees. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166498 https://hdl.handle.net/10356/166498 en application/pdf Nanyang Technological University
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
spellingShingle: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Singh, Jasraj; Training-free neural active learning with initialization robustness guarantees
description: Neural active learning techniques so far have focused on enhancing the predictive capabilities of the networks. However, safety-critical applications necessitate not only good predictive performance but also robustness to randomness in the model-fitting process. To address this, we present the Expected Variance with Gaussian Processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to choose data points that result in neural networks exhibiting both (a) good generalization capabilities and (b) robustness to initialization. Notably, our EV-GP criterion is training-free, i.e., it does not require network training during data selection, making it computationally efficient. We empirically show that our EV-GP criterion strongly correlates with initialization robustness and generalization performance. Additionally, we demonstrate that it consistently surpasses baseline methods in achieving both objectives, particularly in cases with limited initially labeled data or large batch sizes for active learning.
author2: Tong Ping
author_facet: Tong Ping; Singh, Jasraj
format: Final Year Project
author: Singh, Jasraj
author_sort: Singh, Jasraj
title: Training-free neural active learning with initialization robustness guarantees
title_short: Training-free neural active learning with initialization robustness guarantees
title_full: Training-free neural active learning with initialization robustness guarantees
title_fullStr: Training-free neural active learning with initialization robustness guarantees
title_full_unstemmed: Training-free neural active learning with initialization robustness guarantees
title_sort: training-free neural active learning with initialization robustness guarantees
publisher: Nanyang Technological University
publishDate: 2023
url: https://hdl.handle.net/10356/166498
_version_: 1770563988243873792
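
The abstract summarizes a training-free, variance-based selection rule: pick data points that minimize the expected predictive variance under a Gaussian process, without ever training the network. The toy sketch below illustrates that general idea only; it uses an RBF kernel as a stand-in for the NTK-based kernel the thesis works with, and all function names and parameters here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel; a stand-in for the network-derived kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def mean_posterior_variance(X_train, X_test, noise=1e-2):
    """Average GP posterior variance over X_test, conditioned on X_train inputs.
    Only kernel algebra is needed -- no labels and no network training."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = np.diag(rbf_kernel(X_test, X_test))
    # diag(Ks @ K^-1 @ Ks.T), computed row-wise
    var = Kss - np.einsum("ij,jk,ik->i", Ks, np.linalg.inv(K), Ks)
    return var.mean()

def greedy_select(X_pool, X_test, batch_size):
    """Greedily add the pool point that most reduces expected variance."""
    selected = []
    for _ in range(batch_size):
        remaining = [i for i in range(len(X_pool)) if i not in selected]
        scores = [mean_posterior_variance(X_pool[selected + [i]], X_test)
                  for i in remaining]
        selected.append(remaining[int(np.argmin(scores))])
    return selected

rng = np.random.default_rng(0)
X_pool = rng.uniform(-3, 3, size=(30, 1))
X_test = rng.uniform(-3, 3, size=(100, 1))
picked = greedy_select(X_pool, X_test, batch_size=5)
print(picked)
```

Because the score depends only on input locations, the selection is indeed "training-free" in the sense the abstract describes: no labels are consumed and no network is fit during data selection.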