Self-distillation for randomized neural networks
Knowledge distillation (KD) is a conventional method in deep learning that transfers dark knowledge from a teacher model to a student model, thereby improving the performance of the student model. In randomized neural networks, because of the simple topology of the network architecture and the weak relationship between model performance and model size, KD is unable to improve model performance. In this work, we propose a self-distillation pipeline for randomized neural networks: the network's own predictions are treated as an additional target and mixed with the weighted original target to form a distillation target containing dark knowledge, which supervises the training of the model. All predictions produced during the multi-generation self-distillation process can be integrated by a multi-teacher method. By induction, we further derive a method for infinite self-distillation (ISD) of randomized neural networks. We then provide a theoretical analysis of the self-distillation method for randomized neural networks and demonstrate the effectiveness of the proposed method on several benchmark datasets.
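As a rough illustration of the pipeline described in the abstract, the sketch below implements multi-generation self-distillation for a random vector functional link (RVFL) style network with a closed-form ridge-regression readout. The helper names (`rvfl_features`, `self_distill`), the ReLU activation, the mixing weight `alpha`, the number of generations, and the averaging used to combine generations are illustrative assumptions, not the authors' exact formulation (which also covers an infinite self-distillation limit not shown here).

```python
import numpy as np

def rvfl_features(X, W, b):
    """Random features: the direct link [X] concatenated with relu(XW + b)."""
    H = np.maximum(X @ W + b, 0.0)   # random hidden layer; W and b stay fixed
    return np.hstack([X, H])         # RVFL keeps the direct input-output link

def ridge_readout(D, T, lam=1e-2):
    """Closed-form ridge-regression solution for the output weights."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ T)

def self_distill(X, Y, n_hidden=128, alpha=0.5, generations=3, lam=1e-2, seed=0):
    """Multi-generation self-distillation: each generation is trained on a mix of
    the one-hot labels and the previous generation's own predictions."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    D = rvfl_features(X, W, b)

    target = Y.astype(float)          # generation 0 uses the original labels
    betas = []
    for _ in range(generations):
        beta = ridge_readout(D, target, lam)
        pred = D @ beta
        betas.append(beta)
        # Distillation target for the next generation:
        # weighted mix of the original labels and the model's own predictions.
        target = alpha * Y + (1.0 - alpha) * pred

    # Multi-teacher style integration: average the readouts from all generations.
    beta_ens = np.mean(betas, axis=0)
    return W, b, beta_ens

# Tiny usage example with synthetic data (2 classes, one-hot labels).
X = np.random.randn(200, 10)
y = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[y]
W, b, beta = self_distill(X, Y)
acc = np.mean(np.argmax(rvfl_features(X, W, b) @ beta, axis=1) == y)
print(f"train accuracy: {acc:.3f}")
```

The key step is the target update `alpha * Y + (1 - alpha) * pred`: each generation is supervised by a weighted mix of the original labels and the previous generation's own predictions, which is the "dark knowledge" the abstract refers to.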
Main Authors: Hu, Minghui; Gao, Ruobin; Suganthan, Ponnuthurai Nagaratnam
Other Authors: School of Electrical and Electronic Engineering; School of Civil and Environmental Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering; Knowledge distillation; Random vector functional link
Online Access: https://hdl.handle.net/10356/174318
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-174318
Type: Journal Article (published version)
Citation: Hu, M., Gao, R. & Suganthan, P. N. (2023). Self-distillation for randomized neural networks. IEEE Transactions on Neural Networks and Learning Systems. https://dx.doi.org/10.1109/TNNLS.2023.3292063
DOI: 10.1109/TNNLS.2023.3292063
ISSN: 2162-237X
PMID: 37585327
Scopus ID: 2-s2.0-85168257179
Funding: This work was supported by Open Access funding provided by the Qatar National Library. The work of Ruobin Gao was supported by the National Research Foundation (NRF), Singapore under its AI Singapore Program (AISG) under Award AISG2-TC-2021-001.
Rights: © The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.