Self-distillation for randomized neural networks
Knowledge distillation (KD) is a conventional method in the field of deep learning that enables the transfer of dark knowledge from a teacher model to a student model, consequently improving the performance of the student model. In randomized neural networks, due to the simple topology of network ar...
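For orientation, the snippet below is a minimal, illustrative sketch of the standard knowledge-distillation objective the abstract refers to (soft teacher targets combined with hard labels). It is not the self-distillation procedure for randomized neural networks proposed in this article; the temperature `T` and mixing weight `alpha` are assumed hyperparameters.

```python
# Illustrative sketch of a standard knowledge-distillation loss
# (soft teacher targets + hard ground-truth labels).
# Not the self-distillation method proposed in this article.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled
    # teacher and student distributions, scaled by T^2 as is customary.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```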
Main Authors: Hu, Minghui; Gao, Ruobin; Suganthan, Ponnuthurai Nagaratnam
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174318
Institution: Nanyang Technological University
Similar Items
- Online dynamic ensemble deep random vector functional link neural network for forecasting
  by: Gao, Ruobin, et al.
  Published: (2024)
- Representation learning using deep random vector functional link networks for clustering
  by: Hu, Minghui, et al.
  Published: (2022)
- An enhanced ensemble deep random vector functional link network for driver fatigue recognition
  by: Li, Ruilin, et al.
  Published: (2024)
- A spectral-ensemble deep random vector functional link network for passive brain–computer interface
  by: Li, Ruilin, et al.
  Published: (2024)
- Stacked autoencoder based deep random vector functional link neural network for classification
  by: Katuwal, Rakesh, et al.
  Published: (2020)