Self-distillation for randomized neural networks

Knowledge distillation (KD) is a well-established deep learning technique that transfers dark knowledge from a teacher model to a student model, thereby improving the student model's performance. In randomized neural networks, due to the simple topology of network ar...
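
As general background on the distillation mechanism the abstract refers to (not the authors' self-distillation method for randomized networks, which is described in the article itself), the following is a minimal PyTorch sketch of the classic soft-label KD loss of Hinton et al.; the function name and the hyperparameters T and alpha are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Classic KD loss: blend a soft-label KL term with hard-label cross-entropy.

    T (temperature) softens both distributions, exposing the "dark knowledge"
    in the teacher's relative probabilities over wrong classes; alpha weights
    the distillation term against the ground-truth term. Both are illustrative.
    """
    # Soft targets from the teacher, softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    # Student log-probabilities at the same temperature.
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between the softened distributions; the T**2 factor
    # keeps gradient magnitudes comparable to the hard-label term.
    kd_term = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T ** 2)
    # Standard cross-entropy against the true class labels.
    ce_term = F.cross_entropy(student_logits, targets)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```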

Bibliographic Details
Main Authors: Hu, Minghui, Gao, Ruobin, Suganthan, Ponnuthurai Nagaratnam
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174318
Institution: Nanyang Technological University