Self-distillation for randomized neural networks

Knowledge distillation (KD) is a well-established technique in deep learning that transfers dark knowledge from a teacher model to a student model, thereby improving the student model's performance. In randomized neural networks, due to the simple topology of network ar...
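The abstract only sketches the idea of KD. For reference, below is a minimal sketch of the generic Hinton-style distillation loss it alludes to (a temperature-softened KL term on teacher logits plus a hard-label cross-entropy term), not the paper's self-distillation scheme for randomized networks. The temperature T, weighting alpha, and all tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD loss: soft-target KL term plus hard-label CE term.

    T and alpha are hypothetical hyperparameters, not values from the paper.
    """
    # Soft targets: teacher logits softened by temperature T carry the
    # "dark knowledge" (relative probabilities of incorrect classes).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 so gradient magnitude is comparable across temperatures
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Usage with random tensors (hypothetical shapes: batch of 8, 10 classes).
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)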

Full description

Saved in:
Bibliographic Details
Main Authors: Hu, Minghui, Gao, Ruobin, Suganthan, Ponnuthurai Nagaratnam
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174318
Institution: Nanyang Technological University