Computation-efficient knowledge distillation via uncertainty-aware mixup
Knowledge distillation (KD) has emerged as an essential technique not only for model compression but also for other learning tasks such as continual learning. Given this broader application spectrum and the potential online usage of KD, the computational efficiency of distillation becomes a pivotal concern. In this w...
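The abstract is truncated above. Purely as an illustration of the general idea named in the title, the sketch below shows one way an uncertainty-aware mixup step can cut the teacher's forward-pass cost during distillation: the student's predictive entropy (an assumed uncertainty proxy) ranks the batch, and less informative samples are mixed into the more uncertain ones so the expensive teacher only processes a reduced batch. The function name, the Beta(1,1) mixing, the keep ratio, and the loss weighting are all assumptions for demonstration, not the authors' exact formulation.

```python
# Illustrative sketch only (PyTorch). All names and hyper-parameters are
# assumptions for demonstration, not the paper's exact method.
import torch
import torch.nn.functional as F

def uncertainty_aware_mixup_kd_step(student, teacher, x, y,
                                    keep_ratio=0.5, temperature=4.0):
    """One distillation step: mix confident samples into uncertain ones so the
    (expensive) teacher only runs on a reduced batch."""
    with torch.no_grad():
        s_logits_full = student(x)
        # Uncertainty proxy: entropy of the student's predictive distribution.
        probs = F.softmax(s_logits_full, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    # Rank samples, most uncertain first, and keep only a fraction of them.
    order = entropy.argsort(descending=True)
    n_keep = max(1, int(keep_ratio * x.size(0)))
    top, rest = order[:n_keep], order[n_keep:]

    # Pair each kept (uncertain) sample with a leftover (confident) one, so
    # information from the whole batch survives in n_keep mixed inputs.
    if len(rest) > 0:
        partner = rest[torch.randint(len(rest), (n_keep,), device=x.device)]
    else:
        partner = top
    lam = torch.distributions.Beta(1.0, 1.0).sample((n_keep,)).to(x.device)
    lam_x = lam.view(-1, *([1] * (x.dim() - 1)))
    x_mix = lam_x * x[top] + (1 - lam_x) * x[partner]

    # Teacher and student forward passes only on the reduced, mixed batch.
    with torch.no_grad():
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)

    # Standard temperature-scaled KD loss plus mixup cross-entropy.
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = lam * F.cross_entropy(s_logits, y[top], reduction="none") \
         + (1 - lam) * F.cross_entropy(s_logits, y[partner], reduction="none")
    return kd_loss + ce.mean()
```

In this sketch, `keep_ratio` controls the cost/accuracy trade-off: smaller values mean fewer teacher forward passes per batch at the price of coarser supervision.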
Main Authors: Xu, Guodong; Liu, Ziwei; Loy, Chen Change
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/172038
Institution: Nanyang Technological University
Similar Items
- Edge-computing-based knowledge distillation and multitask learning for partial discharge recognition
  by: Ji, Jinsheng, et al.
  Published: (2024)
- Discriminator-enhanced knowledge-distillation networks
  by: Li, Zhenping, et al.
  Published: (2023)
- FedTKD: a trustworthy heterogeneous federated learning based on adaptive knowledge distillation
  by: Chen, Leiming, et al.
  Published: (2024)
- Inter-region affinity distillation for road marking segmentation
  by: Hou, Yuenan, et al.
  Published: (2022)
- On-the-fly knowledge distillation model for sentence embedding
  by: Zhu, Xuchun
  Published: (2024)