Towards a smaller student: Capacity dynamic distillation for efficient image retrieval

Previous Knowledge Distillation-based efficient image retrieval methods employ a lightweight network as the student model for fast inference. However, the lightweight student model lacks adequate representation capacity for effective knowledge imitation during the most critical early training period, causing final performance degeneration. To tackle this issue, we propose a Capacity Dynamic Distillation framework, which constructs a student model with editable representation capacity. Specifically, the student model starts as a heavy model so that it can fully absorb the distilled knowledge in the early training epochs, and is then gradually compressed as training proceeds. To dynamically adjust the model capacity, our framework inserts a learnable convolutional layer within each residual block of the student model as a channel importance indicator. The indicator is optimized simultaneously by the image retrieval loss and the compression loss, and a retrieval-guided gradient resetting mechanism is proposed to resolve the gradient conflict between them. Extensive experiments show that our method achieves superior inference speed and accuracy; for example, on the VeRi-776 dataset, with ResNet101 as the teacher, it saves 67.13% of the model parameters and 65.67% of the FLOPs without sacrificing accuracy. Code is available at https://github.com/SCY-X/Capacity-Dynamic-Distillation.
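The core mechanism in the abstract (a learnable per-channel indicator inserted into each residual block and trained jointly with a retrieval loss and a compression loss) can be sketched as follows. This is a minimal, hypothetical PyTorch-style illustration, not the authors' implementation: the class names, the simplification of the indicator to a per-channel scale rather than a full learnable convolutional layer, and the loss weighting are all assumptions; the authors' actual code is at the GitHub link above.

```python
# Hypothetical sketch of a channel-importance indicator inside a residual block.
# Names and loss weighting are illustrative, not the paper's released code.
import torch
import torch.nn as nn


class ChannelIndicator(nn.Module):
    """Learnable per-channel scale acting as a channel importance score.

    Channels whose score is driven toward zero by the compression (sparsity)
    penalty become candidates for pruning as the student is shrunk.
    """

    def __init__(self, channels: int):
        super().__init__()
        # One learnable scale per channel, initialized to identity (all ones).
        self.scale = nn.Parameter(torch.ones(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale.view(1, -1, 1, 1)

    def compression_loss(self) -> torch.Tensor:
        # L1 penalty pushes unimportant channel scores toward zero.
        return self.scale.abs().mean()


class ResidualBlockWithIndicator(nn.Module):
    """Basic residual block with an indicator after the second convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.indicator = ChannelIndicator(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.indicator(self.bn2(self.conv2(out)))
        return self.relu(out + x)


if __name__ == "__main__":
    block = ResidualBlockWithIndicator(64)
    feats = block(torch.randn(2, 64, 56, 56))
    # Stand-in for the retrieval/distillation loss on the student features,
    # combined with the indicator's compression loss (weight is illustrative).
    loss = feats.pow(2).mean() + 1e-3 * block.indicator.compression_loss()
    loss.backward()
    print(feats.shape, loss.item())
```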

Bibliographic Details
Main Authors: XIE, Yi, ZHANG, Huaidong, XU, Xuemiao, ZHU, Jianqing, HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
DOI: 10.1109/CVPR52729.2023.01536
License: CC BY 3.0 (http://creativecommons.org/licenses/by/3.0/)
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
Subjects: Deep learning architectures and techniques; Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/8448
https://ink.library.smu.edu.sg/context/sis_research/article/9451/viewcontent/TowardsSmallerStudent_IR_av_cc_by.pdf
Institution: Singapore Management University