Pruning meta-trained networks for on-device adaptation

Adapting neural networks to unseen tasks with few training samples on resource-constrained devices benefits various Internet-of-Things applications. Such neural networks should learn the new tasks in few shots and be compact in size. Meta-learning enables few-shot learning, yet the meta-trained networks can be overparameterised. However, naive combination of standard compression techniques like network pruning with meta-learning jeopardises the ability for fast adaptation. In this work, we propose adaptation-aware network pruning (ANP), a novel pruning scheme that works with existing meta-learning methods for a compact network capable of fast adaptation. ANP uses a weight importance metric that is based on the sensitivity of the meta-objective rather than the conventional loss function, and adopts approximation of derivatives and layer-wise pruning techniques to reduce the overhead of computing the new importance metric. Evaluations on few-shot classification benchmarks show that ANP can prune meta-trained convolutional and residual networks by 85% without affecting their fast adaptation.
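To make the pruning criterion concrete, here is a minimal sketch of the core idea the abstract describes: score each weight by the sensitivity of the meta-objective (the loss after inner-loop adaptation) rather than the conventional task loss, then prune layer-wise to a target sparsity. This is an illustration, not the authors' implementation: it assumes a PyTorch-style model and a precomputed MAML-style meta-loss, uses a simple first-order |w * dL_meta/dw| proxy for sensitivity, and omits the paper's derivative approximations for reducing the computation overhead. The names anp_importance and prune_layerwise are hypothetical.

```python
import torch

def anp_importance(model, meta_loss):
    # First-order sensitivity of the meta-objective to each weight:
    # |w * d(meta_loss)/dw| estimates how much the meta-loss would
    # change if the weight were zeroed. meta_loss must be the
    # *meta*-objective (loss after adaptation), not the ordinary loss.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(meta_loss, params)
    return [(w * g).abs() for w, g in zip(params, grads)]

def prune_layerwise(model, scores, sparsity=0.85):
    # Zero out the lowest-scoring fraction of weights within each
    # layer separately, so no single layer is pruned away entirely.
    params = [p for p in model.parameters() if p.requires_grad]
    with torch.no_grad():
        for w, s in zip(params, scores):
            k = int(sparsity * s.numel())
            if k < 1:
                continue
            thresh = s.flatten().kthvalue(k).values  # k-th smallest score
            w.mul_((s > thresh).to(w.dtype))         # keep only high scores
```

At sparsity=0.85 this matches the 85% compression ratio reported in the abstract; in practice the resulting mask would be retained and re-applied during any subsequent meta-training or on-device adaptation steps.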

Bibliographic Details
Main Authors: GAO, Dawei; HE, Xiaoxi; ZHOU, Zimu; TONG, Yongxin; THIELE, Lothar
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
DOI: 10.1145/3459637.3482378
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Institution: Singapore Management University
Subjects: deep neural networks; meta learning; network pruning; OS and Networks; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/6702
https://ink.library.smu.edu.sg/context/sis_research/article/7705/viewcontent/cikm21_gao.pdf