Pruning-aware merging for efficient multitask inference
Many mobile applications demand selective execution of multiple correlated deep learning inference tasks on resource-constrained platforms. Given a set of deep neural networks, each pre-trained for a single task, executing an arbitrary combination of tasks should incur minimal computation...
| Main Authors: | GAO, Dawei; HE, Xiaoxi; ZHOU, Zimu; TONG, Yongxin; THIELE, Lothar |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2021 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/6804 https://ink.library.smu.edu.sg/context/sis_research/article/7807/viewcontent/kdd21_he.pdf |
| Institution: | Singapore Management University |
Similar Items
- Stitching weight-shared deep neural networks for efficient multitask inference on GPU
  by: WANG, Zeyu, et al.
  Published: (2022)
- Pruning meta-trained networks for on-device adaptation
  by: GAO, Dawei, et al.
  Published: (2021)
- Rethinking pruning for accelerating deep inference at the edge
  by: GAO, Dawei, et al.
  Published: (2020)
- Evolutionary multitasking: a computer science view of cognitive multitasking
  by: Ong, Yew-Soon, et al.
  Published: (2021)
- Edge-computing-based knowledge distillation and multitask learning for partial discharge recognition
  by: Ji, Jinsheng, et al.
  Published: (2024)