Learning generalizable models for vehicle routing problems via knowledge distillation
Recent neural methods for vehicle routing problems always train and test the deep models on the same instance distribution (i.e., uniform). To tackle the consequent cross-distribution generalization concerns, we bring the knowledge distillation to this field and propose an Adaptive Multi-Distribution Knowledge Distillation (AMDKD) scheme for learning more generalizable deep models. Particularly, our AMDKD leverages various knowledge from multiple teachers trained on exemplar distributions to yield a light-weight yet generalist student model. Meanwhile, we equip AMDKD with an adaptive strategy that allows the student to concentrate on difficult distributions, so as to absorb hard-to-master knowledge more effectively. Extensive experimental results show that, compared with the baseline neural methods, our AMDKD is able to achieve competitive results on both unseen in-distribution and out-of-distribution instances, which are either randomly synthesized or adopted from benchmark datasets (i.e., TSPLIB and CVRPLIB). Notably, our AMDKD is generic, and consumes less computational resources for inference.
Main Authors: BI, Jieyi; MA, Yining; WANG, Jiahai; CAO, Zhiguang; CHEN, Jinbiao; SUN, Yuan; CHEE, Yeow Meng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Learning systems; Vehicle routing; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8164
https://ink.library.smu.edu.sg/context/sis_research/article/9167/viewcontent/NeurIPS_2022_learning_generalizable_models_for_vehicle_routing_problems_via_knowledge_distillation_Paper_Conference.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-9167
record_format
dspace
spelling
sg-smu-ink.sis_research-9167 2023-09-26T10:35:59Z Learning generalizable models for vehicle routing problems via knowledge distillation BI, Jieyi MA, Yining WANG, Jiahai CAO, Zhiguang CHEN, Jinbiao SUN, Yuan CHEE, Yeow Meng Recent neural methods for vehicle routing problems always train and test the deep models on the same instance distribution (i.e., uniform). To tackle the consequent cross-distribution generalization concerns, we bring the knowledge distillation to this field and propose an Adaptive Multi-Distribution Knowledge Distillation (AMDKD) scheme for learning more generalizable deep models. Particularly, our AMDKD leverages various knowledge from multiple teachers trained on exemplar distributions to yield a light-weight yet generalist student model. Meanwhile, we equip AMDKD with an adaptive strategy that allows the student to concentrate on difficult distributions, so as to absorb hard-to-master knowledge more effectively. Extensive experimental results show that, compared with the baseline neural methods, our AMDKD is able to achieve competitive results on both unseen in-distribution and out-of-distribution instances, which are either randomly synthesized or adopted from benchmark datasets (i.e., TSPLIB and CVRPLIB). Notably, our AMDKD is generic, and consumes less computational resources for inference. 2022-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8164 info:doi/10.48550/arXiv.2210.07686 https://ink.library.smu.edu.sg/context/sis_research/article/9167/viewcontent/NeurIPS_2022_learning_generalizable_models_for_vehicle_routing_problems_via_knowledge_distillation_Paper_Conference.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Learning systems Vehicle routing Databases and Information Systems
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Learning systems Vehicle routing Databases and Information Systems
spellingShingle
Learning systems Vehicle routing Databases and Information Systems BI, Jieyi MA, Yining WANG, Jiahai CAO, Zhiguang CHEN, Jinbiao SUN, Yuan CHEE, Yeow Meng Learning generalizable models for vehicle routing problems via knowledge distillation
description
Recent neural methods for vehicle routing problems always train and test the deep models on the same instance distribution (i.e., uniform). To tackle the consequent cross-distribution generalization concerns, we bring the knowledge distillation to this field and propose an Adaptive Multi-Distribution Knowledge Distillation (AMDKD) scheme for learning more generalizable deep models. Particularly, our AMDKD leverages various knowledge from multiple teachers trained on exemplar distributions to yield a light-weight yet generalist student model. Meanwhile, we equip AMDKD with an adaptive strategy that allows the student to concentrate on difficult distributions, so as to absorb hard-to-master knowledge more effectively. Extensive experimental results show that, compared with the baseline neural methods, our AMDKD is able to achieve competitive results on both unseen in-distribution and out-of-distribution instances, which are either randomly synthesized or adopted from benchmark datasets (i.e., TSPLIB and CVRPLIB). Notably, our AMDKD is generic, and consumes less computational resources for inference.
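To make the scheme the description outlines more concrete, below is a minimal, hypothetical PyTorch-style sketch of one AMDKD-like training epoch: each teacher is assumed to be pre-trained on one exemplar instance distribution, the student distills from the matching teacher via a soft-label KL loss, and an adaptive sampling rule favors distributions where the student's loss is still high. All identifiers (`teachers`, `sample_instances`, batch shapes, and so on) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of Adaptive Multi-Distribution Knowledge Distillation (AMDKD).
# Assumptions: teachers[d] is a policy network pre-trained on distribution d,
# student is a smaller policy network producing logits of the same shape, and
# sample_instances(d, n) draws a batch of n routing instances from distribution d.
import torch
import torch.nn.functional as F

def amdkd_epoch(student, teachers, distributions, sample_instances,
                optimizer, batch_size=64, tau=1.0):
    # Running distillation loss per distribution; harder (higher-loss)
    # distributions get sampled more often within this epoch.
    losses = {d: 1.0 for d in distributions}
    for _ in range(len(distributions)):
        # Adaptive strategy: sampling probability proportional to current loss.
        weights = torch.tensor([losses[d] for d in distributions])
        probs = weights / weights.sum()
        d = distributions[torch.multinomial(probs, 1).item()]

        batch = sample_instances(d, batch_size)
        with torch.no_grad():
            teacher_logits = teachers[d](batch)  # soft targets from the expert teacher
        student_logits = student(batch)

        # Standard soft-label distillation: KL divergence at temperature tau.
        loss = F.kl_div(
            F.log_softmax(student_logits / tau, dim=-1),
            F.softmax(teacher_logits / tau, dim=-1),
            reduction="batchmean",
        ) * tau ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses[d] = loss.item()
    return losses
```

The adaptive rule here (sampling probability proportional to the running per-distribution loss) is one plausible reading of the "concentrate on difficult distributions" strategy in the abstract; the paper's actual mechanism may differ in detail.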
format
text
author
BI, Jieyi MA, Yining WANG, Jiahai CAO, Zhiguang CHEN, Jinbiao SUN, Yuan CHEE, Yeow Meng
author_facet
BI, Jieyi MA, Yining WANG, Jiahai CAO, Zhiguang CHEN, Jinbiao SUN, Yuan CHEE, Yeow Meng
author_sort
BI, Jieyi
title
Learning generalizable models for vehicle routing problems via knowledge distillation
title_short
Learning generalizable models for vehicle routing problems via knowledge distillation
title_full
Learning generalizable models for vehicle routing problems via knowledge distillation
title_fullStr
Learning generalizable models for vehicle routing problems via knowledge distillation
title_full_unstemmed
Learning generalizable models for vehicle routing problems via knowledge distillation
title_sort
learning generalizable models for vehicle routing problems via knowledge distillation
publisher
Institutional Knowledge at Singapore Management University
publishDate
2022
url
https://ink.library.smu.edu.sg/sis_research/8164 https://ink.library.smu.edu.sg/context/sis_research/article/9167/viewcontent/NeurIPS_2022_learning_generalizable_models_for_vehicle_routing_problems_via_knowledge_distillation_Paper_Conference.pdf
_version_
1779157188923621376