Off-policy reinforcement learning for efficient and effective GAN architecture search
In this paper, we introduce a new reinforcement learning (RL)-based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potentially globally optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method is able to discover highly competitive architectures that yield generally better image generation results with a considerably reduced computational burden: 7 GPU hours.
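The abstract describes two ingredients: casting architecture construction as a Markov decision process and reusing samples collected under previous policies. The sketch below is not the paper's method or code; it is a minimal illustration of those two ideas under assumed stand-ins: a toy four-operation search space, a `proxy_reward()` function in place of actually training and scoring a GAN, and tabular Q-learning as the off-policy learner.

```python
# Minimal sketch -- NOT the paper's implementation -- of off-policy,
# MDP-style architecture search: state = which layer is being decided,
# action = which operation to place there, transitions from earlier
# policies are replayed from a buffer.  Search space, scorer, and
# hyperparameters are illustrative assumptions.

import random
from collections import deque

OPS = ["conv3x3", "conv5x5", "deconv", "nearest_up"]  # hypothetical per-layer choices
NUM_LAYERS = 3                                        # an architecture = one op per layer

def proxy_reward(arch):
    """Stand-in for briefly training a GAN with this architecture and scoring it
    (e.g., Inception Score).  A fixed per-op value plus noise keeps the sketch runnable."""
    base = {"conv3x3": 0.6, "conv5x5": 0.5, "deconv": 0.7, "nearest_up": 0.4}
    return sum(base[op] for op in arch) / NUM_LAYERS + random.uniform(-0.05, 0.05)

# Q[state][action]: estimated final reward of picking `action` at layer `state`.
Q = [[0.0] * len(OPS) for _ in range(NUM_LAYERS)]
replay = deque(maxlen=2000)            # (state, action, reward, next_state, done)
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.3      # learning rate, discount, exploration rate

def sample_architecture():
    """Roll out the current epsilon-greedy policy layer by layer (one MDP episode)."""
    arch, visited = [], []
    for layer in range(NUM_LAYERS):
        if random.random() < EPS:
            action = random.randrange(len(OPS))
        else:
            action = max(range(len(OPS)), key=lambda a: Q[layer][a])
        arch.append(OPS[action])
        visited.append((layer, action))
    reward = proxy_reward(arch)        # reward arrives only at the end of the episode
    for layer, action in visited:
        done = layer == NUM_LAYERS - 1
        replay.append((layer, action, reward if done else 0.0, layer + 1, done))
    return arch, reward

def off_policy_update(batch_size=32):
    """Q-learning on buffered transitions, regardless of which (older) policy produced them."""
    batch = random.sample(list(replay), min(batch_size, len(replay)))
    for s, a, r, s_next, done in batch:
        target = r if done else r + GAMMA * max(Q[s_next])
        Q[s][a] += ALPHA * (target - Q[s][a])

best_arch, best_score = None, float("-inf")
for _ in range(300):
    arch, score = sample_architecture()
    if score > best_score:
        best_arch, best_score = arch, score
    off_policy_update()

print("best architecture:", best_arch, "proxy score:", round(best_score, 3))
```

Tabular Q-learning is used here only because it is off-policy by construction, so the buffered transitions may come from any earlier exploration policy; the paper's actual search space, reward, and learning algorithm are more elaborate.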
Main Authors: YUAN, Tian; QIN, Wang; HUANG, Zhiwu; LI, Wen; DAI, Dengxin; YANG, Minghao; WANG, Jun; FINK, Olga
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Generative adversarial networks; Markov decision process; Neural architecture search; Off-policy; Reinforcement learning; Artificial Intelligence and Robotics; Systems Architecture
Online Access: https://ink.library.smu.edu.sg/sis_research/6258
https://ink.library.smu.edu.sg/context/sis_research/article/7261/viewcontent/Off_PolicyReinforcementLearnin.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-7261
record_format
dspace
spelling
sg-smu-ink.sis_research-7261 2021-11-10T04:09:26Z
Off-policy reinforcement learning for efficient and effective GAN architecture search
YUAN, Tian; QIN, Wang; HUANG, Zhiwu; LI, Wen; DAI, Dengxin; YANG, Minghao; WANG, Jun; FINK, Olga
In this paper, we introduce a new reinforcement learning (RL)-based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potentially globally optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method is able to discover highly competitive architectures that yield generally better image generation results with a considerably reduced computational burden: 7 GPU hours.
2020-08-01T07:00:00Z
text
application/pdf
https://ink.library.smu.edu.sg/sis_research/6258
info:doi/10.1007/978-3-030-58571-6_11
https://ink.library.smu.edu.sg/context/sis_research/article/7261/viewcontent/Off_PolicyReinforcementLearnin.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Generative adversarial networks; Markov decision process; Neural architecture search; Off-policy; Reinforcement learning
Artificial Intelligence and Robotics
Systems Architecture
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Generative adversarial networks; Markov decision process; Neural architecture search; Off-policy; Reinforcement learning
Artificial Intelligence and Robotics
Systems Architecture
description
In this paper, we introduce a new reinforcement learning (RL)-based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potentially globally optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method is able to discover highly competitive architectures that yield generally better image generation results with a considerably reduced computational burden: 7 GPU hours.
format
text
author
YUAN, Tian; QIN, Wang; HUANG, Zhiwu; LI, Wen; DAI, Dengxin; YANG, Minghao; WANG, Jun; FINK, Olga
author_sort
YUAN, Tian
title
Off-policy reinforcement learning for efficient and effective GAN architecture search
publisher
Institutional Knowledge at Singapore Management University
publishDate
2020
url
https://ink.library.smu.edu.sg/sis_research/6258
https://ink.library.smu.edu.sg/context/sis_research/article/7261/viewcontent/Off_PolicyReinforcementLearnin.pdf
_version_
1770575911926628352