DO-GAN: A double oracle framework for generative adversarial networks


Bibliographic Details
Main Authors: AUNG, Aye Phyu Phye, WANG, Xinrun, YU, Runsheng, AN, Bo, JAYAVELU, Senthilnath, LI, Xiaoli
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9136
https://ink.library.smu.edu.sg/context/sis_research/article/10139/viewcontent/DO_GAN_CVPR_2022_av.pdf
Institution: Singapore Management University
Description
Summary: In this paper, we propose a new approach to train Generative Adversarial Networks (GANs) where we deploy a double-oracle framework using the generator and discriminator oracles. A GAN is essentially a two-player zero-sum game between the generator and the discriminator. Training GANs is challenging as a pure Nash equilibrium may not exist, and even finding the mixed Nash equilibrium is difficult because GANs have a large-scale strategy space. In DO-GAN, we extend the double oracle framework to GANs. We first generalize the players' strategies as the trained models of the generator and discriminator obtained from the best-response oracles. We then compute the meta-strategies using a linear program. For scalability of the framework, where multiple generator and discriminator best responses are stored in memory, we propose two solutions: 1) pruning the weakly-dominated players' strategies to keep the oracles from becoming intractable; 2) applying continual learning to retain the previous knowledge of the networks. We apply our framework to established GAN architectures such as vanilla GAN, Deep Convolutional GAN, Spectral Normalization GAN and Stacked GAN. Finally, we conduct experiments on the MNIST, CIFAR-10 and CelebA datasets and show that DO-GAN variants achieve significant improvements in both subjective qualitative evaluation and quantitative metrics, compared with their respective GAN architectures.
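To illustrate the meta-strategy step mentioned in the abstract, the sketch below solves a two-player zero-sum meta-game with a linear program. This is not the authors' implementation: the function name solve_meta_strategy, the payoff convention (U[i, j] is the generator oracle i's utility against discriminator oracle j), and the example matrix are all assumptions made here for illustration, using scipy.optimize.linprog.

```python
# Illustrative sketch only: maximin meta-strategy of a zero-sum meta-game via an LP.
# Assumes U[i, j] = row player's (generator's) payoff when oracle i meets oracle j.
import numpy as np
from scipy.optimize import linprog


def solve_meta_strategy(U):
    """Return (mixed strategy p over rows, game value v).

    Maximize v s.t. sum_i p_i * U[i, j] >= v for every column j,
    sum_i p_i = 1, p_i >= 0, rewritten as a minimization for linprog.
    """
    m, n = U.shape
    # Decision variables: x = [p_1, ..., p_m, v]; minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # Constraint v - p^T U[:, j] <= 0 for each column j.
    A_ub = np.hstack([-U.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is unbounded.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]


# Hypothetical 3x3 meta-game between three generator and three discriminator oracles.
U = np.array([[0.2, -0.1, 0.4],
              [0.5,  0.0, -0.3],
              [-0.2, 0.3, 0.1]])
sigma_g, value = solve_meta_strategy(U)      # generator meta-strategy
sigma_d, _ = solve_meta_strategy(-U.T)       # discriminator meta-strategy (minimizer)
```

In a double-oracle loop, such meta-strategies would weight the stored generator and discriminator models, and new best-response oracles would be trained against the opponent's current mixture; the pruning and continual-learning variants described in the abstract address how the set of stored models is kept tractable.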