Learning GANs in simultaneous game using Sinkhorn with positive features

Entropy regularized optimal transport (EOT) distance and its symmetric normalization, known as the Sinkhorn divergence, offer smooth and continuous metrized weak-convergence distance metrics. They have excellent geometric properties and are useful to compare probability distributions in some generative adversarial network (GAN) models.

Full description

Bibliographic Details
Main Authors: Risman Adnan, Muchlisin Adi Saputra, Junaidillah Fadlil, Ezerman, Martianus Frederic, Muhamad Iqbal, Tjan Basaruddin
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2022
Subjects:
Online Access:https://hdl.handle.net/10356/155575
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-155575
record_format dspace
spelling sg-ntu-dr.10356-155575 2023-02-28T20:07:59Z
Learning GANs in simultaneous game using Sinkhorn with positive features
Risman Adnan; Muchlisin Adi Saputra; Junaidillah Fadlil; Ezerman, Martianus Frederic; Muhamad Iqbal; Tjan Basaruddin
School of Physical and Mathematical Sciences
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Generative Adversarial Network; Entropy Regularized Optimal Transport
Abstract: Entropy regularized optimal transport (EOT) distance and its symmetric normalization, known as the Sinkhorn divergence, offer smooth and continuous metrized weak-convergence distance metrics. They have excellent geometric properties and are useful to compare probability distributions in some generative adversarial network (GAN) models. Computing them using the original Sinkhorn matrix scaling algorithm is still expensive. The running time is quadratic, at O(n²), in the size n of the training dataset. This work investigates the problem of accelerating GAN training when the Sinkhorn divergence is used as a minimax objective. Let G be a Gaussian map from the ground space onto the positive orthant R^r_+ with r ≪ n. To speed up the divergence computation, we propose the use of c(x, y) = -ε log ⟨G(x), G(y)⟩ as the ground cost. This approximation, known as Sinkhorn with positive features, brings down the running time of the Sinkhorn matrix scaling algorithm to O(rn), which is linear in n. To solve the minimax optimization in GAN, we put forward a more efficient simultaneous stochastic gradient descent-ascent (SimSGDA) algorithm in place of the standard sequential gradient techniques. Empirical evidence shows that our model, trained using SimSGDA on the DCGAN neural architecture on the tiny-coloured Cats and CelebA datasets, converges to stationary points. These are the local Nash equilibrium points. We carried out numerical experiments to confirm that our model is computationally stable. It generates samples of comparable quality to those produced by prior Sinkhorn and Wasserstein GANs. Further simulations, assessed via the structural similarity index measure (SSIM), show that our model's empirical convergence rate is comparable to that of WGAN-GP.
Published version. 2022-07-27T02:25:52Z 2022-07-27T02:25:52Z 2021. Journal Article.
Citation: Risman Adnan, Muchlisin Adi Saputra, Junaidillah Fadlil, Ezerman, M. F., Muhamad Iqbal & Tjan Basaruddin (2021). Learning GANs in simultaneous game using Sinkhorn with positive features. IEEE Access, 9, 144361-144374. https://dx.doi.org/10.1109/ACCESS.2021.3120128
ISSN: 2169-3536. Handle: https://hdl.handle.net/10356/155575. DOI: 10.1109/ACCESS.2021.3120128. Scopus ID: 2-s2.0-85117836478. Volume 9, pages 144361-144374. Language: en. Journal: IEEE Access.
© 2021 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. Format: application/pdf.
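The linear-time trick described in the abstract follows from the choice of ground cost: with c(x, y) = -ε log ⟨G(x), G(y)⟩, the Gibbs kernel K = exp(-C/ε) has entries K_ij = ⟨G(x_i), G(y_j)⟩, so K factorizes as a product of two nonnegative n × r feature matrices and each Sinkhorn scaling step reduces to two O(nr) matrix-vector products. The NumPy sketch below illustrates this factorized iteration under illustrative assumptions (random nonnegative features and shapes); it is not the authors' implementation.

```python
import numpy as np

def sinkhorn_positive_features(phi_x, phi_y, a, b, n_iters=1000):
    """Sinkhorn matrix scaling when the Gibbs kernel factorizes as
    K = phi_x @ phi_y.T with nonnegative low-rank features (r << n),
    so each iteration costs O(n r) instead of O(n^2).

    phi_x: (n, r) nonnegative features of source samples
    phi_y: (m, r) nonnegative features of target samples
    a, b : marginal weights of shape (n,) and (m,)
    """
    n, m = phi_x.shape[0], phi_y.shape[0]
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        # K @ v computed as phi_x @ (phi_y.T @ v): two O(n r) products,
        # the full n x m kernel matrix is never formed
        u = a / (phi_x @ (phi_y.T @ v))
        v = b / (phi_y @ (phi_x.T @ u))
    return u, v

# Toy run with random nonnegative features (hypothetical sizes)
rng = np.random.default_rng(0)
n, m, r = 500, 400, 16
phi_x = np.abs(rng.normal(size=(n, r)))
phi_y = np.abs(rng.normal(size=(m, r)))
a = np.full(n, 1.0 / n)
b = np.full(m, 1.0 / m)
u, v = sinkhorn_positive_features(phi_x, phi_y, a, b)
# rows/columns of the implicit plan diag(u) K diag(v) match the marginals
row_sums = u * (phi_x @ (phi_y.T @ v))
col_sums = v * (phi_y @ (phi_x.T @ u))
```

At convergence `row_sums` approaches `a` and `col_sums` equals `b` by construction of the last update, while the transport plan itself stays implicit in the scalings (u, v).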
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Generative Adversarial Network
Entropy Regularized Optimal Transport
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Generative Adversarial Network
Entropy Regularized Optimal Transport
Risman Adnan
Muchlisin Adi Saputra
Junaidillah Fadlil
Ezerman, Martianus Frederic
Muhamad Iqbal
Tjan Basaruddin
Learning GANs in simultaneous game using Sinkhorn with positive features
description Entropy regularized optimal transport (EOT) distance and its symmetric normalization, known as the Sinkhorn divergence, offer smooth and continuous metrized weak-convergence distance metrics. They have excellent geometric properties and are useful to compare probability distributions in some generative adversarial network (GAN) models. Computing them using the original Sinkhorn matrix scaling algorithm is still expensive. The running time is quadratic, at O(n²), in the size n of the training dataset. This work investigates the problem of accelerating GAN training when the Sinkhorn divergence is used as a minimax objective. Let G be a Gaussian map from the ground space onto the positive orthant R^r_+ with r ≪ n. To speed up the divergence computation, we propose the use of c(x, y) = -ε log ⟨G(x), G(y)⟩ as the ground cost. This approximation, known as Sinkhorn with positive features, brings down the running time of the Sinkhorn matrix scaling algorithm to O(rn), which is linear in n. To solve the minimax optimization in GAN, we put forward a more efficient simultaneous stochastic gradient descent-ascent (SimSGDA) algorithm in place of the standard sequential gradient techniques. Empirical evidence shows that our model, trained using SimSGDA on the DCGAN neural architecture on the tiny-coloured Cats and CelebA datasets, converges to stationary points. These are the local Nash equilibrium points. We carried out numerical experiments to confirm that our model is computationally stable. It generates samples of comparable quality to those produced by prior Sinkhorn and Wasserstein GANs. Further simulations, assessed via the structural similarity index measure (SSIM), show that our model's empirical convergence rate is comparable to that of WGAN-GP.
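The simultaneous update rule (SimSGDA) named in the abstract can be illustrated on a small toy problem: both players compute their gradients at the same current iterate and step together, unlike sequential (alternating) schemes where the second player sees the first player's already-updated parameters. The quadratic convex-concave game below is a hypothetical stand-in for the GAN minimax objective, chosen only because its unique Nash equilibrium, (0, 0), is known in closed form.

```python
import numpy as np

def sim_gda(grad_x, grad_y, x0, y0, lr=0.1, steps=500):
    """Simultaneous gradient descent-ascent: both players step from the
    SAME iterate (x_t, y_t). A sequential scheme would instead evaluate
    grad_y at the already-updated x_{t+1}."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        gx = grad_x(x, y)            # gradients evaluated at the same point
        gy = grad_y(x, y)
        x, y = x - lr * gx, y + lr * gy  # descent on x, ascent on y
    return x, y

# Toy convex-concave game f(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
# whose unique Nash equilibrium is (x, y) = (0, 0).
grad_x = lambda x, y: x + y          # df/dx
grad_y = lambda x, y: x - y          # df/dy
x_star, y_star = sim_gda(grad_x, grad_y, 1.0, -1.0)
```

For this game the simultaneous iteration is a linear map with spectral radius below 1 at this step size, so (x, y) spirals into the equilibrium; the paper's stochastic variant replaces the exact gradients with minibatch estimates on the GAN objective.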
author2 School of Physical and Mathematical Sciences
author_facet School of Physical and Mathematical Sciences
Risman Adnan
Muchlisin Adi Saputra
Junaidillah Fadlil
Ezerman, Martianus Frederic
Muhamad Iqbal
Tjan Basaruddin
format Article
author Risman Adnan
Muchlisin Adi Saputra
Junaidillah Fadlil
Ezerman, Martianus Frederic
Muhamad Iqbal
Tjan Basaruddin
author_sort Risman Adnan
title Learning GANs in simultaneous game using Sinkhorn with positive features
title_short Learning GANs in simultaneous game using Sinkhorn with positive features
title_full Learning GANs in simultaneous game using Sinkhorn with positive features
title_fullStr Learning GANs in simultaneous game using Sinkhorn with positive features
title_full_unstemmed Learning GANs in simultaneous game using Sinkhorn with positive features
title_sort learning gans in simultaneous game using sinkhorn with positive features
publishDate 2022
url https://hdl.handle.net/10356/155575
_version_ 1759854794971807744