Convergence of non-convex non-concave GANs using Sinkhorn divergence

The Sinkhorn divergence is a symmetric normalization of entropy-regularized optimal transport. It is smooth, continuous, and metrizes weak convergence, with excellent geometric properties. We use it as an alternative to the minimax objective function in formulating generative adversarial networks. The optimization is defined with the Sinkhorn divergence as the objective, under non-convex, non-concave conditions, and this work focuses on its convergence and stability. We propose a first-order sequential stochastic gradient descent ascent (SeqSGDA) algorithm. Under some mild approximations, the learning converges to local minimax points. Using the structural similarity index measure (SSIM), we supply a non-asymptotic analysis of the algorithm's convergence rate. Empirical evidence shows a convergence rate that is inversely proportional to the number of iterations when the method is tested on the tiny colour datasets Cats and CelebA with the deep convolutional GAN (DCGAN) and ResNet architectures. The entropy regularization parameter $\varepsilon$ is taken to be approximately equal to the SSIM tolerance $\epsilon$. We determine the iteration complexity to return an $\epsilon$-stationary point to be $\mathcal{O}(\kappa \log(\epsilon^{-1}))$, where $\kappa$ depends on the convexity of the Sinkhorn divergence and the minimax step ratio in the SeqSGDA algorithm.
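
The SeqSGDA procedure alternates a stochastic gradient descent step for the generator with an ascent step for the adversary on the same Sinkhorn objective. The sketch below is illustrative only, not the authors' code: it assumes the geomloss Python package for the debiased Sinkhorn divergence (whose blur parameter plays the role of the entropic regularization $\varepsilon$), and the toy networks, step sizes, and data are placeholders.

# Minimal, illustrative SeqSGDA step (not the authors' released code).
# Assumes `pip install geomloss torch`; networks, learning rates, and data are toy placeholders.
import torch
from torch import nn
from geomloss import SamplesLoss

# Debiased Sinkhorn divergence; `blur` stands in for the entropic regularization epsilon.
sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05, debias=True)

latent_dim, data_dim, feat_dim = 16, 2, 8
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
critic = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))

opt_g = torch.optim.SGD(generator.parameters(), lr=1e-3)  # descent step size
opt_c = torch.optim.SGD(critic.parameters(), lr=5e-4)     # ascent step size

def seqsgda_step(real_batch: torch.Tensor) -> float:
    """One sequential descent-then-ascent update on the Sinkhorn objective."""
    # Generator descent: minimize S_eps(critic(G(z)), critic(x)).
    z = torch.randn(real_batch.size(0), latent_dim)
    loss_g = sinkhorn(critic(generator(z)), critic(real_batch))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Critic ascent: maximize the same divergence (minimize its negative),
    # with the generator output detached so only the critic is updated.
    z = torch.randn(real_batch.size(0), latent_dim)
    loss_c = -sinkhorn(critic(generator(z).detach()), critic(real_batch))
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()
    return float(loss_g)

# Toy usage: fit samples from a 2-D Gaussian.
for _ in range(5):
    seqsgda_step(torch.randn(128, 2))

The two distinct learning rates above reflect the minimax step ratio that, per the abstract, enters the constant $\kappa$ in the $\mathcal{O}(\kappa \log(\epsilon^{-1}))$ iteration bound.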

Bibliographic Details
Main Authors: Adnan, Risman, Saputra, Muchlisin Adi, Fadlil, Junaidillah, Ezerman, Martianus Frederic, Iqbal, Muhamad, Basaruddin, Tjan
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2022
Subjects: Science::Physics; Convergence; Generative Adversarial Network
Online Access: https://hdl.handle.net/10356/154075
Institution: Nanyang Technological University
id sg-ntu-dr.10356-154075
record_format dspace
spelling sg-ntu-dr.10356-154075 (record timestamp 2023-02-28T19:43:20Z)
Published version. Journal Article, issued 2021; record dated 2022-02-15T05:03:19Z.
Citation: Adnan, R., Saputra, M. A., Fadlil, J., Ezerman, M. F., Iqbal, M. & Basaruddin, T. (2021). Convergence of non-convex non-concave GANs using Sinkhorn divergence. IEEE Access, 9, 67595-67609. https://dx.doi.org/10.1109/ACCESS.2021.3074943
Journal: IEEE Access, volume 9, pages 67595-67609. ISSN: 2169-3536. Language: en.
DOI: 10.1109/ACCESS.2021.3074943
Handle: https://hdl.handle.net/10356/154075
Scopus ID: 2-s2.0-85104665079
Rights: © 2021 IEEE. This journal is 100% open access, which means that all content is freely available without charge to users or their institutions. All articles accepted after 12 June 2019 are published under a CC BY 4.0 license, and the author retains copyright. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles, or use them for any other lawful purpose, as long as proper attribution is given.
Format: application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Science::Physics
Convergence
Generative Adversarial Network
author2 School of Physical and Mathematical Sciences
format Article
author Adnan, Risman
Saputra, Muchlisin Adi
Fadlil, Junaidillah
Ezerman, Martianus Frederic
Iqbal, Muhamad
Basaruddin, Tjan
title Convergence of non-convex non-concave GANs using sinkhorn divergence
publishDate 2022
url https://hdl.handle.net/10356/154075
_version_ 1759855339996446720