Sliced Wasserstein generative models
In generative modeling, the Wasserstein distance (WD) has emerged as a useful metric to measure the discrepancy between generated and real data distributions. Unfortunately, it is challenging to approximate the WD of high-dimensional distributions. In contrast, the sliced Wasserstein distance (SWD)...
Saved in: Institutional Knowledge at Singapore Management University

Main Authors: WU, Jiqing; HUANG, Zhiwu; ACHARYA, Dinesh; LI, Wen; THOMA, Janine; PAUDEL, Danda Pani; VAN GOOL, Luc
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects: Deep Learning; Image and Video Synthesis; Optimization Methods; Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/6401
https://ink.library.smu.edu.sg/context/sis_research/article/7404/viewcontent/Sliced_Wasserstein_Generative_Models.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-7404
record_format: dspace
spelling: sg-smu-ink.sis_research-7404 (last modified 2023-08-03T01:07:23Z)
Title: Sliced Wasserstein generative models
Authors: WU, Jiqing; HUANG, Zhiwu; ACHARYA, Dinesh; LI, Wen; THOMA, Janine; PAUDEL, Danda Pani; VAN GOOL, Luc
Description: In generative modeling, the Wasserstein distance (WD) has emerged as a useful metric to measure the discrepancy between generated and real data distributions. Unfortunately, it is challenging to approximate the WD of high-dimensional distributions. In contrast, the sliced Wasserstein distance (SWD) factorizes high-dimensional distributions into their multiple one-dimensional marginal distributions and is thus easier to approximate. In this paper, we introduce novel approximations of the primal and dual SWD. Instead of using a large number of random projections, as is done by conventional SWD approximation methods, we propose to approximate SWDs with a small number of parameterized orthogonal projections in an end-to-end deep learning fashion. As concrete applications of our SWD approximations, we design two types of differentiable SWD blocks to equip modern generative frameworks: Auto-Encoders (AE) and Generative Adversarial Networks (GAN). In our experiments, we not only show the superiority of the proposed generative models on standard image synthesis benchmarks, but also demonstrate state-of-the-art performance on challenging high-resolution image and video generation in an unsupervised manner.
Date: 2019-06-01T07:00:00Z
Format: text, application/pdf
Online Access: https://ink.library.smu.edu.sg/sis_research/6401
https://ink.library.smu.edu.sg/context/sis_research/article/7404/viewcontent/Sliced_Wasserstein_Generative_Models.pdf
DOI: 10.1109/CVPR.2019.00383
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Publisher: Institutional Knowledge at Singapore Management University
Language: eng
Subjects: Deep Learning; Image and Video Synthesis; Optimization Methods; Databases and Information Systems; Graphics and Human Computer Interfaces
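The abstract contrasts the conventional SWD estimator, which averages one-dimensional Wasserstein distances over many random projections, with the paper's small set of learned orthogonal projections. As a point of reference, here is a minimal NumPy sketch of the conventional random-projection estimator; the function name, default parameters, and the equal-sample-size assumption are mine, not taken from the paper.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=128, p=2, rng=None):
    """Monte-Carlo estimate of the p-sliced Wasserstein distance between
    two equal-size empirical distributions X, Y of shape (n, d).
    Each random direction reduces both point clouds to 1-D samples,
    where the Wasserstein distance is a simple sort-and-compare."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Directions drawn uniformly from the unit sphere S^{d-1}.
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project both clouds: shape (n, n_projections), one 1-D sample set
    # per direction; sorting gives the optimal 1-D coupling.
    Xp = np.sort(X @ thetas.T, axis=0)
    Yp = np.sort(Y @ thetas.T, axis=0)
    # Average the 1-D W_p^p over samples and slices, then take the p-th root.
    return (np.abs(Xp - Yp) ** p).mean() ** (1.0 / p)
```

Because each slice only needs a sort, the cost grows gently with dimension; the drawback the abstract points to is that many random directions are needed for a stable estimate in high dimensions.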
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Deep Learning; Image and Video Synthesis; Optimization Methods; Databases and Information Systems; Graphics and Human Computer Interfaces
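The abstract's "small number of parameterized orthogonal projections" trained end-to-end can be sketched under one common construction: reparameterizing an unconstrained matrix through a QR decomposition so its projection directions stay orthonormal. This is my own illustrative assumption; the abstract does not say which orthogonalization the paper actually uses.

```python
import numpy as np

def orthogonal_projections(W):
    """Map an unconstrained parameter matrix W of shape (k, d), k <= d,
    to k orthonormal projection directions via QR decomposition.
    In an autodiff framework, gradients flow through QR, so the
    directions can be learned instead of sampled at random."""
    # QR of W^T yields an orthonormal basis Q (d, k) for W's row space.
    Q, _ = np.linalg.qr(W.T)
    return Q.T  # (k, d), rows are orthonormal

def sliced_distance(X, Y, W, p=2):
    """SWD-style estimate over the k directions parameterized by W."""
    T = orthogonal_projections(W)
    Xp = np.sort(X @ T.T, axis=0)
    Yp = np.sort(Y @ T.T, axis=0)
    return (np.abs(Xp - Yp) ** p).mean() ** (1.0 / p)
```

The appeal, as the abstract argues, is that a few well-chosen orthogonal directions can replace the large number of random projections the conventional estimator requires.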