Learning invariant and uniformly distributed feature space for multi-view generation

Multi-view generation from a given single view is a significant yet challenging problem with broad applications in virtual reality and robotics. Existing methods mainly utilize a basic GAN-based structure to directly learn a mapping between two different views. Although they can produce plausible results, they still struggle to recover faithful details and fail to generalize to unseen data. In this paper, we propose to learn invariant and uniformly distributed representations for multi-view generation with an "Alignment" and a "Uniformity" constraint (AU-GAN). Our method is inspired by the idea of contrastive learning to learn a well-regulated feature space for multi-view generation. Specifically, our feature extractor is designed to extract view-invariant representations that capture the intrinsic and essential knowledge of the input, and to distribute all representations evenly throughout the space, enabling the network to "explore" the entire feature space and thus avoiding poor generative ability on unseen data. Extensive experiments on multi-view generation for both faces and objects demonstrate the capability of the proposed method to generate realistic and high-quality views, especially for unseen data in wild conditions.
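
The "Alignment" and "Uniformity" constraints named in the abstract correspond to the two geometric properties studied in contrastive representation learning: features of different views of the same instance should coincide (alignment), and the set of all features should spread evenly over the feature space (uniformity). The record does not spell out AU-GAN's exact losses, so the sketch below only illustrates the standard hypersphere formulations of these two terms (after Wang and Isola, 2020); the function names, tensor shapes, and hyperparameters alpha and t are illustrative assumptions, not the paper's implementation.

    # Sketch of alignment/uniformity losses on L2-normalized features.
    # AU-GAN's actual objective is not given in this record.
    import torch
    import torch.nn.functional as F

    def align_loss(feat_a, feat_b, alpha=2.0):
        # Alignment: paired views of the same instance should map to
        # nearby (ideally identical, i.e. view-invariant) embeddings.
        feat_a = F.normalize(feat_a, dim=-1)
        feat_b = F.normalize(feat_b, dim=-1)
        return (feat_a - feat_b).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(feat, t=2.0):
        # Uniformity: embeddings should spread evenly over the unit
        # hypersphere, so the network can "explore" the whole space.
        feat = F.normalize(feat, dim=-1)
        return torch.pdist(feat, p=2).pow(2).mul(-t).exp().mean().log()

    # Usage with hypothetical paired features, e.g. frontal vs. profile
    # views of the same batch of 128 instances, 64-d embeddings.
    feat_a, feat_b = torch.randn(128, 64), torch.randn(128, 64)
    loss = align_loss(feat_a, feat_b) + 0.5 * (uniform_loss(feat_a) + uniform_loss(feat_b))

Minimizing the first term enforces view-invariance, while the uniformity terms counteract representational collapse, which matches the abstract's stated goal of avoiding poor generative ability on unseen data.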

Bibliographic Details
Main Authors: LU, Yuqin; CAO, Jiangzhong; HE, Shengfeng; GUO, Jiangtao; ZHOU, Qiliang; DAI, Qingyun
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Multi-view generation; Generative adversarial networks; Contrastive learning; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/7870
DOI: 10.1016/j.inffus.2023.01.011
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
Institution: Singapore Management University