The spatially-correlative loss for various image translation tasks
We propose a novel spatially-correlative loss that is simple, efficient and yet effective for preserving scene structure consistency while supporting large appearance changes during unpaired image-to-image (I2I) translation. Previous methods attempt this by using pixel-level cycle-consistency or fea...
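Although the abstract above is truncated, one plausible reading of a "spatially-correlative" structure loss is that it compares self-similarity patterns of features between the source and translated images, so appearance can change freely while spatial layout is preserved. The sketch below illustrates that idea only; it assumes PyTorch, and the small convolutional encoder, the 7x7 neighborhood, and the L1 comparison are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of a spatially-correlative (self-similarity) structure
# loss. Assumes PyTorch; encoder, patch size, and L1 distance are hypothetical
# choices, not the exact loss proposed in the record above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSimilarityLoss(nn.Module):
    def __init__(self, patch_size: int = 7):
        super().__init__()
        self.patch_size = patch_size
        # Tiny stand-in feature extractor; a learned or pretrained network
        # could be used here instead.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def _self_similarity(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> cosine similarity between each location and
        # its patch_size x patch_size neighborhood -> (B, H*W, patch_size**2)
        b, c, h, w = feat.shape
        feat = F.normalize(feat, dim=1)
        neigh = F.unfold(feat, kernel_size=self.patch_size,
                         padding=self.patch_size // 2)        # (B, C*k*k, H*W)
        neigh = neigh.view(b, c, self.patch_size ** 2, h * w)
        center = feat.view(b, c, 1, h * w)
        sim = (center * neigh).sum(dim=1)                      # (B, k*k, H*W)
        return sim.permute(0, 2, 1)                            # (B, H*W, k*k)

    def forward(self, src: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
        # Structure is compared via self-similarity maps rather than raw
        # pixels, so colors and textures may differ between the two images.
        sim_src = self._self_similarity(self.encoder(src))
        sim_out = self._self_similarity(self.encoder(translated))
        return F.l1_loss(sim_src, sim_out)

# Usage (hypothetical tensors): loss = SelfSimilarityLoss()(real_img, fake_img)
```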
| Field | Value |
|---|---|
| Main Authors | Zheng, Chuanxia; Cham, Tat-Jen; Cai, Jianfei |
| Other Authors | School of Computer Science and Engineering |
| Format | Conference or Workshop Item |
| Language | English |
| Published | 2021 |
| Online Access | https://hdl.handle.net/10356/151225 |
| Institution | Nanyang Technological University |
Similar Items
- T2Net : synthetic-to-realistic translation for solving single-image depth estimation tasks
  by: Zheng, Chuanxia, et al. Published: (2020)
- Modelling spatial correlation in earthquake-induced damage and its impact on regional loss estimation
  by: Nguyen, Michele, et al. Published: (2023)
- Multi-worker-aware task planning in real-time spatial crowdsourcing
  by: TAO, Qian, et al. Published: (2018)
- Investigating biological feature detectors in simple pattern recognition towards complex saliency prediction tasks
  by: Cordel, Macario O., II. Published: (2018)
- Learning to share latent tasks for action recognition
  by: Zhou, Q., et al. Published: (2014)