FormResNet: Formatted residual learning for image restoration
In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a 'residual formatting layer' to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.
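The core idea summarized above, predicting the corruption (residual) rather than mapping the corrupted image directly to the clean image, can be sketched as follows. This is a minimal illustrative example only, not the FormResNet architecture: the `ResidualDenoiser` name, layer layout, and depth are assumptions, and the paper's residual formatting layer and cross-level loss net are not reproduced here.

```python
# Minimal sketch of residual learning for image restoration (illustrative only).
# The network predicts the corruption (e.g. noise); the clean estimate is the
# input minus the predicted residual.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, corrupted: torch.Tensor) -> torch.Tensor:
        residual = self.body(corrupted)   # predicted corruption
        return corrupted - residual       # estimate of the latent clean image

# Usage sketch: train with a pixel-level loss against the clean target.
model = ResidualDenoiser()
noisy = torch.randn(4, 1, 64, 64)               # dummy corrupted batch
restored = model(noisy)
clean_target = torch.zeros_like(restored)       # placeholder clean images
loss = nn.functional.mse_loss(restored, clean_target)
```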
Main Authors: JIAO, Jianbo; TU, Wei-chih; HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Subjects: Computer vision; Deep learning; Deep neural networks; Pattern recognition; Restoration; Semantics; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/8428
https://ink.library.smu.edu.sg/context/sis_research/article/9431/viewcontent/Jiao_FormResNet_Formatted_Residual_CVPR_2017_paper.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-9431
record_format
dspace
spelling
sg-smu-ink.sis_research-9431 2024-01-09T03:28:40Z
FormResNet: Formatted residual learning for image restoration
JIAO, Jianbo; TU, Wei-chih; HE, Shengfeng
In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a 'residual formatting layer' to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.
2017-08-01T07:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/8428 info:doi/10.1109/CVPRW.2017.140 https://ink.library.smu.edu.sg/context/sis_research/article/9431/viewcontent/Jiao_FormResNet_Formatted_Residual_CVPR_2017_paper.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Computer vision; Deep learning; Deep neural networks; Pattern recognition; Restoration; Semantics; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Computer vision; Deep learning; Deep neural networks; Pattern recognition; Restoration; Semantics; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
description
In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a 'residual formatting layer' to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively.
format
text
author
JIAO, Jianbo; TU, Wei-chih; HE, Shengfeng
author_sort
JIAO, Jianbo
title
FormResNet: Formatted residual learning for image restoration
publisher
Institutional Knowledge at Singapore Management University
publishDate
2017
url
https://ink.library.smu.edu.sg/sis_research/8428
https://ink.library.smu.edu.sg/context/sis_research/article/9431/viewcontent/Jiao_FormResNet_Formatted_Residual_CVPR_2017_paper.pdf