DeshadowNet: A multi-context embedding deep network for shadow removal
Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods.
Saved in:
Main Authors: | QU, Liangqiong; TIAN, Jiandong; HE, Shengfeng; TANG, Yandong; LAU, Rynson W. H. |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2017 |
Subjects: | Benchmarking; Computer vision; Deep neural networks; Image segmentation; Semantics; Artificial Intelligence and Robotics; OS and Networks; Systems Architecture |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8425 https://ink.library.smu.edu.sg/context/sis_research/article/9428/viewcontent/Qu_DeshadowNet_A_Multi_Context_CVPR_2017_paper.pdf |
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.sis_research-9428 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-9428 2024-01-09T03:29:37Z. DeshadowNet: A multi-context embedding deep network for shadow removal. QU, Liangqiong; TIAN, Jiandong; HE, Shengfeng; TANG, Yandong; LAU, Rynson W. H. Published 2017-07-01T07:00:00Z; text, application/pdf; eng. https://ink.library.smu.edu.sg/sis_research/8425 info:doi/10.1109/CVPR.2017.248 https://ink.library.smu.edu.sg/context/sis_research/article/9428/viewcontent/Qu_DeshadowNet_A_Multi_Context_CVPR_2017_paper.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems. |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Benchmarking; Computer vision; Deep neural networks; Image segmentation; Semantics; Artificial Intelligence and Robotics; OS and Networks; Systems Architecture |
description |
Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods. |
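The description above says the network predicts a shadow matte that encodes, per pixel, how much the shadow darkens the scene. As a rough illustration of why a matte suffices for removal, a common multiplicative shadow model can be sketched in plain Python. This is a minimal sketch under an assumed model (shadow image = matte ⊙ shadow-free image), not a restatement of the paper's exact formulation; the function names and the toy 1-D "image" are invented for illustration:

```python
# Illustrative sketch only: using a predicted shadow matte to recover a
# shadow-free image under an assumed multiplicative shadow model
#   I_shadow = matte * I_free   (element-wise, matte in (0, 1]).

def apply_matte(shadow_free, matte):
    """Darken a shadow-free image with a per-pixel matte in (0, 1]."""
    return [p * m for p, m in zip(shadow_free, matte)]

def remove_shadow(shadow_img, matte, eps=1e-6):
    """Invert the multiplicative model: divide by the matte per pixel."""
    return [p / max(m, eps) for p, m in zip(shadow_img, matte)]

# Toy 1-D "image": intensities in [0, 1]; matte < 1 marks shadowed pixels.
free  = [0.8, 0.8, 0.8, 0.8]
matte = [1.0, 0.5, 0.5, 1.0]   # the two middle pixels are in shadow

shadowed  = apply_matte(free, matte)       # middle pixels darkened to 0.4
recovered = remove_shadow(shadowed, matte)  # matches `free` again
```

Under this assumed model, removal reduces to a per-pixel division once the matte is known, which is why the network only has to predict the matte rather than synthesize the shadow-free image directly.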
format |
text |
author |
QU, Liangqiong; TIAN, Jiandong; HE, Shengfeng; TANG, Yandong; LAU, Rynson W. H. |
title |
DeshadowNet: A multi-context embedding deep network for shadow removal |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2017 |
url |
https://ink.library.smu.edu.sg/sis_research/8425 https://ink.library.smu.edu.sg/context/sis_research/article/9428/viewcontent/Qu_DeshadowNet_A_Multi_Context_CVPR_2017_paper.pdf |