Fast scene labeling via structural inference
Scene labeling, or parsing, aims to assign a semantic label to every pixel of an input image. Existing CNN-based models cannot leverage label dependencies, while RNN-based models predict labels only within a local context. In this paper, we propose a fast LSTM scene labeling network via structural inference. A minimum spanning tree is used to build the image structure for constructing semantic relationships. This structure allows efficient generation of direct parent-child dependencies for arbitrary levels of superpixels, so structural relationships can be learned with an LSTM. In particular, we propose a bi-directional recurrent network to model the information flow along the parent-child path. In this way, the recurrent units at both coarse and fine levels can mutually exchange global and local context information across the entire image structure. The proposed network is extremely fast: it is 2.5x faster than state-of-the-art RNN-based models. Extensive experiments demonstrate that the proposed method significantly improves the learning of label dependencies and outperforms state-of-the-art methods on different benchmarks. © 2021 Elsevier B.V. All rights reserved.
Main Authors: ZHANG, Huaidong; HAN, Chu; ZHANG, Xiaodan; DU, Yong; XU, Xuemiao; HAN, Guoqiang; QIN, Jing; HE, Shengfeng
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021-03-01
Collection: Research Collection School Of Computing and Information Systems
Subjects: LSTM; Structural inference; Scene labeling; Information Security
DOI: 10.1016/j.neucom.2020.12.134
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access: https://ink.library.smu.edu.sg/sis_research/7838
https://ink.library.smu.edu.sg/context/sis_research/article/8841/viewcontent/1_s2.0_S0925231221003428_main.pdf
Institution: Singapore Management University
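The abstract above outlines the core mechanism: superpixels are organised into a minimum spanning tree, and recurrent units pass information bottom-up and top-down along parent-child edges. The sketch below is a minimal NumPy/SciPy illustration of that idea, not the authors' implementation: it assumes precomputed superpixel features and an adjacency list, uses a single shared LSTM-style cell in place of the paper's multi-level architecture, and all function and parameter names (build_mst_tree, tree_lstm_step, bidirectional_tree_pass, Wx, Wh) are hypothetical.

```python
# Hypothetical sketch of MST-based structural inference over superpixels.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def build_mst_tree(features, edges):
    """Build a minimum spanning tree over the superpixel adjacency graph.

    Edge weight = feature dissimilarity, so the tree keeps the most similar
    neighbouring regions directly connected, giving parent-child dependencies.
    """
    n = len(features)
    rows, cols, weights = zip(*[
        (a, b, np.linalg.norm(features[a] - features[b]) + 1e-6)
        for a, b in edges
    ])
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    # BFS from an arbitrary root gives a visiting order and a parent array.
    order, parents = breadth_first_order(mst, i_start=0, directed=False)
    return order, parents  # parents[root] is negative (no parent)


def tree_lstm_step(x, h_ctx, c_ctx, p):
    """One LSTM-style update from a node feature and a context state
    (summed children on the bottom-up pass, the parent on the top-down pass)."""
    pre = x @ p["Wx"] + h_ctx @ p["Wh"] + p["b"]
    i, f, o, g = np.split(pre, 4)
    c = sigmoid(f) * c_ctx + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c


def bidirectional_tree_pass(features, order, parents, p):
    """Bottom-up (children -> parent) then top-down (parent -> children) pass,
    so every superpixel sees both local and global context."""
    n, hidden = len(features), p["Wh"].shape[0]
    h_up = np.zeros((n, hidden)); c_up = np.zeros((n, hidden))
    sum_h = np.zeros((n, hidden)); sum_c = np.zeros((n, hidden))
    for v in order[::-1]:                       # leaves before parents
        h_up[v], c_up[v] = tree_lstm_step(features[v], sum_h[v], sum_c[v], p)
        if parents[v] >= 0:
            sum_h[parents[v]] += h_up[v]
            sum_c[parents[v]] += c_up[v]
    h_dn = np.zeros((n, hidden)); c_dn = np.zeros((n, hidden))
    for v in order:                             # root before children
        par = parents[v]
        ph = h_dn[par] if par >= 0 else np.zeros(hidden)
        pc = c_dn[par] if par >= 0 else np.zeros(hidden)
        h_dn[v], c_dn[v] = tree_lstm_step(features[v], ph, pc, p)
    # Concatenated states would feed a per-superpixel label classifier.
    return np.concatenate([h_up, h_dn], axis=1)


# Toy usage with made-up sizes: 5 superpixels, 8-D features, hidden size 4.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
adjacency = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
params = {"Wx": 0.1 * rng.normal(size=(8, 16)),
          "Wh": 0.1 * rng.normal(size=(4, 16)),
          "b": np.zeros(16)}
order, parents = build_mst_tree(feats, adjacency)
context = bidirectional_tree_pass(feats, order, parents, params)
print(context.shape)  # (5, 8) -- bidirectional context per superpixel
```

The reversed BFS order guarantees that children are processed before their parents in the bottom-up pass, while the forward order does the opposite for the top-down pass; this is what lets global context from the root reach every leaf and local evidence from the leaves reach the root.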