CRCNet: few-shot segmentation with cross-reference and region–global conditional networks
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images. In this paper, we propose a Cross-Reference and Local–Global Conditional Networks (CRCNet) for few-shot segmentation. Unlike previous works that only predict the query image’s mask, our proposed model concurrently makes predictions for both the support image and the query image. Our network can better find the co-occurrent objects in the two images with a cross-reference mechanism, thus helping the few-shot segmentation task. To further improve feature comparison, we develop a local-global conditional module to capture both global and local relations. We also develop a mask refinement module to refine the prediction of the foreground regions recurrently. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art performance.
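The abstract describes a cross-reference mechanism that processes the support and query images jointly, reinforcing the features the two images share before masks are predicted for both. The sketch below illustrates one plausible way such a block could be written; the class name `CrossReference`, the channel-attention formulation, and all layer sizes are illustrative assumptions, not the implementation from the CRCNet paper.

```python
# Hypothetical sketch of a cross-reference block for few-shot segmentation.
# The gating design and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class CrossReference(nn.Module):
    """Reinforce co-occurrent channels in a support/query feature pair."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()

        # One small gating head per branch: GAP -> MLP -> sigmoid.
        def gate():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        self.support_gate = gate()
        self.query_gate = gate()

    def forward(self, feat_s: torch.Tensor, feat_q: torch.Tensor):
        # Channel importance estimated independently for each branch.
        w_s = self.support_gate(feat_s)   # (B, C, 1, 1)
        w_q = self.query_gate(feat_q)     # (B, C, 1, 1)
        # Channels that matter in *both* images receive the largest weights,
        # which is the intuition behind cross-referencing the two branches.
        common = w_s * w_q
        return feat_s * common, feat_q * common


if __name__ == "__main__":
    cr = CrossReference(channels=256)
    fs = torch.randn(2, 256, 32, 32)   # support backbone features
    fq = torch.randn(2, 256, 32, 32)   # query backbone features
    out_s, out_q = cr(fs, fq)
    print(out_s.shape, out_q.shape)    # torch.Size([2, 256, 32, 32]) each
```

In this reading, both branches share the same "common" attention, so the reinforced support and query features can each be decoded into a mask, matching the abstract's point that predictions are made for both images rather than for the query alone.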
Main Authors: Liu, Weide; Zhang, Chi; Lin, Guosheng; Liu, Fayao
Other Authors: School of Computer Science and Engineering; Institute for Infocomm Research, A*STAR
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Few Shot Learning; Semantic Segmentation
Online Access: https://hdl.handle.net/10356/170422
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-170422
record_format: dspace
spelling:
  Record ID: sg-ntu-dr.10356-170422 (last updated 2023-09-12T01:51:43Z)
  Title: CRCNet: few-shot segmentation with cross-reference and region–global conditional networks
  Authors: Liu, Weide; Zhang, Chi; Lin, Guosheng; Liu, Fayao
  Affiliations: School of Computer Science and Engineering; Institute for Infocomm Research, A*STAR
  Subjects: Engineering::Computer science and engineering; Few Shot Learning; Semantic Segmentation
  Abstract: Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images. In this paper, we propose a Cross-Reference and Local–Global Conditional Networks (CRCNet) for few-shot segmentation. Unlike previous works that only predict the query image’s mask, our proposed model concurrently makes predictions for both the support image and the query image. Our network can better find the co-occurrent objects in the two images with a cross-reference mechanism, thus helping the few-shot segmentation task. To further improve feature comparison, we develop a local-global conditional module to capture both global and local relations. We also develop a mask refinement module to refine the prediction of the foreground regions recurrently. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art performance.
  Funders: Agency for Science, Technology and Research (A*STAR); Ministry of Education (MOE); National Research Foundation (NRF)
  Funding: This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-003), the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (MOE-T2EP20220-0007) and Tier 1 (RG95/20). This research is also partly supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funds (Grant No. A20H6b0151).
  Dates: accessioned 2023-09-12T01:51:43Z; available 2023-09-12T01:51:43Z; issued 2022
  Type: Journal Article
  Citation: Liu, W., Zhang, C., Lin, G. & Liu, F. (2022). CRCNet: few-shot segmentation with cross-reference and region–global conditional networks. International Journal of Computer Vision, 130(12), 3140-3157. https://dx.doi.org/10.1007/s11263-022-01677-7
  ISSN: 0920-5691
  Handle: https://hdl.handle.net/10356/170422
  DOI: 10.1007/s11263-022-01677-7
  Scopus ID: 2-s2.0-85139191737
  Volume/Issue/Pages: 130(12), 3140-3157
  Language: en
  Grant numbers: AISG-RP-2018-003; MOE-T2EP20220-0007; RG95/20; A20H6b0151
  Journal: International Journal of Computer Vision
  Rights: © 2022 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Few Shot Learning; Semantic Segmentation
description: Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images. In this paper, we propose a Cross-Reference and Local–Global Conditional Networks (CRCNet) for few-shot segmentation. Unlike previous works that only predict the query image’s mask, our proposed model concurrently makes predictions for both the support image and the query image. Our network can better find the co-occurrent objects in the two images with a cross-reference mechanism, thus helping the few-shot segmentation task. To further improve feature comparison, we develop a local-global conditional module to capture both global and local relations. We also develop a mask refinement module to refine the prediction of the foreground regions recurrently. Experiments on the PASCAL VOC 2012, MS COCO, and FSS-1000 datasets show that our network achieves new state-of-the-art performance.
author2: School of Computer Science and Engineering
format: Article
author: Liu, Weide; Zhang, Chi; Lin, Guosheng; Liu, Fayao
author_sort: Liu, Weide
title: CRCNet: few-shot segmentation with cross-reference and region–global conditional networks
publishDate: 2023
url: https://hdl.handle.net/10356/170422
_version_: 1779156772388339712