From contexts to locality: Ultra-high resolution image segmentation via locality-aware contextual correlation

Bibliographic Details
Main Authors: LI, Qi; YANG, Weixiang; LIU, Wenxi; YU, Yuanlong; HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8531
https://ink.library.smu.edu.sg/context/sis_research/article/9534/viewcontent/From_Contexts_to_Locality__Ultra_High_Resolution_Image_Segmentation_via_Locality_Aware_Contextual_Correlation.pdf
Institution: Singapore Management University
Description
Summary: Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its real-world applications. In this paper, we innovate the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel segmentation model based on locality-aware contextual correlation to process local patches, in which the relevance between a local patch and its various contexts is exploited jointly and complementarily to handle semantic regions with large variations. Additionally, we present a contextual semantics refinement network that associates the local segmentation result with its contextual semantics and is thus able to reduce boundary artifacts and refine mask contours when generating the final high-resolution mask. Furthermore, comprehensive experiments demonstrate that our model outperforms other state-of-the-art methods on public benchmarks.
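
As a rough illustration of the patch-based pipeline the summary describes, the Python sketch below partitions an image into regular patches, segments each patch locally, and merges the local masks into a full-resolution result. It is a minimal sketch only: segment_patch stands in for an arbitrary local segmentation model (a hypothetical placeholder), and the paper's actual contributions, the locality-aware contextual correlation and the contextual semantics refinement network, are not modeled here.

    import numpy as np

    def segment_ultra_high_res(image, patch_size, segment_patch):
        # Generic partition -> local segmentation -> merge pipeline.
        # segment_patch: hypothetical callable mapping an image patch
        # (H, W, C) to a per-pixel label map (H, W); the contextual
        # correlation and refinement stages are omitted.
        h, w = image.shape[:2]
        mask = np.zeros((h, w), dtype=np.int64)
        for top in range(0, h, patch_size):
            for left in range(0, w, patch_size):
                patch = image[top:top + patch_size, left:left + patch_size]
                # NumPy slicing clips at image borders, so edge patches
                # may be smaller than patch_size; the two slices still
                # have matching shapes, so the assignment is safe.
                mask[top:top + patch_size, left:left + patch_size] = segment_patch(patch)
        return mask

Note that this naive loop segments each patch in isolation; the paper's premise is precisely that such isolated processing loses context, which its model recovers by correlating each patch with its surrounding contexts and refining the merged mask's boundaries.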