Deep learning approaches for object co-segmentation and one-shot segmentation
Main Author:
Other Authors:
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/149937
Institution: Nanyang Technological University
Summary: Image co-segmentation is an active computer vision task that aims to discover and segment the objects shared across multiple images. Recently, researchers have designed various learning-based algorithms to handle the co-segmentation task. The main difficulty in this task is how to effectively transfer information between images to infer the common object regions. In this thesis, we present CycleSegNet, an effective and novel approach for the co-segmentation task. Our network design has two key components: a region correspondence module, which is the basic operation for exchanging information between local image regions, and a cycle refinement module, which utilizes ConvLSTMs to progressively update the image embeddings and exchange information in a cyclic manner. Experimental results on four popular benchmark datasets, namely the PASCAL VOC, MSRC, Internet, and iCoseg datasets, indicate that our proposed approach greatly outperforms existing networks and achieves new state-of-the-art performance.
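To illustrate the region correspondence idea described in the abstract, the sketch below shows one way that local regions of one image's feature map can attend to and aggregate features from another image. This is a minimal PyTorch-style sketch under assumed shapes and projections, not the thesis implementation; the class name, the 1x1 projections, and the cross-attention matching are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RegionCorrespondence(nn.Module):
    """Hypothetical sketch: exchange information between the local regions
    of two feature maps via cross-attention (shapes and projections assumed)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.key_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.value_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, H, W) feature maps of the two input images.
        b, c, h, w = feat_a.shape
        q = self.query_proj(feat_a).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.key_proj(feat_b).flatten(2)                    # (B, C, HW)
        v = self.value_proj(feat_b).flatten(2).transpose(1, 2)  # (B, HW, C)

        # Similarity between every region of image A and every region of image B.
        attn = torch.softmax(torch.bmm(q, k) / c ** 0.5, dim=-1)  # (B, HW, HW)

        # Aggregate image-B features for each image-A region and fuse residually.
        exchanged = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)
        return feat_a + exchanged
```

In the abstract's cyclic scheme, an update of this kind would be applied repeatedly while a recurrent module (e.g. a ConvLSTM) refines each image's embedding between exchanges.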
In addition to image co-segmentation, we also explore a method that solves one-shot segmentation with only weak supervision (bounding boxes). One-shot semantic segmentation has recently gained attention for its strong ability to generalize to unseen-class images given only a few annotated examples. However, existing one-shot object segmentation methods have mainly relied on manually labeled pixel-wise segmentation masks.
The main challenges in this task are the limited data and the weak supervision. In this thesis, we present an effective approach that utilizes a recent weakly-supervised semantic segmentation method to generate pseudo mask labels inside the bounding box regions, and then integrates the detailed information and the correlation between the support image and the query image to solve one-shot image segmentation. Extensive experiments on the PASCAL-5i dataset show that our weakly-supervised method narrows the performance gap between bounding box supervision and pixel-wise annotations, and performs comparably with state-of-the-art fully-supervised one-shot methods.
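The sketch below illustrates the general flow such a weakly-supervised one-shot pipeline could take: a coarse pseudo mask derived from the support bounding box, followed by prototype matching between support and query features. It is a minimal sketch under stated assumptions, not the thesis pipeline; the helper names, tensor shapes, the box-filling pseudo mask, and the cosine-similarity matching are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pseudo_mask_from_box(box: torch.Tensor, height: int, width: int) -> torch.Tensor:
    """Hypothetical placeholder: build a coarse binary mask from a bounding box
    (x1, y1, x2, y2). A weakly-supervised segmentation model would refine the
    box region into a pseudo mask; here we simply fill the box."""
    mask = torch.zeros(height, width)
    x1, y1, x2, y2 = box.long().tolist()
    mask[y1:y2, x1:x2] = 1.0
    return mask


def prototype_similarity(support_feat: torch.Tensor,
                         pseudo_mask: torch.Tensor,
                         query_feat: torch.Tensor) -> torch.Tensor:
    """Compare query features against a support prototype obtained by masked
    average pooling over the pseudo-mask region (assumed shapes: (C, H, W))."""
    mask = F.interpolate(pseudo_mask[None, None], size=support_feat.shape[-2:],
                         mode="nearest")[0, 0]
    # Masked average pooling: mean support feature inside the pseudo-mask region.
    prototype = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)
    # Cosine similarity between the prototype and every query location yields a
    # coarse foreground score map that a decoder could then refine.
    return F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)
```

A full method would replace the filled box with a learned pseudo mask and feed the score map, together with detailed support-query correlations, into a segmentation decoder.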