Emergent semantic segmentation: training-free dense-label-free extraction from vision-language models

From an enormous number of image-text pairs, large-scale vision-language models (VLMs) learn to implicitly associate image regions with words, which is vital for tasks such as image captioning and visual question answering. However, leveraging such pre-trained models for open-vocabulary semantic segmentation remains a challenge. In this thesis, we propose a simple yet extremely effective training-free technique, Plug-and-Play Open-Vocabulary Semantic Segmentation (PnP-OVSS), for this task. PnP-OVSS leverages a VLM with direct text-to-image cross-attention and an image-text matching loss to produce semantic segmentation. However, cross-attention alone tends to over-segment, whereas cross-attention plus GradCAM tends to under-segment. To alleviate this issue, we introduce Salience Dropout: by iteratively dropping patches that the model is most attentive to, we are able to better resolve the entire extent of the segmentation mask. PnP-OVSS does not require any neural network training and performs hyperparameter tuning without the need for any segmentation annotations, even for a validation set. PnP-OVSS demonstrates substantial improvements over comparable baselines (+29.4% on PASCAL VOC, +13.2% on PASCAL Context, +14.0% mIoU on MS COCO, +2.4% on COCO Stuff) and even outperforms most baselines that conduct additional network training on top of pretrained VLMs.
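The Salience Dropout loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `get_saliency` callable (standing in for the VLM's text-to-image cross-attention plus GradCAM scoring), the `rounds` and `drop_frac` parameters, and the mask-accumulation rule are all assumptions made for the sketch.

```python
import numpy as np

def salience_dropout(get_saliency, num_patches, rounds=3, drop_frac=0.2):
    """Illustrative sketch of iterative Salience Dropout.

    get_saliency: callable taking a boolean visibility mask over image
                  patches and returning one saliency score per patch
                  (hypothetically derived from cross-attention + GradCAM).
    Returns a boolean segmentation mask over patches.
    """
    visible = np.ones(num_patches, dtype=bool)      # patches still shown to the model
    accumulated = np.zeros(num_patches, dtype=float)
    k = max(1, int(drop_frac * num_patches))        # patches dropped per round

    for _ in range(rounds):
        scores = get_saliency(visible)
        scores = np.where(visible, scores, -np.inf)  # ignore already-dropped patches
        top = np.argsort(scores)[-k:]                # most-attended visible patches
        accumulated[top] = np.maximum(accumulated[top], scores[top])
        visible[top] = False                         # drop them so later rounds must
                                                     # attend to the rest of the object
        if not visible.any():
            break
    return accumulated > 0
```

The intent of the iteration is that once the most salient patches are hidden, the model's attention shifts to the remaining, less salient parts of the same object, so the accumulated mask covers the object's full extent rather than only its most distinctive region.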


Saved in:
Bibliographic Details
Main Author: Luo, Jiayun
Other Authors: Li Boyang
Format: Thesis-Master by Research
Language:English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175765
Institution: Nanyang Technological University
School: School of Computer Science and Engineering
Supervisor: Li Boyang (boyang.li@ntu.edu.sg)
Subjects: Computer and Information Science; Vision-language model; Open-vocabulary semantic segmentation
Degree: Master's degree
Deposited: 2024-05-06
Citation: Luo, J. (2024). Emergent semantic segmentation: training-free dense-label-free extraction from vision-language models. Master's thesis, Nanyang Technological University, Singapore.
DOI: 10.32657/10356/175765
License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Format: application/pdf
Collection: DR-NTU (NTU Library), Nanyang Technological University, Singapore