Learning transferable negative prompts for out-of-distribution detection

Existing prompt learning methods have shown certain capabilities in Out-of-Distribution (OOD) detection, but the lack of OOD images in the target dataset in their training can lead to mismatches between OOD images and In-Distribution (ID) categories, resulting in a high false positive rate. To address this issue, we introduce a novel OOD detection method, named ‘NegPrompt’, to learn a set of negative prompts, each representing a negative connotation of a given class label, for delineating the boundaries between ID and OOD images. It learns such negative prompts with ID data only, without any reliance on external outlier data. Further, current methods assume the availability of samples of all ID classes, rendering them ineffective in open-vocabulary learning scenarios where the inference stage can contain novel ID classes not present during training. In contrast, our learned negative prompts are transferable to novel class labels. Experiments on various ImageNet benchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based OOD detection methods and maintains a consistent lead in hard OOD detection in closed- and open-vocabulary classification scenarios.
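As a rough illustration of the scoring idea described in the abstract (not the authors' exact formulation), the sketch below computes an OOD score as the softmax mass an image embedding places on learned negative prompts versus the positive class prompts, assuming unit-normalised embeddings from a CLIP-like encoder. The function name, the toy 2-D embeddings, and the temperature value are all hypothetical.

```python
import math

def negprompt_ood_score(image_emb, pos_prompt_embs, neg_prompt_embs,
                        temperature=0.07):
    """Hypothetical NegPrompt-style score: the fraction of softmax mass
    the image places on negative prompts. A high value suggests OOD.
    Embeddings are assumed unit-normalised, so cosine similarity
    reduces to a plain dot product."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    sims = [dot(p, image_emb) / temperature
            for p in list(pos_prompt_embs) + list(neg_prompt_embs)]
    m = max(sims)                      # stabilise the softmax
    exps = [math.exp(s - m) for s in sims]
    return sum(exps[len(pos_prompt_embs):]) / sum(exps)

# Toy 2-D example: an image aligned with a positive (ID) prompt scores
# near 0; one aligned with a negative prompt scores near 1.
pos, neg = [[1.0, 0.0]], [[0.0, 1.0]]
print(negprompt_ood_score([1.0, 0.0], pos, neg))  # near 0 (ID)
print(negprompt_ood_score([0.0, 1.0], pos, neg))  # near 1 (OOD)
```

In practice such a score would be thresholded to flag OOD inputs; this sketch only conveys the intuition of using negative prompts as an explicit "not this class" region in the embedding space.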


Bibliographic Details
Main Authors: LI, Tianqi, PANG, Guansong, BAI, Xiao, MIAO, Wenjun, ZHENG, Jin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Out-of-Distribution detection; OOD detection; Open-vocabulary learning; Closed-vocabulary classification; Open-vocabulary classification; Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9759
https://ink.library.smu.edu.sg/context/sis_research/article/10759/viewcontent/2404.03248v1.pdf
Institution: Singapore Management University
Record Details
id: sg-smu-ink.sis_research-10759
record_format: dspace
Published online: 2024-06-01
Record updated: 2024-12-16
DOI: 10.1109/CVPR52733.2024.01665
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)