Long-tailed out-of-distribution detection via normalized outlier distribution adaptation


Bibliographic Details
Main Authors: MIAO, Wenjun; PANG, Guansong; ZHENG, Jin; BAI, Xiao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9877
https://ink.library.smu.edu.sg/context/sis_research/article/10877/viewcontent/10274_Long_Tailed_Out_of_Distr.pdf
Institution: Singapore Management University
Description
Abstract: One key challenge in Out-of-Distribution (OOD) detection is the absence of ground-truth OOD samples during training. One principled approach to address this issue is to use samples from external datasets as outliers (i.e., pseudo OOD samples) to train OOD detectors. However, we find empirically that the outlier samples often present a distribution shift compared to the true OOD samples, especially in Long-Tailed Recognition (LTR) scenarios, where ID classes are heavily imbalanced, i.e., the true OOD samples exhibit a very different probability distribution over the head and tail ID classes from the outliers. In this work, we propose a novel approach, namely normalized outlier distribution adaptation (AdaptOD), to tackle this distribution shift problem. One of its key components is dynamic outlier distribution adaptation, which effectively adapts a vanilla outlier distribution based on the outlier samples to the true OOD distribution by utilizing the OOD knowledge in the predicted OOD samples during inference. Further, to obtain a more reliable set of predicted OOD samples on long-tailed ID data, a novel dual-normalized energy loss is introduced in AdaptOD, which leverages class- and sample-wise normalized energy to enforce a more balanced prediction energy on imbalanced ID samples. This helps avoid bias toward the head samples and learn a substantially better vanilla outlier distribution than existing energy losses during training. It also eliminates the need to manually tune the sensitive margin hyperparameters in energy losses. Empirical results on three popular benchmarks for OOD detection in LTR show the superior performance of AdaptOD over state-of-the-art methods. Code is available at https://github.com/mala-lab/AdaptOD.
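
For orientation, below is a minimal PyTorch-style sketch of the kind of energy-based objective the abstract describes. This is not the authors' implementation (see the linked GitHub repository for that): the exact normalization forms, the function names, and the loss shape used here are illustrative assumptions, chosen only to make the two ideas concrete, namely class-wise normalization of energies on imbalanced ID data and sample-wise normalization that removes the margin hyperparameter.

```python
# Illustrative sketch ONLY -- not the official AdaptOD code
# (https://github.com/mala-lab/AdaptOD). The normalization forms below are
# assumed, simplified variants of the "dual-normalized energy" idea.
import torch
import torch.nn.functional as F

def energy(logits: torch.Tensor) -> torch.Tensor:
    # Standard free-energy OOD score: E(x) = -logsumexp(f(x)).
    # Lower energy is conventionally assigned to in-distribution samples.
    return -torch.logsumexp(logits, dim=1)

def dual_normalized_energy_loss(id_logits, id_labels, outlier_logits, num_classes):
    """Assumed form of a dual-normalized energy objective:
    (1) class-wise normalization, so head classes cannot dominate the loss
        on long-tailed ID data; and
    (2) sample-wise normalization by the batch energy scale, which removes
        the need for a hand-tuned margin hyperparameter."""
    e_id = energy(id_logits)        # shape (B_id,)
    e_out = energy(outlier_logits)  # shape (B_out,)

    # Class-wise normalization (assumption): rescale each ID sample's energy
    # by the mean energy magnitude of its class within the batch.
    counts = torch.bincount(id_labels, minlength=num_classes).clamp(min=1).float()
    class_sum = torch.zeros(num_classes, device=e_id.device).index_add_(
        0, id_labels, e_id.abs())
    class_scale = class_sum / counts
    e_id = e_id / (class_scale[id_labels] + 1e-6)

    # Sample-wise normalization (assumption): rescale all energies by the
    # detached batch-level magnitude, making the loss margin-free.
    scale = torch.cat([e_id, e_out]).abs().mean().detach() + 1e-6
    e_id, e_out = e_id / scale, e_out / scale

    # Push ID energies down and outlier energies up.
    return F.softplus(e_id).mean() + F.softplus(-e_out).mean()
```

The abstract's second component, dynamic outlier distribution adaptation, operates at inference time by updating the vanilla outlier distribution with predicted OOD samples; that logic is omitted from the sketch above.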