Extracting class activation maps from non-discriminative features as well
Extracting class activation maps (CAM) from a classification model often results in poor coverage of foreground objects, i.e., only the discriminative region (e.g., the “head” of a “sheep”) is recognized and the rest (e.g., the “leg” of the “sheep”) is mistakenly treated as background. The crux is that the classifier weights used to compute CAM capture only the discriminative features of objects. We tackle this by introducing a new computation method for CAM that explicitly captures non-discriminative features as well, thereby expanding CAM to cover whole objects. Specifically, we omit the last pooling layer of the classification model and perform clustering on all local features of an object class, where “local” means “at a spatial pixel position”. We call the resultant K cluster centers local prototypes; they represent local semantics such as the “head”, “leg”, and “body” of a “sheep”. Given a new image of the class, we compare its unpooled features to every prototype, derive K similarity matrices, and then aggregate them into a heatmap (i.e., our CAM). Our CAM thus captures all local features of the class without discrimination. We evaluate it on the challenging task of weakly-supervised semantic segmentation (WSSS) and plug it into multiple state-of-the-art WSSS methods, such as MCTformer and AMN, by simply replacing their original CAM with ours. Extensive experiments on standard WSSS benchmarks (PASCAL VOC and MS COCO) show the superiority of our method: consistent improvements with little computational overhead.
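To make the pipeline described in the abstract concrete, here is a minimal NumPy/scikit-learn sketch of the prototype-based CAM idea: cluster unpooled local features of a class into K local prototypes, score a new image's local features against every prototype, and aggregate the K similarity maps into one heatmap. This is an illustrative sketch, not the authors' released implementation; the cluster count K, cosine similarity, and max-aggregation over prototypes are assumptions made here for brevity.

```python
# Sketch of prototype-based CAM extraction (assumptions: K=8, cosine similarity,
# max-aggregation over prototypes, scikit-learn KMeans as the clustering routine).
import numpy as np
from sklearn.cluster import KMeans


def build_local_prototypes(class_feature_maps, k=8):
    """Cluster all local (per-pixel) features of one class into K prototypes.

    class_feature_maps: list of arrays shaped (C, H, W), taken from the
    classification backbone with the final pooling layer omitted.
    Returns an array of shape (K, C).
    """
    # Flatten every spatial position into one bag of C-dimensional local features.
    local_feats = np.concatenate(
        [f.reshape(f.shape[0], -1).T for f in class_feature_maps], axis=0
    )
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(local_feats)
    return km.cluster_centers_  # (K, C) local prototypes


def prototype_cam(feature_map, prototypes):
    """Compare the unpooled features of a new image to every prototype and
    aggregate the K similarity maps into one CAM-like heatmap of shape (H, W)."""
    c, h, w = feature_map.shape
    feats = feature_map.reshape(c, -1).T                                   # (H*W, C)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    protos = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = feats @ protos.T                                                # (H*W, K)
    heat = sims.max(axis=1).reshape(h, w)                                  # aggregate over prototypes
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)          # normalize to [0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in feature maps; in practice these come from the classification backbone.
    train_maps = [rng.normal(size=(256, 32, 32)) for _ in range(10)]
    protos = build_local_prototypes(train_maps, k=8)
    cam = prototype_cam(rng.normal(size=(256, 32, 32)), protos)
    print(cam.shape, float(cam.min()), float(cam.max()))
```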
Main Authors: CHEN, Zhaozheng; SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/8056
https://ink.library.smu.edu.sg/context/sis_research/article/9059/viewcontent/Chen_Extracting_Class_Activation_Maps_From_Non_Discriminative_Features_As_Well_CVPR_2023_paper.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-9059
record_format: dspace
spelling: sg-smu-ink.sis_research-9059 2023-09-07T08:07:23Z
Extracting class activation maps from non-discriminative features as well
CHEN, Zhaozheng; SUN, Qianru
2023-06-01T07:00:00Z; text; application/pdf
https://ink.library.smu.edu.sg/sis_research/8056
https://ink.library.smu.edu.sg/context/sis_research/article/9059/viewcontent/Chen_Extracting_Class_Activation_Maps_From_Non_Discriminative_Features_As_Well_CVPR_2023_paper.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems; eng
Institutional Knowledge at Singapore Management University
Graphics and Human Computer Interfaces
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Graphics and Human Computer Interfaces
description: Extracting class activation maps (CAM) from a classification model often results in poor coverage of foreground objects, i.e., only the discriminative region (e.g., the “head” of a “sheep”) is recognized and the rest (e.g., the “leg” of the “sheep”) is mistakenly treated as background. The crux is that the classifier weights used to compute CAM capture only the discriminative features of objects. We tackle this by introducing a new computation method for CAM that explicitly captures non-discriminative features as well, thereby expanding CAM to cover whole objects. Specifically, we omit the last pooling layer of the classification model and perform clustering on all local features of an object class, where “local” means “at a spatial pixel position”. We call the resultant K cluster centers local prototypes; they represent local semantics such as the “head”, “leg”, and “body” of a “sheep”. Given a new image of the class, we compare its unpooled features to every prototype, derive K similarity matrices, and then aggregate them into a heatmap (i.e., our CAM). Our CAM thus captures all local features of the class without discrimination. We evaluate it on the challenging task of weakly-supervised semantic segmentation (WSSS) and plug it into multiple state-of-the-art WSSS methods, such as MCTformer and AMN, by simply replacing their original CAM with ours. Extensive experiments on standard WSSS benchmarks (PASCAL VOC and MS COCO) show the superiority of our method: consistent improvements with little computational overhead.
format: text
author: CHEN, Zhaozheng; SUN, Qianru
title: Extracting class activation maps from non-discriminative features as well
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2023
url: https://ink.library.smu.edu.sg/sis_research/8056
https://ink.library.smu.edu.sg/context/sis_research/article/9059/viewcontent/Chen_Extracting_Class_Activation_Maps_From_Non_Discriminative_Features_As_Well_CVPR_2023_paper.pdf