Learning interpretable concept groups in CNNs
We propose a novel training methodology, Concept Group Learning (CGL), that encourages the training of interpretable CNN filters by partitioning the filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions within a given layer. We additionally use a regularizer that encourages a sparse weighting of the concept groups in each layer, so that a few concept groups can carry greater importance than the rest. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned with CGL against filters learned without it, and find that CGL activation regions concentrate more strongly around semantically relevant features.
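The abstract describes two regularizers: one pushing filters within a concept group toward similar spatial activation patterns, and one sparsifying per-group importance weights. The sketch below is a minimal, hypothetical PyTorch rendering of those two ideas for readers who want a concrete picture. It is not the authors' implementation (see the linked PDF for that), and every name in it (`concept_group_penalty`, `group_sparsity_penalty`, `num_groups`) is an illustrative assumption.

```python
# Minimal sketch of the two regularizers described in the abstract.
# NOT the paper's implementation; the exact losses, grouping scheme, and
# weighting mechanism in CGL may differ. All names here are illustrative.
import torch


def concept_group_penalty(activations: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Push filters in the same group to be active in similar image regions.

    activations: (batch, channels, height, width) feature maps of one conv
    layer, with channels assumed to be partitioned into `num_groups`
    contiguous groups.
    """
    b, c, h, w = activations.shape
    assert c % num_groups == 0, "channels must split evenly into groups"
    grouped = activations.view(b, num_groups, c // num_groups, h, w)
    # Mean spatial activation map of each group acts as its consensus.
    consensus = grouped.mean(dim=2, keepdim=True)
    # Penalize each filter's deviation from its group's consensus map.
    return ((grouped - consensus) ** 2).mean()


def group_sparsity_penalty(group_weights: torch.Tensor) -> torch.Tensor:
    """L1 penalty on per-group importance weights, so few groups dominate."""
    return group_weights.abs().sum()
```

In training, such terms would simply be added to the task loss with tuning coefficients, e.g. `loss = task_loss + lam1 * concept_group_penalty(feats, num_groups) + lam2 * group_sparsity_penalty(weights)` for each regularized layer.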
Main Authors: VARSHNEYA, Saurabh; LEDENT, Antoine; VANDERMEULEN, Rob; LEI, Yunwen; ENDERS, Matthias; BORTH, Damian; KLOFT, Marius
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Convolutional Neural Networks; Interpretability; Computer Vision; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/7206
https://ink.library.smu.edu.sg/context/sis_research/article/8209/viewcontent/Interpretable_CNNs.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-8209
record_format: dspace
date: 2021-08-01T07:00:00Z
doi: 10.24963/ijcai.2021/147
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
collection: Research Collection School Of Computing and Information Systems
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Convolutional Neural Networks; Interpretability; Computer Vision; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
format: text (application/pdf)
author: VARSHNEYA, Saurabh; LEDENT, Antoine; VANDERMEULEN, Rob; LEI, Yunwen; ENDERS, Matthias; BORTH, Damian; KLOFT, Marius
author_sort: VARSHNEYA, Saurabh
title: Learning interpretable concept groups in CNNs
title_sort: learning interpretable concept groups in cnns
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2021
url: https://ink.library.smu.edu.sg/sis_research/7206
https://ink.library.smu.edu.sg/context/sis_research/article/8209/viewcontent/Interpretable_CNNs.pdf
_version_: 1770576269824491520