On the use of XAI for CNN model interpretation: a remote sensing case study

Bibliographic Details
Main Authors: Moradi, Loghman, Kalantar, Bahareh, Zaryabi, Erfan Hasanpour, Abdul Halin, Alfian, Ueda, Naonori
Format: Conference or Workshop Item
Published: IEEE 2022
Online Access:http://psasir.upm.edu.my/id/eprint/37761/
https://ieeexplore.ieee.org/document/10089337
Institution: Universiti Putra Malaysia
Description
Summary: In this paper, we investigate the use of Explainable Artificial Intelligence (XAI) methods for interpreting two Convolutional Neural Network (CNN) classifiers in the field of remote sensing (RS). Specifically, the SegNet and Unet architectures for RS building information extraction and segmentation are evaluated using a comprehensive array of primary- and layer-attribution XAI methods. The attribution methods are quantitatively evaluated using the sensitivity metric. Based on the visualizations of the different XAI methods, Deconvolution and GradCAM produce reliable results in many of the study areas. Moreover, these methods accurately interpret both Unet's and SegNet's decisions and reveal the internal mechanisms of both models (confirmed by their low sensitivity scores). Overall, no single method stood out as the best.
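The GradCAM layer-attribution method named in the summary can be sketched as follows. This is a minimal NumPy illustration of the standard Grad-CAM formula (globally average-pooled gradients as channel weights, followed by a ReLU) applied to dummy feature maps and gradients; the array shapes and random values are assumptions for illustration, not data or code from the paper:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM: weight each feature map by its globally averaged
    gradient, sum the weighted maps, and keep positive evidence only.

    feature_maps: (K, H, W) activations of a chosen conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps
    returns:      (H, W) non-negative localization map
    """
    # alpha_k: global-average-pool each gradient map over spatial dims
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # weighted linear combination of the feature maps
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only features with positive influence on the class
    return np.maximum(cam, 0.0)

# Dummy layer activations and gradients (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16, 16))    # 8 feature maps, 16x16 each
dA = rng.standard_normal((8, 16, 16))   # matching gradient maps
cam = grad_cam(A, dA)
print(cam.shape)         # (16, 16)
print((cam >= 0).all())  # True
```

In practice the feature maps and gradients would come from a hook on a convolutional layer of SegNet or Unet; the sensitivity metric the paper uses would then measure how much such an attribution map changes under small input perturbations.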