Explainable graph classification with deep learning models
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/148008
Institution: Nanyang Technological University
Summary: Graph Classification is a promising area of deep learning, but it has a significant drawback: to trust a model's predicted label for an input graph, we need to understand the reasons behind that prediction, yet Graph Classification models do not supply these reasons. Hence, Graph Classification interpretability methods were conceived. To analyse a new interpretability method, GNNExplainer, on a comparative basis with methods established in our main reference (saliency, also known as CG, Grad-CAM and DeepLIFT), we develop a bridging algorithm and compute a node attribution score for each node in a test graph. The scores of all the nodes in the test graph dataset are then used to produce quantitative metrics (fidelity, contrastivity and sparsity) for comparison.
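The project's own code is not included in this record, but a minimal sketch can illustrate how per-node attribution scores might be turned into two of the metrics named above. The formulations below (sparsity as the fraction of non-salient nodes; contrastivity as a normalised Hamming distance between binarised attribution maps of two classes) are assumptions in the style of the graph-CNN explainability literature, not necessarily the exact definitions used in this thesis, and the threshold and toy scores are purely illustrative.

```python
# Illustrative sketch only: assumed metric definitions, not the thesis's code.
import numpy as np

def binarise(scores, threshold=0.5):
    """Mark nodes whose min-max-normalised attribution exceeds a threshold."""
    s = np.asarray(scores, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # scale to [0, 1]
    return s > threshold

def sparsity(scores, threshold=0.5):
    """Fraction of nodes NOT deemed salient (higher = sparser explanation)."""
    mask = binarise(scores, threshold)
    return 1.0 - mask.sum() / mask.size

def contrastivity(scores_a, scores_b, threshold=0.5):
    """Hamming distance between two classes' salient-node masks,
    normalised by the size of their union (higher = more class-specific)."""
    a = binarise(scores_a, threshold)
    b = binarise(scores_b, threshold)
    union = np.logical_or(a, b).sum()
    return np.logical_xor(a, b).sum() / max(union, 1)

# Toy example: a 6-node graph with attribution maps for two candidate classes.
cls0 = [0.9, 0.8, 0.1, 0.0, 0.2, 0.1]
cls1 = [0.1, 0.2, 0.9, 0.7, 0.1, 0.0]
print(sparsity(cls0))             # most nodes are non-salient
print(contrastivity(cls0, cls1))  # the two maps highlight disjoint nodes
```

Fidelity is omitted here because it requires re-running the trained classifier with salient nodes occluded, which depends on the specific model.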