## Details

| Original language | English |
|---|---|
| Pages (from–to) | 8687-8698 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Knowledge and Data Engineering |
| Volume | 35 |
| Issue number | 8 |
| Publication status | Published - 24 Aug 2022 |

## Abstract

With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that previous explanation generation approaches, which maximize the mutual information between the label distribution produced by the model and the explanation, are restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric *RDT-Fidelity* as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from *rate-distortion theory* that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, more stable, and more faithful explanations than existing graph neural network explanation approaches.
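The fidelity idea described in the abstract can be sketched numerically: an explanation (a mask over the input) is faithful if the model's prediction survives when the *unselected* entries are replaced by noise. Below is a minimal, illustrative Monte-Carlo estimate of such a metric for a generic model callable; the function name `rdt_fidelity`, the toy model, and the choice of Gaussian noise for masked-out features are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def rdt_fidelity(model, x, mask, n_samples=100, rng=None):
    """Estimate fidelity of an explanation mask: the fraction of perturbed
    inputs whose prediction matches the original one. Entries with mask == 1
    are kept fixed; masked-out entries are redrawn from standard normal noise."""
    rng = np.random.default_rng(rng)
    original_pred = model(x)
    matches = 0
    for _ in range(n_samples):
        noise = rng.standard_normal(x.shape)
        x_pert = mask * x + (1 - mask) * noise  # keep selected, perturb the rest
        matches += int(model(x_pert) == original_pred)
    return matches / n_samples

# Toy model: predicts class 1 iff the first feature is positive.
toy_model = lambda x: int(x[0] > 0)

x = np.array([5.0, -1.0, 2.0])
full_mask = np.array([1.0, 0.0, 0.0])   # keep only the decisive feature
empty_mask = np.zeros(3)                # keep nothing

print(rdt_fidelity(toy_model, x, full_mask))   # 1.0: the mask is faithful
print(rdt_fidelity(toy_model, x, empty_mask))  # ~0.5: prediction is random
```

A sparse mask that still achieves high fidelity is, in this sense, both a valid and a compact explanation, which matches the sparsity/validity trade-off the abstract emphasizes.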

## ASJC Scopus Subject Areas

- Computer Science (all): **Information Systems**
- Computer Science (all): **Computer Science Applications**
- Computer Science (all): **Computational Theory and Mathematics**

## Cite


**ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks.** / Funke, Thorben; Khosla, Megha; Rathee, Mandeep et al.

In: *IEEE Transactions on Knowledge and Data Engineering*, Vol. 35, No. 8, 24.08.2022, pp. 8687-8698. https://doi.org/10.48550/arXiv.2105.08621, https://doi.org/10.1109/TKDE.2022.3201170

Publication: Contribution to journal › Article › Research › Peer review


```
TY - JOUR
T1 - ZORRO
T2 - Valid, Sparse, and Stable Explanations in Graph Neural Networks
AU - Funke, Thorben
AU - Khosla, Megha
AU - Rathee, Mandeep
AU - Anand, Avishek
N1 - Funding Information: This work was partially supported in part by the project "CampaNeo" under Grant ID 01MD19007
PY - 2022/8/24
Y1 - 2022/8/24
N2 - With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, stable, and more faithful explanations than existing graph neural network explanation approaches.
AB - With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, stable, and more faithful explanations than existing graph neural network explanation approaches.
KW - Computational modeling
KW - Data models
KW - Explainability
KW - Feature extraction
KW - Graph Neural Networks
KW - Graph neural networks
KW - Interpretability
KW - Rate-distortion
KW - Stability analysis
KW - Task analysis
KW - graph neural networks
KW - interpretability
UR - http://www.scopus.com/inward/record.url?scp=85137574213&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2105.08621
DO - 10.48550/arXiv.2105.08621
M3 - Article
AN - SCOPUS:85137574213
VL - 35
SP - 8687
EP - 8698
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
SN - 1041-4347
IS - 8
ER -
```