Towards Self-Interpretable Graph-Level Anomaly Detection

File version

Version of Record (VoR)

Author(s)
Liu, Y
Ding, K
Lu, Q
Li, F
Zhang, LY
Pan, S
Griffith University Author(s)
Editor(s)
Oh, A
Naumann, T
Globerson, A
Saenko, K
Hardt, M
Levine, S

Date
2023
Size
File type(s)
Location

New Orleans, USA

License
Abstract

Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable dissimilarity compared to the majority in a collection. However, current works primarily focus on evaluating graph-level abnormality while failing to provide meaningful explanations for the predictions, which largely limits their reliability and application scope. In this paper, we investigate a new challenging problem, explainable GLAD, where the learning objective is to predict the abnormality of each graph sample together with a corresponding explanation, i.e., the vital subgraph that leads to the prediction. To address this challenging problem, we propose a Self-Interpretable Graph aNomaly dETection model (SIGNET for short) that detects anomalous graphs and generates informative explanations simultaneously. Specifically, we first introduce the multi-view subgraph information bottleneck (MSIB) framework, which serves as the design basis of our self-interpretable GLAD approach. In this way, SIGNET is able not only to measure the abnormality of each graph based on cross-view mutual information but also to provide informative graph rationales by extracting bottleneck subgraphs from the input graph and its dual hypergraph in a self-supervised way. Extensive experiments on 16 datasets demonstrate the anomaly detection capability and self-interpretability of SIGNET.
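The abstract describes scoring each graph by cross-view mutual information between the input graph and its dual-hypergraph view. The snippet below is a minimal, hypothetical sketch of that scoring principle only, not the authors' implementation: two toy encoders (ViewEncoder, a stand-in for the paper's graph and hypergraph encoders) are trained with an InfoNCE-style lower bound as a generic proxy for cross-view mutual information, and each graph's anomaly score is read off as its negative cross-view agreement. All names, dimensions, and the random stand-in features are assumptions for illustration.

```python
# Hypothetical sketch of cross-view mutual-information-based anomaly scoring.
# Not the SIGNET implementation; InfoNCE is used here as a generic MI proxy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """Toy encoder: mean-pools node features, then applies an MLP."""
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: [batch, num_nodes, in_dim] -> graph embedding [batch, hid_dim]
        return self.mlp(node_feats.mean(dim=1))

def infonce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2):
    """InfoNCE lower bound on cross-view MI; also returns per-graph agreement."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                 # [B, B] cross-view similarities
    labels = torch.arange(z1.size(0))          # matching pairs lie on the diagonal
    loss = F.cross_entropy(logits, labels)     # maximise diagonal agreement
    return loss, logits.diag()                 # per-graph cross-view agreement

# Usage sketch with random stand-in features for the two views.
enc_g, enc_h = ViewEncoder(in_dim=16), ViewEncoder(in_dim=16)
opt = torch.optim.Adam(list(enc_g.parameters()) + list(enc_h.parameters()), lr=1e-3)
graph_view = torch.randn(32, 10, 16)   # stand-in for node features per graph
hyper_view = torch.randn(32, 10, 16)   # stand-in for the dual-hypergraph view

for _ in range(10):
    loss, _ = infonce(enc_g(graph_view), enc_h(hyper_view))
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, agreement = infonce(enc_g(graph_view), enc_h(hyper_view))
    anomaly_scores = -agreement            # low cross-view agreement => more anomalous
```

The key design choice illustrated here is that no anomaly labels are needed: the score falls out of how well the two views of the same graph agree after self-supervised training, which mirrors the cross-view formulation stated in the abstract.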

Journal Title
Conference Title

Advances in Neural Information Processing Systems 36 (NeurIPS 2023)

Book Title
Edition
Volume

36

Issue
Thesis Type
Degree Program
School
Publisher link
DOI
Patent number
Funder(s)
Grant identifier(s)
Rights Statement

This work is covered by copyright. You must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a specified licence, refer to the licence for details of permitted re-use. If you believe that this work infringes copyright please make a copyright takedown request using the form at https://www.griffith.edu.au/copyright-matters.

Item Access Status
Note
Access the data
Related item(s)
Subject

Artificial intelligence

Machine learning

Persistent link to this record
Citation

Liu, Y; Ding, K; Lu, Q; Li, F; Zhang, LY; Pan, S, Towards Self-Interpretable Graph-Level Anomaly Detection, Advances in Neural Information Processing Systems, 2023, 36