Graph Sparsification via Mixture of Graphs
File version
Accepted Manuscript (AM)
Author(s)
Zhang, G
Sun, X
Yue, Y
Jiang, C
Wang, K
Chen, T
Pan, S
Location
Singapore
Abstract
Graph Neural Networks (GNNs) have demonstrated superior performance across various graph learning tasks but face significant computational challenges when applied to large-scale graphs. One effective approach to mitigate these challenges is graph sparsification, which involves removing non-essential edges to reduce computational overhead. However, previous graph sparsification methods often rely on a single global sparsity setting and uniform pruning criteria, failing to provide customized sparsification schemes for each node's complex local context. In this paper, we introduce Mixture-of-Graphs (MoG), leveraging the concept of Mixture-of-Experts (MoE), to dynamically select tailored pruning solutions for each node. Specifically, MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node. Subsequently, MoG performs a mixture of the sparse graphs produced by different experts on the Grassmann manifold to derive an optimal sparse graph. One notable property of MoG is its entirely local nature, as it depends on the specific circumstances of each individual node. Extensive experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNN backbones, demonstrate that MoG (I) identifies subgraphs at higher sparsity levels (8.67% ∼ 50.85%), with performance equal to or better than the dense graph, (II) achieves 1.47-2.62× speedup in GNN inference with negligible performance drop, and (III) boosts “top-student” GNN performance (1.02% ↑ on RevGNN+OGBN-PROTEINS and 1.74% ↑ on DeeperGCN+OGBG-PPA). The source code is available at https://github.com/yanweiyue/MoG.
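The per-node expert routing described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the gating network, the actual sparsifier experts, and the Grassmann-manifold mixing step are all simplified away. Here each hypothetical "expert" is just a keep-ratio, a linear gate picks one expert per node, and a feature-similarity criterion stands in for the paper's pruning criteria.

```python
import numpy as np

def sparsify_per_node(adj, node_feats, gate_w, keep_ratios):
    """Toy MoG-style sparsification: each node keeps a fraction of its
    incident edges chosen by the expert a linear gate assigns to it.

    adj         : (n, n) symmetric 0/1 adjacency matrix
    node_feats  : (n, d) node features used by the gate
    gate_w      : (d, k) gating weights, one column per expert
    keep_ratios : list of k floats, the keep-ratio of each "expert"
    """
    n = adj.shape[0]
    scores = node_feats @ gate_w           # (n, k) gating logits
    choice = scores.argmax(axis=1)         # top-1 expert per node
    kept = np.zeros_like(adj)
    for v in range(n):
        nbrs = np.flatnonzero(adj[v])
        if nbrs.size == 0:
            continue
        ratio = keep_ratios[choice[v]]     # this node's sparsity level
        m = max(1, int(round(ratio * nbrs.size)))
        # stand-in pruning criterion: keep neighbours most similar to v
        sim = node_feats[nbrs] @ node_feats[v]
        kept[v, nbrs[np.argsort(-sim)[:m]]] = 1
    # symmetrise: an edge survives if either endpoint kept it
    return np.maximum(kept, kept.T)
```

Because routing is purely a function of each node's own features and neighbourhood, the resulting sparsity pattern is local, mirroring the property the abstract highlights.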
Conference Title
13th International Conference on Learning Representations (ICLR 2025)
Rights Statement
This resource is distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation
Zhang, G; Sun, X; Yue, Y; Jiang, C; Wang, K; Chen, T; Pan, S, Graph Sparsification via Mixture of Graphs, 13th International Conference on Learning Representations (ICLR 2025), 2025, pp. 12468-12496