LightHGNN: Distilling Hypergraph Neural Networks into MLPs for $100\times$ Faster Inference

Yifan Feng, Yihe Luo, Shihui Ying, Yue Gao


Hypergraph Neural Networks (HGNNs) have recently attracted much attention and exhibited satisfactory performance due to their superiority in modeling high-order correlations. However, the high-order modeling capability of the hypergraph also brings increased computational complexity, which hinders practical industrial deployment. In practice, we find that one key barrier to the efficient deployment of HGNNs is the high-order structural dependency during inference. In this paper, we propose to bridge the gap between HGNNs and inference-efficient Multi-Layer Perceptrons (MLPs) to eliminate the hypergraph dependency of HGNNs, thereby reducing computational complexity and improving inference speed. Experiments on eight hypergraph datasets demonstrate that, even without hypergraph dependency, the proposed LightHGNNs can still achieve competitive or even better performance than HGNNs and outperform vanilla MLPs by 16.3 on average. Extensive experiments on three graph datasets further show that our LightHGNNs achieve the best average performance among all compared methods. Experiments on synthetic hypergraphs with 55,000 vertices indicate that LightHGNNs can run $100\times$ faster than HGNNs, demonstrating their suitability for latency-sensitive deployments.

Compared to graphs with pair-wise correlations, a hypergraph is composed of degree-free hyperedges, which have an inherently superior ability to represent more complex high-order correlations. However, for large-scale industrial applications, especially in big-data, small-memory, and high-speed environments, Multi-Layer Perceptrons (MLPs) remain the primary workhorse. The main reason for this academic-industrial gap for HGNNs is their dependence on the hypergraph structure during inference, which requires large amounts of memory in practice.
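To make the distillation idea concrete, the sketch below trains a plain MLP student against soft labels produced by a pre-trained HGNN teacher, so that inference needs only vertex features and no hypergraph structure. This is a minimal soft-label knowledge-distillation setup, not the paper's exact recipe: the hidden sizes, the temperature `tau`, the loss weighting `lam`, and the helper `distill_step` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPStudent(nn.Module):
    """Structure-free student: consumes only the vertex feature matrix X."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def distill_step(student, X, y_true, teacher_logits, train_mask,
                 optimizer, tau=1.0, lam=0.5):
    """One optimization step: cross-entropy on labeled vertices plus
    KL divergence to the teacher's soft predictions on all vertices."""
    optimizer.zero_grad()
    logits = student(X)
    ce = F.cross_entropy(logits[train_mask], y_true[train_mask])
    kd = F.kl_div(
        F.log_softmax(logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2
    loss = lam * ce + (1.0 - lam) * kd
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: teacher_logits ([N, C]) are computed once, offline, by the
# HGNN using the hypergraph; the student never touches the incidence
# structure, so deployment-time inference is a pure MLP forward pass on
# the vertex features X ([N, F]).
```

The key design point this illustrates is that the hypergraph is consulted only during training (through the teacher's predictions), which is what removes the high-order structural dependency at inference time.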