A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Neural Networks

Biswadeep Chakraborty, Harshit Kumar, Saibal Mukhopadhyay

arXiv.org Artificial Intelligence 

Graph Neural Networks (GNNs) Wu et al. [2020] have emerged as an important component in contemporary machine learning, excelling in tasks that require the analysis of graph-structured data. Their capacity to model complex relationships between nodes and edges has driven their widespread application in fields ranging from molecular property prediction Gilmer et al. [2017], Reiser et al. [2022], Gasteiger et al. [2021] to social network analysis Kipf and Welling [2017], Fan et al. [2019] and recommendation systems Ying et al. [2018].

However, one significant challenge that GNNs face is the phenomenon known as oversmoothing. As the depth of the GNN increases, node representations tend to homogenize, leading to a decline in the network's ability to differentiate between nodes, ultimately impairing performance Li et al. [2018]. Oversmoothing in GNNs has been extensively studied, with early works such as Li et al. [2018] identifying it as a critical issue in deep architectures like Graph Convolutional Networks (GCNs). Subsequent theoretical analyses Oono and Suzuki [2020], Cai and Wang [2020], Keriven [2022], Chen et al. [2020], Xu et al. [2019] have confirmed that oversmoothing is a fundamental problem in message-passing architectures, where repeated aggregation leads to the homogenization of node features.

To counteract this, various strategies have been proposed, such as residual connections and skip connections Li et al. [2019], Xu et al. [2018], normalization methods Ba et al. [2016], Ioffe and Szegedy [2015], Zhou et al. [2020], and attention mechanisms Velickovic et al. [2018]. However, these approaches primarily involve architectural modifications that do not fundamentally address the propagation dynamics responsible for oversmoothing.
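The homogenization effect described above can be illustrated with a minimal sketch: ignoring learned weights and nonlinearities (a simplification in the spirit of the linear analysis in Li et al. [2018]), each GCN-style layer multiplies the feature matrix by a normalized propagation matrix, and repeated multiplication drives all node representations toward a common value. The graph, feature dimensions, and `feature_spread` helper below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small connected, undirected toy graph (5 nodes).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Add self-loops and row-normalize: one random-walk propagation step
# X <- A_norm @ X averages each node's features with its neighbors'.
A_hat = A + np.eye(5)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)

X = rng.standard_normal((5, 3))  # random initial node features

def feature_spread(X):
    """Mean pairwise Euclidean distance between node representations."""
    diffs = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

spreads = []
for layer in range(30):
    spreads.append(feature_spread(X))
    X = A_norm @ X  # one (weightless) message-passing layer

# The spread decays geometrically toward zero: after ~30 layers the
# node representations are nearly indistinguishable (oversmoothing).
print(f"spread at layer 0:  {spreads[0]:.4f}")
print(f"spread at layer 29: {spreads[-1]:.6f}")
```

The decay rate is governed by the second-largest eigenvalue of the propagation matrix, which is why strategies that only add architectural patches (residuals, normalization) do not change the underlying dynamics.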