Biologically Plausible Brain Graph Transformer

Ciyuan Peng, Yuelong Huang, Qichao Dong, Shuo Yu, Feng Xia, Chengqi Zhang, Yaochu Jin

arXiv.org Artificial Intelligence 

State-of-the-art brain graph analysis methods fail to fully encode the small-world architecture of brain graphs (characterized by the presence of hubs and functional modules), and therefore lack biological plausibility to some extent. This limitation hinders their ability to accurately represent the brain's structural and functional properties, thereby restricting the effectiveness of machine learning models in tasks such as brain disorder detection. In this work, we propose a novel Biologically Plausible Brain Graph Transformer (BioBGT) that encodes the small-world architecture inherent in brain graphs. Specifically, we present a network entanglement-based node importance encoding technique that captures the structural importance of nodes in global information propagation during brain graph communication, highlighting the biological properties of the brain structure. Furthermore, we introduce a functional module-aware self-attention to preserve the functional segregation and integration characteristics of brain graphs in the learned representations.

Figure 1: Small-world architecture of brain graphs. (a) Hubs play essential roles in the brain. (b) Functional modules in the brain.

One of the most important characteristics of brain graphs is their small-world architecture, with scientific evidence supporting the presence of hubs and functional modules in brain graphs (Liao et al., 2017; Swanson et al., 2024). First, it has been demonstrated that nodes in brain graphs differ substantially in importance, with certain nodes playing more central roles in information propagation (Lynn & Bassett, 2019; Betzel et al., 2024). These nodes are perceived as hubs, as shown in Figure 1 (a) (the visualization is based on findings by Seguin et al. (2023)); they are usually highly connected so as to support efficient communication within the brain.
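To make the hub notion concrete, the following sketch builds a Watts–Strogatz small-world graph with networkx and ranks nodes by betweenness centrality as a simple hub proxy. This is a generic illustration of the small-world/hub concepts, not BioBGT's entanglement-based importance encoding.

```python
import networkx as nx

# Illustrative sketch (not the paper's encoding technique): a small-world
# graph combines relatively high clustering with short average path lengths,
# and its hubs can be surfaced with a standard centrality measure.
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

# Small-world signature: clustering stays high while paths stay short.
clustering = nx.average_clustering(G)
path_len = nx.average_shortest_path_length(G)

# A simple hub proxy: the nodes through which the most shortest
# communication paths pass (betweenness centrality). The paper instead
# uses a network entanglement-based importance measure.
bc = nx.betweenness_centrality(G)
hubs = sorted(bc, key=bc.get, reverse=True)[:5]
print(f"clustering={clustering:.3f}, avg path length={path_len:.2f}, hubs={hubs}")
```

Betweenness is chosen here purely because it directly operationalizes "central role in information propagation"; degree or eigenvector centrality would serve equally well as a hub proxy in this toy setting.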
Second, the human brain consists of various functional modules (e.g., the visual cortex), where ROIs within the same module exhibit high functional coherence (strong connections with high temporal correlations), termed functional integration, while ROIs from different modules show lower functional coherence (weaker connections), termed functional segregation (Rubinov & Sporns, 2010; Seguin et al., 2022). Brain graphs are therefore characterized by community structure that reflects these functional modules. Given the significant ability of graph transformers to capture interactions between nodes (Ma et al., 2023a; Shehzad et al., 2024; Yi et al., 2024), Transformer-based brain graph learning methods have gained prominence (Kan et al., 2022; Bannadabhavi et al., 2023). Our code is available at https://github.com/pcyyyy/BioBGT.
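The module-aware attention idea can be caricatured in plain NumPy: attention logits receive a bias that favors intra-module ROI pairs (integration) over inter-module pairs (segregation). The module labels and the additive bias form below are illustrative assumptions, not the paper's functional module-aware self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 8, 16                                   # 8 ROIs, 16-dim features
X = rng.standard_normal((n, d))
modules = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical module labels

# Plain scaled dot-product self-attention scores.
scores = X @ X.T / np.sqrt(d)

# Module-aware bias: boost intra-module pairs (functional integration),
# penalize inter-module pairs (functional segregation). The +/-1 values
# are an illustrative assumption, not the paper's formulation.
same = modules[:, None] == modules[None, :]
bias = np.where(same, 1.0, -1.0)
attn = softmax(scores + bias, axis=-1)

# Average attention mass each ROI places within its own module.
intra = attn[same].sum() / n
print(f"average intra-module attention mass: {intra:.3f}")
```

With the bias applied, attention mass concentrates within modules, mimicking how ROIs in the same functional module are more strongly coupled than ROIs across modules.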