Large-scale graph representation learning with very deep GNNs and self-supervision
Ravichandra Addanki, Peter W. Battaglia, David Budden, Andreea Deac, Jonathan Godwin, Thomas Keck, Wai Lok Sibon Li, Alvaro Sanchez-Gonzalez, Jacklynn Stott, Shantanu Thakoor, Petar Veličković
Effective high-dimensional representation learning necessitates properly exploiting the geometry of data [Bronstein et al., 2021]; otherwise, it is a cursed estimation problem. Indeed, early success stories of deep learning relied on imposing strong geometric assumptions, primarily that the data lives on a grid domain, either spatial or temporal. In these two respective settings, convolutional neural networks (CNNs) [LeCun et al., 1998] and recurrent neural networks (RNNs) [Hochreiter and Schmidhuber, 1997] have traditionally dominated. While both CNNs and RNNs are demonstrably powerful models with many applications of high interest, most data coming from nature cannot be natively represented on a grid. Recent years have been marked by a gradual shift of attention towards models that admit a more generic class of geometric structures [Masci et al., 2015, Veličković et al., 2017, Cohen et al., 2018, Battaglia et al., 2018, de Haan et al., 2020, Satorras et al., 2021].
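As a rough illustration of what "models that admit a more generic class of geometric structures" means in practice, below is a minimal sketch of a single message-passing layer over an arbitrary graph, the basic building block of GNNs like those studied in this paper. It is written in plain NumPy; the names (message_passing_layer, edge_index, W_self, W_msg) are illustrative assumptions, not the paper's API or architecture.

import numpy as np

def message_passing_layer(h, edge_index, W_self, W_msg):
    # h          : (num_nodes, d) node feature matrix
    # edge_index : (2, num_edges) integer array of (source, target) pairs
    # W_self, W_msg : (d, d) weight matrices (illustrative names)
    src, dst = edge_index
    # Sum incoming messages at each target node (scatter-add).
    messages = np.zeros_like(h)
    np.add.at(messages, dst, h[src] @ W_msg)
    # Combine each node's own features with its aggregated messages; ReLU.
    return np.maximum(h @ W_self + messages, 0.0)

# Toy usage: a directed path 0 -> 1 -> 2 with 4-dimensional node features.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edge_index = np.array([[0, 1],   # sources
                       [1, 2]])  # targets
h_out = message_passing_layer(h, edge_index,
                              rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(h_out.shape)  # (3, 4)

Unlike a convolution, nothing here assumes a grid: the same layer applies to molecules, meshes, or citation networks, with the neighbourhood structure supplied entirely by edge_index.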
arXiv.org Artificial Intelligence
Jul-20-2021