Tree-Constrained Graph Neural Networks For Argument Mining

Ruggeri, Federico, Lippi, Marco, Torroni, Paolo

arXiv.org Artificial Intelligence 

Graph Neural Networks (GNNs) are currently a hot topic in artificial intelligence, with a wide range of applications in many domains, from bioinformatics to computer vision, and from social network analysis to natural language processing [1]. First introduced in [2], and since extended with a large number of variants, GNNs can learn embedding representations of generic graphs by exploiting aggregation functions based on propagation and pooling layers. These building blocks are typically stacked into a deep network, and the resulting embeddings can be exploited in any downstream task. This kind of architecture has rapidly become the state of the art, or at least a strong competitor, in many application domains dealing with structured data.

Historically, in natural language processing (NLP) as well as in other domains, Tree Kernels (TKs) have long been one of the most widely employed techniques for handling structured data in the form of trees [3]. A TK is essentially a similarity function that captures the degree of similarity between two trees by looking at common fragments within their substructures.
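To make the notion of "counting common fragments" concrete, the following is a minimal sketch of a subset-tree kernel in the style of the classical recursive formulation (with a decay factor λ), not the specific kernel used in this work. Trees are represented as nested tuples `(label, child1, child2, ...)`; all function names and the tree encoding are illustrative assumptions.

```python
# Illustrative subset-tree kernel sketch; trees are nested tuples
# such as ('S', ('A',), ('B',)). Not the paper's actual implementation.

def production(t):
    """Node label together with the labels of its direct children."""
    return (t[0], tuple(c[0] for c in t[1:]))

def c_delta(n1, n2, lam):
    """Decayed count of common fragments rooted at the node pair (n1, n2)."""
    if production(n1) != production(n2):
        return 0.0
    if len(n1) == 1:          # both nodes are leaves with the same label
        return lam
    prod = lam
    # Equal productions imply the same number of children, so zip is safe.
    for c1, c2 in zip(n1[1:], n2[1:]):
        prod *= 1.0 + c_delta(c1, c2, lam)
    return prod

def nodes(t):
    """Yield every node (subtree) of t in pre-order."""
    yield t
    for c in t[1:]:
        yield from nodes(c)

def tree_kernel(t1, t2, lam=0.5):
    """K(t1, t2): sum of common-fragment counts over all node pairs."""
    return sum(c_delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))
```

For example, with `lam=1.0`, two identical trees `('S', ('A',), ('B',))` score 6.0 (the root pair contributes 4 fragments, each leaf pair 1), while changing one leaf label drops the score to 1.0, since the root productions no longer match.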