Goto

Collaborating Authors

 Todoriki, Masaru


Bermuda Triangles: GNNs Fail to Detect Simple Topological Structures

arXiv.org Artificial Intelligence

Most graph neural network architectures work by message passing of node vector embeddings over the adjacency matrix, and it is assumed that they capture graph topology by doing so. We design two synthetic tasks focusing purely on topological problems - triangle detection and clique distance - on which graph neural networks perform surprisingly badly, failing to detect those "bermuda" triangles. Many tasks need to handle graph representations of data in areas such as chemistry (Wale & Karypis, 2006), social networks (Fan et al., 2019), and transportation (Zhao et al., 2019). Moreover, the applications are not limited to these graph tasks: images (Chen et al., 2019) and 3D polygons (Shi & Rajkumar, 2020) can also be converted to graph data formats. Because of these broad applications, graph deep learning is an important field of machine learning research. Graph neural networks (GNNs, Scarselli et al., 2008) are a common approach to performing machine learning on graphs. Most graph neural networks update the graph node vector embeddings using message passing. Node vector embeddings are usually initialized with data features and local graph features such as node degrees. Then, for the (n+1)-th stacked layer, the new node state is computed from the node vector representations of the previous layer n.

Reported results (Method / Triangles / Clique):

Method           Triangles   Clique
GCN                   50.0     50.0
GCN D                 75.7     83.2
GCN D ID              80.4     83.4
GIN                   74.1     97.0
GIN D                 75.0     99.4
GIN D ID              70.5    100.0
GAT                   50.0     50.0
GAT D                 88.5     99.9
GAT D ID              94.1    100.0
SVM WL                67.2     73.1
SVM Graphlets         99.6     60.3
FCNN                  55.6     54.6
TF                   100.0     70.0
TF AM                100.0    100.0
TF-IS AM              86.7    100.0
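The abstract above describes the standard message-passing update, where the (n+1)-th layer's node states are computed from the previous layer's states via the adjacency matrix. Below is a minimal sketch of one such layer, a GCN-style update in the sense of Kipf & Welling; the function and variable names are illustrative, not the paper's code.

import numpy as np

def message_passing_layer(adjacency, node_states, weights):
    """One GCN-style update: H_next = ReLU(A_hat @ H @ W)."""
    # Add self-loops so each node also keeps its own previous state.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    # Symmetric degree normalization of the adjacency matrix.
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Aggregate neighbour embeddings, transform, and apply ReLU.
    return np.maximum(a_norm @ node_states @ weights, 0.0)

# Toy triangle graph (3 fully connected nodes) with random initial features.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(3, 4))   # initial node embeddings (e.g. data features)
W0 = rng.normal(size=(4, 8))   # layer weights
H1 = message_passing_layer(A, H0, W0)
print(H1.shape)                # (3, 8)

Because every node in a triangle sees the same symmetric neighbourhood, stacking such layers can produce identical embeddings for topologically distinct graphs, which is consistent with the failure modes the paper studies.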


Learning Multi-Way Relations via Tensor Decomposition With Neural Networks

AAAI Conferences

How can we classify multi-way data such as network traffic logs with multi-way relations between source IPs, destination IPs, and ports? Multi-way data can be represented as a tensor, and there have been several studies on classification of tensors to date. One critical issue in the classification of multi-way relations is how to extract important features for classification when objects in different multi-way data, i.e., in different tensors, are not necessarily in correspondence. In such situations, we aim to extract features that do not depend on how we allocate indices to an object such as a specific source IP; we are interested only in the structures of the multi-way relations. However, this issue has not been considered in previous studies on classification of multi-way data. We propose a novel method that can learn and classify multi-way data using neural networks. Our method leverages a novel type of tensor decomposition that utilizes a target core tensor expressing the important features, whose indices are independent of those of the multi-way data. The target core tensor guides the tensor decomposition toward more effective results and is optimized in a supervised manner. Our experiments on three different domains show that our method is highly accurate, especially on higher-order data. It also enables us to interpret the classification results along with the matrices computed by the novel tensor decomposition.
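To make the idea of a core tensor whose indices are independent of object labels concrete, here is a hedged sketch of a Tucker-style decomposition of a 3-way tensor compared against a supervised target core. The shapes, variable names, and the simple projection used here are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def tucker_reconstruct(core, A, B, C):
    """Rebuild X_hat[i, j, k] = sum_pqr core[p, q, r] * A[i, p] * B[j, q] * C[k, r]."""
    return np.einsum("pqr,ip,jq,kr->ijk", core, A, B, C)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 5, 4))    # toy multi-way data (e.g. src IP x dst IP x port)
A = rng.normal(size=(6, 2))       # factor matrices mapping objects to latent indices
B = rng.normal(size=(5, 2))
C = rng.normal(size=(4, 2))

# Project the data onto a small core whose indices no longer refer to specific objects.
core = np.einsum("ijk,ip,jq,kr->pqr", X, A, B, C)

# Hypothetical class-specific target core learned in a supervised manner.
target_core = rng.normal(size=(2, 2, 2))

reconstruction_err = np.linalg.norm(X - tucker_reconstruct(core, A, B, C))
core_distance = np.linalg.norm(core - target_core)
print(reconstruction_err, core_distance)  # a training objective would trade off both terms

In the paper's setting, the target core would be optimized jointly with the classifier so that cores extracted from tensors of the same class land close together regardless of how their objects were indexed; the sketch only shows the two quantities such an objective would balance.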