Symmetric Graph Convolutional Autoencoder for Unsupervised Graph Representation Learning
Jiwoong Park, Minsik Lee, Hyung Jin Chang, Kyuewang Lee, Jin Young Choi
In contrast to existing graph autoencoders with asymmetric decoder parts, the proposed autoencoder has a newly designed decoder that builds a completely symmetric autoencoder form. For the reconstruction of node features, the decoder is designed based on Laplacian sharpening as the counterpart of the Laplacian smoothing in the encoder, which allows the graph structure to be utilized throughout the proposed autoencoder architecture. To prevent the numerical instability introduced by Laplacian sharpening, we further propose a new numerically stable form of Laplacian sharpening that incorporates signed graphs. In addition, a new cost function that finds a latent representation and a latent affinity matrix simultaneously is devised to boost the performance of image clustering tasks. The experimental results on clustering, link prediction, and visualization tasks strongly support that the proposed model is stable and outperforms various state-of-the-art algorithms.

1. Introduction

A graph, which consists of a set of nodes and edges, is a powerful tool for uncovering the geometric structure of data. Graphs have various applications in the machine learning and data mining fields, such as node clustering [26], dimensionality reduction [1], social network analysis [15], chemical property prediction of a molecular graph [7], and image segmentation [30]. However, conventional methods for analyzing a graph have several problems, such as low computational efficiency due to eigendecomposition or singular value decomposition, or capturing only shallow relationships between nodes. In recent years, an emerging field called geometric deep learning [2] generalizes deep neural network models to

Figure 1: Architectures of existing graph convolutional autoencoders and the proposed one. (a) VGAE [13]; (b) MGAE [35]; (c) proposed autoencoder.
A, X, H, and W denote the affinity matrix (the structure of the graph), the node attributes, the latent representations, and the learnable weights of the network, respectively.
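The smoothing/sharpening duality at the heart of the symmetric design can be illustrated with a small NumPy sketch. This is an illustrative sketch only, not the paper's exact layers: the encoder step below uses the familiar renormalized smoothing propagation (with self-loops added to A), while the decoder step uses a plain random-walk sharpening form, 2x_i minus the neighborhood average; the paper's numerically stable signed-graph variant of sharpening is not reproduced here.

```python
import numpy as np

def smoothing_step(A, X):
    # Encoder-style Laplacian smoothing with self-loops (A_tilde = A + I)
    # and symmetric normalization: each node's feature moves toward the
    # average over itself and its neighbours.
    A_t = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_t.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return D_inv_sqrt @ A_t @ D_inv_sqrt @ X

def sharpening_step(A, X):
    # Decoder-style Laplacian sharpening (illustrative random-walk form):
    # (2I - D^{-1} A) X, i.e. 2*x_i minus the mean of i's neighbours,
    # amplifying differences between a node and its neighbourhood.
    D_inv = np.diag(1.0 / A.sum(axis=1))
    n = A.shape[0]
    return (2.0 * np.eye(n) - D_inv @ A) @ X

# Toy graph: a path on 3 nodes with scalar node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[0.], [1.], [4.]])

Xs = smoothing_step(A, X)   # features pulled toward neighbourhood averages
Xh = sharpening_step(A, X)  # features pushed away from neighbourhood averages
```

On this toy graph, smoothing shrinks the spread of the features while sharpening enlarges it, which is exactly why an unbounded sharpening operator can destabilize training and motivates the stable signed-graph reformulation.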
Aug-7-2019