Variational Graph Recurrent Neural Networks

Neural Information Processing Systems

Representation learning over graph-structured data has been studied mostly in static graph settings, while efforts to model dynamic graphs remain scant. In this paper, we develop a novel hierarchical variational model that introduces additional latent random variables to jointly model the hidden states of a graph recurrent neural network (GRNN), capturing both topology and node-attribute changes in dynamic graphs. We argue that the use of high-level latent random variables in this variational GRNN (VGRNN) can better capture the potential variability observed in dynamic graphs as well as the uncertainty of node latent representations. With semi-implicit variational inference developed for this new VGRNN architecture (SI-VGRNN), we show that flexible non-Gaussian latent representations can further help dynamic graph analytic tasks. Our experiments with multiple real-world dynamic graph datasets demonstrate that SI-VGRNN and VGRNN consistently outperform existing baseline and state-of-the-art methods by a significant margin in dynamic link prediction.
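The following is a minimal PyTorch sketch of the recurrence this abstract describes: at each snapshot, a prior over the node latents is conditioned on the previous GRNN hidden state, an approximate posterior additionally conditions on the current node attributes, and the sampled latent feeds back into the hidden-state update. All names are illustrative, and plain linear layers stand in for the graph convolutions a real VGRNN would use.

import torch
import torch.nn as nn

class VGRNNCellSketch(nn.Module):
    """One time step of a VGRNN-style recurrence (illustrative only).

    h_prev : previous hidden state per node, shape (N, h_dim)
    x_t    : node attributes at time t, shape (N, x_dim)
    """
    def __init__(self, x_dim, h_dim, z_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)        # prior p(z_t | h_{t-1})
        self.enc = nn.Linear(x_dim + h_dim, 2 * z_dim)  # posterior q(z_t | x_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)     # hidden-state update

    def forward(self, x_t, h_prev):
        mu_p, logvar_p = self.prior(h_prev).chunk(2, dim=-1)
        mu_q, logvar_q = self.enc(torch.cat([x_t, h_prev], -1)).chunk(2, dim=-1)
        # reparameterized sample from the approximate posterior
        z_t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        h_t = self.rnn(torch.cat([x_t, z_t], -1), h_prev)
        # KL(q || p) between two diagonal Gaussians, summed over nodes
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum()
        return z_t, h_t, kl

Summing the per-step KL terms together with a reconstruction loss for each snapshot's adjacency matrix would yield the usual variational training objective.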


Semi-Implicit Stochastic Recurrent Neural Networks

arXiv.org Machine Learning

Stochastic recurrent neural networks with latent random variables of complex dependency structures have been shown to model sequential data more successfully than deterministic deep models. However, the majority of existing methods have limited expressive power due to the Gaussian assumption on latent variables. In this paper, we advocate learning implicit latent representations using semi-implicit variational inference to further increase model flexibility. The semi-implicit stochastic recurrent neural network (SIS-RNN) is developed to enrich inferred model posteriors that may have no analytic density functions, as long as independent random samples can be generated via reparameterization. Extensive experiments on different tasks with real-world datasets show that SIS-RNN outperforms existing methods.
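As a concrete illustration of the semi-implicit construction, the sketch below (hypothetical names, assuming PyTorch) draws mixing noise, pushes it through a network to obtain the Gaussian mean, and keeps an explicit variance; the marginal posterior over the mixing noise has no analytic density, yet reparameterized samples remain cheap.

import torch
import torch.nn as nn

class SemiImplicitPosterior(nn.Module):
    """q(z|x) = E_eps[ N(z; mu(x, eps), diag(sigma^2(x))) ]: the mean is an
    implicit function of mixing noise, the variance stays explicit (sketch)."""
    def __init__(self, x_dim, z_dim, noise_dim=16):
        super().__init__()
        self.noise_dim = noise_dim
        self.mu_net = nn.Sequential(
            nn.Linear(x_dim + noise_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.logvar_net = nn.Linear(x_dim, z_dim)  # explicit variance layer

    def sample(self, x):
        eps = torch.randn(x.shape[0], self.noise_dim)  # mixing noise
        mu = self.mu_net(torch.cat([x, eps], dim=-1))  # implicit mean
        std = (0.5 * self.logvar_net(x)).exp()
        return mu + torch.randn_like(mu) * std         # reparameterized z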


Dynamic Joint Variational Graph Autoencoders

arXiv.org Machine Learning

Learning network representations is a fundamental task for many graph applications such as link prediction, node classification, graph clustering, and graph visualization. Many real-world networks are dynamic and evolve over time. Most existing graph embedding algorithms were developed mainly for static graphs and cannot capture the evolution of a large dynamic network. In this paper, we propose Dynamic joint Variational Graph Autoencoders (Dyn-VGAE), which can learn both local structures and temporal evolutionary patterns in a dynamic network. Dyn-VGAE provides a joint learning framework for computing temporal representations of all graph snapshots simultaneously. Each autoencoder embeds a graph snapshot based on its local structure and can also learn temporal dependencies by collaborating with the other autoencoders. We conduct experimental studies on dynamic real-world graph datasets, and the results demonstrate the effectiveness of the proposed method.
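The abstract does not spell out the collaboration mechanism; one plausible form, sketched below in PyTorch with hypothetical names, is a penalty that keeps the latent means of consecutive snapshots close, so each autoencoder is pulled toward its temporal neighbors while still fitting its own snapshot.

import torch

def temporal_alignment_loss(mu_list, lam=1.0):
    """Penalty encouraging consecutive snapshot embeddings to stay close
    (one plausible form of the collaboration term; illustrative only).
    mu_list: list of (N, z_dim) latent means, one per snapshot, shared nodes."""
    loss = torch.tensor(0.0)
    for mu_prev, mu_curr in zip(mu_list[:-1], mu_list[1:]):
        loss = loss + (mu_curr - mu_prev).pow(2).sum()
    return lam * loss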


Semi-Implicit Graph Variational Auto-Encoders

arXiv.org Machine Learning

The semi-implicit graph variational auto-encoder (SIG-VAE) is proposed to expand the flexibility of variational graph auto-encoders (VGAE) in modeling graph data. SIG-VAE employs a hierarchical variational framework that enables neighboring-node sharing for better generative modeling of graph dependency structure, together with a Bernoulli-Poisson link decoder. Not only does this hierarchical construction provide a more flexible generative graph model that better captures real-world graph properties, but it also naturally leads to semi-implicit hierarchical variational inference, allowing faithful modeling of implicit posteriors of given graph data, which may exhibit heavy tails, multiple modes, skewness, and rich dependency structures. Compared to VGAE, the graph latent representations derived by SIG-VAE are more interpretable, thanks to the more expressive generative model and the more faithful inference enabled by the flexible semi-implicit construction. Extensive experiments with a variety of graph data show that SIG-VAE significantly outperforms state-of-the-art methods on several different graph analytic tasks.
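For concreteness, a Bernoulli-Poisson link sets p(A_ij = 1) = 1 - exp(-z_i . z_j) for nonnegative embeddings. A minimal PyTorch sketch of that decoder family (illustrative, not the authors' code):

import torch

def bernoulli_poisson_decoder(z):
    """Edge probabilities under a Bernoulli-Poisson link:
    p(A_ij = 1) = 1 - exp(-z_i . z_j), with nonnegative embeddings."""
    z = torch.nn.functional.softplus(z)  # enforce nonnegativity
    rates = z @ z.t()                    # pairwise inner products
    return 1.0 - torch.exp(-rates)       # (N, N) edge probability matrix

Unlike the usual sigmoid inner-product decoder, this link assigns vanishing probability to edges between near-zero embeddings, which suits sparse real-world graphs.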


dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning

arXiv.org Artificial Intelligence

Understanding and analyzing graphs is an essential topic that has been widely studied over the past decades. Many real-world problems can be formulated as link prediction in graphs (Gehrke, Ginsparg, and Kleinberg 2003; Freeman 2000; Theocharidis et al. 2009; Goyal, Sapienza, and Ferrara 2018). For example, link prediction in an author collaboration network (Gehrke, Ginsparg, and Kleinberg 2003) can be used to predict potential future author collaborations. Similarly, new connections between proteins can be discovered using protein interaction networks (Pavlopoulos, Wegener, and Schneider 2008), and new friendships can be predicted using social networks (Wasserman and Faust 1994). Recent work on obtaining such predictions uses graph representation learning. These methods represent each node in the network with a fixed-dimensional embedding and map link prediction in the network space to a nearest-neighbor search in the embedding space (Goyal and Ferrara 2018). It has been shown that such techniques can outperform traditional link-prediction methods on graphs (Grover and Leskovec 2016; Ou et al. 2016a).
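A minimal NumPy sketch of that nearest-neighbor view: given learned node embeddings, candidate links are ranked by embedding similarity (cosine similarity is one common choice; the function name is hypothetical).

import numpy as np

def predict_links(emb, k=5):
    """Rank candidate links by cosine similarity of node embeddings.
    emb: (N, d) embedding matrix; returns top-k candidate neighbors per node."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = unit @ unit.T                      # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-links
    return np.argsort(-sim, axis=1)[:, :k]   # indices of k most similar nodes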