Learning Topological Representation for Networks via Hierarchical Sampling

Fu, Guoji, Hou, Chengbin, Yao, Xin

arXiv.org Machine Learning

Abstract--Topological information is essential for studying the relationships between nodes in a network. Recently, Network Representation Learning (NRL), which projects a network into a low-dimensional vector space, has shown its advantages in analyzing large-scale networks. However, most existing NRL methods are designed to preserve the local topology of a network and fail to capture its global topology. To tackle this issue, we propose a new NRL framework, named HSRL, to help existing NRL methods capture both the local and global topological information of a network. Specifically, HSRL recursively compresses an input network into a series of smaller networks. Then, an existing NRL method is used to learn node embeddings for each compressed network. Finally, the node embeddings of the input network are obtained by concatenating the node embeddings from all compressed networks. Empirical studies of link prediction on five real-world datasets demonstrate the advantages of HSRL over state-of-the-art methods.

I. INTRODUCTION

The science of networks has been widely used to understand the behaviours of complex systems.
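The compress-embed-concatenate pipeline in the abstract can be sketched in a few lines. This is a toy illustration under stated assumptions: `compress` stands in for HSRL's hierarchical sampling step (here a naive neighbour-pair matching, not the paper's strategy), and `embed` stands in for any base NRL method (here just two degree statistics per node, where a real setup would plug in e.g. DeepWalk at each level). None of these function names come from the paper.

```python
def compress(adj):
    """Merge matched neighbour pairs into super-nodes.

    Returns (smaller adjacency dict, mapping: node -> super-node).
    A stand-in for HSRL's compression step, not the paper's method."""
    mapping, used, nxt = {}, set(), 0
    for u in sorted(adj):
        if u in used:
            continue
        used.add(u)
        mapping[u] = nxt
        partner = next((v for v in sorted(adj[u]) if v not in used), None)
        if partner is not None:
            used.add(partner)
            mapping[partner] = nxt
        nxt += 1
    small = {s: set() for s in range(nxt)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if mapping[u] != mapping[v]:
                small[mapping[u]].add(mapping[v])
    return small, mapping


def embed(adj):
    """Stand-in for a base NRL method: a 2-d feature per node
    (own degree, largest neighbour degree)."""
    return {u: (len(adj[u]), max((len(adj[v]) for v in adj[u]), default=0))
            for u in adj}


def hsrl(adj, levels=2):
    """Compress `levels` times, embed every level, concatenate per node."""
    chain, maps = [adj], []
    for _ in range(levels):
        smaller, m = compress(chain[-1])
        chain.append(smaller)
        maps.append(m)
    level_embeds = [embed(g) for g in chain]
    out = {}
    for u in adj:
        vec, node = [], u
        for lvl, emb in enumerate(level_embeds):
            vec.extend(emb[node])
            if lvl < len(maps):
                node = maps[lvl][node]  # follow node into the coarser level
        out[u] = tuple(vec)
    return out


adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
embeddings = hsrl(adj)  # each node -> 6-d vector (2 dims x 3 levels)
```

The concatenation gives each node a view of the network at several scales at once, which is the source of the claimed global-topology awareness.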


Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation

Huang, Qiuyuan, Gan, Zhe, Celikyilmaz, Asli, Wu, Dapeng, Wang, Jianfeng, He, Xiaodong

arXiv.org Artificial Intelligence

We propose a hierarchically structured reinforcement learning approach to address the challenges of planning for generating coherent multi-sentence stories for the visual storytelling task. Within our framework, the task of generating a story given a sequence of images is divided across a two-level hierarchical decoder. The high-level decoder constructs a plan by generating a semantic concept (i.e., topic) for each image in sequence. The low-level decoder generates a sentence for each image using a semantic compositional network, which effectively grounds the sentence generation conditioned on the topic. The two decoders are jointly trained end-to-end using reinforcement learning. We evaluate our model on the visual storytelling (VIST) dataset. Empirical results from both automatic and human evaluations demonstrate that the proposed hierarchically structured reinforced training achieves significantly better performance compared to a strong flat deep reinforcement learning baseline.
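The two-level decoder structure described above can be sketched schematically. All pieces here are toy stand-ins that are not from the paper: a fixed topic list, canned sentence templates in place of the learned semantic compositional decoder, and a seeded random "policy" in place of a network conditioned on image features.

```python
import random

TOPICS = ["beach", "family", "sunset"]  # illustrative topic inventory

# Stand-in for the low-level semantic compositional decoder: one canned
# sentence per topic instead of learned, topic-conditioned generation.
TEMPLATES = {
    "beach": "People are playing on the beach.",
    "family": "The family gathers together.",
    "sunset": "The sun sets over the horizon.",
}


def high_level_plan(images, rng):
    """High-level decoder: emit one topic per image in sequence.

    A real model would condition on image features and previous topics."""
    return [rng.choice(TOPICS) for _ in images]


def low_level_generate(topic):
    """Low-level decoder: a sentence grounded in the chosen topic."""
    return TEMPLATES[topic]


def tell_story(images, seed=0):
    rng = random.Random(seed)
    plan = high_level_plan(images, rng)           # one topic per image
    return [low_level_generate(t) for t in plan]  # one sentence per image


story = tell_story(["img_1.jpg", "img_2.jpg", "img_3.jpg"])
```

The point of the split is that the topic sequence gives the story a global plan before any words are generated; in the paper both levels are learned jointly with reinforcement learning rather than hand-coded as here.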