
Chang, Baobao


Double Graph Based Reasoning for Document-level Relation Extraction

arXiv.org Artificial Intelligence

Document-level relation extraction aims to extract relations among entities within a document. Unlike sentence-level relation extraction, it requires reasoning over multiple sentences across a document. In this paper, we propose the Graph Aggregation-and-Inference Network (GAIN), featuring double graphs. GAIN first constructs a heterogeneous mention-level graph (hMG) to model complex interactions among different mentions across the document. It also constructs an entity-level graph (EG), on which we propose a novel path reasoning mechanism to infer relations between entities. Experiments on the public dataset DocRED show that GAIN achieves a significant performance improvement (2.85 on F1) over the previous state of the art. Our code is available at https://github.com/DreamInvoker/GAIN.
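
To make the double-graph idea concrete, the following is a minimal, hypothetical sketch (not the authors' released GAIN code): mention embeddings are pooled into entity-level nodes, and a relation between a head and a tail entity is scored over paths through intermediate entities in the entity graph. All class names, dimensions, and the pooling/attention choices here are illustrative assumptions.

# Minimal sketch of entity-level path reasoning (illustrative, not the official GAIN implementation).
import torch
import torch.nn as nn

hidden = 128

class PathReasoner(nn.Module):
    """Scores relations for a (head, tail) pair via paths through intermediate entity nodes."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.path_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.classifier = nn.Linear(3 * dim, num_relations)

    def forward(self, head, tail, intermediates):
        # intermediates: (k, dim) entity nodes lying on head -> tail paths in the entity graph
        paths = self.path_mlp(torch.cat(
            [head.expand_as(intermediates), intermediates, tail.expand_as(intermediates)], dim=-1))
        # attention-style pooling over the candidate paths
        weights = torch.softmax(paths.sum(dim=-1), dim=0)
        path_repr = (weights.unsqueeze(-1) * paths).sum(dim=0)
        return self.classifier(torch.cat([head, tail, path_repr], dim=-1))

# Entity nodes obtained by mean-pooling their mention embeddings (toy stand-in for the hMG step).
mentions = {"e1": torch.randn(2, hidden), "e2": torch.randn(3, hidden), "e3": torch.randn(1, hidden)}
entities = {k: v.mean(dim=0) for k, v in mentions.items()}

reasoner = PathReasoner(hidden, num_relations=97)  # 97 relation types, as in DocRED
logits = reasoner(entities["e1"], entities["e3"], torch.stack([entities["e2"]]))
print(logits.shape)  # torch.Size([97])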


Table-to-Text Generation by Structure-Aware Seq2seq Learning

AAAI Conferences

Table-to-text generation aims to generate a description for a factual table, which can be viewed as a set of field-value records. To encode both the content and the structure of a table, we propose a novel structure-aware seq2seq architecture which consists of a field-gating encoder and a description generator with dual attention. In the encoding phase, we update the cell memory of the LSTM unit by a field gate and its corresponding field value in order to incorporate field information into the table representation. In the decoding phase, a dual attention mechanism, which contains word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table. We conduct experiments on the WIKIBIO dataset, which contains over 700k biographies and corresponding infoboxes from Wikipedia. The attention visualizations and case studies show that our model is capable of generating coherent and informative descriptions based on a comprehensive understanding of both the content and the structure of a table. Automatic evaluations also show our model outperforms the baselines by a large margin. Code for this work is available at https://github.com/tyliupku/wiki2bio.
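
The field-gating idea can be sketched as a modified LSTM step in which an extra gate, driven by the field embedding, writes field information into the cell memory. This is a simplified, hypothetical reconstruction from the abstract rather than the released wiki2bio code; all names and dimensions are assumptions, and the gate is applied after the standard cell update for brevity.

# Sketch of a field-gated LSTM step (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class FieldGatedLSTMCell(nn.Module):
    def __init__(self, input_dim, field_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        # extra field gate and field transform, analogous to the input gate / candidate memory
        self.field_gate = nn.Linear(field_dim, hidden_dim)
        self.field_value = nn.Linear(field_dim, hidden_dim)

    def forward(self, x, field, state):
        h, c = self.cell(x, state)                        # standard LSTM update
        g = torch.sigmoid(self.field_gate(field))         # field gate
        c = c + g * torch.tanh(self.field_value(field))   # write field info into the cell memory
        return h, c

cell = FieldGatedLSTMCell(input_dim=100, field_dim=50, hidden_dim=200)
x, field = torch.randn(4, 100), torch.randn(4, 50)       # batch of 4 table cells and their fields
state = (torch.zeros(4, 200), torch.zeros(4, 200))
h, c = cell(x, field, state)
print(h.shape, c.shape)  # torch.Size([4, 200]) torch.Size([4, 200])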


Jointly Extracting Event Triggers and Arguments by Dependency-Bridge RNN and Tensor-Based Argument Interaction

AAAI Conferences

Event extraction plays an important role in natural language processing (NLP) applications including question answering and information retrieval. Traditional event extraction relies heavily on lexical and syntactic features, which require intensive human engineering and may not generalize to different datasets. Deep neural networks, on the other hand, are able to automatically learn underlying features, but existing networks do not make full use of syntactic relations. In this paper, we propose a novel dependency-bridge recurrent neural network (dbRNN) for event extraction. We build our model upon a recurrent neural network, but enhance it with dependency bridges, which carry syntactically related information when modeling each word. We show that simultaneously applying tree structure and sequence structure in an RNN brings much better performance than using a sequential RNN alone. In addition, we use a tensor layer to simultaneously capture the various types of latent interaction between candidate arguments and to identify/classify all arguments of an event. Experiments show that our approach achieves competitive results compared with previous work.
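
A hedged sketch of the dependency-bridge idea: at each step the recurrent state is combined not only with the previous hidden state but also with the hidden state of a syntactically related earlier word (the "bridge"). The parent indexing and the additive gating below are illustrative assumptions, not the paper's exact formulation.

# Illustrative dependency-bridge RNN step (simplified; not the paper's exact model).
import torch
import torch.nn as nn

class DepBridgeRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)
        self.bridge = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, inputs, dep_parents):
        # inputs: (seq_len, input_dim); dep_parents[t] = index of word t's syntactic head, or -1
        hs = []
        h = torch.zeros(self.cell.hidden_size)
        for t, x in enumerate(inputs):
            h = self.cell(x.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            parent = dep_parents[t]
            if 0 <= parent < t:  # bridge from an already-processed, syntactically related word
                h = h + torch.tanh(self.bridge(hs[parent]))
            hs.append(h)
        return torch.stack(hs)

rnn = DepBridgeRNN(input_dim=50, hidden_dim=64)
tokens = torch.randn(5, 50)
parents = [-1, 0, 1, 1, 3]          # toy dependency heads
print(rnn(tokens, parents).shape)   # torch.Size([5, 64])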


A Multi-View Fusion Neural Network for Answer Selection

AAAI Conferences

Community question answering aims at choosing the most appropriate answer for a given question, which is important in many NLP applications. Previous neural network-based methods consider several different aspects of information by calculating attentions. These different kinds of attentions are usually simply summed up and can be seen as a "single view", causing severe information loss. To overcome this problem, we propose a Multi-View Fusion Neural Network, where each attention component generates a "view" of the QA pair and a fusion RNN integrates the generated views to form a more holistic representation. In this fusion RNN, a filter gate collects important information from the input and adds it directly to the output, borrowing the idea of residual networks. Experimental results on the WikiQA and SemEval-2016 CQA datasets demonstrate that our proposed model outperforms the state-of-the-art methods.
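
The filter-gate fusion can be sketched as follows: each attention component yields one "view" vector, a fusion RNN reads the sequence of views, and a sigmoid filter gate passes the salient parts of the input directly to the output in a residual-like way. All names, dimensions, and the final pooling are assumptions made for illustration, not the paper's released code.

# Sketch of multi-view fusion with a filter gate (illustrative only).
import torch
import torch.nn as nn

class FusionRNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, views):
        # views: (batch, num_views, dim), one vector per attention "view" of the QA pair
        out, _ = self.rnn(views)
        g = torch.sigmoid(self.gate(views))   # filter gate: which parts of each view to keep
        fused = out + g * views               # residual-style shortcut of the gated input
        return fused.mean(dim=1)              # holistic representation of the QA pair

fusion = FusionRNN(dim=128)
views = torch.randn(2, 4, 128)   # e.g., 4 views produced by 4 attention components
print(fusion(views).shape)       # torch.Size([2, 128])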


An Ontology Matching Approach Based on Affinity-Preserving Random Walks

AAAI Conferences

Ontology matching is the process of finding semantic correspondences between entities from different ontologies. As an effective solution to linking different heterogeneous ontologies, ontology matching has attracted considerable attention in recent years. In this paper, we propose a novel graph-based approach to the ontology matching problem. Different from previous work, we formulate ontology matching as a random walk process on the association graph constructed from the to-be-matched ontologies. In particular, two variants of the conventional random walk process, namely Affinity-Preserving Random Walk (APRW) and Mapping-Oriented Random Walk (MORW), are proposed to alleviate the adverse effect of false-mapping nodes in the association graph and to incorporate the 1-to-1 matching constraints presumed in ontology matching, respectively. Experiments on the Ontology Alignment Evaluation Initiative (OAEI) datasets show that our approach achieves competitive performance compared with state-of-the-art systems, even though it does not utilize any external resources.
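
The random-walk formulation can be sketched on a small association graph whose nodes are candidate correspondences: a power-iteration-style walk spreads scores along edges weighted by pairwise affinities, and a greedy pass then enforces 1-to-1 matching. The affinity matrix, damping factor, and greedy selection below are illustrative assumptions standing in for the paper's APRW/MORW equations.

# Toy sketch of a random walk on an association graph of candidate correspondences
# (illustrative only; the paper's APRW/MORW use more careful reweighting).
import numpy as np

# Nodes = candidate correspondences (source entity, target entity); edges carry affinities.
candidates = [("Person", "Human"), ("Person", "Agent"), ("City", "Town"), ("City", "Human")]
n = len(candidates)
affinity = np.random.rand(n, n)
affinity = (affinity + affinity.T) / 2       # symmetric pairwise compatibility
np.fill_diagonal(affinity, 0.0)

# Row-normalize to obtain a transition matrix and run the walk until (near) convergence.
P = affinity / affinity.sum(axis=1, keepdims=True)
score = np.full(n, 1.0 / n)
for _ in range(50):
    score = 0.85 * P.T @ score + 0.15 / n    # damped walk, PageRank-style

# Greedy 1-to-1 selection over the converged scores (a stand-in for the MORW constraint).
matched_src, matched_tgt, matching = set(), set(), []
for i in np.argsort(-score):
    s, t = candidates[i]
    if s not in matched_src and t not in matched_tgt:
        matching.append((s, t, float(score[i])))
        matched_src.add(s)
        matched_tgt.add(t)
print(matching)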