
Collaborating Authors

Gao, Shengxiang


Beyond Seen Data: Improving KBQA Generalization Through Schema-Guided Logical Form Generation

arXiv.org Artificial Intelligence

Knowledge base question answering (KBQA) aims to answer user questions in natural language using the rich human knowledge stored in large KBs. As current KBQA methods struggle with knowledge base elements that are unseen at test time, we introduce SG-KBQA: a novel model that injects schema contexts into entity retrieval and logical form generation to tackle this issue. It uses the richer semantics and the awareness of knowledge base structure provided by schema contexts to enhance generalizability. We show that SG-KBQA achieves strong generalizability, outperforming state-of-the-art models on two commonly used benchmark datasets across a variety of test settings. Code will be released upon paper publication.
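As a rough illustration of the idea in this abstract, the sketch below serializes retrieved entities and KB schema relations (the "schema context") into the input of a logical form generator. This is a hypothetical sketch, not the SG-KBQA implementation: the function name, prompt format, and example identifiers are all illustrative assumptions.

```python
# Hypothetical sketch only: SG-KBQA's interfaces are not described in the
# abstract, so build_prompt, the schema serialization, and the example
# identifiers below are illustrative assumptions.

def build_prompt(question: str, entities: list[str], relations: list[str]) -> str:
    """Serialize retrieved entities and KB schema relations (the schema
    context) into the input of a seq2seq logical form generator."""
    return (
        f"question: {question} | "
        f"entities: {'; '.join(entities)} | "
        f"schema: {'; '.join(relations)}"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        "Who directed the film Inception?",
        ["m.0inception (Inception)"],  # made-up Freebase-style ID
        ["film.film.directed_by", "film.director.film"],
    )
    print(prompt)
    # The generator would decode a logical form (e.g., an s-expression or
    # SPARQL query) whose relations are constrained to the schema context,
    # which is what helps with elements unseen during training.
```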


A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model

arXiv.org Artificial Intelligence

Existing research on news summarization primarily focuses on single-language single-document (SLSD), single-language multi-document (SLMD), or cross-language single-document (CLSD) settings. However, in real-world scenarios, news about an international event often involves multiple documents in different languages, i.e., mixed-language multi-document (MLMD), so summarizing MLMD news is of great significance. However, the lack of datasets for MLMD news summarization has constrained the development of research in this area. To fill this gap, we construct a mixed-language multi-document news summarization dataset (MLMD-news), which contains four different languages.

Figure 1: The diagram of SLSD, SLMD, CLSD, and MLMD. Each rounded rectangle represents a source document, while the pointed rectangle represents the target summary. "En", "De", "Fr", and "Es" indicate that the text is in English, German, French, and Spanish, respectively.
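To make the extract-generate framing concrete, here is a minimal, hypothetical Python sketch of such a pipeline over mixed-language sources. The length-based salience score is a stand-in for the paper's graph-based sentence ranking, and generate() is a placeholder for an abstractive model; none of these names come from the paper.

```python
# Minimal, hypothetical extract-then-generate sketch for MLMD input.
# The salience score is a stand-in for the paper's graph-based ranking,
# and generate() is a placeholder for an abstractive seq2seq model.

def extract(sentences: list[str], k: int = 3) -> list[str]:
    """Extract step: rank candidate sentences from all source documents
    (regardless of language) and keep the top k."""
    return sorted(sentences, key=len, reverse=True)[:k]

def generate(extracted: list[str]) -> str:
    """Generate step: rewrite the extracted sentences into a summary.
    Here we just concatenate; a real system would run a generator."""
    return " ".join(extracted)

docs = {  # one sentence per language, mirroring the En/De/Fr/Es setting
    "En": ["The summit concluded with a joint statement."],
    "De": ["Der Gipfel endete mit einer gemeinsamen Erklärung."],
    "Fr": ["Le sommet s'est conclu par une déclaration commune."],
}
all_sentences = [s for sents in docs.values() for s in sents]
print(generate(extract(all_sentences, k=2)))
```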


Multilingual Knowledge Graph Completion from Pretrained Language Models with Knowledge Constraints

arXiv.org Artificial Intelligence

Multilingual Knowledge Graph Completion (mKGC) aims to answer queries like (h, r, ?) in different languages by reasoning over a tail entity t, thereby improving multilingual knowledge graphs. Previous studies leverage multilingual pretrained language models (PLMs) and the generative paradigm to achieve mKGC. Although multilingual PLMs contain extensive knowledge of different languages, their pretraining tasks cannot be directly aligned with the mKGC task. Moreover, the majority of KGs and PLMs currently available exhibit a pronounced English-centric bias, which makes it difficult for mKGC to achieve good results, particularly for low-resource languages. To overcome these problems, this paper introduces global and local knowledge constraints for mKGC. The former constrains the reasoning over answer entities, while the latter enhances the representation of query contexts. The proposed method helps the pretrained model better adapt to the mKGC task. Experimental results on public datasets demonstrate that our method outperforms the previous SOTA on Hits@1 and Hits@10 by an average of 12.32% and 16.03%, respectively, indicating that it significantly improves mKGC.
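Hits@k, the metric reported above, is well defined independently of any particular model: it is the fraction of test queries (h, r, ?) whose gold tail entity appears among the model's top-k ranked predictions. A minimal Python sketch follows; the example data is made up for illustration.

```python
# Hits@k: fraction of queries whose gold tail entity is ranked in the
# model's top-k predictions. Hits@1 and Hits@10 are the k=1 and k=10 cases.

def hits_at_k(ranked_predictions: list[list[str]], gold: list[str], k: int) -> float:
    hits = sum(1 for preds, t in zip(ranked_predictions, gold) if t in preds[:k])
    return hits / len(gold)

# Example: two queries; the gold tails are ranked 1st and 3rd respectively.
preds = [["Paris", "Lyon", "Nice"], ["Berlin", "Bonn", "Munich"]]
gold = ["Paris", "Munich"]
print(hits_at_k(preds, gold, 1))   # 0.5  (only the first query hits at k=1)
print(hits_at_k(preds, gold, 10))  # 1.0  (both gold tails are in the top 10)
```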