Wang, Qingyun
Stage-wise Fine-tuning for Graph-to-Text Generation
Wang, Qingyun, Yavuz, Semih, Lin, Victoria, Ji, Heng, Rajani, Nazneen
Graph-to-text generation has benefited from pre-trained language models (PLMs), which achieve better performance than structured graph encoders. However, PLMs fail to fully utilize the structural information of the input graph. In this paper, we aim to further improve the performance of pre-trained language models by proposing a structured graph-to-text model with a two-step fine-tuning mechanism, which first fine-tunes the model on Wikipedia before adapting it to graph-to-text generation. In addition to using the traditional token and position embeddings to encode the knowledge graph (KG), we propose a novel tree-level embedding method to capture the inter-dependency structures of the input graph. This new approach significantly improves performance on all text generation metrics for the English WebNLG 2017 dataset.
ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis
Wang, Qingyun, Zeng, Qi, Huang, Lifu, Knight, Kevin, Ji, Heng, Rajani, Nazneen Fatema
To assist the human review process, we build a novel ReviewRobot that automatically assigns a review score and writes comments for multiple categories, such as novelty and meaningful comparison. A good review needs to be knowledgeable, namely the comments should be constructive and informative to help improve the paper, and explainable, providing detailed evidence. ReviewRobot achieves these goals via three steps: (1) We perform domain-specific information extraction to construct a knowledge graph (KG) from the target paper under review, a related-work KG from the papers cited by the target paper, and a background KG from a large collection of previous papers in the domain. (2) By comparing these three KGs, we predict a review score and detailed structured knowledge as evidence for each review category. (3) We carefully select and generalize human review sentences into templates, and apply these templates to transform the review scores and evidence into natural language comments. Experimental results show that our review score predictor reaches 71.4%-100% accuracy. Human assessment by domain experts shows that 41.7%-70.5% of the comments generated by ReviewRobot are valid and constructive, and that they are better than human-written ones 20% of the time. Thus, ReviewRobot can serve as an assistant for paper reviewers, program chairs, and authors.
A Survey of Knowledge-Enhanced Text Generation
Yu, Wenhao, Zhu, Chenguang, Li, Zaitang, Hu, Zhiting, Wang, Qingyun, Ji, Heng, Jiang, Meng
The goal of text generation is to make machines express themselves in human language. It is one of the most important yet challenging tasks in natural language processing (NLP). Since 2014, various neural encoder-decoder models, pioneered by Seq2Seq, have been proposed to achieve this goal by learning to map input text to output text. However, the input text alone often provides limited knowledge for generating the desired output, so the performance of text generation is still far from satisfactory in many real-world scenarios. To address this issue, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models. This research direction is known as knowledge-enhanced text generation. In this survey, we present a comprehensive review of the research on knowledge-enhanced text generation over the past five years. The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data. This survey is intended for a broad audience of researchers and practitioners in academia and industry.
COVID-19 Literature Knowledge Graph Construction and Drug Repurposing Report Generation
Wang, Qingyun, Li, Manling, Wang, Xuan, Parulian, Nikolaus, Han, Guangxing, Ma, Jiawei, Tu, Jingxuan, Lin, Ying, Zhang, Haoran, Liu, Weili, Chauhan, Aabhas, Guan, Yingjun, Li, Bangzheng, Li, Ruisong, Song, Xiangchen, Ji, Heng, Han, Jiawei, Chang, Shih-Fu, Pustejovsky, James, Rah, Jasmine, Liem, David, Elsayed, Ahmed, Palmer, Martha, Voss, Clare, Schneider, Cynthia, Onyshkevych, Boyan
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in the literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG, to extract fine-grained multimedia knowledge elements (entities, relations, and events) from scientific literature. We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures, and knowledge subgraphs as evidence. All of the data, KGs, reports, resources, and shared services are publicly available.
PaperRobot: Incremental Draft Generation of Scientific Ideas
Wang, Qingyun, Huang, Lifu, Jiang, Zhiying, Knight, Kevin, Ji, Heng, Bansal, Mohit, Luan, Yi
We present PaperRobot, which performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; (3) incrementally writing key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities, it generates a paper abstract; from the abstract, it generates the conclusion and future work; and finally, from the future work, it generates a title for a follow-on paper. Turing tests, in which a biomedical domain expert is asked to compare a system output with a human-authored string, show that PaperRobot-generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24%, and 12% of the time, respectively.
Paper Abstract Writing through Editing Mechanism
Wang, Qingyun, Zhou, Zhihao, Huang, Lifu, Whitehead, Spencer, Zhang, Boliang, Ji, Heng, Knight, Kevin
We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that takes a title as input and automatically generates an abstract. We design a novel Writing-editing Network that attends to both the title and the previously generated abstract drafts, then iteratively revises and polishes the abstract. In two series of Turing tests, where human judges are asked to distinguish system-generated abstracts from human-written ones, our system passes at a rate of up to 30% with junior domain experts and up to 80% with non-experts.