KGGen: Extracting Knowledge Graphs from Plain Text with Language Models

Belinda Mo, Kyssen Yu, Joshua Kazdan, Proud Mpala, Lisa Yu, Chris Cundy, Charilaos Kanatsoulis, Sanmi Koyejo

arXiv.org Artificial Intelligence 

Recent interest in building foundation models for KGs has highlighted a fundamental challenge: knowledge-graph data is relatively scarce. The best-known KGs are primarily human-labeled, created by pattern matching, or extracted using early NLP techniques. While human-generated KGs are in short supply, automatically extracted KGs are of questionable quality. We present a solution to this data scarcity problem in the form of a text-to-KG generator (KGGen), a package that uses language models to create high-quality graphs from plain text. Unlike other KG extractors, KGGen clusters related entities to reduce sparsity in extracted KGs. KGGen is available as a Python library (pip install kg-gen), making it accessible to everyone. Along with KGGen, we release the first benchmark, Measure of Information in Nodes and Edges (MINE), that tests an extractor's ability to produce a useful KG from plain text. We benchmark our new tool against existing extractors and demonstrate far superior performance.

Knowledge graph (KG) applications and Graph Retrieval-Augmented Generation (RAG) systems are increasingly bottlenecked by the scarcity and incompleteness of available KGs. KGs consist of a set of subject-predicate-object triples and have become a fundamental data structure for information retrieval (Schneider, 1973). Most real-world KGs, including Wikidata (Wikidata contributors, 2024), DBpedia (Lehmann et al., 2015), and YAGO (Suchanek et al., 2007), are far from complete, with many missing relations between entities (Shenoy et al., 2021).
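To make the triple representation concrete, here is a minimal sketch of a KG as a set of subject-predicate-object triples with a simple lookup helper. All names, entities, and functions here are illustrative assumptions for exposition, not part of KGGen's API.

```python
# Illustrative sketch: a KG as a set of (subject, predicate, object) triples.
# These entities and the helper function are hypothetical examples,
# not KGGen's actual interface.

Triple = tuple[str, str, str]

def objects_of(kg: set[Triple], subject: str, predicate: str) -> set[str]:
    """Return all objects linked to `subject` by `predicate`."""
    return {o for s, p, o in kg if s == subject and p == predicate}

kg: set[Triple] = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "field", "chemistry"),
    ("Marie Curie", "field", "physics"),
}
```

Incompleteness in real-world KGs corresponds to triples that should be in this set but are missing, which is why downstream retrieval over such graphs degrades.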