Towards Knowledge-Grounded Natural Language Understanding and Generation

Whitehouse, Chenxi

arXiv.org Artificial Intelligence 

This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models in knowledge representations. Currently, the most prevalent paradigm for training language models is pre-training on abundant raw text data followed by fine-tuning on downstream tasks. Although language models continue to advance, especially with the recent emergence of Large Language Models (LLMs) such as ChatGPT, there appear to be limits to what can be achieved with text data alone, and it is desirable to study the impact of applying and integrating richer forms of knowledge representation to improve model performance. The most widely used form of knowledge for language modelling is structured knowledge in the form of triples consisting of entities and their relationships, often in English. This thesis explores beyond this conventional approach and aims to address several key questions: Can knowledge of entities extend its benefits beyond entity-centric tasks such as entity linking? How can we faithfully and effectively extract such structured knowledge from raw text, especially noisy web text? How do other types of knowledge, beyond structured knowledge, contribute to improving NLP tasks?
