Learning Semantic Structure through First-Order-Logic Translation
Chaturvedi, Akshay, Asher, Nicholas
arXiv.org Artificial Intelligence
In this paper, we study whether transformer-based language models can extract predicate-argument structure from simple sentences. We first show that language models sometimes confuse which predicates apply to which objects. To mitigate this, we explore two tasks, question answering (Q/A) and first-order logic (FOL) translation, and two regimes, prompting and finetuning. For FOL translation, we finetune several large language models on synthetic datasets designed to gauge their generalization abilities. For Q/A, we finetune encoder models like BERT and RoBERTa and use prompting for LLMs. The results show that, for LLMs, FOL translation is better suited to learning predicate-argument structure.
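To make the task concrete, a first-order logic translation of a simple sentence maps each predicate to its arguments explicitly; the following illustrative example (not taken from the paper's datasets) shows the kind of structure involved:

```latex
% "A cat chased a dog" rendered in first-order logic:
\exists x \, \exists y \, \big( \mathit{cat}(x) \wedge \mathit{dog}(y) \wedge \mathit{chased}(x, y) \big)
```

A model that confuses which predicate applies to which object would, for instance, swap the arguments of $\mathit{chased}$; FOL translation makes such errors directly observable.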
Oct-4-2024