Using Language Models For Knowledge Acquisition in Natural Language Reasoning Problems

Fangzhen Lin, Ziyi Shou, Chengcai Chen

arXiv.org Artificial Intelligence 

For a natural language problem that requires some non-trivial reasoning to solve, there are at least two ways to approach it with a large language model (LLM). One is to ask the LLM to solve the problem directly. The other is to use the LLM to extract the facts from the problem text and then pass them to a theorem prover. In this note, we compare the two methods using ChatGPT and GPT-4 on a series of logic word puzzles, and conclude that the latter is the right approach.
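The second pipeline described in the abstract (LLM as fact extractor, theorem prover as reasoner) can be illustrated with a minimal sketch. The sketch below assumes Z3 as the prover and hard-codes the constraints that the LLM would be prompted to produce for a simple knights-and-knaves puzzle; the actual prompts, output format, and prover used by the authors are not specified here and may differ.

```python
# Sketch of the "LLM extracts facts, prover solves" pipeline (assumption: Z3 as the prover).
# In the full pipeline, the constraints below would be produced by prompting an LLM
# (e.g. ChatGPT or GPT-4) to translate the puzzle text into logical formulas.
from z3 import Bool, Not, Solver, sat

# Puzzle (illustrative): A says "B is a knave"; B says "A and I are of opposite types."
# Knights always tell the truth, knaves always lie.
A = Bool("A_is_knight")
B = Bool("B_is_knight")

solver = Solver()
# A's statement is true exactly when A is a knight.
solver.add(A == Not(B))
# B's statement ("we are of opposite types") is true exactly when B is a knight.
solver.add(B == (A != B))

if solver.check() == sat:
    model = solver.model()
    print("A is a", "knight" if model[A] else "knave")
    print("B is a", "knight" if model[B] else "knave")
else:
    print("The extracted facts are inconsistent.")
```

Running this prints that A is a knave and B is a knight, the unique consistent assignment. The point of the comparison in the paper is that delegating this final step to a prover avoids relying on the LLM's own multi-step reasoning.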
