RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation
Zihao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma, Yitao Liang
arXiv.org Artificial Intelligence
We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability on long-horizon generation tasks, while substantially mitigating hallucination. In particular, the proposed method, *retrieval-augmented thoughts* (RAT), first generates an initial zero-shot CoT and then revises each thought step one by one, using information retrieved with the task query together with the current and past thought steps. Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA-7b substantially improves their performance on various long-horizon generation tasks, with average relative improvements in rating scores of 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning. The demo page can be found at https://craftjarvis.github.io/RAT
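As a rough illustration of the procedure the abstract describes, the Python sketch below shows the iterative revision loop. Here `llm` and `retrieve` are hypothetical placeholders for a language-model call and an information-retrieval call; they are not functions from the paper's released code, and the prompt wording is an assumption.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to GPT-3.5 / GPT-4 / CodeLLaMA-7b."""
    raise NotImplementedError

def retrieve(query: str) -> str:
    """Hypothetical stand-in for retrieving task-relevant documents."""
    raise NotImplementedError

def rat(task: str) -> str:
    # Step 1: draft an initial zero-shot chain of thoughts.
    draft = llm(f"Answer step by step:\n{task}")
    thoughts = draft.split("\n")

    # Step 2: revise each thought step one by one. The retrieval query
    # combines the task with the current and the already-revised past steps.
    revised: list[str] = []
    for step in thoughts:
        context = "\n".join(revised)
        evidence = retrieve(f"{task}\n{context}\n{step}")
        revised.append(llm(
            f"Task: {task}\n"
            f"Steps so far: {context}\n"
            f"Current step: {step}\n"
            f"Retrieved evidence: {evidence}\n"
            "Revise the current step so it is consistent with the evidence."
        ))
    return "\n".join(revised)
```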
Mar-8-2024