Evolutionary Pre-Prompt Optimization for Mathematical Reasoning
Mathurin Videau, Alessandro Leite, Marc Schoenauer, Olivier Teytaud
arXiv.org Artificial Intelligence
Despite their size and complexity, large language models (LLMs) still face challenges in multi-step reasoning, particularly in tasks that require arithmetic, logical, and/or mathematical reasoning [Cobbe et al. 2021; Rae et al. 2021]. To address this limitation, recent works have focused on enhancing the reasoning abilities of LLMs. A significant advancement in this direction is the chain-of-thought (CoT) prompting method [Wei et al. 2022b], which guides LLMs to articulate intermediate reasoning steps in a manner akin to human thought processes, leading to more accurate and interpretable solutions. This method has shown substantial improvements on complex tasks, including mathematics and commonsense reasoning [Lu et al. 2022b; Suzgun et al. 2022; Wei et al. 2022b]. The advancement of CoT prompting has opened new pathways in the design of effective CoT prompts [Fu et al. 2022; Jiang et al. 2023; Kojima et al. 2022; Zhou et al. 2022].
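For illustration, the sketch below shows one common way a few-shot CoT pre-prompt can be assembled and how a numeric answer can be read off a model's free-text reasoning. It is a minimal sketch in the spirit of [Wei et al. 2022b; Kojima et al. 2022]: the exemplars, the "Let's think step by step" cue, the answer-extraction rule, and the hard-coded model reply are illustrative assumptions, not the prompts or pipeline evaluated in the paper.

```python
import re

# Few-shot exemplars: each shows intermediate reasoning steps (the "chain of
# thought") before the final answer, guiding the model to do the same.
# These exemplars are made up for illustration.
COT_EXEMPLARS = [
    {
        "question": "A pack has 12 pencils. Ana buys 3 packs and gives away "
                    "7 pencils. How many pencils does she have left?",
        "reasoning": "3 packs contain 3 * 12 = 36 pencils. After giving away "
                     "7, she has 36 - 7 = 29 pencils.",
        "answer": "29",
    },
    {
        "question": "Tom reads 15 pages per day. How many pages does he read "
                    "in 4 days?",
        "reasoning": "In 4 days he reads 4 * 15 = 60 pages.",
        "answer": "60",
    },
]


def build_cot_prompt(question: str) -> str:
    """Prepend worked examples (the pre-prompt) to the target question."""
    parts = []
    for ex in COT_EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: Let's think step by step. {ex['reasoning']} "
            f"The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n".join(parts)


def extract_answer(completion: str) -> str | None:
    """Take the last number in the model's free-text reasoning as its answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else None


if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A train travels 60 km per hour for 3 hours. How far does it go?"
    )
    print(prompt)
    # `completion` would normally come from an LLM call; a hard-coded reply
    # stands in here so the sketch runs without any API access.
    completion = "The train covers 60 * 3 = 180 km. The answer is 180."
    print("Extracted answer:", extract_answer(completion))
```

In this setup, the pre-prompt (the fixed block of exemplars placed before every question) is the component one would tune, for example by selecting or rewriting exemplars, while the extraction step turns free-text reasoning into a checkable answer.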
Dec-5-2024