Let's Be Self-generated via Step by Step: A Curriculum Learning Approach to Automated Reasoning with Large Language Models
Luo, Kangyang, Ding, Zichen, Weng, Zhenmin, Qiao, Lingfeng, Zhao, Meng, Li, Xiang, Yin, Di, Shu, Jinlong
Prior work has already pursued this goal. For example, Self-ICL (Chen et al., 2023) first prompts the LLM to generate a few new, diverse, and creative proxy queries tailored to the target task, then solves each of them independently in a zero-shot chain-of-thought (ZS-CoT) manner; the resulting proxy exemplars are in turn used to prompt the LLM's reasoning on the given query. Auto-ICL (Yang et al., 2023) operates similarly, but instructs the LLM to produce proxy queries that share the structure of the given query. Analogical Prompting (Yasunaga et al., 2023) is inspired by analogical reasoning, the cognitive process of solving new problems by drawing on relevant past experiences, and prompts the language model to self-generate relevant in-context examples before solving the given query. Notably, the one-pass generation mode of Analogical Prompting requires the LLM to have strong instruction-following and generation capabilities. Revisiting these approaches, we find that their efficacy hinges on guiding the LLM to recall experiences relevant to the given query. However, relying solely on such relevance may yield proxy queries that are as challenging as the given query, together with erroneous proxy solutions that can mislead the solution of the original query (an example sketch of this shared pipeline follows below).
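For concreteness, the following is a minimal sketch of the generic self-generated-exemplar pipeline shared by methods such as Self-ICL and Auto-ICL: the model proposes proxy queries for the target task, answers each in a ZS-CoT manner, and then conditions on the resulting (query, solution) pairs when answering the original query. The `llm` callable, prompt wording, and function names are illustrative assumptions, not the cited papers' exact prompts or implementations.

```python
from typing import Callable, List, Tuple

def self_generated_exemplars(
    llm: Callable[[str], str],
    query: str,
    num_proxies: int = 3,
) -> List[Tuple[str, str]]:
    """Self-generate proxy (query, solution) pairs relevant to `query`."""
    # Step 1: ask the model for a few new, diverse problems of the same kind.
    gen_prompt = (
        f"Given the task instance below, write {num_proxies} new and diverse "
        f"problems of the same kind, one per line.\n\nInstance: {query}"
    )
    proxy_queries = [q.strip() for q in llm(gen_prompt).splitlines() if q.strip()]

    exemplars = []
    for pq in proxy_queries[:num_proxies]:
        # Step 2: solve each proxy query independently with zero-shot CoT.
        solution = llm(f"Q: {pq}\nA: Let's think step by step.")
        exemplars.append((pq, solution))
    return exemplars

def answer_with_proxies(llm: Callable[[str], str], query: str) -> str:
    """Step 3: answer the original query few-shot, conditioned on the proxies."""
    shots = "\n\n".join(
        f"Q: {q}\nA: {a}" for q, a in self_generated_exemplars(llm, query)
    )
    return llm(f"{shots}\n\nQ: {query}\nA: Let's think step by step.")
```

Note that nothing in this loop controls the difficulty of the proxy queries or verifies their solutions, which is exactly the failure mode discussed above: proxies as hard as the original query, paired with erroneous solutions, can mislead the final answer.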
arXiv.org Artificial Intelligence
Oct-29-2024