Goto

Collaborating Authors

 Chen, Lizhe


Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking

arXiv.org Artificial Intelligence

Recent advances in Large Language Models (LLMs) have introduced Reasoning Large Language Models (RLLMs), which employ extended thinking processes with reflection and self-correction capabilities, demonstrating the effectiveness of test-time scaling. RLLMs exhibit an innate Chain-of-Thought (CoT) reasoning capability acquired during training, which raises a natural question: "Is CoT prompting, a popular In-Context Learning (ICL) method for chat LLMs, necessary to enhance the reasoning capability of RLLMs?" In this work, we present the first comprehensive analysis of the impact of Zero-shot CoT and Few-shot CoT on RLLMs across mathematical reasoning tasks. We examine models ranging from 1.5B to 32B parameters and find that, contrary to concerns, CoT prompting significantly enhances RLLMs' performance in most scenarios. Our results reveal distinct patterns: large-capacity models show minimal improvement on simple tasks but substantial gains on complex problems, while smaller models exhibit the opposite behavior. Further analysis demonstrates that CoT prompting effectively controls the distribution of thinking-token counts and reasoning steps, reducing excessive reflections by approximately 90% in some cases. Moreover, an analysis of attention logits reveals that RLLMs overfit to reflection-related words, an effect that is mitigated by external CoT guidance. Notably, our experiments indicate that for RLLMs, one-shot CoT consistently yields better performance than Few-shot CoT. Our findings provide important insights for optimizing RLLMs' performance through appropriate prompting strategies.
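As a rough illustration of the prompting styles the abstract compares, the sketch below constructs a Zero-shot CoT prompt (a reasoning trigger appended to the question) and a one-shot CoT prompt (a single worked exemplar prepended). The exemplar, question, and trigger phrase are placeholders chosen for illustration, not taken from the paper.

```python
# Minimal sketch of the two prompting styles discussed in the abstract.
# The exemplar and questions are illustrative placeholders, not the paper's data.

ZERO_SHOT_COT_TRIGGER = "Let's think step by step."

ONE_SHOT_EXEMPLAR = (
    "Q: A store sells pens at 3 dollars each. How much do 4 pens cost?\n"
    "A: Each pen costs 3 dollars, so 4 pens cost 4 * 3 = 12 dollars. The answer is 12.\n\n"
)

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: the question plus a reasoning trigger, no exemplars."""
    return f"Q: {question}\nA: {ZERO_SHOT_COT_TRIGGER}"

def one_shot_cot_prompt(question: str) -> str:
    """One-shot CoT: a single worked exemplar followed by the new question."""
    return f"{ONE_SHOT_EXEMPLAR}Q: {question}\nA:"

if __name__ == "__main__":
    q = "A train travels 60 km in 1.5 hours. What is its average speed?"
    print(zero_shot_cot_prompt(q))
    print("---")
    print(one_shot_cot_prompt(q))
```

Either prompt is then sent to the RLLM as-is; the abstract's finding is that the one-shot variant tends to outperform longer Few-shot variants.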


Sequential Ordering in Textual Descriptions: Impact on Spatial Perception Abilities of Large Language Models

arXiv.org Artificial Intelligence

In recent years, Large Language Models have reached state-of-the-art performance across multiple domains, yet progress in graph reasoning remains limited. Our work delves into this gap by thoroughly investigating graph reasoning with LLMs. We reveal the impact of text sequencing on LLM spatial understanding, finding that the order of graph-descriptive text significantly affects LLM reasoning performance on graphs. By altering the graph-descriptive text sequence, we improve LLM performance from 42.22% to 70%. Furthermore, we evaluate the relationship between LLM performance and graph size, discovering that the reasoning performance of LLMs does not monotonically decrease as graph size increases. Finally, we introduce the Scaled Graph Reasoning benchmark for assessing LLM performance across varied graph sizes.
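To make the "graph-descriptive text sequence" variable concrete, the sketch below serializes the same small graph twice: once with edges in an arbitrary order and once reordered by a breadth-first traversal. The sentence template, the example graph, and the BFS-based ordering scheme are assumptions for illustration; the paper's benchmark format and ordering strategies may differ.

```python
# Illustrative sketch (not the paper's code): one graph, two edge orderings.
from collections import deque

edges = [(3, 4), (0, 1), (2, 3), (1, 2), (4, 5)]  # a simple path 0-1-2-3-4-5

def describe(edge_list):
    """Serialize a graph as one sentence per edge, in the given order."""
    return "\n".join(f"Node {u} is connected to node {v}." for u, v in edge_list)

def bfs_order(edge_list, source=0):
    """Reorder edges in the order a BFS from `source` first traverses them."""
    adj = {}
    for u, v in edge_list:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, ordered, queue = {source}, [], deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                ordered.append((u, v))
                queue.append(v)
    return ordered

if __name__ == "__main__":
    print("Arbitrary order:\n" + describe(edges))
    print("\nBFS order from node 0:\n" + describe(bfs_order(edges)))
```

Both serializations encode exactly the same graph; only the presentation order of the edge sentences differs, which is the factor the abstract reports as driving the change in LLM reasoning accuracy.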