PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation

Jing Luo, Run Luo, Longze Chen, Liang Zhu, Chang Ao, Jiaming Li, Yukun Chen, Xin Cheng, Wen Yang, Jiayuan Su, Chengming Li, Min Yang

arXiv.org Artificial Intelligence 

While closed-source Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities, open-source models continue to struggle with such tasks. To bridge this gap, we propose a data augmentation approach and introduce PersonaMathQA, a dataset derived from MATH and GSM8K, on which we train the PersonaMath models. Our approach consists of two stages: the first stage is learning from Persona Diversification, and the second stage is learning from Reflection. In the first stage, we regenerate detailed chain-of-thought (CoT) solutions as instructions using a closed-source LLM and introduce a novel persona-driven data augmentation technique to enhance the dataset's quantity and diversity. In the second stage, we incorporate reflection to fully leverage more challenging and valuable questions. Evaluation of our PersonaMath models on MATH and GSM8K reveals that the PersonaMath-7B model (based on LLaMA-2-7B) achieves an accuracy of 24.2% on MATH and 68.7% on GSM8K, surpassing all baseline methods and achieving state-of-the-art performance. Notably, our dataset contains only 70.3K data points (merely 17.8% of MetaMathQA and 27% of MathInstruct), yet our model outperforms these baselines, demonstrating the high quality and diversity of our dataset, which enables more efficient model training.

"There are a thousand Hamlets in a thousand people's eyes."

Among these tasks, solving math problems stands out as particularly challenging due to its complexity and the requirement for multi-step reasoning to reach a solution. While some closed-source models, such as GPT-4o (OpenAI, 2024a), Claude 3.5 Sonnet (Anthropic, 2024), and Gemini 1.5 Pro (Reid et al., 2024), have demonstrated strong math-solving capabilities, current open-source models (e.g., LLaMA (Touvron et al., 2023; Dubey et al., 2024)) continue to struggle in this area. Enhancing the math problem-solving abilities of open-source models is therefore a prominent desideratum. A widely adopted and effective approach for improving the math-solving capabilities of open-source models is fine-tuning, owing to the accessibility of their weights (Yuan et al., 2023; Yue et al., 2023).

Figure: The method consists of two stages, Stage 1 (top) and Stage 2 (bottom). Stage 1 focuses on using closed-source LLMs to automatically generate detailed CoT solutions and on applying our persona-driven rewriting method to rephrase the questions.
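To make the Stage 1 persona-driven augmentation concrete, the sketch below shows one way a seed question could be rephrased from several persona perspectives via a closed-source LLM, with the regenerated CoT solution produced in a subsequent step. This is a minimal illustration under assumptions: the persona pool, prompt wording, `persona_rewrite`/`augment` helpers, and the choice of the OpenAI chat API are ours for exposition and are not the authors' released pipeline.

```python
# Hypothetical sketch of persona-driven question rewriting (Stage 1).
# Persona list, prompt template, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny illustrative persona pool; the paper draws on a far larger,
# more diverse set of personas to boost dataset quantity and diversity.
PERSONAS = [
    "a high-school physics teacher preparing a quiz",
    "a carpenter estimating materials for a job",
    "a video game designer balancing in-game currency",
]

REWRITE_TEMPLATE = (
    "Rewrite the following math problem from the perspective of {persona}. "
    "Keep the underlying quantities and the final answer unchanged, but adapt "
    "the scenario and wording to that persona.\n\nProblem: {question}"
)

def persona_rewrite(question: str, persona: str, model: str = "gpt-4o") -> str:
    """Ask a closed-source LLM to rephrase one question for one persona."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": REWRITE_TEMPLATE.format(persona=persona, question=question),
        }],
        temperature=0.7,
    )
    return response.choices[0].message.content

def augment(question: str) -> list[str]:
    """Produce one persona-flavored variant of the seed question per persona."""
    return [persona_rewrite(question, p) for p in PERSONAS]

if __name__ == "__main__":
    seed = ("Natalia sold clips to 48 of her friends in April, and then she sold "
            "half as many clips in May. How many clips did she sell altogether?")
    for variant in augment(seed):
        print(variant, "\n")
```

In a full pipeline, each rewritten question would then be fed back to the LLM to regenerate a detailed CoT solution, and Stage 2 would revisit questions the model gets wrong with a reflection step; those stages are omitted here.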