JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving
Wayne Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen, Yuanhang Zhou, Ji-Rong Wen, Jing Sha, Shijin Wang, Cong Liu, Guoping Hu
– arXiv.org Artificial Intelligence
Although pre-trained language models (PLMs) have recently advanced research progress in mathematical reasoning, they are not specially designed as capable multi-task solvers: they suffer from the high cost of multi-task deployment (e.g., one model copy per task) and inferior performance on complex mathematical problems in practical applications. To address these issues, we propose JiuZhang 2.0, a unified Chinese PLM specially designed for multi-task mathematical problem solving. Our idea is to maintain a moderate-sized model and employ cross-task knowledge sharing to improve model capacity in a multi-task setting. Specifically, we construct a Mixture-of-Experts (MoE) architecture for modeling mathematical text, so as to capture common mathematical knowledge across tasks. To optimize the MoE architecture, we design multi-task continual pre-training and multi-task fine-tuning strategies for multi-task adaptation. These training strategies effectively decompose the knowledge from the task data and establish cross-task sharing via expert networks. To further improve the general capacity for solving different complex tasks, we leverage large language models (LLMs) as complementary models that iteratively refine the solutions generated by our PLM via in-context learning. Extensive experiments demonstrate the effectiveness of our model.
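The abstract describes a Mixture-of-Experts architecture used to share mathematical knowledge across tasks. The sketch below shows what a token-routed MoE feed-forward layer generally looks like in PyTorch; the hidden size, number of experts, and top-2 routing are illustrative assumptions, not the configuration reported for JiuZhang 2.0.

```python
# Minimal sketch of a token-routed MoE feed-forward layer (illustrative only;
# layer sizes, expert count, and routing scheme are assumptions, not the
# JiuZhang 2.0 configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    def __init__(self, hidden_size: int = 768, ffn_size: int = 3072,
                 num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The router scores each token against every expert.
        self.router = nn.Linear(hidden_size, num_experts)
        # Each expert is an ordinary Transformer feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_size, ffn_size),
                          nn.GELU(),
                          nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_size)
        logits = self.router(x)                        # (B, T, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1) # keep the top-k experts per token
        weights = F.softmax(weights, dim=-1)           # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)              # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = MoEFeedForward()
    tokens = torch.randn(2, 16, 768)
    print(layer(tokens).shape)  # torch.Size([2, 16, 768])
```

In this kind of design, the shared router and per-expert feed-forward blocks let different tasks activate different experts while reusing the same backbone, which is the cross-task knowledge-sharing idea the abstract refers to.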
Jun-19-2023
- Country:
  - Asia > China (0.47)
  - North America > United States (0.48)
- Genre:
  - Research Report > New Finding (0.48)
- Industry:
  - Education (0.93)