On the Transformations across Reward Model, Parameter Update, and In-Context Prompt

Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, Shuaiyi Li, Bowen Cao, Zhisong Zhang, Xinting Huang, Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi

arXiv.org Artificial Intelligence 

Despite their general capabilities, pre-trained large language models (LLMs) still need further adaptation to better serve practical applications. In this paper, we demonstrate the interchangeability of three popular and distinct adaptation tools: parameter updating, reward modeling, and in-context prompting. This interchangeability establishes a triangular framework with six transformation directions, since each pair of tools can be transformed into one another in both directions, and each direction facilitates a variety of applications. Our work offers a holistic view that unifies numerous existing studies and suggests potential research directions. We envision our work as a useful roadmap for future research on LLMs.
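To make the six-direction count concrete, the short Python sketch below enumerates all ordered pairs of the three adaptation tools named in the abstract. The tool names are taken from the abstract; the enumeration itself is only an illustration of the triangular framework's combinatorics, not code from the paper.

from itertools import permutations

# The three adaptation tools discussed in the paper.
TOOLS = ("reward model", "parameter update", "in-context prompt")

# In the triangular framework, every ordered pair of distinct tools
# is one transformation direction, so 3 tools yield 3 * 2 = 6 directions.
directions = list(permutations(TOOLS, 2))

for source, target in directions:
    print(f"{source} -> {target}")

assert len(directions) == 6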
