On the Transformations across Reward Model, Parameter Update, and In-Context Prompt
Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, Shuaiyi Li, Bowen Cao, Zhisong Zhang, Xinting Huang, Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi
arXiv.org Artificial Intelligence
Despite their general capabilities, pre-trained large language models (LLMs) still need further adaptation to better serve practical applications. In this paper, we demonstrate the interchangeability of three popular and distinct adaptation tools: parameter updating, reward modeling, and in-context prompting. This interchangeability establishes a triangular framework with six transformation directions, each of which facilitates a variety of applications. Our work offers a holistic view that unifies numerous existing studies and suggests potential research directions. We envision our work as a useful roadmap for future research on LLMs.
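To make the framework's structure concrete: with three adaptation tools as vertices of the triangle, the six transformation directions are simply the ordered pairs of distinct tools. The minimal Python sketch below enumerates them; the tool names come from the abstract, while the code itself is an illustrative sketch rather than anything from the paper.

```python
from itertools import permutations

# The three adaptation tools named in the abstract.
tools = ["parameter update", "reward model", "in-context prompt"]

# The triangular framework's six transformation directions are the
# ordered pairs of distinct tools: 3 * 2 = 6.
for source, target in permutations(tools, 2):
    print(f"{source} -> {target}")
```

Running this prints the six directions (e.g., "reward model -> parameter update", which corresponds to using a reward model to update parameters, as in RLHF-style training).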
Jun-24-2024