Son, Seongho
Robust Multi-Objective Controlled Decoding of Large Language Models
Son, Seongho, Bankes, William, Yoon, Sangwoong, Ramesh, Shyam Sundhar, Tang, Xiaohang, Bogunovic, Ilija
Large Language Models (LLMs) require alignment to become useful and safe conversational agents [Rafailov et al., 2023, Azar et al., 2023, Hong et al., 2024, Ethayarajh et al., 2024, Wu et al., 2024]. However, human preferences are diverse and nuanced, leading recent work to frame alignment as a multi-objective problem [Zhao et al., 2023, Shi et al., 2024] over a variety of desirable attributes and alignment objectives, such as helpfulness, safety, honesty, and conciseness. Test-time alignment [Mudgal et al., 2023] enables flexible control over the importance of different objectives at inference time without expensive retraining. This is a useful property, as the alignment of an LLM can be varied to address a specific task, prompt, or interaction with users who hold diverse preferences [Sorensen et al., 2024b]. Existing methods for multi-objective alignment often formalize this problem through a weight vector that characterizes the relative importance of the objectives at deployment [Shi et al., 2024, Wang et al., 2024b,a, Rame et al., 2024]. In practice, the correct weighting of objectives is often unknown, leading to models that over-optimize specific alignment goals whilst under-prioritizing others. To address this problem, recent work has proposed several solutions, including treating weights as hyperparameters [Shi et al., 2024], learning specific weightings for different groups [Zhao et al.,
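The weight-vector formulation described above lends itself to a simple controlled-decoding picture: per-objective value estimates for candidate next tokens are combined using the weights and then used to tilt the base model's next-token distribution. The sketch below is a generic illustration of that idea rather than the paper's algorithm; the tensors `base_logits`, `candidate_values`, `weights`, and the scale `beta` are hypothetical placeholders.

```python
import torch

def weighted_controlled_decoding_step(base_logits, candidate_values, weights, beta=1.0):
    """Illustrative sketch of weight-vector multi-objective controlled decoding.

    base_logits:      (V,) next-token logits of the reference LLM
    candidate_values: (K, V) estimated per-objective values for each next token,
                      e.g. helpfulness, safety, honesty, conciseness (hypothetical)
    weights:          (K,) nonnegative weights over the K objectives
    beta:             strength of the value-guided adjustment
    """
    # Combine per-objective values into a single scalar score per candidate token.
    combined_value = weights @ candidate_values          # shape (V,)
    # Tilt the base distribution toward tokens with high combined value.
    adjusted_logits = base_logits + beta * combined_value
    probs = torch.softmax(adjusted_logits, dim=-1)
    # Sample the next token from the adjusted distribution.
    return torch.multinomial(probs, num_samples=1)
```

Varying `weights` at inference time changes which objectives the decoder prioritizes without retraining the underlying model, which is the flexibility test-time alignment aims for.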
Game-Theoretic Regularized Self-Play Alignment of Large Language Models
Tang, Xiaohang, Yoon, Sangwoong, Son, Seongho, Yuan, Huizhuo, Gu, Quanquan, Bogunovic, Ilija
Self-play alignment algorithms have been developed as effective methods for fine-tuning large language models (LLMs), formulating preference optimization as a two-player game. However, regularization with respect to the reference policy, which is crucial for mitigating over-optimization, has been insufficiently investigated in self-play alignment. In this paper, we show that regularization can significantly improve unregularized self-play. To study the impact of different regularizations in self-play alignment, we propose Regularized Self-Play Policy Optimization (RSPO), a generalized framework that regularizes self-play by simply adding a chosen regularization term to the loss while maintaining provable last-iterate convergence to the Nash equilibrium of the corresponding regularized game. Surprisingly, empirical evaluations using the Mistral-7B-Instruct base model reveal that forward KL divergence regularization reduces response length in RSPO, whereas reverse KL divergence markedly improves raw win rates. RSPO with a linear combination of forward and reverse KL divergence regularization substantially increases the length-controlled win rate on AlpacaEval-2, elevating the unregularized self-play alignment method (SPPO) from $28.53\%$ to $35.44\%$. Finally, we show that RSPO also improves response diversity.
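As a rough illustration of the framework's core idea, the sketch below adds a linear combination of forward and reverse KL terms (computed against a frozen reference policy) to a given self-play loss. It is a minimal sketch under assumed tensor shapes, not the RSPO implementation; `self_play_loss`, the log-probability tensors, and the coefficients are hypothetical.

```python
import torch

def regularized_self_play_loss(self_play_loss, policy_logprobs, ref_logprobs,
                               alpha_forward=0.0, alpha_reverse=0.1):
    """Sketch: augment a self-play alignment loss with KL regularization
    toward a frozen reference policy.

    policy_logprobs, ref_logprobs: (N, V) per-token log-probabilities of the
    current policy and the reference model over the vocabulary.
    self_play_loss: scalar loss from the underlying self-play objective
    (e.g. an SPPO-style objective), assumed given.
    """
    policy_probs = policy_logprobs.exp()
    ref_probs = ref_logprobs.exp()
    # Reverse KL: KL(pi || pi_ref), penalizes mass the reference assigns little probability to.
    reverse_kl = (policy_probs * (policy_logprobs - ref_logprobs)).sum(-1).mean()
    # Forward KL: KL(pi_ref || pi), encourages covering the reference distribution.
    forward_kl = (ref_probs * (ref_logprobs - policy_logprobs)).sum(-1).mean()
    # Linear combination of the two regularizers added to the self-play loss.
    return self_play_loss + alpha_forward * forward_kl + alpha_reverse * reverse_kl
```

Setting `alpha_forward` or `alpha_reverse` to zero recovers a single-regularizer variant, and setting both to zero recovers the unregularized self-play loss.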
Creating Pro-Level AI for Real-Time Fighting Game with Deep Reinforcement Learning
Oh, Inseok, Rho, Seungeun, Moon, Sangbin, Son, Seongho, Lee, Hyoil, Chung, Jinyun
Reinforcement learning combined with deep neural networks has recently performed remarkably well in many game genres. It has surpassed human-level performance in fixed game environments and turn-based two-player board games. However, to the best of our knowledge, no research has shown results surpassing human level in modern, complex fighting games. This is due to the inherent difficulties of such games, including vast action spaces, real-time constraints, and the performance generalization required against various opponents. We overcame these challenges and built 1v1 battle AI agents for the commercial game "Blade & Soul". The trained agents competed against five professional gamers and achieved a 62% win rate. This paper presents a practical reinforcement learning method including a novel self-play curriculum and data skipping techniques. Through the curriculum, three agents with different styles are created by reward shaping and are trained against each other for robust performance. Additionally, this paper proposes data skipping techniques that increase data efficiency and facilitate exploration in vast spaces.
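The abstract does not spell out the data-skipping criterion, so the sketch below only illustrates the general idea of recording transitions at decision points and accumulating reward over skipped frames; the `env.can_act()` flag, the `None` no-op action, and the Gym-style interface are assumptions for illustration, not the paper's API.

```python
def collect_decision_transitions(env, policy, max_steps=1000):
    """Sketch of data skipping for real-time control: store transitions only at
    frames where the agent can actually make a decision, and roll the rewards
    from the skipped frames into the preceding transition."""
    transitions = []
    obs = env.reset()
    last_obs, last_action, skipped_reward = None, None, 0.0
    for _ in range(max_steps):
        if env.can_act():                       # hypothetical flag: a decision point
            if last_obs is not None:
                # Close out the previous decision with the reward accumulated since.
                transitions.append((last_obs, last_action, skipped_reward, obs))
            action = policy(obs)
            last_obs, last_action, skipped_reward = obs, action, 0.0
        else:
            action = None                       # assumed no-op on uncontrollable frames
        obs, reward, done, _ = env.step(action)
        skipped_reward += reward                # accumulate reward across skipped frames
        if done:
            break
    return transitions
```

Concentrating the replay data on decision points in this way is one plausible reading of how skipping can raise data efficiency in a real-time game loop.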