Tang, Xiaohang
Robust Multi-Objective Controlled Decoding of Large Language Models
Son, Seongho, Bankes, William, Yoon, Sangwoong, Ramesh, Shyam Sundhar, Tang, Xiaohang, Bogunovic, Ilija
Large Language Models (LLMs) require alignment to become useful and safe conversational agents [Rafailov et al., 2023, Azar et al., 2023, Hong et al., 2024, Ethayarajh et al., 2024, Wu et al., 2024]. However, human preferences are diverse and nuanced, leading recent work to frame alignment as a multi-objective problem [Zhao et al., 2023, Shi et al., 2024] over a variety of desirable attributes and alignment objectives, such as helpfulness, safety, honesty, and conciseness. Test-time alignment [Mudgal et al., 2023] enables flexible control over the importance of different objectives at inference time without expensive retraining. This is a useful property, as an LLM's alignment can be tailored to a specific task, prompt, or interaction with users holding diverse preferences [Sorensen et al., 2024b]. Existing methods for multi-objective alignment often formalize this problem through a weight vector that characterizes the relative importance of the objectives at deployment [Shi et al., 2024, Wang et al., 2024b,a, Rame et al., 2024]. In practice, the correct weighting of objectives is often unknown, leading to models that over-optimize specific alignment goals whilst under-prioritizing others. To address this problem, recent work has proposed several solutions, including treating weights as hyperparameters [Shi et al., 2024], learning specific weightings for different groups [Zhao et al.,
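(Illustrative sketch, not the paper's method.) The weight-vector formulation mentioned above can be pictured as re-ranking candidate tokens at decoding time by the base model's log-probability plus a weighted sum of per-objective scores; the function `rerank_candidates`, the scores, and the fixed weights below are hypothetical placeholders.

```python
import numpy as np

def rerank_candidates(base_logprobs: np.ndarray,
                      objective_scores: np.ndarray,
                      weights: np.ndarray,
                      beta: float = 1.0) -> int:
    """Pick the next token by combining the base model's log-probabilities
    with a weighted sum of per-objective scores.

    base_logprobs:    shape (num_candidates,)
    objective_scores: shape (num_candidates, num_objectives)
    weights:          shape (num_objectives,), relative importance of objectives
    """
    combined = base_logprobs + beta * objective_scores @ weights
    return int(np.argmax(combined))

# Toy example: 3 candidate tokens scored on 2 objectives (e.g. helpfulness, safety).
base_logprobs = np.array([-0.2, -1.0, -2.3])
objective_scores = np.array([[0.1, -0.5],
                             [0.8,  0.9],
                             [0.3,  0.2]])
weights = np.array([0.5, 0.5])  # unknown in practice; fixed here for illustration
print(rerank_candidates(base_logprobs, objective_scores, weights))
```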
Game-Theoretic Regularized Self-Play Alignment of Large Language Models
Tang, Xiaohang, Yoon, Sangwoong, Son, Seongho, Yuan, Huizhuo, Gu, Quanquan, Bogunovic, Ilija
Self-play alignment algorithms have been developed as effective methods for fine-tuning large language models (LLMs), formulating preference optimization as a two-player game. However, regularization with respect to the reference policy, which is crucial for mitigating over-optimization, has been insufficiently investigated in self-play alignment. In this paper, we show that suitable regularization can significantly improve unregularized self-play. To study the impact of different regularizations in self-play alignment, we propose Regularized Self-Play Policy Optimization (RSPO), a generalized framework that regularizes self-play by simply adding a chosen regularization term to the loss, while maintaining provable last-iterate convergence to the Nash equilibrium of the corresponding regularized game. Surprisingly, empirical evaluations using the Mistral-7B-Instruct base model reveal that forward KL divergence regularization reduces response length in RSPO, whereas reverse KL divergence markedly improves raw win rates. RSPO with a linear combination of forward and reverse KL divergence regularization substantially increases the length-controlled win rate on AlpacaEval-2, elevating the unregularized self-play alignment method (SPPO) from $28.53\%$ to $35.44\%$. Finally, we show that RSPO also improves response diversity.
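A minimal sketch of a loss of the kind described above, assuming a placeholder value for the unregularized self-play loss and toy token-level log-probabilities; the names `kl`, `regularized_selfplay_loss`, `lam_fwd`, and `lam_rev` are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def kl(p_log: torch.Tensor, q_log: torch.Tensor) -> torch.Tensor:
    """KL(p || q) for log-probability vectors over the vocabulary."""
    p = p_log.exp()
    return (p * (p_log - q_log)).sum(dim=-1).mean()

def regularized_selfplay_loss(selfplay_loss: torch.Tensor,
                              policy_logprobs: torch.Tensor,
                              ref_logprobs: torch.Tensor,
                              lam_fwd: float,
                              lam_rev: float) -> torch.Tensor:
    # Forward KL: KL(pi_ref || pi); reverse KL: KL(pi || pi_ref).
    fwd = kl(ref_logprobs, policy_logprobs)
    rev = kl(policy_logprobs, ref_logprobs)
    return selfplay_loss + lam_fwd * fwd + lam_rev * rev

# Toy usage with random log-probabilities over a 5-token vocabulary.
policy_logprobs = F.log_softmax(torch.randn(4, 5), dim=-1)
ref_logprobs = F.log_softmax(torch.randn(4, 5), dim=-1)
base = torch.tensor(0.7)  # placeholder for the unregularized self-play loss
print(regularized_selfplay_loss(base, policy_logprobs, ref_logprobs, 0.1, 0.1))
```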
Adversarial Robust Decision Transformer: Enhancing Robustness of RvS via Minimax Returns-to-go
Tang, Xiaohang, Marques, Afonso, Kamalaruban, Parameswaran, Bogunovic, Ilija
Decision Transformer (DT), as one of the representative Reinforcement Learning via Supervised Learning (RvS) methods, has achieved strong performance in offline learning tasks by leveraging the powerful Transformer architecture for sequential decision-making. However, in adversarial environments these methods can be non-robust, since the return depends on the strategies of both the decision-maker and the adversary. Training a probabilistic model conditioned on observed returns to predict actions can fail to generalize, because the trajectories that achieve a given return in the dataset may have done so only against a weak, suboptimal behavior adversary. To address this, we propose a worst-case-aware RvS algorithm, the Adversarial Robust Decision Transformer (ARDT), which learns and conditions the policy on in-sample minimax returns-to-go. ARDT aligns the target return with the worst-case return learned through minimax expectile regression, thereby enhancing robustness against powerful test-time adversaries. In experiments on sequential games with full data coverage, ARDT generates a maximin (Nash equilibrium) strategy, the solution with the greatest adversarial robustness. In large-scale sequential games and continuous adversarial RL environments with partial data coverage, ARDT demonstrates significantly superior robustness against powerful test-time adversaries and attains higher worst-case returns than contemporary DT methods.
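The asymmetric expectile loss referred to above can be sketched as follows, in a simplified setting where a vector of predictions is fitted to return samples: a low expectile at adversary steps approximates a minimum (worst case) over adversary actions, while a high expectile at the agent's own steps approximates a maximum. The `expectile_loss` function and the toy targets are illustrative, not the full ARDT training procedure.

```python
import torch

def expectile_loss(pred: torch.Tensor, target: torch.Tensor, tau: float) -> torch.Tensor:
    """Asymmetric squared loss: errors where target > pred are weighted by tau,
    the rest by (1 - tau), so tau near 0 fits a near-minimum of the targets
    and tau near 1 fits a near-maximum."""
    diff = target - pred
    weight = torch.where(diff > 0,
                         torch.full_like(diff, tau),
                         torch.full_like(diff, 1.0 - tau))
    return (weight * diff.pow(2)).mean()

# Toy usage: push a prediction toward the worst case of noisy return samples.
targets = torch.tensor([1.0, 0.2, -0.5, 0.8])
pred = torch.zeros(4, requires_grad=True)
loss = expectile_loss(pred, targets, tau=0.01)  # approximates a min over targets
loss.backward()
```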
Can Word Sense Distribution Detect Semantic Changes of Words?
Tang, Xiaohang, Zhou, Yi, Aida, Taichi, Sen, Procheta, Bollegala, Danushka
Semantic Change Detection (SCD) of words is an important task for various NLP applications that must make time-sensitive predictions. Some words come to be used in novel ways over time to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has changed its meaning between two corpora collected at different time steps, by comparing the sense distributions of that word in each corpus. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of the target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on the SemEval-2020 Task 1 dataset show that word sense distributions can be used to accurately predict semantic changes of words in English, German, Swedish, and Latin.
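The three-step procedure above (annotate occurrences with sense ids, form a sense distribution per corpus, compare the two distributions) can be sketched as below; the hand-written sense ids stand in for the labels produced by the pretrained static sense embeddings, and Jensen-Shannon divergence is only one of the divergence/distance measures considered.

```python
import numpy as np
from collections import Counter

def sense_distribution(sense_ids, sense_vocab):
    """Smoothed, normalized histogram of sense ids for a target word in one corpus."""
    counts = Counter(sense_ids)
    dist = np.array([counts.get(s, 0) for s in sense_vocab], dtype=float)
    dist += 1e-9  # smoothing to avoid zero probabilities
    return dist / dist.sum()

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy usage: sense ids of occurrences of the same word at two time steps.
senses_t1 = ["cell.biology", "cell.biology", "cell.prison"]
senses_t2 = ["cell.phone", "cell.phone", "cell.biology"]
sense_vocab = sorted(set(senses_t1) | set(senses_t2))
score = jensen_shannon(sense_distribution(senses_t1, sense_vocab),
                       sense_distribution(senses_t2, sense_vocab))
print(score)  # higher score -> stronger semantic change signal
```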
Learning Dynamic Contextualised Word Embeddings via Template-based Temporal Adaptation
Tang, Xiaohang, Zhou, Yi, Bollegala, Danushka
Dynamic contextualised word embeddings (DCWEs) represent the temporal semantic variations of words. We propose a method for learning DCWEs by time-adapting a pretrained Masked Language Model (MLM) using time-sensitive templates. Given two snapshots $C_1$ and $C_2$ of a corpus taken respectively at two distinct timestamps $T_1$ and $T_2$, we first propose an unsupervised method to select (a) \emph{pivot} terms related to both $C_1$ and $C_2$, and (b) \emph{anchor} terms that are associated with a specific pivot term in each individual snapshot. We then generate prompts by filling manually compiled templates with the extracted pivot and anchor terms. Moreover, we propose an automatic method to learn time-sensitive templates from $C_1$ and $C_2$ without requiring any human supervision. Next, we adapt the pretrained MLM to $T_2$ by fine-tuning it on the generated prompts. Multiple experiments show that our proposed method reduces the perplexity of test sentences in $C_2$, outperforming the current state-of-the-art.
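A minimal sketch of the template-filling step, assuming toy pivot/anchor terms and templates rather than those extracted or learned in the paper; the prompts generated for $C_2$ would then be used to fine-tune the MLM with a standard masked-language-modelling objective.

```python
# Illustrative templates and pivot/anchor terms (placeholders, not the paper's).
templates = [
    "{pivot} is associated with {anchor}.",
    "{pivot} is commonly used together with {anchor}.",
]

# pivot -> (anchor selected from C1, anchor selected from C2)
pivot_anchors = {
    "streaming": ("radio", "netflix"),
    "vaccine": ("polio", "mrna"),
}

def generate_prompts(snapshot_index: int):
    """Prompts describing snapshot C1 (index 0) or C2 (index 1)."""
    prompts = []
    for pivot, anchors in pivot_anchors.items():
        for template in templates:
            prompts.append(template.format(pivot=pivot, anchor=anchors[snapshot_index]))
    return prompts

# The C2 prompts would be fed to MLM fine-tuning to adapt the model to T2.
for prompt in generate_prompts(snapshot_index=1):
    print(prompt)
```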
Average-Reward Reinforcement Learning with Trust Region Methods
Ma, Xiaoteng, Tang, Xiaohang, Xia, Li, Yang, Jun, Zhao, Qianchuan
Most reinforcement learning algorithms optimize the discounted criterion, which helps accelerate convergence and reduce the variance of estimates. Although the discounted criterion is appropriate for certain tasks such as finance-related problems, many engineering problems treat future rewards equally and prefer a long-run average criterion. In this paper, we study the reinforcement learning problem with the long-run average criterion. Firstly, we develop a unified trust region theory covering both discounted and average criteria; for the average criterion, a novel performance bound within the trust region is derived using Perturbation Analysis (PA) theory. Secondly, we propose a practical algorithm named Average Policy Optimization (APO), which improves value estimation with a novel technique named Average Value Constraint. To the best of our knowledge, this is the first work to study the trust region approach with the average criterion, and it extends the reinforcement learning framework beyond the discounted criterion. Finally, experiments are conducted on MuJoCo continuous control environments. In most tasks, APO outperforms discounted PPO, demonstrating the effectiveness of our approach.
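To make the contrast between the two criteria concrete, the toy sketch below compares a discounted return with the long-run average reward and the differential (average-adjusted) returns that average-reward methods optimize; it only illustrates the criterion and is not the APO algorithm.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Discounted criterion: geometrically down-weights future rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def average_reward(rewards):
    """Long-run average criterion: treats all rewards equally."""
    return float(np.mean(rewards))

def differential_returns(rewards, rho):
    """Accumulate rewards relative to the average reward rho (bias values)."""
    out, acc = [], 0.0
    for r in reversed(rewards):
        acc = (r - rho) + acc
        out.append(acc)
    return list(reversed(out))

rewards = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
rho = average_reward(rewards)
print(discounted_return(rewards), rho, differential_returns(rewards, rho))
```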