Yoon, Sangwoong
Robust Multi-Objective Controlled Decoding of Large Language Models
Son, Seongho, Bankes, William, Yoon, Sangwoong, Ramesh, Shyam Sundhar, Tang, Xiaohang, Bogunovic, Ilija
Large Language Models (LLMs) require alignment to become useful and safe conversational agents [Rafailov et al., 2023, Azar et al., 2023, Hong et al., 2024, Ethayarajh et al., 2024, Wu et al., 2024]. However, human preferences are diverse and nuanced, leading recent work to frame alignment as a multi-objective problem [Zhao et al., 2023, Shi et al., 2024] over a variety of desirable attributes and alignment objectives, for example, helpfulness, safety, honesty, and conciseness. Test-time alignment [Mudgal et al., 2023] enables flexible control over the importance of different objectives at inference time without expensive retraining. This is a useful property as the alignment of an LLM can be varied to address a specific task, prompt, or interaction with a variety of users with diverse preferences [Sorensen et al., 2024b]. Existing methods for multi-objective alignment often formalize this problem through a weight vector that characterizes the relative importance of the objectives at deployment [Shi et al., 2024, Wang et al., 2024b,a, Rame et al., 2024]. In practice, the correct weighting of objectives is often unknown, leading to models that over-optimize specific alignment goals whilst under-prioritizing others. To address this problem, recent work has proposed several solutions, including treating weights as hyperparameters [Shi et al., 2024], learning specific weightings for different groups [Zhao et al.,
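The weight-vector formulation described above can be illustrated with a minimal decoding-time sketch, assuming hypothetical `generate_candidates` and per-objective `reward_fns` callables (not the paper's actual algorithm): candidate continuations are scored by a weighted sum of objective rewards, with the weights setting the relative importance of, e.g., helpfulness versus safety.

```python
# Minimal sketch of weight-vector controlled decoding (illustrative only; the
# candidate generator and per-objective reward functions are hypothetical stand-ins).
from typing import Callable, List, Sequence

def controlled_decode(
    prompt: str,
    generate_candidates: Callable[[str, int], List[str]],  # draws k candidate continuations
    reward_fns: Sequence[Callable[[str, str], float]],     # one scorer per objective
    weights: Sequence[float],                               # relative importance of each objective
    k: int = 8,
) -> str:
    """Return the candidate maximizing the weighted sum of objective rewards."""
    candidates = generate_candidates(prompt, k)

    def score(candidate: str) -> float:
        return sum(w * r(prompt, candidate) for w, r in zip(weights, reward_fns))

    return max(candidates, key=score)
```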
This Is Your Doge, If It Please You: Exploring Deception and Robustness in Mixture of LLMs
Wolf, Lorenz, Yoon, Sangwoong, Bogunovic, Ilija
Mixture of large language model (LLM) Agents (MoA) architectures achieve state-of-the-art performance on prominent benchmarks like AlpacaEval 2.0 by leveraging the collaboration of multiple LLMs at inference time. Despite these successes, an evaluation of the safety and reliability of MoA is missing. We present the first comprehensive study of MoA's robustness against deceptive LLM agents that deliberately provide misleading responses. We examine factors like the propagation of deceptive information, model size, and information availability, and uncover critical vulnerabilities. On AlpacaEval 2.0, the popular LLaMA 3.1-70B model achieves a length-controlled win rate (LC WR) of 49.2% when coupled with a 3-layer MoA (6 LLM agents). However, we demonstrate that introducing only a single carefully instructed deceptive agent into the MoA can reduce performance to 37.9%, effectively nullifying all MoA gains. On QuALITY, a multiple-choice comprehension task, the impact is also severe, with accuracy plummeting by a staggering 48.5%. Inspired in part by the historical Doge of Venice voting process, designed to minimize influence and deception, we propose a range of unsupervised defense mechanisms that recover most of the lost performance.
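The layered MoA setup evaluated above can be summarized with a schematic pipeline, assuming a hypothetical `query_llm(model, prompt)` helper rather than any specific serving API: each layer's agents answer conditioned on the previous layer's responses, and a final aggregator synthesizes the last layer's answers, which is also how a single deceptive agent's output can propagate.

```python
# Schematic Mixture-of-Agents (MoA) pipeline (illustrative; `query_llm` is a hypothetical stand-in).
from typing import Callable, List

def moa_respond(
    prompt: str,
    query_llm: Callable[[str, str], str],  # (model_name, full_prompt) -> response text
    layers: List[List[str]],               # e.g., 3 layers with several agent model names each
    aggregator: str,                       # model name used for the final synthesis
) -> str:
    previous: List[str] = []
    for agents in layers:
        context = "\n\n".join(previous)
        previous = [
            query_llm(agent, f"{prompt}\n\nReference answers:\n{context}")
            for agent in agents
        ]
    # A single deceptive agent in any layer poisons `previous` and can steer the final aggregation.
    return query_llm(aggregator, f"{prompt}\n\nSynthesize a final answer from:\n" + "\n\n".join(previous))
```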
Value Gradient Sampler: Sampling as Sequential Decision Making
Yoon, Sangwoong, Hwang, Himchan, Jeong, Hyeokju, Shin, Dong Kyu, Park, Che-Sang, Kweon, Sehee, Park, Frank Chongwoo
We propose the Value Gradient Sampler (VGS), a trainable sampler based on the interpretation of sampling as discrete-time sequential decision-making. VGS generates samples from a given unnormalized density (i.e., energy) by drifting and diffusing randomly initialized particles. In VGS, finding the optimal drift is equivalent to solving an optimal control problem whose cost is an upper bound on the KL divergence between the target density and the samples. We employ value-based dynamic programming to solve this optimal control problem, which yields the gradient of the value function as the optimal drift vector. The connection to sequential decision-making allows VGS to leverage extensively studied techniques from reinforcement learning, making VGS a fast, adaptive, and accurate sampler that achieves competitive results on various sampling benchmarks. Furthermore, VGS can replace MCMC in contrastive divergence training of energy-based models. We demonstrate the effectiveness of VGS in training accurate energy-based models in industrial anomaly detection applications.
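A minimal sketch of one VGS-style sampling step, assuming a hypothetical trained `value_fn(x, t)` network (this is not the paper's implementation): particles drift along the gradient of the value function and are then diffused with Gaussian noise.

```python
# Drift-and-diffuse step guided by a value gradient (illustrative sketch).
import torch

def value_gradient_step(x, t, value_fn, step_size=0.1, noise_scale=0.05):
    """Drift particles using the value gradient, then add Gaussian noise."""
    x = x.detach().requires_grad_(True)
    v = value_fn(x, t).sum()
    grad = torch.autograd.grad(v, x)[0]   # gradient of the value w.r.t. the particles
    drift = -step_size * grad             # drift follows the value gradient (sign/scale are assumptions here)
    return x.detach() + drift + noise_scale * torch.randn_like(x)

# Usage sketch: start from random particles and apply T steps.
# particles = torch.randn(1024, 2)
# for t in reversed(range(T)):
#     particles = value_gradient_step(particles, t, value_fn)
```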
Game-Theoretic Regularized Self-Play Alignment of Large Language Models
Tang, Xiaohang, Yoon, Sangwoong, Son, Seongho, Yuan, Huizhuo, Gu, Quanquan, Bogunovic, Ilija
Self-play alignment algorithms have been developed as effective methods for fine-tuning large language models (LLMs), formulating preference optimization as a two-player game. However, regularization with respect to the reference policy, which is crucial for mitigating over-optimization, has been insufficiently investigated in self-play alignment. In this paper, we show that appropriate regularization can significantly improve unregularized self-play. To study the impact of different regularizers in self-play alignment, we propose Regularized Self-Play Policy Optimization (RSPO), a generalized framework that regularizes self-play by simply adding a chosen regularization term to the loss while maintaining provable last-iterate convergence to the Nash equilibrium of the corresponding regularized game. Surprisingly, empirical evaluations using the Mistral-7B-Instruct base model reveal that forward KL divergence regularization reduces response length in RSPO, whereas reverse KL divergence markedly improves raw win rates. RSPO with a linear combination of forward and reverse KL divergence regularization substantially increases the length-controlled win rate on AlpacaEval-2, elevating the unregularized self-play alignment method (SPPO) from 28.53% to 35.44%. Finally, we show that RSPO also improves response diversity.
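The role of the regularizer can be made concrete with a small loss-composition sketch, assuming sequence-level log-probabilities are available for responses sampled from both the current policy and the reference policy (simple Monte Carlo KL estimates; not the exact RSPO objective).

```python
# Sketch: add forward/reverse KL regularizers to a self-play loss (illustrative only).

def regularized_selfplay_loss(
    selfplay_loss,          # base self-play objective (e.g., an SPPO-style loss), scalar tensor
    logp_theta_on_theta,    # log pi_theta(y|x) for y sampled from pi_theta
    logp_ref_on_theta,      # log pi_ref(y|x)   for y sampled from pi_theta
    logp_theta_on_ref,      # log pi_theta(y|x) for y sampled from pi_ref
    logp_ref_on_ref,        # log pi_ref(y|x)   for y sampled from pi_ref
    alpha_reverse=0.1,
    alpha_forward=0.1,
):
    reverse_kl = (logp_theta_on_theta - logp_ref_on_theta).mean()  # estimate of KL(pi_theta || pi_ref)
    forward_kl = (logp_ref_on_ref - logp_theta_on_ref).mean()      # estimate of KL(pi_ref || pi_theta)
    return selfplay_loss + alpha_reverse * reverse_kl + alpha_forward * forward_kl
```

A linear combination of the two penalties, as in the AlpacaEval-2 result above, simply corresponds to choosing both coefficients to be nonzero.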
Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models
Yoon, Sangwoong, Hwang, Himchan, Kwon, Dohyun, Noh, Yung-Kyun, Park, Frank C.
We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. Since we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance.
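The minimax structure of DxMI can be outlined as an alternating update between the two models, assuming hypothetical `ebm(x)` (energy), `diffusion.sample`, and `diffusion.sample_with_logprob` interfaces; the losses below are simplified stand-ins rather than the paper's exact objectives.

```python
# Schematic alternating update for joint diffusion/EBM training in a minimax IRL setup
# (illustrative; interfaces and loss terms are simplified assumptions).

def dxmi_style_step(ebm, diffusion, data_batch, ebm_opt, diff_opt, ent_coef=0.01):
    # (1) EBM update: lower energy on data, raise it on diffusion samples (energy ~ negative reward).
    fake = diffusion.sample(len(data_batch)).detach()
    ebm_loss = ebm(data_batch).mean() - ebm(fake).mean()
    ebm_opt.zero_grad(); ebm_loss.backward(); ebm_opt.step()

    # (2) Diffusion (policy) update: produce low-energy samples while keeping entropy high.
    samples, logp = diffusion.sample_with_logprob(len(data_batch))
    diff_loss = ebm(samples).mean() + ent_coef * logp.mean()   # E[log p] acts as a negative-entropy surrogate
    diff_opt.zero_grad(); diff_loss.backward(); diff_opt.step()
```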
Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning
Yoon, Sangwoong, Kwon, Dohyun, Hwang, Himchan, Noh, Yung-Kyun, Park, Frank C.
We propose Generalized Contrastive Divergence (GCD), an objective for jointly training an energy-based model (EBM) and a diffusion model. In GCD, the joint training of the EBM and the diffusion model is formulated as a minimax problem, which reaches an equilibrium when both models converge to the data distribution. Minimax learning with GCD bears an interesting equivalence to inverse reinforcement learning, where the energy corresponds to a negative reward, the diffusion model is a policy, and the real data are expert demonstrations. We present preliminary yet promising results showing that joint training is beneficial for both the EBM and the diffusion model. GCD enables EBM training without MCMC while improving the sample quality of the diffusion model.
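As a rough guide, the minimax structure with energy as a negative reward can be written schematically as below, assuming a maximum-entropy formulation; this is an illustrative form, and the order of min/max, signs, and regularization details may differ from the exact GCD objective.

```latex
% Schematic maximum-entropy IRL-style minimax with energy E as negative reward
% (illustrative; not necessarily the exact GCD objective).
\min_{\pi}\ \max_{E}\;
\Big\{ \mathbb{E}_{x \sim \pi}\bigl[E(x)\bigr]
     - \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[E(x)\bigr] \Big\}
\;-\; \mathcal{H}(\pi)
```

Heuristically, the energy player separates data from samples (a contrastive-divergence-like term), while the entropy-regularized sampler is pushed toward the Gibbs density of $E$; the equilibrium statement in the abstract corresponds to both matching the data distribution.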
Variational Weighting for Kernel Density Ratios
Yoon, Sangwoong, Park, Frank C., Yun, Gunsu S, Kim, Iljung, Noh, Yung-Kyun
Kernel density estimation (KDE) is integral to a range of generative and discriminative tasks in machine learning. Drawing upon tools from the multidimensional calculus of variations, we derive an optimal weight function that reduces bias in standard kernel density estimates for density ratios, leading to improved estimates of prediction posteriors and information-theoretic measures. In the process, we shed light on some fundamental aspects of density estimation, particularly from the perspective of algorithms that employ KDEs as their main building blocks.
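For context, a plain (unweighted) kernel density-ratio estimate, which is the baseline that the proposed variational weighting improves on, can be sketched with SciPy's Gaussian KDE as follows; this is not the paper's weighted estimator.

```python
# Baseline kernel density-ratio estimate with standard, unweighted KDEs
# (illustrative; the paper's optimal weight function is not implemented here).
import numpy as np
from scipy.stats import gaussian_kde

def kde_density_ratio(x_num, x_den, x_query, eps=1e-12):
    """Estimate p_num(x) / p_den(x) at query points from two i.i.d. samples.
    Inputs have shape (n_samples, d); gaussian_kde expects (d, n_samples)."""
    p_num = gaussian_kde(x_num.T)(x_query.T)
    p_den = gaussian_kde(x_den.T)(x_query.T)
    return p_num / (p_den + eps)

# Example: ratio between two shifted 2-D Gaussians, evaluated at a few points.
# x_num = np.random.randn(500, 2); x_den = np.random.randn(500, 2) + 1.0
# print(kde_density_ratio(x_num, x_den, x_num[:5]))
```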
Energy-Based Models for Anomaly Detection: A Manifold Diffusion Recovery Approach
Yoon, Sangwoong, Jin, Young-Uk, Noh, Yung-Kyun, Park, Frank C.
We present a new method for training energy-based models (EBMs) for anomaly detection that leverages low-dimensional structures within data. The proposed algorithm, Manifold Projection-Diffusion Recovery (MPDR), first perturbs a data point along a low-dimensional manifold that approximates the training dataset. Then, an EBM is trained to maximize the probability of recovering the original data point. The training involves the generation of negative samples via MCMC, as in conventional EBM training, but from a different distribution concentrated near the manifold. The resulting near-manifold negative samples are highly informative, reflecting relevant modes of variation in the data. The energy function learned by MPDR effectively captures accurate boundaries of the training data distribution and excels at detecting out-of-distribution samples. Experimental results show that MPDR exhibits strong performance across various anomaly detection tasks involving diverse data types, such as images, vectors, and acoustic signals.
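A rough sketch of the near-manifold perturbation idea, assuming a hypothetical pretrained `encoder`/`decoder` pair standing in for the low-dimensional manifold model (not the paper's exact projection-diffusion operator):

```python
# Near-manifold perturbation via a learned latent space (illustrative stand-in for the
# Manifold Projection-Diffusion step; encoder/decoder are hypothetical).
import torch

def manifold_perturb(x, encoder, decoder, noise_scale=0.3):
    """Project x onto a learned low-dimensional manifold, diffuse in latent space, decode back."""
    with torch.no_grad():
        z = encoder(x)                                   # projection onto the manifold
        z_noisy = z + noise_scale * torch.randn_like(z)  # diffusion near/along the manifold
        return decoder(z_noisy)                          # near-manifold perturbed point

# The EBM is then trained to recover x from such perturbed points, with MCMC negative
# sampling started near the manifold rather than from an arbitrary proposal distribution.
```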
Autoencoding Under Normalization Constraints
Yoon, Sangwoong, Noh, Yung-Kyun, Park, Frank Chongwoo
Likelihood is a standard estimate for outlier detection. The specific role of the normalization constraint is to ensure that the out-of-distribution (OOD) regime has a small likelihood when the model is trained using maximum likelihood. Because autoencoders do not possess such a normalization process, they often fail to recognize outliers even when they are obviously OOD. We propose the Normalized Autoencoder (NAE), a normalized probabilistic model constructed from an autoencoder. The probability density of NAE is defined using the reconstruction error of an autoencoder, in contrast to how the energy is defined in a conventional energy-based model. In our model, normalization is enforced by suppressing the reconstruction of negative samples, significantly improving outlier detection performance. Our experimental results confirm the efficacy of NAE, both in detecting outliers and in generating in-distribution samples.
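A minimal sketch of the reconstruction-error energy underlying this construction, assuming a generic autoencoder (`encoder`, `decoder`); the normalization mechanism via suppressed negative-sample reconstruction is not shown.

```python
# Reconstruction error as an energy; unnormalized density ~ exp(-energy) (illustrative sketch).

def reconstruction_energy(x, encoder, decoder):
    """Energy = squared reconstruction error per example (summed over non-batch dimensions)."""
    recon = decoder(encoder(x))
    return ((x - recon) ** 2).flatten(start_dim=1).sum(dim=1)

def unnormalized_log_density(x, encoder, decoder):
    return -reconstruction_energy(x, encoder, decoder)

# Outlier scoring: higher energy (worse reconstruction) suggests an out-of-distribution input;
# NAE additionally enforces normalization by raising the energy of negative samples during training.
```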