WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao

Neural Information Processing Systems

Recent work demonstrates that post-training large language models with open-domain instruction-following data can achieve colossal success. Simultaneously, the human-judged Chatbot Arena has emerged as one of the most reliable benchmarks for model evaluation and development guidance. However, manually curating high-quality training data and relying on online human evaluation platforms are both expensive and limited in scale. To mitigate the manual and temporal costs associated with post-training, this paper introduces a Simulated Chatbot Arena named WizardArena, which is fully based on and powered by open-source LLMs. In the evaluation scenario, WizardArena efficiently predicts accurate performance rankings among different models based on an offline test set. In the training scenario, we propose Arena Learning, an offline strategy that simulates iterative arena battles among various state-of-the-art models on large-scale instruction data; AI-driven annotations evaluate the battle results, which are then used to continuously strengthen the target model's weaknesses through both supervised fine-tuning and reinforcement learning. Experimental results demonstrate that WizardArena aligns closely with online human arena rankings, and that models trained on extensive offline battle data through Arena Learning show marked performance improvements across the SFT, DPO, and PPO stages.
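
As a rough illustration of the battle-simulation idea described above, the following Python sketch pits a target model against rival models on a pool of instructions and turns a judge's verdicts into preference pairs. The model and judge callables are hypothetical stand-ins, and the pairing logic is far simpler than the paper's full Arena Learning pipeline.

```python
# Minimal sketch of an offline "arena battle" loop in the spirit of Arena Learning.
# The model/judge callables below are hypothetical stand-ins, not the paper's actual code.
import random
from dataclasses import dataclass

@dataclass
class BattleRecord:
    instruction: str
    chosen: str      # response preferred by the judge
    rejected: str    # response dispreferred by the judge

def simulate_battles(instructions, target_model, rival_models, judge):
    """Pit the target model against each rival on every instruction.

    `target_model`, each rival, and `judge` are assumed to be callables:
      model(instruction) -> response string
      judge(instruction, resp_a, resp_b) -> "a", "b", or "tie"
    The resulting (chosen, rejected) pairs can be used for SFT on the winning
    responses or as preference data for DPO/PPO.
    """
    records = []
    for instruction in instructions:
        target_resp = target_model(instruction)
        for rival in rival_models:
            rival_resp = rival(instruction)
            verdict = judge(instruction, target_resp, rival_resp)
            if verdict == "b":   # target lost: learn from the stronger response
                records.append(BattleRecord(instruction, rival_resp, target_resp))
            elif verdict == "a": # target won: its response can reinforce itself
                records.append(BattleRecord(instruction, target_resp, rival_resp))
    return records

# Toy usage with dummy models and a random "judge" so the sketch runs end to end.
if __name__ == "__main__":
    dummy_instructions = ["Explain overfitting.", "Write a haiku about rain."]
    target = lambda q: f"[target] answer to: {q}"
    rivals = [lambda q: f"[rival-1] answer to: {q}", lambda q: f"[rival-2] answer to: {q}"]
    judge = lambda q, a, b: random.choice(["a", "b", "tie"])
    for rec in simulate_battles(dummy_instructions, target, rivals, judge):
        print(rec)
```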



Cross-channel Communication Networks

Neural Information Processing Systems

While much progress has been made by making networks deeper, filters at each layer independently generate responses to their input and do not communicate with one another. In this paper, we introduce a novel network unit called the Cross-channel Communication (C3) block, a simple yet effective module that encourages communication across filters within the same layer. The C3 block enables filters to exchange information through a micro neural network, consisting of a feature encoder, a message passer, and a feature decoder, before passing the information to the next layer. With the C3 block, each channel response is modulated by accounting for the responses of other channels. Extensive experiments on multiple vision tasks show that the proposed block improves different CNN architectures and learns more diverse and complementary representations.
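
The PyTorch sketch below illustrates one plausible reading of such a block: each channel's spatial response is encoded to a compact vector, channels exchange messages (here via self-attention, an assumption on our part), and a decoder maps the result back to a residual modulation. Layer sizes and the fixed spatial resolution are illustrative, not the paper's exact design.

```python
# Minimal PyTorch sketch of a cross-channel communication (C3) style block.
# The self-attention message passer and fixed spatial size are illustrative assumptions.
import torch
import torch.nn as nn

class C3Block(nn.Module):
    def __init__(self, spatial_size: int, hidden_dim: int = 64):
        super().__init__()
        # Feature encoder: compress each channel's spatial response to a small vector.
        self.encoder = nn.Sequential(nn.Linear(spatial_size, hidden_dim), nn.ReLU())
        # Message passer: let every channel attend to the others.
        self.message = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        # Feature decoder: map the communicated vector back to a spatial response.
        self.decoder = nn.Linear(hidden_dim, spatial_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.reshape(n, c, h * w)              # treat each channel as a "node"
        z = self.encoder(flat)                     # (n, c, hidden_dim)
        msg, _ = self.message(z, z, z)             # channels exchange information
        out = self.decoder(msg).reshape(n, c, h, w)  # back to a spatial modulation
        return x + out                             # residual: modulate each channel

if __name__ == "__main__":
    block = C3Block(spatial_size=8 * 8)
    feats = torch.randn(2, 32, 8, 8)
    print(block(feats).shape)  # torch.Size([2, 32, 8, 8])
```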


d840cc5d906c3e9c84374c8919d2074e-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their comments. All reviewers found the paper clearly written and easy to read; we address their concerns below. We will include these statistics in the paper. Taken together, they suggest that the improvement is not simply due to the increased model size.


Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation
Adam Fisch, Joshua Maynez, R. Alex Hofer

Neural Information Processing Systems

Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. PPI achieves this by combining small amounts of human-labeled data with larger amounts of data labeled by a reasonably accurate, but potentially biased, automatic system, in a way that results in tighter confidence intervals for certain parameters of interest (e.g., the mean performance of a language model). In this paper, we propose Stratified Prediction-Powered Inference (StratPPI), a method that considerably improves the basic PPI estimates by employing simple data stratification strategies. Without making any assumptions on the underlying automatic labeling system or data distribution, we derive an algorithm, based on stratified sampling, for computing provably valid confidence intervals for population parameters (such as averages). In particular, we show both theoretically and empirically that, with appropriate choices of stratification and sample allocation, our approach can provide substantially tighter confidence intervals than unstratified approaches. Specifically, StratPPI is expected to improve in cases where the autorater's performance varies across different conditional distributions of the target data.
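
To make the stratified estimator concrete, here is a simplified numerical sketch for estimating a mean: within each stratum it combines autorater scores with a human-label rectifier (the basic PPI recipe) and then pools strata with fixed weights, using a normal approximation for the interval. The stratum weights, allocation, and interval construction are illustrative assumptions rather than the exact StratPPI procedure.

```python
# Simplified sketch of a stratified prediction-powered estimate of a mean.
# Fixed stratum weights and normal-approximation intervals are illustrative assumptions.
import numpy as np
from scipy import stats

def stratified_ppi_mean(strata, alpha=0.05):
    """Each stratum is a dict with:
      'weight'      : population proportion of the stratum,
      'auto_only'   : autorater scores on unlabeled examples (large),
      'auto_labeled': autorater scores on human-labeled examples (small),
      'human'       : human labels on those same examples.
    Returns (point_estimate, (ci_low, ci_high)).
    """
    estimate, variance = 0.0, 0.0
    for s in strata:
        auto_only = np.asarray(s["auto_only"], dtype=float)
        rectifier = np.asarray(s["human"], dtype=float) - np.asarray(s["auto_labeled"], dtype=float)
        # PPI within the stratum: autorater mean plus a bias-correcting rectifier.
        theta_k = auto_only.mean() + rectifier.mean()
        var_k = auto_only.var(ddof=1) / len(auto_only) + rectifier.var(ddof=1) / len(rectifier)
        estimate += s["weight"] * theta_k
        variance += s["weight"] ** 2 * var_k
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(variance)
    return estimate, (estimate - half, estimate + half)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    strata = []
    for weight, bias in [(0.6, 0.05), (0.4, -0.10)]:  # autorater bias differs by stratum
        truth = rng.binomial(1, 0.7, size=2000).astype(float)
        auto = np.clip(truth + bias + rng.normal(0, 0.1, size=2000), 0, 1)
        labeled_idx = rng.choice(2000, size=100, replace=False)
        strata.append({"weight": weight, "auto_only": auto,
                       "auto_labeled": auto[labeled_idx], "human": truth[labeled_idx]})
    print(stratified_ppi_mean(strata))
```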


Non-Euclidean Mixture Model for Social Network Embedding

Neural Information Processing Systems

It is largely agreed that social network links are formed due to either homophily or social influence. Inspired by this, we aim to understand link generation by providing a novel embedding-based graph formation model. Unlike existing graph representation learning, where link generation probabilities are defined as a simple function of the corresponding node embeddings, we model link generation as a mixture of the two factors. In addition, we model the homophily factor in spherical space and the influence factor in hyperbolic space to accommodate the facts that (1) homophily results in cycles and (2) influence results in hierarchies in networks. We also design a special projection to align these two spaces. We call this model the Non-Euclidean Mixture Model (NMM).
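
A toy sketch of the mixture idea, assuming exponential distance-to-probability mappings and a fixed mixture weight (neither of which is taken from the paper): the homophily term is scored by angular distance on the sphere, and the influence term by geodesic distance in the Poincare ball.

```python
# Toy numpy sketch of a mixture-of-geometries link probability in the spirit of NMM.
# The distance-to-probability mappings and mixture weight are illustrative assumptions.
import numpy as np

def spherical_distance(u, v):
    # Angle between unit vectors on the sphere (homophily embeddings).
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

def poincare_distance(x, y):
    # Geodesic distance in the Poincare ball (influence embeddings).
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def link_probability(u_i, u_j, x_i, x_j, mix=0.5, temp=1.0):
    # Closer pairs (in either geometry) get higher link probability; the mixture
    # weight `mix` trades off homophily against social influence.
    p_homophily = np.exp(-temp * spherical_distance(u_i, u_j))
    p_influence = np.exp(-temp * poincare_distance(x_i, x_j))
    return mix * p_homophily + (1.0 - mix) * p_influence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u_i, u_j = rng.normal(size=8), rng.normal(size=8)                            # spherical factor
    x_i, x_j = rng.uniform(-0.3, 0.3, size=8), rng.uniform(-0.3, 0.3, size=8)    # inside the ball
    print(link_probability(u_i, u_j, x_i, x_j))
```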



Unsupervised learning of object structure and dynamics from videos

Neural Information Processing Systems

Extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning. To address this challenge, we adopt a keypoint-based image representation and learn a stochastic dynamics model of the keypoints. Future frames are reconstructed from the keypoints and a reference frame. By modeling dynamics in the keypoint coordinate space, we achieve stable learning and avoid compounding errors in pixel space. Our method improves upon unstructured representations both for pixel-level video prediction and for downstream tasks requiring object-level understanding of motion dynamics. We evaluate our model on diverse datasets, including a multi-agent sports dataset and the Human3.6M dataset.
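
The sketch below illustrates the keypoint-bottleneck pipeline in PyTorch: a spatial-softmax detector extracts keypoint coordinates from each frame, and a recurrent model rolls the coordinates forward in time. The simple deterministic GRU stands in for the paper's stochastic dynamics model, the image decoder is omitted, and all sizes are illustrative.

```python
# Compact PyTorch sketch of detecting keypoints per frame and modeling their dynamics.
# The GRU replaces the paper's stochastic VRNN; sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointDetector(nn.Module):
    def __init__(self, num_keypoints: int = 8):
        super().__init__()
        self.heatmaps = nn.Conv2d(3, num_keypoints, kernel_size=3, padding=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> keypoints: (batch, K, 2) in [-1, 1] coordinates.
        maps = self.heatmaps(frames)
        b, k, h, w = maps.shape
        probs = F.softmax(maps.view(b, k, h * w), dim=-1).view(b, k, h, w)
        ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
        xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
        y = (probs * ys).sum(dim=(2, 3))           # expected row coordinate per keypoint
        x = (probs * xs).sum(dim=(2, 3))           # expected column coordinate per keypoint
        return torch.stack([x, y], dim=-1)

class KeypointDynamics(nn.Module):
    def __init__(self, num_keypoints: int = 8, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(num_keypoints * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_keypoints * 2)

    def forward(self, keypoint_seq: torch.Tensor) -> torch.Tensor:
        # keypoint_seq: (batch, T, K, 2) -> predicted next-step keypoints, same shape.
        b, t, k, _ = keypoint_seq.shape
        hidden_seq, _ = self.rnn(keypoint_seq.view(b, t, k * 2))
        return self.head(hidden_seq).view(b, t, k, 2)

if __name__ == "__main__":
    frames = torch.randn(2 * 5, 3, 32, 32)              # 2 clips of 5 frames
    kps = KeypointDetector()(frames).view(2, 5, 8, 2)   # per-frame keypoints
    preds = KeypointDynamics()(kps)                      # dynamics in keypoint space
    print(kps.shape, preds.shape)
```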


R1, R6: Additional analyses/ablations for L

Neural Information Processing Systems

We thank the reviewers for their thoughtful comments and suggestions. Below, we address the reviewers' comments individually. We will add these analyses to the main text. Keypoints can indeed "jump" between frames, but we show in a new analysis (Fig. D) that the VRNN partially smooths over such jumps: we displaced the location of one keypoint by 0.5 image width in the Jumping thus seems to be a minor issue. R1: What is the size of the feature vector in the CNN-VRNN?


Operator World Models for Reinforcement Learning

Neural Information Processing Systems

Policy Mirror Descent (PMD) is a powerful and theoretically sound methodology for sequential decision-making. However, it is not directly applicable to Reinforcement Learning (RL) due to the inaccessibility of explicit action-value functions. We address this challenge by introducing a novel approach that learns a world model of the environment using conditional mean embeddings. Leveraging tools from operator theory, we derive a closed-form expression for the action-value function in terms of the world model via simple matrix operations. Combining these estimators with PMD leads to POWR, a new RL algorithm for which we prove convergence rates to the global optimum.
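
As a concrete, heavily simplified analogue of this recipe, the tabular sketch below fits a maximum-likelihood world model from logged transitions (standing in for conditional mean embeddings), recovers the action-value function in closed form with matrix algebra, and applies multiplicative-weights (mirror descent) policy updates. The environment, step size, and model class are all illustrative assumptions, not the POWR algorithm itself.

```python
# Toy tabular sketch: learn a world model, get Q in closed form, run mirror descent.
# The tabular maximum-likelihood model stands in for conditional mean embeddings.
import numpy as np

def fit_world_model(transitions, n_states, n_actions):
    """transitions: iterable of (s, a, r, s_next). Returns P[s, a, s'] and R[s, a]."""
    counts = np.full((n_states, n_actions, n_states), 1e-6)  # smoothed transition counts
    rewards = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1.0
        rewards[s, a] += r
        visits[s, a] += 1.0
    P = counts / counts.sum(axis=-1, keepdims=True)
    R = rewards / np.maximum(visits, 1.0)
    return P, R

def action_values(P, R, policy, gamma=0.95):
    """Closed-form Q^pi from the model: solve (I - gamma * P_pi) Q = R."""
    n_states, n_actions, _ = P.shape
    P_pi = np.einsum("sap,pb->sapb", P, policy).reshape(n_states * n_actions, -1)
    Q = np.linalg.solve(np.eye(n_states * n_actions) - gamma * P_pi, R.reshape(-1))
    return Q.reshape(n_states, n_actions)

def mirror_descent_step(policy, Q, eta=1.0):
    """Multiplicative-weights update: pi_new(a|s) propto pi(a|s) * exp(eta * Q(s, a))."""
    logits = np.log(policy) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    data = [(rng.integers(n_states), rng.integers(n_actions),
             rng.normal(), rng.integers(n_states)) for _ in range(2000)]
    P, R = fit_world_model(data, n_states, n_actions)
    policy = np.full((n_states, n_actions), 1.0 / n_actions)
    for _ in range(10):
        policy = mirror_descent_step(policy, action_values(P, R, policy))
    print(np.round(policy, 3))
```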