Cai, Zhongze
Towards Better Understanding of In-Context Learning Ability from In-Context Uncertainty Quantification
Liu, Shang, Cai, Zhongze, Chen, Guanting, Li, Xiaocheng
Predicting simple function classes has been widely used as a testbed for developing theory and understanding of the trained Transformer's in-context learning (ICL) ability. In this paper, we revisit the training of Transformers on linear regression tasks and, unlike the existing literature, consider a bi-objective prediction task of predicting both the conditional expectation $\mathbb{E}[Y|X]$ and the conditional variance Var$(Y|X)$. This additional uncertainty quantification objective provides a handle to (i) better design out-of-distribution experiments that distinguish ICL from in-weight learning (IWL) and (ii) better separate the algorithms that use the prior information of the training distribution from those that do not. Theoretically, we show that the trained Transformer reaches near Bayes-optimum, suggesting the usage of the information of the training distribution, and our analysis extends to other cases as well. Specifically, with the Transformer's context window $S$, we prove a generalization bound of $\tilde{\mathcal{O}}(\sqrt{\min\{S, T\}/(n T)})$ over $n$ tasks with sequences of length $T$, sharpening previous results of $\tilde{\mathcal{O}}(\sqrt{1/n})$. Empirically, we illustrate that while the trained Transformer behaves as the Bayes-optimal solution in distribution, as a natural consequence of supervised training, it does not necessarily perform Bayesian inference when facing task shifts, in contrast to the \textit{equivalence} between the two proposed in much of the existing literature. We also demonstrate the trained Transformer's ICL ability under covariate shift and prompt-length shift and interpret it as generalization over a meta distribution.
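The bi-objective setup above admits a simple concrete form. Below is a minimal sketch (not the paper's code) of the in-context linear-regression data generation together with a Gaussian negative log-likelihood loss that fits both the conditional mean and the conditional variance; the names make_task and bi_objective_loss, and the hyperparameters, are illustrative assumptions only.

    # Minimal sketch (illustrative, not the paper's code): in-context linear-regression
    # data with a bi-objective target of conditional mean E[Y|X] and variance Var(Y|X).
    import torch

    def make_task(T=40, d=8, noise_std=0.5):
        """Sample one linear-regression task: w ~ N(0, I), y = <w, x> + noise."""
        w = torch.randn(d)
        X = torch.randn(T, d)
        y = X @ w + noise_std * torch.randn(T)
        return X, y

    def bi_objective_loss(mean_pred, logvar_pred, y_true):
        """Gaussian negative log-likelihood (up to constants): fits both mean and variance."""
        return (logvar_pred + (y_true - mean_pred) ** 2 / logvar_pred.exp()).mean()

    # Usage: a Transformer-style model would read the prompt (x_1, y_1, ..., x_t) and emit
    # (mean_pred, logvar_pred) for y_t at every position t; the loss is then averaged over
    # positions and over a batch of independently sampled tasks.
    X, y = make_task()
    mean_pred, logvar_pred = torch.zeros_like(y), torch.zeros_like(y)  # placeholder predictions
    print(bi_objective_loss(mean_pred, logvar_pred, y))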
Towards Better Statistical Understanding of Watermarking LLMs
Cai, Zhongze, Liu, Shang, Wang, Hanzhao, Zhong, Huaiyang, Li, Xiaocheng
As the ability of large language models (LLMs) evolves rapidly, their applications have gradually touched every corner of our daily lives. However, these fast-developing tools raise concerns about the abuse of LLMs. The misuse of LLMs could harm human society in ways such as launching bots on social media, creating fake news and content, and cheating on school essays. The overwhelming amount of synthetic data created by LLMs rather than real humans also hampers efforts to improve the LLMs themselves: the synthetic data pollutes the data pool and should be detected and removed to create a high-quality dataset before training (Radford et al., 2023). Numerous attempts have been made to make such detection possible, and they can mainly be classified into two categories: post hoc detection, which does not modify the language model, and watermarking, which changes the output to encode information in the content. Post hoc detection aims to train models that directly label the texts without monitoring the generation process. Although post hoc detection does not require access to modify the output of LLMs, it does make use of statistical features such as the internal activations of the LLMs. For example, when inspected by another LLM, the statistical properties of machine-generated texts deviate from those of human-generated ones in aspects such as the distribution of token log-likelihoods (Gehrmann et al., 2019; Ippolito et al., 2019; Zellers et al., 2019; Solaiman et al., 2019; Tian, 2023; Mitchell et al., 2023). However, post hoc methods usually rely on the fundamental assumption that machine-generated texts statistically deviate from human-generated texts, an assumption that can be challenged in two ways.
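As a toy illustration of the post hoc idea described above (not a method from the paper), one can threshold the average per-token log-likelihood that a scoring LM assigns to a text; the function name, the threshold, and the precomputed log-probabilities below are all assumptions made only for the sake of the sketch.

    # Illustrative sketch only: a toy post hoc detector that flags text whose average
    # per-token log-likelihood under some scoring LM is suspiciously high.
    import numpy as np

    def detect_machine_generated(token_logprobs, threshold=-2.5):
        """Return True if the mean token log-likelihood exceeds a calibrated threshold.

        Machine-generated text often concentrates on high-likelihood tokens, so its average
        log-probability tends to sit above that of human text under the same scoring LM.
        """
        return float(np.mean(token_logprobs)) > threshold

    # Usage with made-up per-token log-probabilities:
    print(detect_machine_generated([-1.2, -0.8, -1.5, -0.9]))   # flagged as machine-generated
    print(detect_machine_generated([-3.1, -4.2, -2.8, -5.0]))   # treated as human-written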
Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods
Liu, Shang, Cai, Zhongze, Li, Xiaocheng
In this paper, we consider the uncertainty quantification problem for regression models. Specifically, we consider an individual calibration objective for characterizing the quantiles of the prediction model. While such an objective is well motivated by downstream tasks such as the newsvendor cost, the existing methods have been largely heuristic and lack statistical guarantees in terms of individual calibration. We show via simple examples that the existing methods focusing on population-level calibration guarantees, such as average calibration or sharpness, can lead to harmful and unexpected results. We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model and enjoy both computational efficiency and statistical consistency. Our approach enables a better understanding of the possibility of individual calibration, and we establish matching upper and lower bounds for the calibration error of our proposed methods. Technically, our analysis combines a nonparametric analysis with a covering number argument from parametric analysis, which advances the existing theoretical analyses in the literature on nonparametric density estimation and quantile bandit problems. Importantly, the nonparametric perspective sheds new theoretical insight into regression calibration in terms of the curse of dimensionality and reconciles the existing results on the impossibility of individual calibration. To our knowledge, we make the first effort to achieve both individual calibration and a finite-sample guarantee with minimal assumptions, in the spirit of conformal prediction. Numerical experiments show the advantage of such a simple approach under various metrics and also under covariate shift. We hope our work provides a simple benchmark and a theoretical starting point for future research on regression calibration.
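To make the model-agnostic, nonparametric flavor concrete, here is a minimal sketch under assumptions of my own (not necessarily the paper's exact estimator): the conditional quantile of the residual Y - f(X) is estimated locally around each covariate x from its k nearest calibration points, so the calibrated quantile adapts to the individual x.

    # Minimal sketch (assumptions mine): model-agnostic quantile calibration that takes
    # the empirical residual quantile among the k nearest calibration points of x.
    import numpy as np

    def knn_quantile(x, X_cal, residuals, q=0.9, k=50):
        """Estimate the q-quantile of Y - f(X) locally around x via k nearest neighbors."""
        dists = np.linalg.norm(X_cal - x, axis=1)
        nearest = np.argsort(dists)[:k]
        return np.quantile(residuals[nearest], q)

    # Usage: f is any black-box regressor; residuals = y_cal - f(X_cal) on a held-out set.
    rng = np.random.default_rng(0)
    X_cal = rng.normal(size=(1000, 5))
    residuals = rng.normal(scale=1.0 + np.abs(X_cal[:, 0]))  # heteroscedastic noise
    x_new = np.zeros(5)
    upper = knn_quantile(x_new, X_cal, residuals, q=0.9)
    print(f"calibrated 90% residual quantile at x_new: {upper:.2f}")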
A Neural Network Based Choice Model for Assortment Optimization
Wang, Hanzhao, Cai, Zhongze, Li, Xiaocheng, Talluri, Kalyan
Discrete-choice models are used in economics, marketing and revenue management to predict customer purchase probabilities, say as a function of prices and other features of the offered assortment. While they have been shown to be expressive, capturing customer heterogeneity and behaviour, they are also hard to estimate, often relying on many unobservables such as utilities; moreover, they still fail to capture many salient features of customer behaviour. A natural question then, given their success in other contexts, is whether neural networks can eliminate the necessity of carefully building a context-dependent customer behaviour model and hand-coding and tuning the estimation. It is unclear, however, how one would incorporate assortment effects into such a neural network, and how one would optimize the assortment with such a black-box generative model of choice probabilities. In this paper, we first investigate whether a single neural network architecture can predict purchase probabilities for datasets from various contexts, generated under various models and assumptions. Next, we develop an assortment optimization formulation that is solvable by off-the-shelf integer programming solvers. We compare against a variety of benchmark discrete-choice models on simulated as well as real-world datasets, developing training tricks along the way to make the neural network prediction and subsequent optimization robust and comparable in performance to the alternatives.
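For illustration, a minimal sketch of a neural choice model under assumptions of my own (not the paper's architecture): product features are mapped to utilities, and a masked softmax over the offered assortment plus a no-purchase option yields purchase probabilities.

    # Minimal sketch (illustrative assumptions, not the paper's architecture): a neural
    # choice model with a masked softmax over the offered assortment and an outside option.
    import torch
    import torch.nn as nn

    class NeuralChoiceModel(nn.Module):
        def __init__(self, n_features, hidden=32):
            super().__init__()
            self.utility_net = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, product_features, assortment_mask):
            # product_features: (batch, n_products, n_features); assortment_mask: (batch, n_products)
            utilities = self.utility_net(product_features).squeeze(-1)
            utilities = utilities.masked_fill(assortment_mask == 0, float("-inf"))
            no_purchase = torch.zeros(utilities.shape[0], 1)  # outside-option utility fixed to 0
            return torch.softmax(torch.cat([no_purchase, utilities], dim=1), dim=1)

    # Usage: purchase probabilities for 2 customers over 5 products, with different offered sets.
    model = NeuralChoiceModel(n_features=4)
    features = torch.randn(2, 5, 4)
    mask = torch.tensor([[1, 1, 1, 0, 0], [1, 0, 1, 0, 1]])
    print(model(features, mask))  # column 0 is the no-purchase probability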