semiparametric efficiency
Evaluating LLMs When They Do Not Know the Answer: Statistical Evaluation of Mathematical Reasoning via Comparative Signals
Dong, Zihan, Zhang, Zhixian, Zhou, Yang, Jin, Can, Wu, Ruijia, Zhang, Linjun
Evaluating mathematical reasoning in LLMs is constrained by limited benchmark sizes and inherent model stochasticity, yielding high-variance accuracy estimates and unstable rankings across platforms. On difficult problems, an LLM may fail to produce a correct final answer, yet still provide reliable pairwise comparison signals indicating which of two candidate solutions is better. We leverage this observation to design a statistically efficient evaluation framework that combines standard labeled outcomes with pairwise comparison signals obtained by having models judge auxiliary reasoning chains. Treating these comparison signals as control variates, we develop a semiparametric estimator based on the efficient influence function (EIF) for the setting where auxiliary reasoning chains are observed. This yields a one-step estimator that achieves the semiparametric efficiency bound, guarantees strict variance reduction over naive sample averaging, and admits asymptotic normality for principled uncertainty quantification. Across simulations, our one-step estimator substantially improves ranking accuracy, with gains increasing as model output noise grows. Experiments on GPQA Diamond, AIME 2025, and GSM8K further demonstrate more precise performance estimation and more reliable model rankings, especially in small-sample regimes where conventional evaluation is highly unstable.
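To make the control-variate mechanism concrete, here is a minimal sketch, not the paper's one-step EIF estimator: all names are hypothetical, and the known signal mean would in practice come from a large pool of cheap pairwise comparisons.

```python
import numpy as np

def cv_adjusted_mean(y, w, w_mean_known):
    """Control-variate estimate of E[y] from noisy labeled outcomes y and
    correlated auxiliary signals w whose mean is known (or estimated
    precisely from a much larger pool of unlabeled comparisons).

    The adjusted estimator mean(y) - beta * (mean(w) - w_mean_known),
    with beta the OLS slope of y on w, is asymptotically unbiased and
    never has larger asymptotic variance than the naive mean(y); the
    reduction grows with corr(y, w)^2.
    """
    y, w = np.asarray(y, float), np.asarray(w, float)
    beta = np.cov(y, w, ddof=1)[0, 1] / np.var(w, ddof=1)
    return y.mean() - beta * (w.mean() - w_mean_known)

# Toy usage: correctness labels correlated with a comparison signal.
rng = np.random.default_rng(0)
w = rng.normal(size=200)
y = (0.3 * w + rng.normal(scale=0.5, size=200) > 0.0).astype(float)
print(cv_adjusted_mean(y, w, w_mean_known=0.0))
```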
Beyond Demand Estimation: Consumer Surplus Evaluation via Cumulative Propensity Weights
Bian, Zeyu, Biggs, Max, Gao, Ruijiang, Qi, Zhengling
This paper develops a practical framework for using observational data to audit the consumer surplus effects of AI-driven decisions, specifically in targeted pricing and algorithmic lending. Traditional approaches first estimate demand functions and then integrate to compute consumer surplus, but these methods can be challenging to implement in practice due to model misspecification in parametric demand forms and the large data requirements and slow convergence of flexible nonparametric or machine learning approaches. Instead, we exploit the randomness inherent in modern algorithmic pricing, arising from the need to balance exploration and exploitation, and introduce an estimator that avoids explicit estimation and numerical integration of the demand function. Each observed purchase outcome at a randomized price is an unbiased estimate of demand, and by carefully reweighting purchase outcomes using novel cumulative propensity weights (CPW), we are able to reconstruct the integral. Building on this idea, we introduce a doubly robust variant, the augmented cumulative propensity weighting (ACPW) estimator, which requires only one of the demand model or the historical pricing policy distribution to be correctly specified. Furthermore, this approach facilitates the use of flexible machine learning methods for estimating consumer surplus, since it achieves fast convergence rates by incorporating an estimate of demand, even when that machine learning estimate converges slowly. Neither estimator is a standard application of off-policy evaluation techniques, as the target estimand, consumer surplus, is unobserved. To address fairness, we extend this framework to an inequality-aware surplus measure, allowing regulators and firms to quantify the profit-equity trade-off. Finally, we validate our methods through comprehensive numerical studies.
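As a hedged sketch of the reweighting idea, assuming prices are randomized from a known density: for any threshold p0, E[Y · 1{P ≥ p0} / f(P)] equals the integral of demand above p0, so a single weighted average recovers the surplus integral without fitting a demand curve. The paper's CPW construction and its ACPW augmentation refine this; all names below are illustrative.

```python
import numpy as np

def surplus_ipw(prices, purchases, density, p0):
    """Estimate S(p0) = integral of D(t) over t >= p0, where
    D(t) = P(purchase | price = t), from randomized prices.

    Since E[Y | P = t] = D(t) and P has known density f,
    E[Y * 1{P >= p0} / f(P)] equals that integral, so a reweighted
    sample mean is unbiased for it.
    """
    prices = np.asarray(prices, float)
    purchases = np.asarray(purchases, float)
    w = (prices >= p0) / density(prices)
    return np.mean(purchases * w)

# Toy check: logistic demand, prices randomized uniformly on [0, 5].
rng = np.random.default_rng(1)
n = 50_000
prices = rng.uniform(0.0, 5.0, size=n)
demand = lambda p: 1.0 / (1.0 + np.exp(p - 2.0))
purchases = (rng.uniform(size=n) < demand(prices)).astype(float)
uniform_density = lambda p: np.full_like(p, 1.0 / 5.0)
print(surplus_ipw(prices, purchases, uniform_density, p0=1.0))
```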
Beyond the Average: Distributional Causal Inference under Imperfect Compliance
Byambadalai, Undral, Hirata, Tomu, Oka, Tatsushi, Yasui, Shota
We study the estimation of distributional treatment effects in randomized experiments with imperfect compliance. When participants do not adhere to their assigned treatments, we leverage treatment assignment as an instrumental variable to identify the local distributional treatment effect: the difference in outcome distributions between treatment and control groups for the subpopulation of compliers. We propose a regression-adjusted estimator based on a distribution regression framework with Neyman-orthogonal moment conditions, enabling robustness and flexibility with high-dimensional covariates. Our approach accommodates continuous, discrete, and mixed discrete-continuous outcomes, and applies under a broad class of covariate-adaptive randomization schemes, including stratified block designs and simple random sampling. We derive the estimator's asymptotic distribution and show that it achieves the semiparametric efficiency bound. Simulation results demonstrate favorable finite-sample performance, and an application to the Oregon Health Insurance Experiment illustrates the method's practical relevance.
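For intuition, the identification behind the local distributional effect reduces, without covariate adjustment, to a Wald ratio at every outcome threshold. A minimal sketch follows; the paper's regression-adjusted, Neyman-orthogonal estimator replaces these raw means.

```python
import numpy as np

def ldte(y, d, z, grid):
    """Unadjusted Wald-type estimate of the local distributional treatment
    effect at each threshold t in `grid`: the intent-to-treat effect of the
    instrument z on 1{Y <= t}, divided by the first-stage effect of z on
    treatment take-up d, which targets the complier CDF difference.
    """
    y, d, z = map(lambda a: np.asarray(a, float), (y, d, z))
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    itt = np.array([(y[z == 1] <= t).mean() - (y[z == 0] <= t).mean()
                    for t in grid])
    return itt / first_stage
```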
Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies
Reinforcement learning (RL), which focuses on developing optimal policies for sequential decision-making to maximize long-term rewards (Sutton & Barto, 2018), has become an increasingly important frontier in various fields. A critical component of RL is off-policy evaluation (OPE), which estimates the mean reward of a policy, termed the evaluation policy, using data collected under another policy, known as the behavior policy. OPE is essential in offline RL, where only historical datasets are available, precluding new experiments (Luedtke & Van Der Laan, 2016; Agarwal et al., 2019; Uehara et al., 2022). Recent years have witnessed substantial progress in developing statistically efficient OPE methods, with various approaches demonstrating semiparametric efficiency under different model settings (Jiang & Li, 2016; Kallus & Uehara, 2020; Shi et al., 2021). However, all of these existing analyses focus on scenarios where the evaluation policy is fixed and predetermined. A more challenging yet practical scenario arises when the evaluation policy itself is estimated from data, particularly when this policy is designed to be optimal with respect to some criterion. In this context, the statistical properties of OPE become more complex due to the additional estimation uncertainty introduced by the policy optimization process. In the causal inference literature, by contrast, such phenomena have been studied extensively in the optimal treatment regime literature (Laber et al., 2014; Kosorok & Laber, 2019; Athey & Wager, 2021). These works have established important results regarding the estimation of value functions under optimal treatment rules, but their direct application to the sequential decision-making context of RL presents additional challenges due to the temporal dependencies and potentially infinite horizons involved.
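As background for the OPE setting described above, here is the basic importance-sampling correction in its simplest single-step form; the signature is hypothetical, and the cited works develop doubly robust and sequential refinements of this estimator.

```python
import numpy as np

def ope_importance_sampling(rewards, pi_e, pi_b):
    """Estimate the mean reward of an evaluation policy from logged data
    gathered under a behavior policy (single-step setting).

    pi_e, pi_b : arrays of probabilities each policy assigns to the
                 logged action in its logged state.
    The ratio pi_e / pi_b corrects the action distribution, so the
    weighted mean is unbiased for the evaluation policy's value whenever
    pi_b > 0 wherever pi_e > 0.
    """
    w = np.asarray(pi_e, float) / np.asarray(pi_b, float)
    return np.mean(w * np.asarray(rewards, float))
```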
Efficient Adaptive Experimentation with Non-Compliance
Oprescu, Miruna, Cho, Brian M, Kallus, Nathan
We study the problem of estimating the average treatment effect (ATE) in adaptive experiments where treatment can only be encouraged, rather than directly assigned, via a binary instrumental variable. Building on semiparametric efficiency theory, we derive the efficiency bound for ATE estimation under arbitrary, history-dependent instrument-assignment policies, and show it is minimized by a variance-aware allocation rule that balances outcome noise and compliance variability. Leveraging this insight, we introduce AMRIV, an Adaptive, Multiply-Robust estimator for Instrumental-Variable settings with variance-optimal assignment. AMRIV pairs (i) an online policy that adaptively approximates the optimal allocation with (ii) a sequential, influence-function-based estimator that attains the semiparametric efficiency bound while retaining multiply-robust consistency. We establish asymptotic normality, explicit convergence rates, and anytime-valid asymptotic confidence sequences that enable sequential inference. Finally, we demonstrate the practical effectiveness of our approach through empirical studies, showing that adaptive instrument assignment, when combined with the AMRIV estimator, yields improved efficiency and robustness compared to existing baselines.
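A hedged sketch of the two ingredients named above, stripped to their simplest unadjusted forms: AMRIV itself augments the ratio with nuisance regressions and folds compliance variability into the allocation, so these plain versions are for orientation only.

```python
import numpy as np

def wald_ate(y, d, z):
    """Unadjusted Wald ratio: the instrument's intent-to-treat effect on
    the outcome divided by its effect on treatment take-up. Under the
    paper's identification assumptions this targets the ATE; AMRIV adds
    nuisance regressions for multiple robustness and efficiency.
    """
    y, d, z = map(lambda a: np.asarray(a, float), (y, d, z))
    return (y[z == 1].mean() - y[z == 0].mean()) / \
           (d[z == 1].mean() - d[z == 0].mean())

def neyman_style_assignment(sigma1, sigma0):
    """Variance-aware instrument-assignment probability in the spirit of
    the allocation rule above: classic Neyman allocation on outcome noise
    alone (the optimal rule also accounts for compliance variability)."""
    return sigma1 / (sigma1 + sigma0)
```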
Optimal Policy Adaptation under Covariate Shift
Liu, Xueqing, Yang, Qinwei, Tian, Zhaoqing, Guo, Ruocheng, Wu, Peng
Transfer learning of prediction models has been extensively studied, while the corresponding policy learning approaches are rarely discussed. In this paper, we propose principled approaches for learning the optimal policy in the target domain by leveraging two datasets: one with full information from the source domain and the other from the target domain with only covariates. First, under the covariate shift setting, we formulate the problem from a causal perspective and present the identifiability assumptions for the reward induced by a given policy. Then, we derive the efficient influence function and the semiparametric efficiency bound for the reward. Based on these, we construct a doubly robust, semiparametric efficient estimator for the reward and learn the optimal policy by optimizing the estimated reward. Moreover, we theoretically analyze the bias and the generalization error bound of the learned policy. Furthermore, in the presence of both covariate and concept shifts, we propose a novel sensitivity analysis method to evaluate the robustness of the proposed policy learning approach. Extensive experiments demonstrate that the approach not only estimates the reward more accurately but also yields a policy that closely approximates the theoretically optimal policy.
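To illustrate the shape of such an estimator, here is a generic doubly robust (AIPW-style) sketch for a policy's target-domain reward under covariate shift, with hypothetical function arguments; the paper's estimator is the one derived from its efficient influence function, which this only approximates in spirit.

```python
import numpy as np

def dr_policy_value(x_src, a_src, y_src, pi, q_hat, prop, dens_ratio, x_tgt):
    """Doubly robust (AIPW-style) estimate of a policy's expected reward
    in the target domain under covariate shift.

    pi(x)         : policy mapping covariates to actions
    q_hat(x, a)   : fitted outcome regression E[Y | X = x, A = a]
    prop(a, x)    : source-domain propensity P(A = a | X = x)
    dens_ratio(x) : covariate density ratio p_target(x) / p_source(x)
    """
    a_src = np.asarray(a_src)
    y_src = np.asarray(y_src, float)
    direct = np.mean(q_hat(x_tgt, pi(x_tgt)))           # plug-in on target covariates
    match = (a_src == pi(x_src)).astype(float)          # logged action followed pi?
    resid = match / prop(a_src, x_src) * (y_src - q_hat(x_src, a_src))
    return direct + np.mean(dens_ratio(x_src) * resid)  # shift-weighted correction
```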
Policy Learning for Balancing Short-Term and Long-Term Rewards
Wu, Peng, Shen, Ziyu, Xie, Feng, Wang, Zhongyao, Liu, Chunchen, Zeng, Yan
Empirical researchers and decision-makers spanning various domains frequently seek profound insights into the long-term impacts of interventions. While the significance of long-term outcomes is undeniable, an overemphasis on them may inadvertently overshadow short-term gains. Motivated by this, we formalize a new framework for learning the optimal policy that effectively balances both long-term and short-term rewards, where some long-term outcomes are allowed to be missing. In particular, we first establish the identifiability of both rewards under mild assumptions. Next, we deduce the semiparametric efficiency bounds, along with the consistency and asymptotic normality of their estimators. We also reveal that short-term outcomes, when associated with the long-term outcome, contribute to improving the estimator of the long-term reward. Based on the proposed estimators, we develop a principled policy learning approach and further derive the convergence rates of regret and estimation errors associated with the learned policy. Extensive experiments validate the effectiveness of the proposed method, demonstrating its practical applicability.
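A minimal sketch of how short-term outcomes can sharpen the long-term reward estimate when some long-term outcomes are missing, assuming missingness at random: this is a generic AIPW construction with hypothetical argument names, not the paper's exact estimator.

```python
import numpy as np

def aipw_long_term_mean(y_long, r, x_feats, prob_obs, m_hat):
    """AIPW estimate of the mean long-term outcome when some long-term
    outcomes are missing (r = 1 if observed), assuming missingness at
    random given x_feats, which may include short-term outcomes.
    Including short-term outcomes in m_hat is what lets them improve the
    long-term estimate, echoing the finding above.

    prob_obs(x) : estimated P(R = 1 | x)
    m_hat(x)    : estimated E[Y_long | x]
    """
    r = np.asarray(r, float)
    y = np.where(r == 1, np.asarray(y_long, float), 0.0)  # zero out missing
    p, m = prob_obs(x_feats), m_hat(x_feats)
    return np.mean(r * y / p - (r / p - 1.0) * m)
```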
Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices
Kato, Masahiro, Oga, Akihiro, Komatsubara, Wataru, Inokuchi, Ryo
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs). We consider an adaptive experiment where an experimenter sequentially samples an experimental unit from a covariate density chosen by the experimenter and assigns a treatment. After assigning a treatment, the experimenter immediately observes the corresponding outcome. At the end of the experiment, the experimenter estimates the ATE using the gathered samples. The experimenter's objective is to estimate the ATE with the smallest possible asymptotic variance. Existing studies have designed experiments that adaptively optimize the propensity score (treatment-assignment probability). As a generalization of such approaches, we propose a framework under which the experimenter optimizes the covariate density as well as the propensity score, and we find that optimizing both reduces the asymptotic variance more than optimizing the propensity score alone. Based on this idea, in each round of our experiment, the experimenter optimizes the covariate density and propensity score based on past observations. To design the adaptive experiment, we first derive the efficient covariate density and propensity score that minimize the semiparametric efficiency bound, a lower bound for the asymptotic variance given a fixed covariate density and a fixed propensity score. Next, we design an adaptive experiment using the efficient covariate density and propensity score sequentially estimated during the experiment. Lastly, we propose an ATE estimator whose asymptotic variance aligns with the minimized semiparametric efficiency bound.
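A sketch of the Neyman-type structure such efficient rules take, assuming estimated conditional outcome standard deviations sig1(x), sig0(x) on a discrete covariate grid; the exact efficient pair is derived in the paper, and this records only the familiar allocation shape.

```python
import numpy as np

def efficient_propensity(sig1, sig0):
    """Neyman-type propensity: treat with probability proportional to the
    treated arm's conditional outcome standard deviation at x."""
    return sig1 / (sig1 + sig0)

def tilted_covariate_density(p_x, sig1, sig0):
    """Sampling density over a discrete covariate grid. Under the Neyman
    propensity above the conditional variance at x is (sig1 + sig0)^2,
    and a Cauchy-Schwarz argument tilts sampling toward noisy regions,
    proportional to p(x) * (sig1(x) + sig0(x)). Normalized on the grid."""
    q = np.asarray(p_x, float) * (np.asarray(sig1, float) + np.asarray(sig0, float))
    return q / q.sum()
```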