67496dfa96afddab795530cc7c69b57a-Supplemental-Conference.pdf

Neural Information Processing Systems

The optimal baseline, however, is rarely used in practice (Sutton & Barto, 2018; for an exception, see Peters & Schaal, 2008). Equation (1) then takes the following form:

$$\nabla_\theta \, \mathbb{E}_{x \sim \pi_\theta}[R(x)] = \mathbb{E}_{x \sim \pi_\theta}\big[(R(x) - B)\, \nabla_\theta \log \pi_\theta(x)\big].$$
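As a concrete illustration (not taken from the paper), the sketch below forms the Monte Carlo estimator on the right-hand side for a one-dimensional Gaussian policy and compares no baseline against a simple mean-reward baseline. The toy reward and both baseline choices are illustrative assumptions; neither is the optimal (variance-minimizing) baseline discussed above.

```python
# Hedged sketch: Monte Carlo estimate of
#   grad_theta E[R(x)] = E[(R(x) - B) * grad_theta log pi_theta(x)]
# for a 1-D Gaussian policy pi_theta = N(theta, 1). The toy reward and the
# mean-reward baseline are illustrative choices, not the optimal baseline.
import numpy as np

rng = np.random.default_rng(0)
theta, n = 1.5, 100_000

def reward(x):
    return -(x - 3.0) ** 2                  # toy reward, maximized at x = 3

x = rng.normal(theta, 1.0, size=n)          # samples from pi_theta
score = x - theta                           # grad_theta log N(x; theta, 1)

for B in (0.0, reward(x).mean()):           # no baseline vs. mean baseline
    g = (reward(x) - B) * score             # per-sample gradient estimates
    print(f"B = {B:8.3f}   grad ~ {g.mean():+.3f}   std = {g.std():.2f}")
```

Both runs recover the same gradient (the baseline leaves the estimator unbiased), but the mean-reward baseline substantially reduces the per-sample standard deviation.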


Natural Counterfactuals With Necessary Backtracking

Neural Information Processing Systems

Our methodology incorporates a certain amount of backtracking when needed, allowing changes in causally preceding variables to minimize deviations from realistic scenarios. Specifically, we introduce a novel optimization framework that permits but also controls the extent of backtracking with a "naturalness" criterion. Empirical experiments demonstrate the effectiveness of our method.
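The abstract describes an optimization that trades off fidelity to the observed facts, satisfaction of the counterfactual target, and naturalness of the backtracked causes. The sketch below is a conceptual toy of that trade-off, not the paper's algorithm: the two-variable SCM, the quadratic naturalness score, and all weights are assumptions.

```python
# Conceptual sketch only: choose how much to backtrack on a cause x1 so that
# the effect hits a target value while x1 stays "natural". The toy structural
# equation x2 = 2*x1, the penalty weights, and the N(0,1)-based naturalness
# score are all illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

x1_obs, x2_target = 1.0, 6.0            # observed cause, desired effect

def x2_of(x1):                          # toy structural equation x2 = f(x1)
    return 2.0 * x1

def objective(z, lam=1.0):
    x1 = z[0]
    backtrack = (x1 - x1_obs) ** 2              # deviation from the facts
    miss = (x2_of(x1) - x2_target) ** 2         # counterfactual constraint
    unnatural = x1 ** 2                         # distance from N(0,1) mode
    return backtrack + 100.0 * miss + lam * unnatural

res = minimize(objective, x0=np.array([x1_obs]))
print("backtracked x1:", res.x[0], "-> counterfactual x2:", x2_of(res.x[0]))
```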


Cost-Sensitive Conformal Training with Provably Controllable Learning Bounds

Jia, Xuesong, Shi, Yuanjie, Liu, Ziquan, Xu, Yi, Yan, Yan

arXiv.org Machine Learning

Conformal prediction (CP) is a general framework to quantify the predictive uncertainty of machine learning models by using set predictions that include the true label with a valid probability. To align training with the uncertainty measured by CP, conformal training methods minimize the size of the prediction sets. A typical way is to use a surrogate indicator function, usually the Sigmoid or the Gaussian error function. However, these surrogate functions do not have a uniform error bound to the indicator function, leading to uncontrollable learning bounds. In this paper, we propose a simple cost-sensitive conformal training algorithm that does not rely on the indicator approximation mechanism. Specifically, we theoretically show that the expected size of the prediction sets is upper bounded by the expected rank of the true labels. To this end, we develop a rank weighting strategy that assigns each data sample a weight based on the rank of its true label. Our analysis provably demonstrates the tightness between the proposed weighted objective and the expected size of conformal prediction sets. Extensive experiments verify the validity of our theoretical insights and show superior empirical performance over other conformal training methods in terms of predictive efficiency, with a 21.38% reduction in average prediction set size.
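To make the rank-weighting idea concrete, here is a minimal sketch under stated assumptions: each sample's surrogate loss is weighted by the rank of its true label among the class scores. The cross-entropy surrogate, the normalization, and the helper name rank_weighted_loss are illustrative, not the paper's exact objective or conformity score.

```python
# Minimal sketch of a rank-weighting idea in the spirit of the abstract:
# weight each example's loss by the rank of its true label among the class
# scores (rank 1 = top-ranked). The loss form and weighting are assumptions.
import torch
import torch.nn.functional as F

def rank_weighted_loss(logits, labels):
    true_scores = logits.gather(1, labels[:, None])            # (B, 1)
    # rank of the true label: 1 + #classes scored strictly higher (no grad)
    ranks = 1.0 + (logits > true_scores).sum(dim=1).float()    # (B,)
    per_example = F.cross_entropy(logits, labels, reduction="none")
    weights = ranks / ranks.sum()          # badly ranked labels weigh more
    return (weights * per_example).sum()

logits = torch.randn(8, 10, requires_grad=True)                # fake scores
labels = torch.randint(0, 10, (8,))
loss = rank_weighted_loss(logits, labels)
loss.backward()                    # gradients flow through the loss terms
print(float(loss))
```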


Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds

Pengyu Chen

Neural Information Processing Systems

This paper investigates simple bilevel optimization problems where we minimize an upper-level objective over the optimal solution set of a convex lower-level objective. Existing methods for such problems either only guarantee asymptotic convergence, have slow sublinear rates, or require strong assumptions. To address these challenges, we propose a penalization framework that delineates the relationship between approximate solutions of the original problem and its reformulated counterparts.
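As a rough illustration of the penalization idea (not the paper's method or its rates), the sketch below solves a toy simple bilevel problem by minimizing the upper-level objective plus an increasing penalty on the lower-level optimality gap. The quadratics, the penalty schedule, and the assumption that the lower-level optimal value g_star is known are all illustrative.

```python
# Hedged sketch of the penalization idea: approximate
#   min f(x)  s.t.  x in argmin g
# by solving  min_x f(x) + lam * (g(x) - g_star)  for increasing lam.
# The toy objectives and known g_star = 0 are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x):                 # upper-level objective
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def g(x):                 # convex lower level; argmin is the line x0+x1 = 1
    return (x[0] + x[1] - 1.0) ** 2

g_star = 0.0
x = np.zeros(2)
for lam in (1.0, 10.0, 100.0):               # increasing penalty schedule,
    x = minimize(lambda z: f(z) + lam * (g(z) - g_star), x).x  # warm-started

print("x ~", x, "  f(x) =", f(x), "  lower-level gap =", g(x) - g_star)
```

As the penalty grows, the iterates approach (1.5, -0.5), the minimizer of f over the lower-level solution set, while the lower-level gap shrinks toward zero.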