Noise-Adaptive Thompson Sampling for Linear Contextual Bandits

Neural Information Processing Systems

Linear contextual bandits are a fundamental class of models with numerous real-world applications, and it is critical to develop algorithms that can effectively manage noise with unknown variance, with provable guarantees in both the worst-case constant-variance regime and the deterministic-reward regime.
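As a point of reference for the setting the abstract describes, the following is a minimal sketch of standard linear Thompson sampling (LinTS), not the paper's noise-adaptive algorithm: the learner maintains a ridge-regression estimate of the unknown parameter, samples a perturbed parameter from a Gaussian around it, and plays the arm that the sample prefers. The problem sizes, the fixed noise scale `sigma`, and the regularizer `lam` are all illustrative assumptions; the paper's contribution is precisely to avoid assuming `sigma` is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance: d-dimensional contexts, K arms, T rounds.
d, K, T = 5, 10, 2000
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)  # unknown unit-norm parameter

# LinTS state: regularized design matrix V and response vector b.
lam = 1.0
V = lam * np.eye(d)
b = np.zeros(d)

sigma = 0.1   # assumed noise scale; a noise-adaptive method would estimate this
regret = 0.0
for t in range(T):
    contexts = rng.normal(size=(K, d))          # one feature vector per arm
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b                        # ridge-regression estimate
    # Posterior-style perturbation: sample a parameter around the estimate.
    theta_tilde = rng.multivariate_normal(theta_hat, sigma**2 * V_inv)
    a = int(np.argmax(contexts @ theta_tilde))   # arm preferred by the sample
    x = contexts[a]
    reward = x @ theta_star + sigma * rng.normal()
    # Rank-one update of the sufficient statistics.
    V += np.outer(x, x)
    b += reward * x
    regret += np.max(contexts @ theta_star) - x @ theta_star
```

With low noise the cumulative regret stays far below linear growth, which is the behavior the adaptive variants aim to retain without knowing the variance in advance.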



Efficient Discrepancy Testing for Learning with Distribution Shift
Gautam Chandrasekaran (UT Austin), Adam R. Klivans (UT Austin), Vasilis Kontonis (UT Austin), Konstantinos Stavropoulos

Neural Information Processing Systems

Our approach generalizes and improves on all prior work on TDS learning: (1) we obtain universal learners that succeed simultaneously for large classes of test distributions, (2) we achieve near-optimal error rates, and (3) we give exponential improvements for constant-depth circuits.